CN115063436B - Large-area weak texture workpiece scanning point cloud segmentation method based on depth region projection


Publication number: CN115063436B
Application number: CN202210615877.6A
Authority: CN (China)
Legal status: Active (granted)
Prior art keywords: point cloud, depth, group, point, segmentation threshold
Other languages: Chinese (zh)
Other versions: CN115063436A
Inventors: 殷春, 王胤泽, 谭旭彤, 陈凯, 朱丹丹, 王文
Original and current assignee: University of Electronic Science and Technology of China
Application filed by University of Electronic Science and Technology of China
Priority to CN202210615877.6A
Publication of CN115063436A (application); application granted; publication of CN115063436B (grant)


Classifications

    • G06T7/11 Region-based segmentation
    • G06T7/136 Segmentation involving thresholding
    • G06T7/194 Segmentation involving foreground-background segmentation
    • G06T7/50 Depth or shape recovery
    • G06T2207/10028 Range image; depth image; 3D point clouds
    • G06T2207/30108 Industrial image inspection
    • G06T2207/30164 Workpiece; machine component


Abstract

The invention discloses a large-area weak texture workpiece scanning point cloud segmentation method based on depth region projection. The method first constructs a depth distribution histogram from the depth distribution of the whole scanning point cloud and calculates an initial segmentation threshold and the approximate average depth value of each group of subset point clouds; it then iterates the mean over the depth intervals of the histogram and selects the depth interval to be refined, which reduces the amount of computation and speeds up segmentation. The selected depth interval is then further subdivided, the between-class variances of the candidate thresholds are computed by traversal with the maximum between-class variance method, and the segmentation threshold with the maximum between-class variance is selected; this resolves the ambiguity of the feature information and improves the reliability of the segmentation threshold and hence the accuracy of segmentation. Finally, because occlusion between the photographed objects leaves the boundary between the foreground and background point clouds obtained by the segmentation threshold unknown, independent scanning point clouds are separated through three-view projection of the foreground and background point clouds. The method resolves the ambiguity of feature information, reduces the amount of computation, and can accurately and rapidly acquire independent scanning point clouds of different workpieces.

Description

Large-area weak texture workpiece scanning point cloud segmentation method based on depth region projection
Technical Field
The invention belongs to the technical field of point cloud segmentation, and particularly relates to a large-area weak texture workpiece scanning point cloud segmentation method based on depth region projection.
Background
With the development of industrial society, engineering inspection requirements keep rising, so high-precision three-dimensional (3D) scanning reconstruction technology has become a current research hotspot. Active three-dimensional scanning reconstruction based on structured light offers high detection speed, high measurement precision, and strong interference resistance, and is widely applied in technical fields such as reverse engineering, medical imaging, and cultural relic restoration.
In the projection, scanning, and shooting process of actual engineering, the structured-light fringe pattern completely covers the surface of the workpiece to be inspected. Owing to scene constraints, adjacent workpieces that stand too close to each other inevitably occlude one another, and the target workpiece scanning point cloud generated by stereo matching ends up aliased with the scanning point clouds of other workpieces. When any single subject scanning point cloud in the stereo scanning cloud is processed independently, and other workpiece scanning point clouds at different or identical spatial intervals are present, the region of the target workpiece scanning point cloud cannot be delimited accurately, and parts of the other workpiece point clouds are processed along with it; this wastes computation on the other workpiece point clouds and at the same time reduces the pertinence and accuracy of the processing algorithm.
Point cloud segmentation is where a three-dimensional image shows its greatest advantage over a two-dimensional image. Point cloud segmentation is the process of dividing a point cloud into several mutually disjoint subsets according to characteristics of the point cloud data such as spatial position, geometric relationship, and texture information. Only after the point clouds of different subjects have been separated can the point cloud data of the target region be obtained for independent processing. Conventional point cloud segmentation methods generally classify the generated point cloud data based on feature information such as color or three-dimensional coordinates, and fall short when the segmentation object is a simple workpiece with weak or repetitive texture features. For example, when a conventional region-growing algorithm segments the scanning point cloud of a simple workpiece, it runs into the following problems: first, because the surface features of such a point cloud are inconspicuous or hard to distinguish, the boundary regions of the finally segmented subject point clouds may be highly ambiguous, and accurate independent workpiece scanning point clouds cannot be obtained; second, because the amount of point cloud data obtained by active stereo matching is huge, point-by-point traversal over neighboring points inflates the point cloud processing workload and lowers the overall efficiency of the algorithm.
In summary, there is a clear demand for a scanning point cloud segmentation method suited to workpieces with weak features.
Disclosure of Invention
The invention aims to overcome the above defects of the prior art and provides a large-area weak texture workpiece scanning point cloud segmentation method based on depth region projection that resolves the ambiguity of feature information and reduces the amount of computation, so that independent scanning point clouds of different workpieces can be acquired accurately and rapidly.
In order to achieve the above object, the present invention provides a large-area weak texture workpiece scanning point cloud segmentation method based on depth region projection, comprising the following steps:

(1) Constructing a point cloud depth distribution histogram and calculating an approximate average depth value of each group of subset point clouds and an initial point cloud segmentation threshold

1.1) For the whole scanning point cloud obtained by three-dimensional reconstruction and containing two workpieces, first take the value range [T_min, T_max] of the three-dimensional point depth, and divide all three-dimensional points of the whole scanning point cloud by their depth values into I groups of subset point clouds with equal group distance (T_max − T_min)/I. Take the right boundary value of the depth interval in which each subset point cloud lies as the segmentation threshold of that subset point cloud, denoted in increasing order as T_1, T_2, ..., T_I; denote the corresponding subset point clouds as g_1, g_2, g_3, ..., g_I, the depth interval corresponding to the i-th group of subset point clouds as [T_{i-1}, T_i], and the total number of points of the i-th group as p_i, i = 1, 2, 3, ..., I. Taking the segmentation threshold as the horizontal-axis coordinate value and the total number of points of each group as the vertical-axis coordinate value of that group's depth interval yields the point cloud depth distribution histogram;

1.2) For the i-th group subset point cloud, the approximate average depth value of its corresponding depth interval is taken as the interval midpoint:

d_i = (T_{i-1} + T_i) / 2;

1.3) Calculate the depth mean m_g of the whole scanning point cloud:

m_g = (Σ_{i=1}^{I} p_i d_i) / (Σ_{i=1}^{I} p_i)

Select the segmentation threshold with the smallest difference from the depth mean m_g as the initial point cloud segmentation threshold S_0;
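For illustration only, and not as part of the patent text, step (1) can be sketched in Python with NumPy; the function names, the NumPy dependency, and the default I = 8 are assumptions made for this demonstration:

```python
# Minimal sketch of step (1): equal-width depth histogram, approximate
# interval means d_i, and initial threshold S_0. All identifiers here are
# illustrative assumptions, not names from the patent.
import numpy as np

def build_depth_histogram(depths: np.ndarray, I: int = 8):
    """Split depths into I equal-width groups over [T_min, T_max].
    Returns right-boundary thresholds T_1..T_I, group point counts p_i,
    and approximate interval mean depths d_i."""
    t_min, t_max = float(depths.min()), float(depths.max())
    edges = np.linspace(t_min, t_max, I + 1)      # T_0, T_1, ..., T_I
    counts, _ = np.histogram(depths, bins=edges)  # p_1 .. p_I
    thresholds = edges[1:]                        # right boundaries T_i
    d = 0.5 * (edges[:-1] + edges[1:])            # d_i = (T_{i-1} + T_i) / 2
    return thresholds, counts, d

def initial_threshold(thresholds, counts, d):
    """S_0: the threshold T_i closest to the global depth mean m_g."""
    m_g = np.average(d, weights=counts)           # sum(p_i d_i) / sum(p_i)
    return float(thresholds[np.argmin(np.abs(thresholds - m_g))])
```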
(2) Iterating the point cloud segmentation threshold and selecting a depth interval for further refined segmentation

2.1) Initialize the iteration number k = 0;

2.2) Classify the I groups of subset point clouds g_1, g_2, g_3, ..., g_I: when the approximate average depth value d_i of the i-th group subset point cloud g_i is less than or equal to the point cloud segmentation threshold S_k, classify it into the foreground point cloud group G_L; when d_i is greater than S_k, classify it into the background point cloud group G_H:

G_L = {g_1, g_2, ..., g_C},  G_H = {g_{C+1}, g_{C+2}, ..., g_I}

where C is the number of subset point cloud groups in the foreground point cloud group G_L;

2.3) Respectively calculate the approximate average depth values d̄_L and d̄_H of the foreground point cloud group G_L and the background point cloud group G_H:

d̄_L = (Σ_{i=1}^{C} p_i d_i) / (Σ_{i=1}^{C} p_i),  d̄_H = (Σ_{i=C+1}^{I} p_i d_i) / (Σ_{i=C+1}^{I} p_i)

and update to obtain the new segmentation threshold:

d_ave = (d̄_L + d̄_H) / 2

Select the depth mean d_ave as the point cloud segmentation threshold S_{k+1} obtained at the (k+1)-th iteration;

2.4) Judge whether the iteration stop condition is satisfied:

abs(S_{k+1} − S_k) < ε

where abs is the absolute-value operation and ε is a preset convergence tolerance; if the condition is not satisfied, return to step 2.2); if it is satisfied, stop the iteration;

Select the depth interval in which the segmentation threshold S_{k+1} lies when the iteration stops as the depth interval of the point cloud depth distribution histogram to be further refined and segmented; this interval is denoted [T_{n-1}, T_n], where n is the sequence number of the depth interval;
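A minimal sketch of the iteration in step (2), reusing the quantities from the step (1) sketch; the tolerance eps is an assumed parameter, since the patent does not state a numeric stop value:

```python
# Sketch of step (2): global iterative thresholding over the histogram
# groups. Assumes both classes stay non-empty during iteration, which is
# reasonable for the bimodal depth distributions targeted by the method.
import numpy as np

def iterate_threshold(thresholds, counts, d, s0, eps=1e-6):
    s_k = float(s0)
    while True:
        fg = d <= s_k                                  # groups in G_L
        d_L = np.average(d[fg], weights=counts[fg])    # foreground mean
        d_H = np.average(d[~fg], weights=counts[~fg])  # background mean
        s_next = 0.5 * (d_L + d_H)                     # d_ave
        if abs(s_next - s_k) < eps:                    # stop condition
            break
        s_k = s_next
    # 0-based index of the interval [T_{n-1}, T_n] containing the threshold
    n = int(np.searchsorted(thresholds, s_next))
    return s_next, n
```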
(3) For each group segmentation threshold T_nj of the depth interval [T_{n-1}, T_n], computing the total numbers L(j) and H(j) of points in the foreground and background depth intervals and the corresponding approximate depth means m_L(j) and m_H(j)

3.1) Divide the n-th subset point cloud corresponding to the depth interval [T_{n-1}, T_n] into J groups with equal group distance (T_n − T_{n-1})/J; the right boundary value of each group of point clouds is taken as the segmentation threshold of that group, so the segmentation thresholds of the J groups are denoted in increasing order as T_n1, T_n2, ..., T_nJ, and the corresponding J groups of point clouds are denoted g_n1, g_n2, g_n3, ..., g_nJ; p_nj denotes the total number of points of the j-th group, j ∈ [1, J];

3.2) For the j-th group of point clouds g_nj, calculate the total number L(j) of three-dimensional points in the foreground point cloud depth interval [T_min, T_nj] and the total number H(j) of three-dimensional points in the background point cloud depth interval [T_n(j+1), T_max] corresponding to the segmentation threshold T_nj:

L(j) = Σ_{i=1}^{n-1} p_i + Σ_{q=1}^{j} p_nq,  H(j) = Σ_{q=j+1}^{J} p_nq + Σ_{i=n+1}^{I} p_i;

3.3) Compute the approximate depth means m_L(j) and m_H(j):

m_L(j) = (Σ_{i=1}^{n-1} p_i d_i + Σ_{q=1}^{j} p_nq d_nq) / L(j),  m_H(j) = (Σ_{q=j+1}^{J} p_nq d_nq + Σ_{i=n+1}^{I} p_i d_i) / H(j)

where d_nq = (T_n(q-1) + T_nq) / 2, with T_n0 = T_{n-1}, is the approximate average depth value of the q-th refined group;
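A sketch of step (3); for brevity it computes the class totals and means directly from the raw depth values rather than from the histogram approximation used in the patent, which is an intentional simplification:

```python
# Sketch of step (3): refine the chosen interval [T_{n-1}, T_n] into J
# sub-groups and, for each candidate threshold T_nj, accumulate the class
# totals L(j), H(j) and the class mean depths m_L(j), m_H(j).
import numpy as np

def refined_stats(depths: np.ndarray, t_lo: float, t_hi: float, J: int = 5):
    sub_thresholds = np.linspace(t_lo, t_hi, J + 1)[1:]   # T_n1 .. T_nJ
    L = np.zeros(J); H = np.zeros(J)
    m_L = np.zeros(J); m_H = np.zeros(J)
    for j, t in enumerate(sub_thresholds):
        fg = depths <= t                  # foreground interval [T_min, T_nj]
        L[j], H[j] = fg.sum(), (~fg).sum()
        m_L[j] = depths[fg].mean() if L[j] else 0.0
        m_H[j] = depths[~fg].mean() if H[j] else 0.0
    return sub_thresholds, L, H, m_L, m_H
```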
(4) Obtaining the final segmentation threshold in the depth interval [T_{n-1}, T_n] based on the maximum between-class variance method and segmenting the whole scanning point cloud

4.1) Calculate the between-class variance b²(j) of the foreground and background point clouds segmented by each segmentation threshold T_n1, T_n2, ..., T_nJ:

b²(j) = [L(j) H(j) / (L(j) + H(j))²] · (m_L(j) − m_H(j))²;

4.2) Traverse the segmentation thresholds T_nj corresponding to all J groups of point clouds g_nj to obtain the J between-class variances b²(j), j = 1, 2, ..., J, and select the segmentation threshold T_nj′ with the largest between-class variance, j′ being the sequence number of that threshold; the final segmentation threshold T_sec is then the midpoint of the depth interval of group g_nj′:

T_sec = T_nj′ − (T_n − T_{n-1}) / (2J);

4.3) Construct the plane Z_sec perpendicular to the shooting direction at the depth given by the final segmentation threshold T_sec and segment the whole scanning point cloud with it: a three-dimensional point located above the plane Z_sec (depth less than T_sec) is assigned to the foreground point cloud PC_f, and a three-dimensional point located below the plane Z_sec (depth greater than T_sec) is assigned to the background point cloud PC_b;
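A sketch of step (4); the variance expression is the standard Otsu between-class variance, and treating it as the patent's b²(j) is an assumption:

```python
# Sketch of step (4): pick the candidate threshold with maximum
# between-class variance, place T_sec at the midpoint of that group's
# interval, and split the cloud by a plane perpendicular to the depth axis.
import numpy as np

def final_threshold(sub_thresholds, L, H, m_L, m_H):
    w = (L * H) / (L + H) ** 2            # class-weight product w_L * w_H
    b2 = w * (m_L - m_H) ** 2             # between-class variance b^2(j)
    j_best = int(np.argmax(b2))
    half_bin = 0.5 * (sub_thresholds[1] - sub_thresholds[0])
    return float(sub_thresholds[j_best] - half_bin)   # midpoint, T_sec

def split_by_depth(points: np.ndarray, t_sec: float):
    """points: (N, 3) array whose last column is depth along the shooting
    direction. Returns (foreground PC_f, background PC_b)."""
    fg_mask = points[:, 2] <= t_sec
    return points[fg_mask], points[~fg_mask]
```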
(5) Extracting independent point clouds according to the three views of the foreground point cloud and the background point cloud obtained by segmentation
5.1) Project the foreground point cloud PC_f to obtain its three views: front view M_f, side view S_f, and top view T_f; project the background point cloud PC_b to obtain its three views: front view M_b, side view S_b, and top view T_b;

5.2) Judge whether the whole scanning point cloud contains independent scanning point clouds according to whether the patterns in the three views adjoin: if the patterns in any one of the view pairs, front views M_f and M_b, side views S_f and S_b, or top views T_f and T_b, do not adjoin, the whole scanning point cloud consists of independent scanning point clouds; otherwise no independent scanning point clouds exist and no segmentation is performed;

5.3) Map the two-dimensional coordinate ranges of the two views in which the patterns do not adjoin back to the three-dimensional coordinate ranges of the whole scanning point cloud, outline the point clouds within the value-range space of the three dimensions of the whole scanning point cloud, obtain the independent scanning point clouds of the two workpieces, and complete the workpiece scanning point cloud segmentation.
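A sketch of step (5); as a stand-in for the patent's pattern-adjacency test it checks whether the axis-aligned 2D extents of the two projections are disjoint in each view, which is a simplifying assumption:

```python
# Sketch of step (5): project PC_f and PC_b onto the three coordinate
# planes and look for a view in which their 2D extents are disjoint; the
# per-axis ranges of each cloud then delimit the independent workpieces.
import numpy as np

VIEWS = {"front": (0, 1), "side": (1, 2), "top": (0, 2)}   # axis pairs

def boxes_overlap(a: np.ndarray, b: np.ndarray) -> bool:
    """True if the 2D bounding boxes of projections a and b intersect."""
    return all(a[:, k].min() <= b[:, k].max() and
               b[:, k].min() <= a[:, k].max() for k in (0, 1))

def separating_view(pc_f: np.ndarray, pc_b: np.ndarray):
    """Name of a view in which the two projections do not adjoin, or None
    if every view pair overlaps (no independent clouds to extract)."""
    for name, (u, v) in VIEWS.items():
        if not boxes_overlap(pc_f[:, [u, v]], pc_b[:, [u, v]]):
            return name
    return None
```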
The object of the invention is achieved as follows:
According to the large-area weak texture workpiece scanning point cloud segmentation method based on depth region projection of the invention, a depth distribution histogram is first constructed from the depth distribution of the whole scanning point cloud, and an initial segmentation threshold and the approximate average depth value of each group of subset point clouds are calculated; the mean is then iterated over the depth intervals of the histogram and the depth interval to be refined is selected, which reduces the amount of computation and speeds up segmentation. The selected depth interval is then further subdivided, the between-class variances of the candidate thresholds are computed by traversal with the maximum between-class variance method, and the segmentation threshold with the maximum between-class variance is selected, which resolves the ambiguity of the feature information, improves the reliability of the segmentation threshold, and thereby improves the accuracy of segmentation. Finally, because occlusion between the photographed objects leaves the boundary between the foreground and background point clouds obtained by the segmentation threshold unknown, the independent scanning point clouds are separated through three-view projection of the foreground and background point clouds.
Meanwhile, the large-area weak texture workpiece scanning point cloud segmentation method based on depth region projection has the following beneficial effects:
1. Compared with conventional point cloud segmentation methods that rely on complex features of the point cloud data such as geometric relationships and texture information, the method works only from the point cloud depth distribution histogram, uses simple iteration and traversal to calculate the between-class variances of different depth segmentation thresholds, and segments the point cloud with the threshold whose between-class variance is largest, so target point clouds in different depth intervals can be extracted more efficiently and rapidly, giving the method better practicality;
2. For the problem of distinguishing point clouds within the same depth interval, the foreground and background point clouds are each projected onto three two-dimensional planes, namely the front view, the side view, and the top view, and their projections are observed and distinguished separately. Because the projected two-dimensional coordinates correspond to one another, and the acquired horizontal and vertical projection coordinates correspond one-to-one to the coordinates of the point cloud, the value ranges of the three dimensions of each scanning point cloud can be fully determined, the point cloud within that value-range space can be outlined, and the scanning point clouds of different subjects can be extracted independently for separate processing. Compared with distinguishing the point clouds directly in three-dimensional space, mapping the three-dimensional point cloud onto two-dimensional planes reduces the dimensionality of the three-dimensional information and thereby the complexity and computational cost of point cloud segmentation and extraction;
3. The method combines the global iterative threshold method with the maximum between-class variance method for point cloud segmentation, which is reliable and easy to implement; it uses the threshold means of the different depth intervals to represent the depths of all points in those intervals, which overcomes the heavy time cost of global iterative threshold computation and of the traversal in the maximum between-class variance method, reduces the interference of erroneous points within the intervals, and improves the robustness of computing segmentation thresholds for subject scanning point clouds at different depths from the depth distribution histogram.
Drawings
FIG. 1 is a flow chart of an embodiment of the large-area weak texture workpiece scanning point cloud segmentation method based on depth region projection of the present invention;
FIG. 2 is a schematic view of the point cloud depth distribution histogram and the point cloud segmentation thresholds in the present invention;
FIG. 3 is a flow chart of one embodiment of iterating the point cloud segmentation threshold and selecting the depth interval for further refined segmentation in accordance with the present invention;
FIG. 4 is a schematic diagram of the refined segmentation depth interval in the present invention;
FIG. 5 is a schematic diagram of a specific example of a workpiece point cloud segmentation result;
FIG. 6 is a pictorial view of two rubber and plastic workpieces of simple morphology, WP_1 and WP_2;
FIG. 7 is an image of the whole scanning point cloud PC resulting from three-dimensional reconstruction of the two workpieces shown in FIG. 6;
FIG. 8 is the depth distribution histogram hist generated from the whole scanning point cloud of the two workpieces shown in FIG. 6;
FIG. 9 is the depth distribution histogram of the depth interval [T_{n-1}, T_n] shown in FIG. 8;
FIG. 10 is an image of the foreground point cloud PC_f segmented by the final segmentation threshold T_sec;
FIG. 11 is an image of the background point cloud PC_b segmented by the final segmentation threshold T_sec;
FIG. 12 shows the front-view, side-view, and top-view projections of the foreground and background point clouds PC_f and PC_b;
FIG. 13 is the scanning point cloud PC_1 of workpiece 1 extracted from the three-view interval ranges;
FIG. 14 is the scanning point cloud PC_2 of workpiece 2 extracted from the three-view interval ranges.
Detailed Description
The following description of embodiments of the invention is presented in conjunction with the accompanying drawings so that those skilled in the art can better understand the invention. It should be expressly noted that, in the description below, detailed descriptions of known functions and designs are omitted where they might obscure the present invention.
FIG. 1 is a flow chart of an embodiment of the large-area weak texture workpiece scanning point cloud segmentation method based on depth region projection according to the present invention.

In this embodiment, as shown in fig. 1, the large-area weak texture workpiece scanning point cloud segmentation method based on depth region projection includes the following steps:
Step S1: constructing a point cloud depth distribution histogram and calculating an approximate average depth value of each group of sub-set point clouds and an initial point cloud segmentation threshold
Step S1.1: for the three-dimensional point cloud reconstruction, the whole scanning point cloud comprising two workpieces firstly takes a value range [ T min,Tmax ] of three-dimensional point depth, and all three-dimensional points of the whole scanning point cloud are subjected to depth value based on the depth value so as to obtain the three-dimensional point cloudFor group distance and the like, dividing the group distance into I groups of sub-point clouds, taking the right boundary value of a depth interval where the sub-point clouds are located as a segmentation threshold value of the sub-point clouds, sequentially increasing and expressing the segmentation threshold value as T 1,T2,...,TI, expressing the corresponding sub-point clouds as g 1,g2,g3...,gI, expressing the depth interval corresponding to the I groups of sub-point clouds as [ T i-1,Ti ], expressing the total point number of the I groups of sub-point clouds by p i, and taking the segmentation threshold value as a horizontal axis coordinate value and the total point number of each group of sub-point clouds as a vertical axis coordinate value of the corresponding depth interval, thus obtaining the point cloud depth distribution histogram.
In the present embodiment, the entire scanning point cloud is denoted as a PC. The obtained point cloud depth distribution histogram is constructed with the shooting direction as the depth increasing direction, and is denoted as hist in this embodiment. In this embodiment, as shown in fig. 2, the horizontal axis of the point cloud depth distribution histogram hist is the depth value of the point cloud, the vertical axis is the number of point clouds corresponding to the depth interval, that is, the total point number of the point clouds of the corresponding sub-set, that is, the depth interval corresponding to the i-th group of sub-set point clouds is [ T i-1,Ti ], and the total point number of the i-th group of sub-set point clouds is p i. And taking the right segmentation threshold of each group of the sub-set point cloud depth intervals as the corresponding segmentation threshold of the group, namely, the segmentation threshold of the i-th group of the sub-set point clouds is T i. In this embodiment, i=8, i.e. T 8 is T max.
Step S1.2: the approximate average depth value of the depth interval corresponding to each group of the subset point clouds is calculated, and for the ith group of the subset point clouds, the approximate average depth value of the depth interval corresponding to the ith group of the subset point clouds is as follows:
Step S1.3: calculating a depth average value m g of the whole scanning point cloud:
The segmentation threshold that is the smallest in difference from the depth average m g is selected as the initial point cloud segmentation threshold S 0. In the present embodiment, as shown in fig. 2, the segmentation threshold having the smallest difference from the depth average m g is T 4, and the segmentation threshold T 4 is selected as the initial point cloud segmentation threshold S 0.
Step S2: and iterating the point cloud segmentation threshold value, and selecting a depth interval for further refining segmentation. The specific process is shown in fig. 3.
Step S2.1: initializing the iteration number k=0;
Step S2.2: classifying the subset point clouds into a foreground point cloud group G L or a background point cloud group G H according to the point cloud segmentation threshold S k: classifying the I group subset point cloud g 1,g2,g3...,gI as an approximate average of the I group subset point cloud g i If the point cloud segmentation threshold is smaller than or equal to the point cloud segmentation threshold S k, classifying the point cloud into a foreground point cloud group G L, and when the approximate average value/>, of the i-th group subset point cloud G i If the point cloud segmentation threshold S k is greater than the point cloud segmentation threshold S k, classifying the point cloud as a background point cloud group G H, and:
Wherein C is the group number of the subset point clouds of the foreground point cloud group G L.
The classification of group I subset point cloud g 1,g2,g3...,gI with initial point cloud segmentation threshold S 0 is shown in fig. 2.
Step S2.3: and (3) according to the approximate average depth values of the foreground point cloud group and the background point cloud group, following a new segmentation threshold value: respectively calculating approximate average depth values of the foreground point cloud group G L and the background point cloud group G H And/>Updating to obtain a new segmentation threshold S k+1:
Selecting the depth mean value d ave as a point cloud segmentation threshold S k+1 obtained by the (k+1) th iteration;
step S2.4: judging whether an iteration stop condition is satisfied:
If not, returning to the step S2.2, and if so, stopping iteration, wherein abs is an absolute value operation;
The depth interval where the segmentation threshold S k+1 is located when the iteration is stopped is selected as the depth interval of the further refined segmentation of the point cloud depth distribution histogram, as shown in fig. 4, the depth interval is denoted by [ T n-1,Tn ], and n is the sequence number of the depth interval. In this embodiment, n=5.
Step S3: calculating the total number L (j) and H (j) of front and rear depth interval point clouds corresponding to the j-th group point cloud segmentation threshold T nj of the selected depth interval [ T n-1,Tn ] and the corresponding approximate depth average values m L (j) and m H (j)
Step S3.1: as shown in FIG. 4, for the nth subset point cloud corresponding to the depth interval [ T n-1,Tn ] toFor group distance and the like, dividing the group distance into J groups, wherein the boundary value on the right side of one group of point clouds is expressed as a segmentation threshold value of the group of point clouds, so that the segmentation threshold value of the J group of point clouds is increasingly expressed as T n1,Tn2,...,TnJ, and the corresponding J group of point clouds is expressed as: g n1,gn2,gn3...,gnJ, using p nj to represent the total point number of the j-th group point cloud, j epsilon [1, J ];
step S3.2: for the j-th group of point clouds g nj, calculating the total number of three-dimensional points L (j) of the foreground point cloud depth interval [ T min,Tnj ] and the total number of three-dimensional points H (j) of the background point cloud depth interval [ T n(j+1),Tmax ] corresponding to the corresponding segmentation threshold T nj:
step S3.3: statistical approximate depth averages m L (j) and m H (j):
Step S4: depth segmentation threshold is obtained in a depth interval [ T n-1,Tn ] based on the maximum inter-class variance method, and the whole scanning point cloud is segmented
Step S4.1: calculating the inter-class variance b 2 (j) of the foreground and background point clouds segmented by each segmentation threshold T n1,Tn2,...,TnJ:
step S4.2: traversing all J groups of point clouds g nj corresponding to the segmentation threshold T nj to obtain J inter-class variances b 2 (J), j=1, 2, … and J, selecting the segmentation threshold T nj′ with the largest inter-class variance as the sequence number of the segmentation threshold with the largest inter-class variance, and then obtaining a final segmentation threshold T sec as follows:
Step S4.3: the horizontal plane Z sec perpendicular to the photographing direction is constructed based on the final segmentation threshold T sec to segment the entire scanning point cloud, which is assigned to the foreground point cloud PC f when the three-dimensional point is located above the horizontal plane Z sec, and assigned to the background point cloud PC b when the three-dimensional point is located below the horizontal plane Z sec.
Step S5: extracting independent point clouds according to three views of the foreground point cloud and the background point cloud obtained by segmentation
Step S5.1: the foreground point cloud PC f is projected to obtain three views: front view M f, side view S f, and top view T f, the rear view point cloud PC b is projected, three views thereof are obtained: a rear view front view M b, a side view S b, and a top view T f;
Step S5.2: judging whether the point cloud is an independent point cloud according to whether patterns in the three views are attached or not: if the patterns in any one of the front view M f and the front view M b, the side view S f and the side view S b and the top view T f and the top view T b are not attached, the whole scanning point cloud is independent, otherwise, the independent point cloud is not present, and the segmentation is not performed;
Step S5.3: mapping the two-dimensional coordinate ranges of the two views which are not attached to the pattern to the three-dimensional coordinate ranges of the whole scanning point cloud respectively, outlining the point cloud in the value range space of the three dimensions of the whole scanning point cloud, obtaining independent scanning point clouds of two workpieces, and completing workpiece scanning point cloud segmentation. In this embodiment, as shown in fig. 5, the two separated independent scanning point clouds are foreground separated point clouds, corresponding to a cube, and background separated point clouds, corresponding to cylinders.
Examples
In this example, the whole scanning point cloud reconstructed from two rubber and plastic square planar workpieces is segmented. Photographs of workpiece WP_1 and workpiece WP_2 used in the example are shown in fig. 6. To prevent light transmission and reflection on the workpiece surfaces from degrading the reconstruction, the surfaces were sprayed with white matte paint, and the four rough holes of workpiece WP_2 were filled during shooting. Three-dimensional point cloud reconstruction, point cloud filtering, downsampling, and other preprocessing were applied to the two simple repeated-morphology workpieces, giving the preprocessed whole scanning point cloud of the two workpieces shown in fig. 7.
According to the precision requirement of the three-dimensional point cloud, the invention constructs the point cloud depth distribution histogram hist for the whole scanning point cloud PC of the two workpieces for the global threshold iteration. The value range [T_min, T_max] of the point cloud depth of the whole workpieces is taken; the points are divided from shallow to deep into I groups of subset point clouds g_1, g_2, g_3, ..., g_I with equal group distance (T_max − T_min)/I; the segmentation thresholds of the I groups of subset point clouds are denoted in increasing order as T_1, T_2, ..., T_I; and the number of points within each group's depth range, i.e., the total point counts p_1, p_2, ..., p_I corresponding to the I groups of subset point clouds, is counted as the vertical-axis coordinate value of the corresponding depth interval of the point cloud depth distribution histogram hist.

In this embodiment, the depth range of the whole workpiece scanning point cloud is [T_min, T_max] = [56.5, 60.5] and I = 8, so the group distance is (60.5 − 56.5)/8 = 0.5. The depth distribution histogram hist generated from the whole scanning point cloud of the two workpieces is shown in fig. 8.

The approximate average depth value d_i of each depth interval within the three-dimensional point depth value range [T_min, T_max] is used to approximate the average depth of the points in that interval, and the depth mean m_g of the whole scanning point cloud is then calculated:

m_g = (Σ_{i=1}^{I} p_i d_i) / (Σ_{i=1}^{I} p_i)

The segmentation threshold T_4 differing least from the depth mean m_g is selected as the initial point cloud segmentation threshold S_0 = 58.5. The depth interval obtained by the global iterative threshold algorithm is [T_{n-1}, T_n]; in this example n = 5, so the depth interval is [T_4, T_5], i.e., [58.5, 59], which accords well with the actual segmentation of the two workpiece point clouds.

The segmentation is further refined for all points in the [58.5, 59] depth interval: the interval is divided into J groups with equal group distance (T_n − T_{n-1})/J; the right boundary value of each group of point clouds is taken as the segmentation threshold of that group, so the segmentation thresholds of the J groups are denoted in increasing order as T_n1, T_n2, ..., T_nJ, the corresponding J groups of point clouds are denoted g_n1, g_n2, g_n3, ..., g_nJ, and p_nj denotes the total number of points of the j-th group, j ∈ [1, J]. Here J = 5, so the group distance is (59 − 58.5)/5 = 0.1. The depth distribution histograms of the two workpiece point clouds over the depth interval [T_{n-1}, T_n] are shown in fig. 9. The maximum between-class variance method gives the segmentation threshold T_54 = 58.9 at which the between-class variance b²(j), which characterizes the depth difference between the foreground and background point clouds, is largest, i.e., j′ = 4, and the final segmentation threshold is T_sec = 58.9 − 0.05 = 58.85.
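Written out with the reconstructed J = 5 (the value implied by the reported thresholds), the refined-interval arithmetic of this embodiment is:

```latex
\Delta = \frac{T_n - T_{n-1}}{J} = \frac{59 - 58.5}{5} = 0.1, \qquad
T_{54} = T_{n-1} + 4\Delta = 58.5 + 0.4 = 58.9, \qquad
T_{\mathrm{sec}} = T_{54} - \frac{\Delta}{2} = 58.9 - 0.05 = 58.85
```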
Fig. 10 and fig. 11 show the foreground and background point clouds of the workpieces under test after segmentation by the plane Z_sec = 58.85 constructed from the final segmentation threshold T_sec. This plane is close to the ideal depth segmentation plane lying midway in depth between the point cloud PC_1 of workpiece 1 and the point cloud PC_2 of workpiece 2, and most of the subject regions of the two point clouds in fig. 10 and fig. 11 are well preserved.

After depth-threshold segmentation of the generated point cloud, the approximate depth ranges of the foreground and background point clouds along the shooting direction are determined, but the interval ranges of the other two dimensions are not yet well determined. For workpieces under test that stand so close together (without touching) that occlusion makes their respective edge point clouds hard to distinguish, the segmentation results, i.e., the foreground and background point clouds, are each projected onto three two-dimensional planes, namely the three views. The workpieces are observed from different view angles, the front view, the side view, and the top view, occlusion of point clouds at the same depth is analyzed and judged on the basis of the three views, and the two workpiece point clouds are extracted independently through the interval ranges of the three dimensions.

In this embodiment, the three views of the foreground and background point clouds are shown in fig. 12. As can be seen from fig. 12, although the spatial positions of the two workpieces are difficult to distinguish in the front views of the foreground and background point clouds, i.e., the front view M_f shown in fig. 12(a) and the front view M_b shown in fig. 12(b), the side views, i.e., the side view S_f shown in fig. 12(c) and the side view S_b shown in fig. 12(d), and the top views, i.e., the top view T_f shown in fig. 12(e) and the top view T_b shown in fig. 12(f), show that the projections of the two workpiece point clouds do not adjoin; hence independent scanning point clouds exist within the three-dimensional whole scanning point cloud, and the segmentation can be performed from the side views and the top views. The two workpiece point clouds are extracted separately according to the correspondence between the three-view projection abscissas and ordinates and the spatial point cloud coordinates, giving the foreground and background point clouds, i.e., workpieces 1 and 2. The interval ranges of the three dimensions of the workpiece-1 point cloud PC_1 are x ∈ [7, 24], y ∈ [−8, −1], z ∈ [56, 60]; the interval ranges of the three dimensions of the workpiece-2 point cloud PC_2 are x ∈ [10, 23], y ∈ [−6, −1], z ∈ [58, 61]. The extracted scanning point clouds are shown in fig. 13 and fig. 14, with a crop by these ranges sketched below.
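The final extraction by interval ranges can be sketched as follows; the crop helper and the hard-coded ranges (copied from this embodiment) are illustrative assumptions:

```python
# Hypothetical crop of each workpiece cloud by the three-dimensional
# interval ranges reported above; pc is an (N, 3) array of x, y, z.
import numpy as np

def crop(pc: np.ndarray, xr, yr, zr):
    m = ((pc[:, 0] >= xr[0]) & (pc[:, 0] <= xr[1]) &
         (pc[:, 1] >= yr[0]) & (pc[:, 1] <= yr[1]) &
         (pc[:, 2] >= zr[0]) & (pc[:, 2] <= zr[1]))
    return pc[m]

# pc_1 = crop(pc, (7, 24), (-8, -1), (56, 60))    # workpiece 1, fig. 13
# pc_2 = crop(pc, (10, 23), (-6, -1), (58, 61))   # workpiece 2, fig. 14
```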
The above example shows that the invention can efficiently and accurately obtain independent foreground and background workpiece scanning point clouds that share a common depth range.
While the foregoing describes illustrative embodiments of the invention so that those skilled in the art may understand it, the invention is not limited to the scope of those embodiments; all changes that fall within the spirit and scope of the invention as defined and determined by the appended claims are to be regarded as protected.

Claims (1)

1. A large-area weak texture workpiece scanning point cloud segmentation method based on depth region projection, characterized by comprising the following steps:

(1) Constructing a point cloud depth distribution histogram and calculating an approximate average depth value of each group of subset point clouds and an initial point cloud segmentation threshold

1.1) For the whole scanning point cloud obtained by three-dimensional reconstruction and containing two workpieces, first take the value range [T_min, T_max] of the three-dimensional point depth, and divide all three-dimensional points of the whole scanning point cloud by their depth values into I groups of subset point clouds with equal group distance (T_max − T_min)/I; take the right boundary value of the depth interval in which each subset point cloud lies as the segmentation threshold of that subset point cloud, denoted in increasing order as T_1, T_2, ..., T_I; denote the corresponding subset point clouds as g_1, g_2, g_3, ..., g_I, the depth interval corresponding to the i-th group of subset point clouds as [T_{i-1}, T_i], and the total number of points of the i-th group as p_i, i = 1, 2, 3, ..., I; taking the segmentation threshold as the horizontal-axis coordinate value and the total number of points of each group as the vertical-axis coordinate value of that group's depth interval yields the point cloud depth distribution histogram;

1.2) For the i-th group subset point cloud, the approximate average depth value of its corresponding depth interval is taken as the interval midpoint:

d_i = (T_{i-1} + T_i) / 2;

1.3) Calculate the depth mean m_g of the whole scanning point cloud:

m_g = (Σ_{i=1}^{I} p_i d_i) / (Σ_{i=1}^{I} p_i)

and select the segmentation threshold with the smallest difference from the depth mean m_g as the initial point cloud segmentation threshold S_0;
(2) Iterating the point cloud segmentation threshold and selecting a depth interval for further refined segmentation

2.1) Initialize the iteration number k = 0;

2.2) Classify the I groups of subset point clouds g_1, g_2, g_3, ..., g_I: when the approximate average depth value d_i of the i-th group subset point cloud g_i is less than or equal to the point cloud segmentation threshold S_k, classify it into the foreground point cloud group G_L; when d_i is greater than S_k, classify it into the background point cloud group G_H:

G_L = {g_1, g_2, ..., g_C},  G_H = {g_{C+1}, g_{C+2}, ..., g_I}

where C is the number of subset point cloud groups in the foreground point cloud group G_L;

2.3) Respectively calculate the approximate average depth values d̄_L and d̄_H of the foreground point cloud group G_L and the background point cloud group G_H:

d̄_L = (Σ_{i=1}^{C} p_i d_i) / (Σ_{i=1}^{C} p_i),  d̄_H = (Σ_{i=C+1}^{I} p_i d_i) / (Σ_{i=C+1}^{I} p_i)

and update to obtain the new segmentation threshold:

d_ave = (d̄_L + d̄_H) / 2

selecting the depth mean d_ave as the point cloud segmentation threshold S_{k+1} obtained at the (k+1)-th iteration;

2.4) Judge whether the iteration stop condition is satisfied:

abs(S_{k+1} − S_k) < ε

where abs is the absolute-value operation and ε is a preset convergence tolerance; if the condition is not satisfied, return to step 2.2); if it is satisfied, stop the iteration;

select the depth interval in which the segmentation threshold S_{k+1} lies when the iteration stops as the depth interval of the point cloud depth distribution histogram to be further refined and segmented, denoted [T_{n-1}, T_n], where n is the sequence number of the depth interval;
(3) For each group segmentation threshold T_nj of the depth interval [T_{n-1}, T_n], computing the total numbers L(j) and H(j) of points in the foreground and background depth intervals and the corresponding approximate depth means m_L(j) and m_H(j)

3.1) Divide the n-th subset point cloud corresponding to the depth interval [T_{n-1}, T_n] into J groups with equal group distance (T_n − T_{n-1})/J; the right boundary value of each group of point clouds is taken as the segmentation threshold of that group, so the segmentation thresholds of the J groups are denoted in increasing order as T_n1, T_n2, ..., T_nJ, and the corresponding J groups of point clouds are denoted g_n1, g_n2, g_n3, ..., g_nJ; p_nj denotes the total number of points of the j-th group, j ∈ [1, J];

3.2) For the j-th group of point clouds g_nj, calculate the total number L(j) of three-dimensional points in the foreground point cloud depth interval [T_min, T_nj] and the total number H(j) of three-dimensional points in the background point cloud depth interval [T_n(j+1), T_max] corresponding to the segmentation threshold T_nj:

L(j) = Σ_{i=1}^{n-1} p_i + Σ_{q=1}^{j} p_nq,  H(j) = Σ_{q=j+1}^{J} p_nq + Σ_{i=n+1}^{I} p_i;

3.3) Compute the approximate depth means m_L(j) and m_H(j):

m_L(j) = (Σ_{i=1}^{n-1} p_i d_i + Σ_{q=1}^{j} p_nq d_nq) / L(j),  m_H(j) = (Σ_{q=j+1}^{J} p_nq d_nq + Σ_{i=n+1}^{I} p_i d_i) / H(j)

where d_nq = (T_n(q-1) + T_nq) / 2, with T_n0 = T_{n-1}, is the approximate average depth value of the q-th refined group;
(4) Obtaining the final segmentation threshold in the depth interval [T_{n-1}, T_n] based on the maximum between-class variance method and segmenting the whole scanning point cloud

4.1) Calculate the between-class variance b²(j) of the foreground and background point clouds segmented by each segmentation threshold T_n1, T_n2, ..., T_nJ:

b²(j) = [L(j) H(j) / (L(j) + H(j))²] · (m_L(j) − m_H(j))²;

4.2) Traverse the segmentation thresholds T_nj corresponding to all J groups of point clouds g_nj to obtain the J between-class variances b²(j), j = 1, 2, ..., J, and select the segmentation threshold T_nj′ with the largest between-class variance, j′ being the sequence number of that threshold; the final segmentation threshold T_sec is then the midpoint of the depth interval of group g_nj′:

T_sec = T_nj′ − (T_n − T_{n-1}) / (2J);

4.3) Construct the plane Z_sec perpendicular to the shooting direction at the depth given by the final segmentation threshold T_sec and segment the whole scanning point cloud with it: a three-dimensional point located above the plane Z_sec (depth less than T_sec) is assigned to the foreground point cloud PC_f, and a three-dimensional point located below the plane Z_sec (depth greater than T_sec) is assigned to the background point cloud PC_b;
(5) Extracting independent point clouds according to the three views of the foreground point cloud and the background point cloud obtained by segmentation
5.1) Project the foreground point cloud PC_f to obtain its three views: front view M_f, side view S_f, and top view T_f; project the background point cloud PC_b to obtain its three views: front view M_b, side view S_b, and top view T_b;

5.2) Judge whether the whole scanning point cloud contains independent scanning point clouds according to whether the patterns in the three views adjoin: if the patterns in any one of the view pairs, front views M_f and M_b, side views S_f and S_b, or top views T_f and T_b, do not adjoin, the whole scanning point cloud consists of independent scanning point clouds; otherwise no independent scanning point clouds exist and no segmentation is performed;

5.3) Map the two-dimensional coordinate ranges of the two views in which the patterns do not adjoin back to the three-dimensional coordinate ranges of the whole scanning point cloud, outline the point clouds within the value-range space of the three dimensions of the whole scanning point cloud, obtain the independent scanning point clouds of the two workpieces, and complete the workpiece scanning point cloud segmentation.
CN202210615877.6A 2022-06-01 2022-06-01 Large-area weak texture workpiece scanning point cloud segmentation method based on depth region projection Active CN115063436B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210615877.6A CN115063436B (en) 2022-06-01 2022-06-01 Large-area weak texture workpiece scanning point cloud segmentation method based on depth region projection


Publications (2)

Publication Number Publication Date
CN115063436A CN115063436A (en) 2022-09-16
CN115063436B true CN115063436B (en) 2024-05-10

Family

ID=83197637

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210615877.6A Active CN115063436B (en) 2022-06-01 2022-06-01 Large-area weak texture workpiece scanning point cloud segmentation method based on depth region projection

Country Status (1)

Country Link
CN (1) CN115063436B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108537814B (en) * 2018-03-14 2019-09-03 浙江大学 A kind of three-dimensional sonar point cloud chart based on ViBe is as dividing method

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110110802A (en) * 2019-05-14 2019-08-09 南京林业大学 Airborne laser point cloud classification method based on high-order condition random field
CN110400322A (en) * 2019-07-30 2019-11-01 江南大学 Fruit point cloud segmentation method based on color and three-dimensional geometric information
WO2022108745A1 (en) * 2020-11-23 2022-05-27 Argo AI, LLC Systems and methods for object detection with lidar decorrelation
CN113177593A (en) * 2021-04-29 2021-07-27 上海海事大学 Fusion method of radar point cloud and image data in water traffic environment
CN114299150A (en) * 2021-12-31 2022-04-08 河北工业大学 Depth 6D pose estimation network model and workpiece pose estimation method
CN114549307A (en) * 2022-01-28 2022-05-27 电子科技大学 High-precision point cloud color reconstruction method based on low-resolution image

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Pose estimation of metal workpieces based on RPM-Net for robot grasping from point cloud; Lin Li et al.; Industrial Robot; 2022-05-31; full text *
Research on workpiece point cloud generation and segmentation methods based on wrapped-phase matching; 王胤泽; China Master's Theses Full-text Database, Engineering Science and Technology I; 2023-01-15; full text *
Automatic extraction method of power lines from airborne LiDAR point clouds over complex terrain; 沈小军, 秦川, 杜勇, 于忻乐; Journal of Tongji University (Natural Science); 2018-08-14 (07); full text *

Also Published As

Publication number Publication date
CN115063436A (en) 2022-09-16

Similar Documents

Publication Publication Date Title
CN109903327B (en) Target size measurement method of sparse point cloud
CN108932475B (en) Three-dimensional target identification system and method based on laser radar and monocular vision
CN104331699B (en) A kind of method that three-dimensional point cloud planarization fast search compares
CN113313815B (en) Real-time three-dimensional reconstruction method for object grabbed by mechanical arm
CN107392929B (en) Intelligent target detection and size measurement method based on human eye vision model
CN108597009A (en) A method of objective detection is carried out based on direction angle information
CN110838115A (en) Ancient cultural relic three-dimensional model change detection method by contour line extraction and four-dimensional surface fitting
CN103729872A (en) Point cloud enhancement method based on subsection resampling and surface triangularization
CN103714574A (en) GPU acceleration-based sea scene modeling and real-time interactive rendering method
CN112085675A (en) Depth image denoising method, foreground segmentation method and human motion monitoring method
CN115293287A (en) Vehicle-mounted radar-based target clustering method, memory and electronic device
CN113420658A (en) SAR image sea-land segmentation method based on FCM clustering and OTSU segmentation
CN112734844A (en) Monocular 6D pose estimation method based on octahedron
CN106709432B (en) Human head detection counting method based on binocular stereo vision
CN115063436B (en) Large-area weak texture workpiece scanning point cloud segmentation method based on depth region projection
Yuda et al. Target accurate positioning based on the point cloud created by stereo vision
CN113628170A (en) Laser line extraction method and system based on deep learning
Olson Adaptive-scale filtering and feature detection using range data
Liu et al. Deep learning of directional truncated signed distance function for robust 3D object recognition
Shen et al. A 3D modeling method of indoor objects using Kinect sensor
CN113177969B (en) Point cloud single-target tracking method of candidate seeds based on motion direction change
Yang et al. 3-D geometry enhanced superpixels for RGB-D data
CN111325229B (en) Clustering method for object space closure based on single line data analysis of laser radar
CN110619650A (en) Edge point extraction method and device based on line structure laser point cloud
CN113340201A (en) RGBD camera-based three-dimensional measurement method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant