CN111768421A - Edge-aware semi-automatic point cloud target segmentation method - Google Patents
- Publication number: CN111768421A (application number CN202010637867.3A)
- Authority
- CN
- China
- Prior art keywords: boundary, segmentation, voxel, point cloud, energy
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/11 — Region-based segmentation (G—Physics; G06—Computing; G06T—Image data processing or generation; G06T7/00—Image analysis; G06T7/10—Segmentation; edge detection)
- G06T7/13 — Edge detection
- G06T7/194 — Segmentation; edge detection involving foreground-background segmentation
Abstract
The invention relates to a semi-automatic point cloud target segmentation method based on edge perception. The method first applies a boundary-aware supervoxel segmentation, taking supervoxel units as the operation objects so as to reduce computation while better retaining boundary information. To exploit point cloud boundary information, the method realizes the discovery of point cloud boundary points and the filtering of boundary points that are meaningless for segmentation, and provides a boundary-aware MRF that considers the constraint of the boundary information on the segmentation. The proposed framework performs excellently when accurately segmenting targets in point cloud scenes.
Description
Technical Field
The invention belongs to the field of computer vision, relates to the field of computer three-dimensional vision, and particularly relates to an edge-aware semi-automatic point cloud target segmentation method.
Background
With the rapid development and wide use of three-dimensional scanning equipment such as depth cameras and laser scanners, large amounts of high-precision three-dimensional point cloud data covering a wide range of scenes can be obtained quickly and conveniently. Three-dimensional point cloud data accurately provides the spatial and geometric information of a target, preserves the target's true size and orientation, and is unaffected by illumination during collection. It has therefore gained wide attention in the field of computer vision. However, point cloud data acquired by current instruments is limited by the technical difficulties of three-dimensional scanning equipment: it usually contains a certain number of outliers and noise points, and suffers from non-uniform sampling density and a disordered, sparse structure. In addition, the statistical distribution of point cloud data from natural scenes is neither fixed nor clear, and the data may exhibit surface shapes and distributions of arbitrary form. In summary, three-dimensional point cloud segmentation has both wide application demand and considerable technical difficulty, which makes it simultaneously a hot spot and a difficult point of research, increasingly valued by technology companies and research institutions.
Scene understanding based on three-dimensional point clouds has long been a hot spot of point cloud application research, and scene understanding based on deep learning and machine learning requires large amounts of manually labeled supervision data. Segmenting and labeling such supervision data entirely by hand costs a great deal of time and labor, so an efficient, high-quality semi-automatic point cloud segmentation algorithm is an effective way to improve labeling efficiency. Most existing semi-automatic point cloud segmentation algorithms do not utilize the boundary information of the targets contained in the point cloud scene; that boundary information is therefore wasted during segmentation, and under certain conditions the boundaries of the segmentation results can be lost.
Therefore, the invention studies how to extract and utilize point cloud boundary information in semi-automatic point cloud target segmentation so as to achieve efficient, high-quality segmentation, and proposes BASAS (Boundary-Aware Semi-Automatic Segmentation). The algorithm first applies a boundary-aware supervoxel segmentation, taking supervoxel units as the operation objects to reduce computation while better retaining boundary information. To exploit the boundary information, the algorithm realizes the discovery of point cloud boundary points and the filtering of boundary points meaningless for segmentation, and proposes a boundary-aware MRF that considers the constraint of the boundary information on the segmentation.
Experiments show that the framework provided by the invention performs excellently when accurately segmenting targets in point cloud scenes. The boundary-aware semi-automatic three-dimensional point cloud target segmentation algorithm based on Markov random fields is a feasible and effective method for efficiently and accurately segmenting three-dimensional target objects in point clouds.
Disclosure of Invention
The invention aims to provide a semi-automatic point cloud target segmentation method based on edge perception.
In order to achieve this purpose, the technical scheme of the invention is as follows: the method utilizes point cloud boundary information to realize the discovery of point cloud boundary points and the filtering of boundary points that are meaningless for segmentation, and provides a boundary-aware MRF that considers the constraint of the boundary information on the segmentation, so as to complete boundary-aware semi-automatic point cloud target segmentation.
In an embodiment of the invention, the filtering of the meaningless boundary points gradually filters them out by intersecting the boundary points obtained from supervoxel segmentation at different scales with the boundary points obtained from the offset of the local barycenter of the point cloud.
In one embodiment of the invention, the energy function of the boundary-aware MRF is defined in the following form:

E(L) = Σ_{s∈S} D_s(L_s) + α·Σ_{(p,q)∈N_S} V_{p,q}(L_p, L_q) + β·Σ_{(p,q)∈N_S} B_{p,q}(L_p, L_q)

where D_s(L_s) is the data term energy, V_{p,q}(L_p, L_q) is the smoothing term energy, B_{p,q}(L_p, L_q) is the boundary term energy, s and S represent a supervoxel block and the collection of all supervoxel blocks respectively, N_S is the set of adjacent supervoxel block pairs, p and q represent two adjacent supervoxel blocks, L_s, L_p and L_q are the labels of the supervoxel blocks s, p and q respectively, and α and β are parameters that control the weights of the smoothing energy term and the boundary energy term.
In an embodiment of the present invention, the boundary term energy is defined as the following formula:

B_{p,q}(L_p, L_q) = γ_b · exp(−(R_p^b + R_q^b)) · [L_p ≠ L_q]

where p and q represent two adjacent supervoxel blocks, L_p and L_q are the labels of the supervoxel blocks p and q respectively, R_p^b and R_q^b represent the ratio of boundary points to the total number of points in the supervoxels p and q respectively, γ_b is a scaling factor that makes the boundary term energy comparable, and [·] is the indicator that the labels differ; a higher boundary-point ratio thus lowers the cost of a cut between p and q.
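As a minimal numeric sketch — assuming the exponential form of the boundary term given above, with made-up boundary-point ratios — the boundary term can be evaluated as follows. A cut between boundary-rich supervoxels comes out cheaper than a cut through a smooth surface:

```python
import numpy as np

def boundary_energy(L_p, L_q, r_p, r_q, gamma_b=1.0):
    """Boundary term B_pq: zero when the labels agree, and cheaper to cut
    (smaller) when the two supervoxels contain many boundary points."""
    if L_p == L_q:
        return 0.0
    return gamma_b * np.exp(-(r_p + r_q))

# supervoxel pair straddling an object edge (high boundary-point ratios)
e_edge = boundary_energy(0, 1, r_p=0.8, r_q=0.7)
# supervoxel pair inside a smooth surface (low ratios)
e_flat = boundary_energy(0, 1, r_p=0.05, r_q=0.1)
```

With these numbers e_edge ≈ 0.22 while e_flat ≈ 0.86, so a min-cut solver prefers to place the foreground/background boundary where boundary points are dense.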
In an embodiment of the present invention, the method specifically includes the following steps:
step 1: bounding box BB for wrapping target object dragged by usera;
Step 2: in bounding box BBaSurrounding production of larger-sized and wrapped BBaSecond bounding box BB ofb;
And step 3: extraction of BB produced in step 2bThe method comprises the following steps of (1) obtaining valuable target boundary points in a point cloud, wherein the step sequentially comprises the following substeps:
step 3.1: extracting a boundary point set containing a boundary which is meaningless for segmentation by using an edge extraction algorithm, wherein the step sequentially comprises the following substeps:
Step 3.1.1: for each point p_i in the point cloud, find its k nearest neighbors to form the neighborhood N_i, and calculate the barycenter c_i of the k points by the formula c_i = (1/k)·Σ_{p_j∈N_i} p_j;
Step 3.1.2: calculate the distance R_i(N_i) from p_i to the nearest of its k nearest neighbors;
Step 3.1.3: use the criterion ‖c_i − p_i‖ > λ·R_i(N_i) to judge whether p_i is a boundary point, where λ is a threshold controlling the boundary extraction;
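A minimal sketch of the barycenter-offset test of steps 3.1.1–3.1.3, using only NumPy and SciPy; the function name, the flat-grid demo, and the parameter values k = 8 and λ = 0.5 are illustrative choices, not values fixed by the invention:

```python
import numpy as np
from scipy.spatial import cKDTree

def boundary_points(points, k=10, lam=1.5):
    """Mark points whose k-nearest-neighbor barycenter c_i is offset from
    the point p_i by more than lam times the nearest-neighbor distance."""
    tree = cKDTree(points)
    # query k+1 neighbors: the first hit is the point itself (distance 0)
    dists, idx = tree.query(points, k=k + 1)
    neigh = points[idx[:, 1:]]                 # neighborhoods N_i, shape (n, k, 3)
    c = neigh.mean(axis=1)                     # barycenters c_i = (1/k) * sum(p_j)
    r = dists[:, 1]                            # nearest-neighbor distance R_i(N_i)
    return np.linalg.norm(c - points, axis=1) > lam * r

# demo: a flat 10x10 grid; only the rim points have asymmetric neighborhoods,
# so only their barycenters are offset from the points themselves
xs, ys = np.meshgrid(np.arange(10.0), np.arange(10.0))
pts = np.stack([xs.ravel(), ys.ravel(), np.zeros(100)], axis=1)
mask = boundary_points(pts, k=8, lam=0.5)
```

On this grid the 36 rim points are flagged and the 64 interior points are not; on a real scan, λ trades off boundary sensitivity against robustness to noise.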
Step 3.2: gradually remove, from the boundary points extracted in step 3.1, those that are meaningless for segmentation by intersecting them with supervoxel boundaries computed at different segmentation scales, while retaining the boundary points that guide edge-aware segmentation; this step sequentially comprises the following substeps:
Step 3.2.1: execute a supervoxel segmentation algorithm at a preset scale to over-segment the point cloud into supervoxel regions;
Step 3.2.2: extract the boundary of each supervoxel region in the point cloud to form a supervoxel boundary point set;
Step 3.2.3: take the intersection of the boundary points extracted in step 3.2.2 and those extracted in step 3.1 to obtain a boundary point set with meaningless boundary points removed;
Step 3.2.4: change the segmentation scale of the supervoxel segmentation algorithm, execute steps 3.2.1 and 3.2.2 again, and take the intersection of the new supervoxel boundary point set with the boundary point set obtained in step 3.2.3 to obtain the target boundary point set valuable for segmentation;
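The two intersection passes of steps 3.2.3–3.2.4 reduce to set operations once boundary points are represented by their indices. The helper below is a sketch in which the supervoxel boundary index sets are assumed to come from any supervoxel over-segmentation run at two different scales:

```python
def filter_boundary_points(edge_idx, sv_boundary_scale1, sv_boundary_scale2):
    """Keep only edge points (step 3.1) that also lie on supervoxel
    boundaries at both segmentation scales (steps 3.2.3 and 3.2.4)."""
    kept = set(edge_idx) & set(sv_boundary_scale1)    # step 3.2.3
    kept &= set(sv_boundary_scale2)                   # step 3.2.4
    return sorted(kept)

# toy indices: only points 3 and 4 survive both intersections
kept = filter_boundary_points(
    edge_idx=[1, 2, 3, 4, 8],
    sv_boundary_scale1=[2, 3, 4, 9],
    sv_boundary_scale2=[3, 4, 7, 8],
)
```

Each intersection discards edge points that no supervoxel boundary confirms, which is exactly how the "meaningless" boundaries are progressively filtered out.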
Step 4: over-segment the point cloud within BB_b generated in step 2 using a supervoxel segmentation algorithm to generate supervoxel blocks;
Step 5: taking the supervoxel blocks divided in step 4 as basic units, calculate the feature value F_i of each supervoxel block;
Step 6: use an iterative energy minimization method to minimize the energy function of the edge-aware MRF and solve the target segmentation problem; this step sequentially comprises the following substeps:
Step 6.1: initialize the iterative energy minimization algorithm to generate a first segmentation result; this step sequentially comprises the following substeps:
Step 6.1.1: generate two Gaussian mixture models, one for the foreground and one for the background; train the foreground model with the supervoxel blocks contained in BB_a generated in step 1, and train the background model with the supervoxel blocks lying between bounding boxes BB_b and BB_a;
Step 6.1.2: substitute all supervoxel blocks in BB_b into the two Gaussian mixture models to obtain, for each supervoxel block, the probabilities P_s of belonging to the foreground and the background, and use the formula D_s(L_s) = −log P_s(L_s) to calculate the data term energy, where s denotes a supervoxel, L_s is its foreground or background label, and P_s(L_s) is the probability of assigning label L_s to supervoxel block s;
Step 6.1.3: use the formula V_{p,q}(L_p, L_q) = γ_a·exp(−‖F_p − F_q‖²)·[L_p ≠ L_q] to compute the smoothing term energy, where p and q denote adjacent supervoxel blocks, γ_a is a scale factor making the smoothing term energy comparable, L_p and L_q are the labels of supervoxels p and q, F_p and F_q are their feature values, ‖F_p − F_q‖ is the Euclidean distance between the features, and [·] is the indicator that the labels differ;
Step 6.1.4: use the formula B_{p,q}(L_p, L_q) = γ_b·exp(−(R_p^b + R_q^b))·[L_p ≠ L_q] to determine the boundary term energy, where γ_b is a scale factor making the boundary term energy comparable, and R_p^b and R_q^b are the proportions of boundary points among all points in supervoxels p and q respectively;
Step 6.1.5: generate a segmentation using the max-flow/min-cut algorithm, changing the foreground and background labels of the supervoxel blocks to minimize the edge-aware MRF energy function, and pass the result to the subsequent step;
Step 6.2: use iterative energy minimization to achieve a globally energy-minimizing segmentation; this step sequentially comprises the following substeps:
Step 6.2.1: initialize the foreground and background Gaussian mixture models and train them with the foreground points and background points of the previous segmentation result respectively;
Step 6.2.2: calculate the data term, smoothing term and boundary term energies by the same methods as steps 6.1.2, 6.1.3 and 6.1.4;
Step 6.2.3: generate a segmentation result using the max-flow/min-cut algorithm, changing the foreground and background labels of the supervoxel blocks to minimize the edge-aware MRF energy function; judge whether this segmentation result is the same as the previous one; if different, return to step 6.2.1; if the same, output the final segmentation result.
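The label-update loop of steps 6.1.5–6.2.3 can be sketched as below. To keep the sketch dependency-free, the max-flow/min-cut solver is replaced by simple iterated conditional modes (ICM) over the supervoxel adjacency graph — a stand-in, not the solver named by the invention — and the convergence test "same result as the last segmentation" becomes "no label changed in a full pass":

```python
import numpy as np

def icm_segment(d_fg, d_bg, edges, weights, alpha=1.0, n_iter=20):
    """Minimize sum_s D_s(L_s) + alpha * sum_(p,q) w_pq * [L_p != L_q]
    by coordinate descent; labels: 1 = foreground, 0 = background."""
    labels = (d_fg < d_bg).astype(int)          # initialize from the data term
    nbrs = [[] for _ in range(len(d_fg))]
    for (p, q), w in zip(edges, weights):
        nbrs[p].append((q, w))
        nbrs[q].append((p, w))
    for _ in range(n_iter):
        changed = False
        for s in range(len(labels)):
            # energy of each candidate label given the current neighbor labels
            e_fg = d_fg[s] + alpha * sum(w for q, w in nbrs[s] if labels[q] != 1)
            e_bg = d_bg[s] + alpha * sum(w for q, w in nbrs[s] if labels[q] != 0)
            new = int(e_fg < e_bg)
            changed |= (new != labels[s])
            labels[s] = new
        if not changed:       # same labeling as the previous pass: converged
            break
    return labels

# star graph: supervoxel 2 has a weakly background data term, but its three
# strongly foreground neighbors overrule it through the pairwise term
d_fg = np.array([0.0, 0.0, 1.1, 0.0])
d_bg = np.array([5.0, 5.0, 0.9, 5.0])
edges = [(0, 2), (1, 2), (3, 2)]
labels = icm_segment(d_fg, d_bg, edges, weights=[1.0] * 3, alpha=1.0)
```

The middle supervoxel flips to foreground despite its data term, which is exactly the regularizing behavior the smoothing and boundary terms of the MRF are meant to provide; a true max-flow/min-cut solver would find the global minimum of the same energy.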
Compared with the prior art, the invention has the following beneficial effects:
(1) high efficiency: the invention applies a boundary-aware supervoxel segmentation and takes supervoxel units as the operation objects, reducing the computation of the algorithm while better retaining boundary information;
(2) boundary protection: the invention exploits point cloud boundary information, realizes the discovery of point cloud boundary points and the filtering of boundary points meaningless for segmentation, and provides a boundary-aware MRF that considers the constraint of the boundary information on the segmentation, completing boundary-aware semi-automatic segmentation;
(3) high precision: tests on two different data sets show that, compared with several existing methods, the invention achieves better precision and recall on both data sets;
(4) simple interaction: the point cloud target is segmented with a single interaction; the user only needs to drag an approximate region wrapping the target object.
Drawings
FIG. 1 is a flow chart of a Boundary-Aware Semi-Automatic three-dimensional point cloud object Segmentation algorithm (BASAS).
Fig. 2 is a flow chart of an iterative energy minimization algorithm.
Fig. 3 is a BASAS algorithm framework.
Fig. 4 is a diagram illustrating the removal of meaningless boundaries and the extraction of boundaries meaningful for segmentation.
Detailed Description
The technical scheme of the invention is specifically explained below with reference to the accompanying drawings.
As shown in fig. 1 and 3, the invention provides an edge-aware semi-automatic point cloud target segmentation method, which comprises the following specific steps:
step 1: bounding box BB for wrapping target object dragged by useraAs shown in FIG. 3 (a);
step 2: in bounding box BBaSurrounding production of larger-sized and wrapped BBaSecond bounding box BB ofbAs shown in FIG. 3 (b);
and step 3: extraction of BB produced in step 2bThe valuable target boundary points in the point cloud;
and 4, step 4: for BB generated in step 2bThe point cloud in (1) is over-segmented by using a hyper-voxel segmentation algorithm to generate hyper-voxel blocks, and the blocks are used as operation units of the basis of the next step. (ii) a
And 5: calculating the characteristic value F of each super-voxel block by taking the super-voxel blocks divided in the step 4 as basic unitsi;
Step 6: an iterative energy minimization method is used to minimize the energy function of the edge-aware MRF to solve the object segmentation problem.
The iterative energy minimization algorithm of the present invention is further described in detail below with reference to fig. 2 and fig. 3(c) -3(f), and embodiments thereof.
As shown in fig. 2, the iterative energy minimization algorithm described in step 6 of the present invention has the following steps:
Step S1: initialize two Gaussian mixture models for the foreground and the background respectively; train the foreground model with the supervoxel blocks assigned foreground labels in the point cloud, and train the background model with the supervoxel blocks assigned background labels;
Step S2: substitute all supervoxel blocks in the point cloud into the two Gaussian mixture models to obtain, for each supervoxel block, the probabilities P_s of belonging to the foreground and the background, and use the formula D_s(L_s) = −log P_s(L_s) to calculate the data term energy, where s denotes a supervoxel, L_s is its foreground or background label, and P_s(L_s) is the probability of assigning label L_s to supervoxel block s, as shown in FIG. 3(c);
Step S3: use the formula V_{p,q}(L_p, L_q) = γ_a·exp(−‖F_p − F_q‖²)·[L_p ≠ L_q] to compute the smoothing term energy, where p and q denote adjacent supervoxel blocks, γ_a is a scale factor making the smoothing term energy comparable, L_p and L_q are the labels of supervoxels p and q, F_p and F_q are their feature values, and ‖F_p − F_q‖ is the Euclidean distance between the features;
Step S4: use the formula B_{p,q}(L_p, L_q) = γ_b·exp(−(R_p^b + R_q^b))·[L_p ≠ L_q] to determine the boundary term energy, where γ_b is a scale factor making the boundary term energy comparable, L_p and L_q are the labels of supervoxels p and q, and R_p^b and R_q^b are the proportions of boundary points among all points in supervoxels p and q respectively;
Step S5: generate a segmentation result using the max-flow/min-cut algorithm, changing the foreground and background labels of the supervoxel blocks to minimize the edge-aware MRF energy function; judge whether this segmentation result is the same as the previous one; if different, return to step S1; if the same, output the final segmentation result.
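The Gaussian-mixture data term of steps S1 and S2 can be sketched with scikit-learn's GaussianMixture; the three-dimensional feature vectors, the choice of two mixture components, and the synthetic foreground/background clusters below are illustrative assumptions, not values fixed by the invention:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
fg_feats = rng.normal(0.0, 0.3, size=(50, 3))   # features of foreground-labeled supervoxels
bg_feats = rng.normal(3.0, 0.3, size=(50, 3))   # features of background-labeled supervoxels

gmm_fg = GaussianMixture(n_components=2, random_state=0).fit(fg_feats)
gmm_bg = GaussianMixture(n_components=2, random_state=0).fit(bg_feats)

def data_term(feats):
    """D_s(L_s) = -log P_s(L_s): score_samples returns the log-density,
    so negating it gives the data energy for each candidate label."""
    return -gmm_fg.score_samples(feats), -gmm_bg.score_samples(feats)

d_fg, d_bg = data_term(np.vstack([fg_feats, bg_feats]))
```

Supervoxels drawn from the foreground cluster get lower foreground energy (d_fg < d_bg), so the data term alone already pulls them toward the foreground label before the pairwise terms are applied.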
Fig. 4 illustrates the removal of meaningless boundaries and the extraction of boundaries meaningful for segmentation. Fig. 4(a) is the boundary map, still containing meaningless boundaries, extracted in the previous step; fig. 4(b) and fig. 4(d) are supervoxel boundary maps obtained by supervoxel segmentation at different scales. First, a certain number of meaningless boundaries are removed by intersecting the boundaries of fig. 4(a) and fig. 4(b), yielding fig. 4(c). Then fig. 4(c) is intersected with the supervoxel boundary map of another segmentation scale, fig. 4(d), to obtain fig. 4(e), in which most of the retained boundaries are meaningful.
The above are preferred embodiments of the present invention; all equivalent changes made according to the technical scheme of the present invention fall within the protection scope of the present invention.
Claims (7)
1. An edge-aware semi-automatic point cloud target segmentation method, characterized in that point cloud boundary information is utilized to realize the discovery of point cloud boundary points and the filtering of boundary points that are meaningless for segmentation, and a boundary-aware MRF is provided that considers the constraint of the boundary information on the segmentation, so as to complete boundary-aware semi-automatic point cloud target segmentation.
2. The edge-aware semi-automatic point cloud target segmentation method according to claim 1, characterized in that the filtering of the boundary points meaningless for segmentation gradually filters them out by intersecting the boundary points obtained from supervoxel segmentation at different scales with the boundary points obtained from the offset of the local barycenter of the point cloud.
3. The edge-aware semi-automatic point cloud target segmentation method according to claim 1, characterized in that the energy function of the boundary-aware MRF is defined in the following form:

E(L) = Σ_{s∈S} D_s(L_s) + α·Σ_{(p,q)∈N_S} V_{p,q}(L_p, L_q) + β·Σ_{(p,q)∈N_S} B_{p,q}(L_p, L_q)

where D_s(L_s) is the data term energy, V_{p,q}(L_p, L_q) is the smoothing term energy, B_{p,q}(L_p, L_q) is the boundary term energy, s and S represent a supervoxel block and the set of all supervoxel blocks respectively, N_S is the set of adjacent supervoxel block pairs, p and q represent two adjacent supervoxel blocks, L_s, L_p and L_q are the labels of the supervoxel blocks s, p and q respectively, and α and β are parameters that control the smoothing energy term weight and the boundary energy term weight respectively.
4. The edge-aware semi-automatic point cloud target segmentation method according to claim 3, characterized in that the boundary term energy is defined as the following formula:

B_{p,q}(L_p, L_q) = γ_b · exp(−(R_p^b + R_q^b)) · [L_p ≠ L_q]

where p and q represent two adjacent supervoxel blocks, L_p and L_q are their labels, R_p^b and R_q^b are the ratios of boundary points to the total number of points in supervoxels p and q respectively, and γ_b is a scaling factor making the boundary term energy comparable.
5. The edge-aware semi-automatic point cloud target segmentation method according to claim 1, characterized in that it is implemented by the following steps:
Step 1: the user drags a bounding box BB_a that wraps the target object;
Step 2: around bounding box BB_a, generate a larger second bounding box BB_b that wraps BB_a;
Step 3: extract the valuable target boundary points in the point cloud within BB_b produced in step 2;
Step 4: over-segment the point cloud within BB_b generated in step 2 using a supervoxel segmentation algorithm to generate supervoxel blocks;
Step 5: taking the supervoxel blocks divided in step 4 as basic units, calculate the feature value F_i of each supervoxel block;
Step 6: use an iterative energy minimization method to minimize the energy function of the edge-aware MRF and solve the target segmentation problem.
6. The edge-aware semi-automatic point cloud target segmentation method according to claim 5, characterized in that step 3 sequentially comprises the following substeps:
Step 3.1: use an edge extraction algorithm to extract a boundary point set that still contains boundaries meaningless for segmentation; this step sequentially comprises the following substeps:
Step 3.1.1: for each point p_i in the point cloud, find its k nearest neighbors to form the neighborhood N_i, and calculate the barycenter c_i of the k points by the formula c_i = (1/k)·Σ_{p_j∈N_i} p_j;
Step 3.1.2: calculate the distance R_i(N_i) from p_i to the nearest of its k nearest neighbors;
Step 3.1.3: use the criterion ‖c_i − p_i‖ > λ·R_i(N_i) to judge whether p_i is a boundary point, where λ is a threshold controlling the boundary extraction;
Step 3.2: gradually remove, from the boundary points extracted in step 3.1, those that are meaningless for segmentation by intersecting them with supervoxel boundaries computed at different segmentation scales, while retaining the boundary points that guide edge-aware segmentation; this step sequentially comprises the following substeps:
Step 3.2.1: execute a supervoxel segmentation algorithm at a preset scale to over-segment the point cloud into supervoxel regions;
Step 3.2.2: extract the boundary of each supervoxel region in the point cloud to form a supervoxel boundary point set;
Step 3.2.3: take the intersection of the boundary points extracted in step 3.2.2 and those extracted in step 3.1 to obtain a boundary point set with meaningless boundary points removed;
Step 3.2.4: change the segmentation scale of the supervoxel segmentation algorithm, execute steps 3.2.1 and 3.2.2 again, and take the intersection of the new supervoxel boundary point set with the boundary point set obtained in step 3.2.3 to obtain the target boundary point set valuable for segmentation.
7. The edge-aware semi-automatic point cloud target segmentation method according to claim 5, characterized in that step 6 sequentially comprises the following substeps:
Step 6.1: initialize the iterative energy minimization algorithm to generate a first segmentation result; this step sequentially comprises the following substeps:
Step 6.1.1: generate two Gaussian mixture models, one for the foreground and one for the background; train the foreground model with the supervoxel blocks contained in BB_a generated in step 1, and train the background model with the supervoxel blocks lying between bounding boxes BB_b and BB_a;
Step 6.1.2: substitute all supervoxel blocks in BB_b into the two Gaussian mixture models to obtain, for each supervoxel block, the probabilities P_s of belonging to the foreground and the background, and use the formula D_s(L_s) = −log P_s(L_s) to calculate the data term energy, where s denotes a supervoxel, L_s is its foreground or background label, and P_s(L_s) is the probability of assigning label L_s to supervoxel block s;
Step 6.1.3: use the formula V_{p,q}(L_p, L_q) = γ_a·exp(−‖F_p − F_q‖²)·[L_p ≠ L_q] to compute the smoothing term energy, where p and q denote adjacent supervoxel blocks, γ_a is a scale factor making the smoothing term energy comparable, L_p and L_q are the labels of supervoxels p and q, F_p and F_q are their feature values, and ‖F_p − F_q‖ is the Euclidean distance between the features;
Step 6.1.4: use the formula B_{p,q}(L_p, L_q) = γ_b·exp(−(R_p^b + R_q^b))·[L_p ≠ L_q] to determine the boundary term energy, where γ_b is a scale factor making the boundary term energy comparable, and R_p^b and R_q^b are the proportions of boundary points among all points in supervoxels p and q respectively;
Step 6.1.5: generate a segmentation using the max-flow/min-cut algorithm, changing the foreground and background labels of the supervoxel blocks to minimize the edge-aware MRF energy function, and pass the result to the subsequent step;
Step 6.2: use iterative energy minimization to achieve a globally energy-minimizing segmentation; this step sequentially comprises the following substeps:
Step 6.2.1: initialize the foreground and background Gaussian mixture models and train them with the foreground points and background points of the previous segmentation result respectively;
Step 6.2.2: calculate the data term, smoothing term and boundary term energies by the same methods as steps 6.1.2, 6.1.3 and 6.1.4;
Step 6.2.3: generate a segmentation result using the max-flow/min-cut algorithm, changing the foreground and background labels of the supervoxel blocks to minimize the edge-aware MRF energy function; judge whether this segmentation result is the same as the previous one; if different, return to step 6.2.1; if the same, output the final segmentation result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010637867.3A CN111768421A (en) | 2020-07-03 | 2020-07-03 | Edge-aware semi-automatic point cloud target segmentation method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111768421A true CN111768421A (en) | 2020-10-13 |
Family
ID=72723751
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010637867.3A Pending CN111768421A (en) | 2020-07-03 | 2020-07-03 | Edge-aware semi-automatic point cloud target segmentation method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111768421A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070122039A1 (en) * | 2005-11-29 | 2007-05-31 | Microsoft Corporation | Segmentation of objects by minimizing global-local variational energy |
US20170109611A1 (en) * | 2015-10-16 | 2017-04-20 | Thomson Licensing | Scene labeling of rgb-d data with interactive option |
CN106600622A (en) * | 2016-12-06 | 2017-04-26 | 西安电子科技大学 | Point cloud data partitioning method based on hyper voxels |
CN106780524A (en) * | 2016-11-11 | 2017-05-31 | 厦门大学 | A kind of three-dimensional point cloud road boundary extraction method |
US20180189956A1 (en) * | 2016-12-30 | 2018-07-05 | Dassault Systemes | Producing a segmented image using markov random field optimization |
Non-Patent Citations (2)
Title |
---|
HUAN LUO ET AL.: "Boundary-Aware and Semiautomatic Segmentation of 3-D Object in Point Clouds", 《IEEE GEOSCIENCE AND REMOTE SENSING LETTERS》 * |
SYEDA MARIAM AHMED ET AL.: "Edge and Corner Detection for Unorganized 3D Point Clouds with Application to Robotic Welding" * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104809723B (en) | Automatic segmentation method for three-dimensional liver CT images based on supervoxels and graph cuts | |
CN110781775B (en) | Remote sensing image water body information accurate segmentation method supported by multi-scale features | |
Yang et al. | Graph-regularized saliency detection with convex-hull-based center prior | |
CN102592268B (en) | Method for segmenting foreground image | |
CN113240691A (en) | Medical image segmentation method based on U-shaped network | |
CN108629783B (en) | Image segmentation method, system and medium based on image feature density peak search | |
CN104134234A (en) | Full-automatic three-dimensional scene construction method based on single image | |
BRPI0613102A2 (en) | cut and paste video object | |
CN109712143B (en) | Rapid image segmentation method based on superpixel multi-feature fusion | |
CN109934843B (en) | Real-time contour refinement matting method and storage medium | |
CN109685821A (en) | Region-growing plane extraction method for 3D rock mass point clouds based on high-quality voxels | |
CN105389821B (en) | Medical image segmentation method combining the cloud model and graph cuts | |
CN103578107B (en) | Interactive image segmentation method | |
CN103198479A (en) | SAR image segmentation method based on semantic information classification | |
CN106780508A (en) | GrabCut texture image segmentation method based on the Gabor transform | |
CN110349159B (en) | Three-dimensional shape segmentation method and system based on weight energy adaptive distribution | |
CN111667491A (en) | Breast mass image generation method with marginal landmark annotation information based on depth countermeasure network | |
CN113223042A (en) | Intelligent acquisition method and equipment for remote sensing image deep learning sample | |
CN108965739A (en) | video keying method and machine readable storage medium | |
CN113705579A (en) | Automatic image annotation method driven by visual saliency | |
Yuan et al. | Volume cutout | |
CN113870196B (en) | Image processing method, device, equipment and medium based on anchor point cut graph | |
Xiang et al. | Interactive natural image segmentation via spline regression | |
CN101578632B (en) | Soft edge smoothness prior and application on alpha channel super resolution | |
CN104809721B (en) | A kind of caricature dividing method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20201013 ||