CN113538474A - 3D point cloud segmentation target detection system based on edge feature fusion - Google Patents
- Publication number
- CN113538474A (application number CN202110786257.4A)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- feature
- edge
- extraction
- features
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
The invention discloses a 3D point cloud segmentation target detection system based on edge feature fusion, in the technical field of deep learning 3D point cloud segmentation. The system uses a multilayer perceptron to extract edge features and fuses point cloud retention features with point cloud extraction features to generate point cloud edge fusion features, thereby enhancing the edge feature extraction capability. Applying the obtained edge features to a target detection task improves the precision of the point cloud segmentation model and yields accurate point cloud target detection results, making the system well suited to fields such as unmanned driving and manipulator perception.
Description
Technical Field
The invention relates to the technical field of deep learning 3D point cloud segmentation, in particular to a 3D point cloud segmentation target detection system based on edge feature fusion.
Background
In recent years, with the development of three-dimensional laser scanning technology, the acquisition of three-dimensional point cloud data has become fast and convenient. Three-dimensional point clouds are widely applied in unmanned driving, robotics, indoor scene detection and recognition, and related fields, and have become a research hotspot in computer vision. At present, scene understanding still relies mainly on traditional point cloud processing; automatic interpretation by computers is not yet mature, and existing algorithms cannot achieve sufficient understanding of point cloud targets. Deep learning has developed rapidly in computer vision and achieved remarkable results in the recognition and classification of two-dimensional images. Accordingly, deep learning methods are increasingly being applied to research on three-dimensional point cloud classification.
Most existing 3D point cloud target detection algorithms use key points as 3D detection targets, determine an auxiliary training module through the connection relations between key points, and thereby achieve accurate positioning of the 3D target frame. Patent application CN112766100A discloses a 3D target detection method based on key points; however, that method ignores the relation between local and global information, resulting in low 3D target detection accuracy. Patent application CN112052884A discloses a point cloud classification method and system based on local edge feature enhancement: it acquires voxelized point cloud data, the edge features of the corresponding point cloud within a preset neighborhood, and the corresponding voxel positions; constructs a point cloud classification model based on a graph convolution network structure and a channel attention mechanism; classifies the feature-filled point cloud; and outputs the classification result. This increases the interdependency among feature channels, enhances the global feature expression capability of the point cloud, and improves the efficiency and prediction accuracy of point cloud classification. However, because of the point cloud edge filling, that method increases the recognition error for small-sample point cloud models and is unfavorable for target classification of such models.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a 3D point cloud segmentation target detection system based on edge feature fusion, which can obtain more accurate edge information and improve the accuracy of a point cloud target detection result.
In order to achieve the purpose, the technical scheme of the application is as follows: a 3D point cloud segmentation target detection system based on edge feature fusion comprises an expansion extraction feature module, a retention extraction feature module and a reduction extraction feature module used in series. Each module comprises a plurality of edge feature extraction units, and each edge feature extraction unit comprises a point cloud feature sampling layer, a point cloud feature keeping branch and a point cloud feature extraction branch. The point cloud feature sampling layer changes the size of the point cloud features obtained from the three-dimensional image; the point cloud feature keeping branch further processes the resized point cloud features with a 1-dimensional convolution, ensuring consistency of the point cloud features before and after the convolution; and the point cloud feature extraction branch acquires edge features through an extraction-type multilayer perceptron and then fuses them with the consistent point cloud features, so that the point cloud edge fusion feature content is more diverse and the point cloud target feature information is fully and accurately represented.
Further, in order to obtain the point cloud edge fusion feature in the edge feature extraction unit, the implementation steps are as follows:
Step 1: reading the feature vector P of the original point cloud in the three-dimensional image as input, wherein the size is n × 3, n represents the number of points, and each point is represented by its 3-dimensional coordinates (x, y, z); the input feature vector of the original point cloud takes the specific form P = {p1, p2, …, pn}, pi ∈ R³;
Step 2: processing the original point cloud feature P through the point cloud feature sampling layer to obtain the conversion-type point cloud feature R_F with changed size, expressed as R_F = S(P), where S denotes the sampling layer;
Step 3: in the point cloud feature keeping branch, taking the conversion-type point cloud feature R_F as the feature vector input and performing a 1-dimensional convolution operation with a single convolution kernel of size 1, so that the output point cloud keep feature is R_B = Conv1d(R_F), where i indexes the point cloud feature dimension, the convolution kernel size is 1 × 1, and s, the stride of the convolution operation, is 1;
meanwhile, in the point cloud feature extraction branch, likewise taking the conversion-type point cloud feature R_F as the feature vector input and extracting features through the extraction-type multilayer perceptron to obtain the point cloud extraction feature R_T = MLP(R_F), where each layer of the perceptron is a one-dimensional convolution with kernel size 1 × 1 and stride s = 1, and λ represents the offset of each perceptron layer;
Step 4: fusing the point cloud keep feature and the point cloud extraction feature by addition to obtain the point cloud edge fusion feature R = R_B + R_T.
By performing the above operations on the original point cloud feature P in the three-dimensional image, the point cloud edge fusion feature R is finally output. Because every convolution in each module is one-dimensional with kernel size and stride 1, the size of the finally output point cloud edge fusion feature R is determined solely by the point cloud feature sampling layer, which sets the point cloud feature size of each unit. This ensures the consistency of the point cloud features before and after processing, achieves effective enhancement, and improves the richness of the point cloud features in the three-dimensional image.
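The four steps above can be sketched per point in plain NumPy. The random matrices below are stand-ins for the learned sampling-layer, kernel-size-1 convolution, and perceptron weights, so this is an illustrative assumption about the shapes involved, not the patent's implementation:

```python
import numpy as np

def edge_feature_unit(points, out_dim, rng):
    """Sketch of one edge feature extraction unit (steps 1-4 above)."""
    n, d = points.shape
    # Point cloud feature sampling layer: change the feature size d -> out_dim.
    # A 1-D convolution with kernel size 1 is a per-point linear map, so the
    # output size is set entirely by this layer.
    r_f = points @ (rng.standard_normal((d, out_dim)) / np.sqrt(d))          # R_F

    # Keep branch: kernel-size-1, stride-1 convolution that preserves
    # the (n, out_dim) feature size.
    r_b = r_f @ (rng.standard_normal((out_dim, out_dim)) / np.sqrt(out_dim)) # R_B

    # Extract branch: extraction-type multilayer perceptron (two layers here),
    # each layer a kernel-size-1 convolution plus a per-layer offset (lambda).
    lam = np.zeros(out_dim)
    h = np.maximum(r_f @ (rng.standard_normal((out_dim, out_dim)) / np.sqrt(out_dim)) + lam, 0)
    r_t = h @ (rng.standard_normal((out_dim, out_dim)) / np.sqrt(out_dim)) + lam  # R_T

    # Step 4: additive fusion of the two branches.
    return r_b + r_t  # point cloud edge fusion feature R, size (n, out_dim)

rng = np.random.default_rng(0)
cloud = rng.standard_normal((1024, 3))     # n x 3 input, one (x, y, z) per point
fused = edge_feature_unit(cloud, 64, rng)
print(fused.shape)                         # (1024, 64)
```

Note how the fused output keeps the per-point layout (n rows), so units can be chained directly.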
Furthermore, the expansion extraction feature module comprises a edge feature extraction units, and the point cloud feature sampling layer in each edge feature extraction unit is a point cloud feature expansion sampling layer. Taking 5 units as an example, i.e., a ∈ {1, 2, 3, 4, 5}, the point cloud edge fusion expansion feature is obtained through the following steps:
Step 1: reading the feature vector P of the original point cloud in the three-dimensional image as input, wherein the size is n × 3, n represents the number of points, and each point is represented by its 3-dimensional coordinates (x, y, z);
Step 2: processing the original point cloud feature P through the point cloud feature expansion sampling layer to obtain the conversion-type point cloud feature R_F with enlarged size;
Step 3: in the point cloud feature keeping branch, taking the conversion-type point cloud feature R_F as the feature vector input and performing a 1-dimensional convolution operation with a single convolution kernel of size 1 to output the point cloud keep feature R_B; meanwhile, in the point cloud feature extraction branch, likewise taking R_F as the feature vector input and extracting features through the extraction-type multilayer perceptron to obtain the point cloud extraction feature R_T;
Step 4: fusing the point cloud keep feature and the point cloud extraction feature by addition to obtain the point cloud edge fusion expansion feature R = R_B + R_T.
The output of the 1st edge feature extraction unit is then used as the input of the 2nd edge feature extraction unit, and so on, so that the a-th point cloud edge fusion expansion feature is obtained; with 5 units, the features of units 2 through 5 are obtained in the same manner from the preceding steps.
Furthermore, the retention extraction feature module comprises b edge feature extraction units, and the point cloud feature sampling layer in each edge feature extraction unit is a point cloud feature retention sampling layer. Taking 3 units as an example, i.e., b ∈ {1, 2, 3}, the point cloud edge fusion retention feature is obtained through the following steps:
Step 1: taking the point cloud edge fusion expansion feature output by the expansion extraction feature module as input, reading the feature vector, and keeping the feature size unchanged through the point cloud feature retention sampling layer to obtain the conversion-type point cloud feature R_F;
Step 2: in the point cloud feature keeping branch, taking the conversion-type point cloud feature R_F as the feature vector input and performing a 1-dimensional convolution operation with a single convolution kernel of size 1 to output the point cloud keep feature R_B; meanwhile, in the point cloud feature extraction branch, likewise taking R_F as the feature vector input and extracting features through the extraction-type multilayer perceptron to obtain the point cloud extraction feature R_T;
Step 3: fusing the point cloud keep feature and the point cloud extraction feature by addition to obtain the point cloud edge fusion retention feature R = R_B + R_T.
The output of the 1st edge feature extraction unit is then used as the input of the 2nd edge feature extraction unit, and so on, so that the b-th point cloud edge fusion retention feature is obtained; with 3 units, the features of units 2 and 3 are obtained in the same manner from the preceding steps.
Furthermore, the reduction extraction feature module comprises c edge feature extraction units, and the point cloud feature sampling layer in each edge feature extraction unit is a point cloud feature reduction sampling layer. Taking 4 units as an example, i.e., c ∈ {1, 2, 3, 4}, the point cloud edge fusion reduction feature is obtained through the following steps:
Step 1: taking the point cloud edge fusion retention feature output by the retention extraction feature module as input, reading the feature vector, and restoring the feature size through the point cloud feature reduction sampling layer to obtain the conversion-type point cloud feature R_F;
Step 2: in the point cloud feature keeping branch, taking the conversion-type point cloud feature R_F as the feature vector input and performing a 1-dimensional convolution operation with a single convolution kernel of size 1 to output the point cloud keep feature R_B; meanwhile, in the point cloud feature extraction branch, likewise taking R_F as the feature vector input and extracting features through the extraction-type multilayer perceptron to obtain the point cloud extraction feature R_T;
Step 3: fusing the point cloud keep feature and the point cloud extraction feature by addition to obtain the point cloud edge fusion reduction feature R = R_B + R_T.
The output of the 1st edge feature extraction unit is then used as the input of the 2nd edge feature extraction unit, and so on, so that the c-th point cloud edge fusion reduction feature is obtained; with 4 units, the features of units 2 through 4 are obtained in the same manner from the preceding steps.
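Chaining the units through the three modules can be sketched as follows, using the example unit counts a = 5, b = 3, c = 4 and the feature sizes given later in the description (64 up to 1024 and back down to 64). The random weights are hypothetical stand-ins for learned parameters:

```python
import numpy as np

def unit(x, out_dim, rng):
    """One edge feature extraction unit: sampling layer (per-point linear map),
    keep branch (kernel-size-1 conv), extract branch (small MLP), additive fusion.
    Random weights stand in for learned parameters."""
    d = x.shape[1]
    r_f = x @ (rng.standard_normal((d, out_dim)) / np.sqrt(d))                # R_F
    r_b = r_f @ (rng.standard_normal((out_dim, out_dim)) / np.sqrt(out_dim))  # R_B
    h = np.maximum(r_f @ (rng.standard_normal((out_dim, out_dim)) / np.sqrt(out_dim)), 0)
    r_t = h @ (rng.standard_normal((out_dim, out_dim)) / np.sqrt(out_dim))    # R_T
    return r_b + r_t                                                          # R

rng = np.random.default_rng(1)
x = rng.standard_normal((256, 3))          # n x 3 original point cloud

# Expansion module, a = 5 units: feature size grows 3 -> 64 -> ... -> 1024.
# Retention module, b = 3 units: feature size stays at 1024.
# Reduction module, c = 4 units: feature size shrinks 1024 -> 512 -> ... -> 64.
for dim in [64, 128, 256, 512, 1024] + [1024, 1024, 1024] + [512, 256, 128, 64]:
    x = unit(x, dim, rng)                  # each unit's output feeds the next
print(x.shape)                             # (256, 64)
```

Each unit's output is the next unit's input, mirroring the "output of the 1st unit is used as the input of the 2nd" rule in all three modules.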
Due to the adoption of the technical scheme, the invention can obtain the following technical effects:
(1) Suitable for obtaining point cloud features through edge features
The three-dimensional image in the invention takes the original point cloud features as input, and edge feature information is fused during point cloud feature extraction by the multilayer perceptron structure, so that the obtained point cloud features carry a dual expression of the original point cloud features and the edge features. This enhances the expression capability of the point cloud features and makes the method suitable for cases where point cloud features are obtained through edge features.
(2) Suitable for point cloud segmentation task
According to the invention, the three-dimensional image passes through the edge feature extraction composite structure to obtain point cloud edge fusion retention features with stronger expression capability; the point cloud features and the point cloud retention features of the target in the three-dimensional image can be cascaded and input as fused target features, yielding an accurate point cloud segmentation result.
(3) Adapted to target detection tasks
The method can effectively improve the performance of the point cloud segmentation model on three-dimensional images and, since factors such as targets, actions and attributes are relatively simple, can be better applied to target detection tasks.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments described in the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an edge feature extraction unit;
FIG. 2 is a schematic diagram of an expansion feature extraction module composed of an edge feature extraction unit;
FIG. 3 is a schematic diagram of a keep-alive extraction feature module composed of an edge feature extraction unit;
FIG. 4 is a schematic diagram of a restoration extraction feature module composed of an edge feature extraction unit;
FIG. 5 is a schematic diagram of the overall framework of the present system (edge feature extraction composite);
FIG. 6 is a schematic diagram of the automatic driving recognition environment in example 1;
FIG. 7 is a schematic diagram of a railway scene detection scenario in example 2;
fig. 8 is a schematic diagram of the case where the robot grips an object in example 3.
Detailed Description
The embodiments of the present invention are implemented on the premise of the technical solution of the present invention, and detailed implementation modes and specific operation procedures are given; however, the scope of protection of the present invention is not limited to the following embodiments.
The embodiment provides a 3D point cloud segmentation target detection system based on edge feature fusion, which processes the original point cloud features of a three-dimensional image through an edge feature extraction composite structure to obtain point cloud edge fusion reduction features that contain edge feature information with strong expression capability. The composite structure comprises three modules, namely an expansion extraction feature module, a retention extraction feature module and a reduction extraction feature module; its basic building block is the edge feature extraction unit, which comprises a point cloud feature sampling layer, a point cloud feature keeping branch and a point cloud feature extraction branch. The point cloud feature sampling layer changes the feature size of the input point cloud; the point cloud feature keeping branch further processes the resized point cloud features with a 1-dimensional convolution, ensuring consistency of the point cloud features before and after the convolution; and the point cloud feature extraction branch extracts edge features through an extraction-type multilayer perceptron and then fuses them with the consistent point cloud features, so that the point cloud edge fusion feature content is more diverse and the point cloud target feature information is fully and accurately represented.
The edge feature extraction unit acquires hidden-layer features during processing by the extraction-type multilayer perceptron and then fuses them with the point cloud features, enriching the diversity of the feature content. Multiplexing the edge feature extraction unit yields the expansion extraction feature module, the retention extraction feature module and the reduction extraction feature module, which acquire features jointly expressing point cloud features and edge features and enhance the expression capability of the point cloud features. The point cloud keep feature obtained by the point cloud feature keeping branch and the point cloud extraction feature obtained by the point cloud feature extraction branch are fused and used as the input of the next unit. In the expansion extraction feature module, the sampling layer of each edge feature extraction unit is a point cloud feature expansion sampling layer; in the retention extraction feature module it is a point cloud feature retention sampling layer; and in the reduction extraction feature module it is a point cloud feature reduction sampling layer. In each unit, feature size conversion is performed on the input point cloud features.
The edge feature extraction composite structure comprises the above three modules and serves as the main multi-layer feature extraction structure for the point cloud; its output is then fed into a maximum pooling layer to extract the most dominant features of the whole structure.
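The maximum pooling step can be sketched minimally as a channel-wise maximum over points, as in PointNet-style architectures; this is an assumption about the pooling details, which the text specifies only as a "maximum pooling layer", and the array below is random stand-in data rather than real network output:

```python
import numpy as np

# For each feature channel, take the maximum response over all n points.
# The result is a single global descriptor that is invariant to point order.
rng = np.random.default_rng(2)
fused = rng.standard_normal((1024, 64))   # n x 64 point cloud edge fusion features
global_feature = fused.max(axis=0)        # most dominant response per channel
print(global_feature.shape)               # (64,)
```

Pooling over the point axis (axis 0) rather than the channel axis is what makes the descriptor independent of how the n points are ordered.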
The edge feature extraction unit takes the original point cloud coordinates of the three-dimensional image as input, preprocesses the original point cloud features by a clustering method such as KNN (K-nearest neighbors), changes or keeps the point cloud feature size through the point cloud feature sampling layer, and outputs the conversion-type point cloud feature R_F, which serves as the input of both the point cloud feature keeping branch and the point cloud feature extraction branch. The point cloud feature keeping branch outputs the point cloud keep feature R_B; the point cloud feature extraction branch, after processing by the extraction-type multilayer perceptron, outputs the point cloud extraction feature R_T. The point cloud keep feature R_B and the point cloud extraction feature R_T are fused, and the point cloud edge fusion feature R is finally output. Specifically:
(1) Input the original point cloud features of the three-dimensional image, i.e., the feature vectors fed to the edge feature extraction unit.
(2) After processing the three-dimensional image, output the point cloud keep feature R_B. Specifically, the original point cloud features input from the three-dimensional image are resized through the point cloud feature sampling layer to obtain the conversion-type point cloud feature R_F, and R_F is then processed by a 1-dimensional convolution with a kernel of size 1 to obtain the feature vector.
(3) After processing the three-dimensional image, output the point cloud extraction feature R_T. Specifically, the original point cloud features input from the three-dimensional image are resized through the point cloud feature sampling layer to obtain the conversion-type point cloud feature R_F, and features are extracted from R_F through the multilayer perceptron to output the point cloud extraction feature R_T.
(4) Point cloud feature fusion, with the specific operation: add and fuse the point cloud keep feature R_B and the point cloud extraction feature R_T to obtain the point cloud edge fusion retention feature R.
The point cloud feature keeping branch further processes the original point cloud features; since the feature size changes while the target attributes represented by the point cloud features do not, the branch improves the expression capability of the edge features while supplying the original point cloud features to the point cloud feature extraction branch. In the point cloud feature extraction branch, the point cloud extraction features and the point cloud keep features are added, enriching the feature content and yielding point cloud features with strong expression capability.
The edge feature extraction composite structure comprises the three modules used in series, namely the expansion extraction feature module, the retention extraction feature module and the reduction extraction feature module; its input is the original point cloud feature and its output is the point cloud edge fusion feature. Specifically:
(1) Input the original point cloud features of the three-dimensional image, i.e., the feature vectors fed to the edge feature extraction composite structure.
(2) The expansion extraction feature module contains a edge feature extraction units, whose point cloud feature sampling layer is a point cloud feature expansion sampling layer that enlarges the size of the input original point cloud features. The retention extraction feature module contains b edge feature extraction units, whose point cloud feature sampling layer is a point cloud feature retention sampling layer that keeps the feature size of the input point cloud unchanged. The reduction extraction feature module contains c edge feature extraction units, whose point cloud feature sampling layer is a point cloud feature reduction sampling layer that restores the size of the input point cloud features.
(3) The point cloud keep feature R_B in the three-dimensional image: in the point cloud feature keeping branch, the conversion-type point cloud feature R_F is processed by a 1-dimensional convolution with a single kernel of size 1; the point cloud keep feature is represented correspondingly in each of the expansion, retention and reduction extraction feature modules.
(4) The point cloud extraction feature R_T in the three-dimensional image: the conversion-type point cloud features are extracted through the multilayer perceptron and await fusion in the point cloud feature extraction branch; the point cloud extraction feature is represented correspondingly in each of the expansion, retention and reduction extraction feature modules.
(5) Point cloud feature fusion in the three-dimensional image, with the specific operation: in the point cloud feature extraction branch, the point cloud keep feature R_B and the point cloud extraction feature R_T are added and fused, correspondingly in each of the expansion, retention and reduction extraction feature modules.
The edge feature extraction composite structure performs size expansion, retention and reduction transformations on the input point cloud features to obtain conversion-type point cloud features. The point cloud feature keeping branch further processes the converted point cloud features and extracts point cloud features without changing their size. In the point cloud feature extraction branch, the point cloud extraction features and the point cloud keep features are added, so that the generated point cloud edge fusion feature content is more diverse and the point cloud target feature information in the three-dimensional image is fully and accurately represented.
In the expansion extraction feature module:
(1) When a is 1, the output point cloud edge fusion expansion feature size is [ n × 64 ].
(2) When a is 2, the output point cloud edge fusion expansion feature size is [ n × 128 ].
(3) When a is 3, the output point cloud edge fusion expansion feature size is [ n × 256 ].
(4) When a is 4, the output point cloud edge fusion expansion characteristic size is [ n × 512 ].
(5) When a is 5, the output point cloud edge fusion expansion feature size is [ n × 1024 ].
In the keep extracting features module:
(1) When b is 1, the output point cloud edge fusion preserving feature size is [ n × 1024 ].
(2) When b is 2, the output point cloud edge fusion preserving feature size is [ n × 1024 ].
(3) When b is 3, the output point cloud edge fusion preserving feature size is [ n × 1024 ].
In the reduction extraction feature module:
(1) When c is 1, the output point cloud edge fusion reduction feature size is [ n × 512 ].
(2) When c is 2, the output point cloud edge fusion reduction feature size is [ n × 256 ].
(3) When c is 3, the output point cloud edge fusion reduction feature size is [ n × 128 ].
(4) When c is 4, the output point cloud edge fusion reduction feature size is [ n × 64 ].
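Reading the three lists together, the per-unit output widths form a symmetric expand–hold–reduce profile; the following sketch (widths transcribed from the lists above) makes the progression explicit.

```python
# Output channel widths per edge feature extraction unit, as listed above.
expansion = [64 * 2**i for i in range(5)]        # a = 1..5 -> 64, 128, 256, 512, 1024
keep      = [1024] * 3                           # b = 1..3 -> width held at 1024
reduction = [512 >> i for i in range(4)]         # c = 1..4 -> 512, 256, 128, 64

profile = expansion + keep + reduction
print(profile)
# [64, 128, 256, 512, 1024, 1024, 1024, 1024, 512, 256, 128, 64]
```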
The foregoing descriptions of specific exemplary embodiments of the present invention have been presented for purposes of illustration and description. It is not intended to limit the invention to the precise form disclosed, and obviously many modifications and variations are possible in light of the above teaching. The exemplary embodiments were chosen and described in order to explain certain principles of the invention and its practical application to enable one skilled in the art to make and use various exemplary embodiments of the invention and various alternatives and modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the claims and their equivalents.
Claims (5)
1. The 3D point cloud segmentation target detection system based on edge feature fusion is characterized by comprising an expansion extraction feature module, a retention extraction feature module and a reduction extraction feature module which are used in series, wherein each module comprises a plurality of edge feature extraction units; each edge feature extraction unit comprises a point cloud feature sampling layer, a point cloud feature holding branch and a point cloud feature extraction branch; the point cloud feature sampling layer changes the size of the point cloud features obtained from the three-dimensional image; the point cloud feature holding branch further processes the resized point cloud features with a 1-dimensional convolution to ensure consistency of the point cloud features before and after the convolution processing; and the point cloud feature extraction branch obtains edge features through an extraction type multilayer perceptron, which are then fused with the consistent point cloud features.
2. The system for detecting the 3D point cloud segmentation target based on edge feature fusion as claimed in claim 1, wherein the edge feature extraction unit implements the following steps for obtaining point cloud edge fusion features:
Step 1: read the feature vector of the original point cloud in the three-dimensional image as input; its size is n × 3, where n represents the number of points and each point is represented by its 3-dimensional (x, y, z) coordinates; the input feature vector takes the specific form:
Step 2: process the original point cloud features through the point cloud feature sampling layer to obtain the conversion type point cloud features of changed size, expressed as follows:
Step 3: in the point cloud feature holding branch, take the conversion type point cloud features as the feature vector input and apply a 1-dimensional convolution with 1 convolution kernel of size 1; the output point cloud retention features are as follows:
where the first term is the feature vector input expression in the point cloud feature holding branch, vσi is the vector representation in the point cloud holding branch, i represents the point cloud feature dimension, the convolution kernel size is 1 × 1, and s represents the stride of the convolution operation, with s = 1;
Meanwhile, in the point cloud feature extraction branch, the conversion type point cloud features are likewise taken as the feature vector input, and features are extracted through the extraction type multilayer perceptron to obtain the point cloud extraction features, as shown in the following formula:
where the convolution term represents a one-dimensional convolution operation in the extraction type multilayer perceptron with kernel size 1 × 1, s represents the stride of the convolution operation, with s = 1, and λ represents the bias of each perceptron layer;
Step 4: fuse the point cloud retention features and the point cloud extraction features to obtain the point cloud edge fusion features, specifically:
By performing the above operations on the original point cloud features in the three-dimensional image, the point cloud edge fusion features are finally output as follows;
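The size-consistency property the holding branch relies on in these steps — a 1-dimensional convolution with 1 kernel of size 1 and stride s = 1 leaves the feature size unchanged — can be checked directly. `conv1d_k1` below is a hypothetical stand-in that reduces the kernel-size-1 convolution to a per-point affine map.

```python
import numpy as np

def conv1d_k1(features, weight, bias=0.0):
    """1-D convolution with one kernel of size 1 and stride s = 1.

    With kernel size 1 the convolution touches each point independently,
    so it reduces to scaling plus bias and the output size equals the
    input size -- the consistency required by claim 1.
    """
    return features * weight + bias

feats = np.random.default_rng(1).standard_normal((2048, 1))  # one feature channel per point
out = conv1d_k1(feats, weight=0.5, bias=0.1)
print(out.shape == feats.shape)  # True: feature size preserved
```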
3. The system for detecting the 3D point cloud segmentation target based on edge feature fusion as claimed in claim 1, wherein the expansion extraction feature module comprises a edge feature extraction units, and the point cloud feature sampling layer in each edge feature extraction unit is a point cloud feature expansion sampling layer; to obtain the point cloud edge fusion expansion features, the implementation steps are as follows:
Step 1: read the feature vector of the original point cloud in the three-dimensional image as input; its size is n × 3, where n represents the number of points and each point is represented by its 3-dimensional (x, y, z) coordinates; the input feature vector takes the specific form:
Step 2: process the original point cloud features through the point cloud feature expansion sampling layer to obtain the conversion type point cloud features of changed size, expressed as follows:
Step 3: in the point cloud feature holding branch, take the conversion type point cloud features as the feature vector input and apply a 1-dimensional convolution with 1 convolution kernel of size 1; the output point cloud retention features are as follows:
Meanwhile, in the point cloud feature extraction branch, the conversion type point cloud features are likewise taken as the feature vector input, and features are extracted through the extraction type multilayer perceptron to obtain the point cloud extraction features, as shown in the following formula:
Step 4: fuse the point cloud retention features and the point cloud extraction features to obtain the point cloud edge fusion expansion features, specifically:
The output of the 1st edge feature extraction unit serves as the input of the 2nd edge feature extraction unit, and so on, until the a-th point cloud edge fusion expansion feature is obtained.
4. The system for detecting the 3D point cloud segmentation target based on edge feature fusion as claimed in claim 1, wherein the retention extraction feature module comprises b edge feature extraction units, and the point cloud feature sampling layer in each edge feature extraction unit is a point cloud feature keeping sampling layer; to obtain the point cloud edge fusion retention features, the implementation steps are as follows:
Step 1: take the point cloud edge fusion expansion features output by the expansion extraction feature module as input, read the feature vector, and use the point cloud feature keeping sampling layer to hold the feature size, obtaining the conversion type point cloud features:
Step 2: in the point cloud feature holding branch, take the conversion type point cloud features as the feature vector input and apply a 1-dimensional convolution with 1 convolution kernel of size 1; the output point cloud retention features are as follows:
where the first term is the feature vector input expression in the point cloud feature holding branch;
Meanwhile, in the point cloud feature extraction branch, the conversion type point cloud features are likewise taken as the feature vector input, and features are extracted through the extraction type multilayer perceptron to obtain the point cloud extraction features, as shown in the following formula:
Step 3: fuse the point cloud retention features and the point cloud extraction features to obtain the point cloud edge fusion retention features, specifically:
The output of the 1st edge feature extraction unit serves as the input of the 2nd edge feature extraction unit, and so on, until the b-th point cloud edge fusion retention feature is obtained.
5. The system for detecting the 3D point cloud segmentation target based on edge feature fusion as claimed in claim 1, wherein the reduction extraction feature module comprises c edge feature extraction units, and the point cloud feature sampling layer in each edge feature extraction unit is a point cloud feature reduction sampling layer; to obtain the point cloud edge fusion reduction features, the implementation steps are as follows:
Step 1: take the point cloud edge fusion retention features output by the retention extraction feature module as input, read the feature vector, and use the point cloud feature reduction sampling layer to change the feature size, obtaining the conversion type point cloud features:
Step 2: in the point cloud feature holding branch, take the conversion type point cloud features as the feature vector input and apply a 1-dimensional convolution with 1 convolution kernel of size 1; the output point cloud retention features are as follows:
Meanwhile, in the point cloud feature extraction branch, the conversion type point cloud features are likewise taken as the feature vector input, and features are extracted through the extraction type multilayer perceptron to obtain the point cloud extraction features, as shown in the following formula:
Step 3: fuse the point cloud retention features and the point cloud extraction features to obtain the point cloud edge fusion reduction features, specifically:
The output of the 1st edge feature extraction unit serves as the input of the 2nd edge feature extraction unit, and so on, until the c-th point cloud edge fusion reduction feature is obtained.
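Chaining the three claimed modules in series (a = 5 expansion units, b = 3 keep units, c = 4 reduction units), the output of each unit feeds the next. The sketch below wires the full pipeline with hypothetical per-point linear stand-ins for the sampling layer, the holding branch's 1 × 1 convolution, and the extraction MLP, confirming that the point count n is preserved while the channel width returns to 64.

```python
import numpy as np

def unit(x, out_ch, rng):
    """One edge feature extraction unit reduced to its data flow:
    sampling (resize) -> holding branch + extraction branch -> additive fusion."""
    in_ch = x.shape[1]
    converted = x @ rng.standard_normal((in_ch, out_ch))          # sampling layer
    retained = converted @ rng.standard_normal((out_ch, out_ch))  # 1x1-conv stand-in
    extracted = np.maximum(converted, 0.0) @ rng.standard_normal((out_ch, out_ch))  # MLP stand-in
    return retained + extracted

rng = np.random.default_rng(0)
x = rng.standard_normal((512, 3))           # n = 512 raw points, (x, y, z)

widths = ([64, 128, 256, 512, 1024]         # expansion module, a = 5 units
          + [1024, 1024, 1024]              # keep module, b = 3 units
          + [512, 256, 128, 64])            # reduction module, c = 4 units
for w in widths:
    x = unit(x, w, rng)                     # output of each unit feeds the next

print(x.shape)                              # (512, 64): n preserved, channels back to 64
```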
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110786257.4A CN113538474B (en) | 2021-07-12 | 2021-07-12 | 3D point cloud segmentation target detection system based on edge feature fusion |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110786257.4A CN113538474B (en) | 2021-07-12 | 2021-07-12 | 3D point cloud segmentation target detection system based on edge feature fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113538474A true CN113538474A (en) | 2021-10-22 |
CN113538474B CN113538474B (en) | 2023-08-22 |
Family
ID=78098712
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110786257.4A Active CN113538474B (en) | 2021-07-12 | 2021-07-12 | 3D point cloud segmentation target detection system based on edge feature fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113538474B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114299243A (en) * | 2021-12-14 | 2022-04-08 | 中科视语(北京)科技有限公司 | Point cloud feature enhancement method and device based on multi-scale fusion |
CN114998890A (en) * | 2022-05-27 | 2022-09-02 | 长春大学 | Three-dimensional point cloud target detection algorithm based on graph neural network |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111489358A (en) * | 2020-03-18 | 2020-08-04 | 华中科技大学 | Three-dimensional point cloud semantic segmentation method based on deep learning |
CN112052860A (en) * | 2020-09-11 | 2020-12-08 | 中国人民解放军国防科技大学 | Three-dimensional target detection method and system |
CN112270249A (en) * | 2020-10-26 | 2021-01-26 | 湖南大学 | Target pose estimation method fusing RGB-D visual features |
US20210042929A1 (en) * | 2019-01-22 | 2021-02-11 | Institute Of Automation, Chinese Academy Of Sciences | Three-dimensional object detection method and system based on weighted channel features of a point cloud |
CN112785611A (en) * | 2021-01-29 | 2021-05-11 | 昆明理工大学 | 3D point cloud weak supervision semantic segmentation method and system |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20210042929A1 (en) * | 2019-01-22 | 2021-02-11 | Institute Of Automation, Chinese Academy Of Sciences | Three-dimensional object detection method and system based on weighted channel features of a point cloud |
CN111489358A (en) * | 2020-03-18 | 2020-08-04 | 华中科技大学 | Three-dimensional point cloud semantic segmentation method based on deep learning |
CN112052860A (en) * | 2020-09-11 | 2020-12-08 | 中国人民解放军国防科技大学 | Three-dimensional target detection method and system |
CN112270249A (en) * | 2020-10-26 | 2021-01-26 | 湖南大学 | Target pose estimation method fusing RGB-D visual features |
CN112785611A (en) * | 2021-01-29 | 2021-05-11 | 昆明理工大学 | 3D point cloud weak supervision semantic segmentation method and system |
Non-Patent Citations (1)
Title |
---|
樊丽;刘晋浩;黄青青;: "基于特征融合的林下环境点云分割", 北京林业大学学报, no. 05 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114299243A (en) * | 2021-12-14 | 2022-04-08 | 中科视语(北京)科技有限公司 | Point cloud feature enhancement method and device based on multi-scale fusion |
CN114998890A (en) * | 2022-05-27 | 2022-09-02 | 长春大学 | Three-dimensional point cloud target detection algorithm based on graph neural network |
CN114998890B (en) * | 2022-05-27 | 2023-03-10 | 长春大学 | Three-dimensional point cloud target detection algorithm based on graph neural network |
Also Published As
Publication number | Publication date |
---|---|
CN113538474B (en) | 2023-08-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111179324B (en) | Object six-degree-of-freedom pose estimation method based on color and depth information fusion | |
CN107563381B (en) | Multi-feature fusion target detection method based on full convolution network | |
CN108647694B (en) | Context-aware and adaptive response-based related filtering target tracking method | |
CN114255238A (en) | Three-dimensional point cloud scene segmentation method and system fusing image features | |
CN109447979B (en) | Target detection method based on deep learning and image processing algorithm | |
CN113538474A (en) | 3D point cloud segmentation target detection system based on edge feature fusion | |
Shen et al. | Vehicle detection in aerial images based on lightweight deep convolutional network and generative adversarial network | |
WO2023151237A1 (en) | Face pose estimation method and apparatus, electronic device, and storage medium | |
CN114170410A (en) | Point cloud part level segmentation method based on PointNet graph convolution and KNN search | |
CN107609509A (en) | A kind of action identification method based on motion salient region detection | |
CN116129289A (en) | Attention edge interaction optical remote sensing image saliency target detection method | |
CN114677357A (en) | Model, method and equipment for detecting self-explosion defect of aerial photographing insulator and storage medium | |
CN111368733A (en) | Three-dimensional hand posture estimation method based on label distribution learning, storage medium and terminal | |
CN112613478B (en) | Data active selection method for robot grabbing | |
CN117351078A (en) | Target size and 6D gesture estimation method based on shape priori | |
CN116912673A (en) | Target detection method based on underwater optical image | |
CN116486089A (en) | Point cloud segmentation network light-weight method, device and equipment based on knowledge distillation | |
CN114937153A (en) | Neural network-based visual feature processing system and method under weak texture environment | |
CN115496859A (en) | Three-dimensional scene motion trend estimation method based on scattered point cloud cross attention learning | |
CN114638866A (en) | Point cloud registration method and system based on local feature learning | |
CN111797782A (en) | Vehicle detection method and system based on image features | |
CN111401203A (en) | Target identification method based on multi-dimensional image fusion | |
CN116486203B (en) | Single-target tracking method based on twin network and online template updating | |
Abdulameer et al. | Bird Image Dataset Classification using Deep Convolutional Neural Network Algorithm | |
Wang et al. | Steel Coil Recognition Neural Networks for Intelligent Cranes |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||