CN113538474A - 3D point cloud segmentation target detection system based on edge feature fusion - Google Patents

3D point cloud segmentation target detection system based on edge feature fusion

Info

Publication number
CN113538474A
Authority
CN
China
Prior art keywords
point cloud
feature
edge
extraction
features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110786257.4A
Other languages
Chinese (zh)
Other versions
CN113538474B (en)
Inventor
Mao Lin
Xiang Shufen
Yang Dawei
Zhang Rubo
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian Minzu University
Original Assignee
Dalian Minzu University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Minzu University filed Critical Dalian Minzu University
Priority to CN202110786257.4A
Publication of CN113538474A
Application granted
Publication of CN113538474B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a 3D point cloud segmentation target detection system based on edge feature fusion, relating to the technical field of deep learning 3D point cloud segmentation. The system adopts a multilayer perceptron to extract edge features and fuses point cloud keeping features with point cloud extraction features to generate point cloud edge fusion features, which enhances the edge feature extraction capability. Applying the obtained edge features to the target detection task improves the precision of the point cloud segmentation model and yields accurate point cloud target detection results, so the system can be well applied to fields such as unmanned driving and manipulator perception.

Description

3D point cloud segmentation target detection system based on edge feature fusion
Technical Field
The invention relates to the technical field of deep learning 3D point cloud segmentation, in particular to a 3D point cloud segmentation target detection system based on edge feature fusion.
Background
In recent years, with the development of three-dimensional laser scanning technology, the acquisition of three-dimensional point cloud data has become fast and convenient. Point clouds are widely applied in fields such as unmanned driving, robotics, and indoor scene detection and recognition, and the three-dimensional point cloud has become a research hotspot in the field of computer vision. At present, scene acquisition mainly relies on traditional point clouds, but automatic computer interpretation technology is not yet mature, and existing algorithms cannot achieve a sufficient understanding of point cloud targets. Deep learning has developed rapidly in the field of computer vision and has achieved remarkable results in the recognition and classification of two-dimensional images; accordingly, deep learning methods are increasingly being used in research on three-dimensional point cloud classification.
Most existing 3D point cloud target detection algorithms use key points as the 3D detection targets and determine an auxiliary training module through the connection relations between key points, achieving accurate positioning of the 3D target frame. Patent application CN112766100A discloses such a 3D target detection method based on key points; however, it ignores the relation between local and global information, resulting in low 3D target detection accuracy. Patent application CN112052884A discloses a point cloud classification method and system based on local edge feature enhancement: point cloud voxelization data, the edge features of the corresponding point cloud in a preset neighborhood, and the voxel positions corresponding to the voxelization data are acquired; a point cloud classification model is constructed on a graph convolution network structure and a channel attention mechanism; the feature-filled point cloud is classified and a classification result is output. This increases the interdependency among feature channels, enhances the global feature expression capability of the point cloud, and improves the efficiency and prediction accuracy of point cloud classification. However, because of the point cloud edge filling, that method increases the recognition error of small-sample point cloud models and is not conducive to target classification for small-sample point cloud models.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a 3D point cloud segmentation target detection system based on edge feature fusion, which can obtain more accurate edge information and improve the accuracy of a point cloud target detection result.
In order to achieve the above purpose, the technical scheme of the application is as follows: a 3D point cloud segmentation target detection system based on edge feature fusion comprises an expansion extraction feature module, a retention extraction feature module and a reduction extraction feature module which are used in series. Each module comprises a plurality of edge feature extraction units, and each edge feature extraction unit comprises a point cloud feature sampling layer, a point cloud feature keeping branch and a point cloud feature extraction branch. The point cloud feature sampling layer changes the size of the point cloud features obtained from the three-dimensional image; the point cloud feature keeping branch further processes the size-changed point cloud features with a 1-dimensional convolution, ensuring the consistency of the point cloud features before and after the convolution processing; and the point cloud feature extraction branch obtains edge features through an extraction-type multilayer perceptron and then fuses them with the consistent point cloud features, so that the point cloud edge fusion feature content is more diverse and the point cloud target feature information is fully and accurately represented.
Further, in order to obtain the point cloud edge fusion feature in the edge feature extraction unit, the implementation steps are as follows (the equations of the original publication are images; they are reconstructed here in a consistent notation, with $R_{org}$ the original point cloud feature, $R_F$ the conversion-type point cloud feature, $R_B$ the point cloud keeping feature, $R_T$ the point cloud extraction feature, and $R$ the point cloud edge fusion feature):

Step 1: read the feature vector of the original point cloud in the three-dimensional image, $R_{org}$, as input. Its size is n × 3, where n represents the number of points and each point is represented by its 3-dimensional coordinates (x, y, z), so the specific form of the input is

$$R_{org} = \{p_1, p_2, \ldots, p_n\}, \qquad p_i = (x_i, y_i, z_i)$$

Step 2: process the original point cloud feature $R_{org}$ with the point cloud feature sampling layer $f_S$ to obtain the size-changed conversion-type point cloud feature:

$$R_F = f_S(R_{org})$$

Step 3: in the point cloud feature keeping branch, take the conversion-type point cloud feature $R_F$ as the feature vector input and apply a 1-dimensional convolution with one kernel of size 1, outputting the point cloud keeping feature:

$$R_B = \varphi_{1\times 1,\,s=1}(R_F)$$

where $R_F$ is the feature vector input of the keeping branch, $v_{\sigma i}$ is the vector representation in the keeping branch, $\varphi_{1\times 1,\,s=1}$ is the one-dimensional convolution operation, i indexes the point cloud feature dimension, the convolution kernel size is 1 × 1, and s = 1 is the stride of the convolution.

Meanwhile, in the point cloud feature extraction branch, the same conversion-type point cloud feature $R_F$ is taken as the feature vector input and passed through the extraction-type multilayer perceptron to obtain the point cloud extraction feature:

$$R_T = \mathrm{MLP}(R_F)$$

where each layer of the extraction-type multilayer perceptron applies a one-dimensional convolution $\phi_{1\times 1,\,s=1}$ (kernel size 1 × 1, stride s = 1) and adds the per-layer offset λ.

Step 4: fuse the point cloud keeping feature and the point cloud extraction feature to obtain the point cloud edge fusion feature:

$$R = R_B + R_T$$

Performing the above operations on the original point cloud feature $R_{org}$ of the three-dimensional image, the finally output point cloud edge fusion feature is

$$R = \varphi_{1\times 1,\,s=1}(f_S(R_{org})) + \mathrm{MLP}(f_S(R_{org}))$$

Because the kernel size and stride of every one-dimensional convolution in each module are 1, the size of the finally output point cloud edge fusion feature $R$ is determined entirely by the point cloud feature sampling layer and equals the point cloud feature size of each unit. This guarantees the consistency of the point cloud features before and after processing, achieves effective enhancement, and improves the richness of the point cloud features in the three-dimensional image.
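To make the unit's data flow concrete, the following is a minimal PyTorch-style sketch. The patent publishes no code, so the class and variable names (EdgeFeatureUnit, sample, keep, extract) are ours, and modeling the sampling layer as a 1 × 1 convolution, together with the two-layer depth and ReLU nonlinearity of the extraction-type multilayer perceptron, are assumptions; the single 1 × 1 keeping convolution with stride 1 and the additive fusion R = R_B + R_T follow the steps above.

    import torch
    import torch.nn as nn

    class EdgeFeatureUnit(nn.Module):
        # One edge feature extraction unit: a point cloud feature sampling layer,
        # a keeping branch (one 1-D convolution, kernel size 1, stride 1) and an
        # extraction branch (extraction-type multilayer perceptron), fused by
        # addition: R = R_B + R_T.
        def __init__(self, in_channels, out_channels):
            super().__init__()
            # Sampling layer: changes (or keeps) the per-point feature size,
            # modeled here as a 1x1 Conv1d mapping in_channels -> out_channels.
            self.sample = nn.Conv1d(in_channels, out_channels, kernel_size=1)
            # Keeping branch: 1 kernel of size 1, stride 1, feature size unchanged.
            self.keep = nn.Conv1d(out_channels, out_channels, kernel_size=1, stride=1)
            # Extraction branch: stacked 1x1 convolutions with bias (the per-layer
            # offset lambda); the ReLU between layers is an assumption.
            self.extract = nn.Sequential(
                nn.Conv1d(out_channels, out_channels, kernel_size=1, stride=1),
                nn.ReLU(),
                nn.Conv1d(out_channels, out_channels, kernel_size=1, stride=1),
            )

        def forward(self, x):
            # x: (batch, in_channels, n) for n points
            r_f = self.sample(x)     # conversion-type point cloud feature R_F
            r_b = self.keep(r_f)     # point cloud keeping feature R_B
            r_t = self.extract(r_f)  # point cloud extraction feature R_T
            return r_b + r_t         # point cloud edge fusion feature R

Because every convolution in this sketch has kernel size 1 and stride 1, the output feature size is fixed entirely by the sampling layer, matching the consistency argument above.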
Furthermore, the expansion extraction feature module comprises a edge feature extraction units, and the point cloud feature sampling layer in each edge feature extraction unit is a point cloud feature expansion sampling layer $f_{E_a}$. Taking 5 units as an example, i.e. compositing a ∈ {1, 2, 3, 4, 5}, the point cloud edge fusion expansion feature is obtained through the following steps:

Step 1: read the feature vector of the original point cloud in the three-dimensional image, $R_{org}$, as input. Its size is n × 3, where n represents the number of points and each point is represented by its 3-dimensional coordinates (x, y, z):

$$R_{org} = \{p_1, p_2, \ldots, p_n\}, \qquad p_i = (x_i, y_i, z_i)$$

Step 2: process the original point cloud feature $R_{org}$ with the point cloud feature expansion sampling layer to obtain the size-changed conversion-type point cloud feature:

$$R_F^{E_1} = f_{E_1}(R_{org})$$

Step 3: in the point cloud feature keeping branch, take the conversion-type point cloud feature $R_F^{E_1}$ as the feature vector input and apply a 1-dimensional convolution with one kernel of size 1, outputting the point cloud keeping feature

$$R_B^{E_1} = \varphi_{1\times 1,\,s=1}(R_F^{E_1})$$

where $R_F^{E_1}$ is the feature vector input of the keeping branch. Meanwhile, in the point cloud feature extraction branch, the same conversion-type point cloud feature $R_F^{E_1}$ is passed through the extraction-type multilayer perceptron to obtain the point cloud extraction feature

$$R_T^{E_1} = \mathrm{MLP}(R_F^{E_1})$$

Step 4: fuse the point cloud keeping feature and the point cloud extraction feature to obtain the point cloud edge fusion expansion feature

$$R^{E_1} = R_B^{E_1} + R_T^{E_1}$$

The output of the 1st edge feature extraction unit serves as the input of the 2nd edge feature extraction unit, and so on, yielding the a-th point cloud edge fusion expansion feature. For the 5-unit example, the same steps give: when a = 2 the output is $R^{E_2}$; when a = 3, $R^{E_3}$; when a = 4, $R^{E_4}$; and when a = 5, $R^{E_5}$.
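Under the same assumptions, the five-unit composite of the expansion extraction feature module can be sketched by stacking the hypothetical EdgeFeatureUnit from above, with the channel widths taken from the sizes [n × 64] through [n × 1024] enumerated later in the description:

    expansion = nn.Sequential(
        EdgeFeatureUnit(3, 64),      # a = 1: [n x 64]
        EdgeFeatureUnit(64, 128),    # a = 2: [n x 128]
        EdgeFeatureUnit(128, 256),   # a = 3: [n x 256]
        EdgeFeatureUnit(256, 512),   # a = 4: [n x 512]
        EdgeFeatureUnit(512, 1024),  # a = 5: [n x 1024]
    )
    points = torch.randn(1, 3, 2048)  # n = 2048 points with (x, y, z) coordinates
    print(expansion(points).shape)    # torch.Size([1, 1024, 2048])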
Furthermore, the retention extraction feature module comprises b edge feature extraction units, and the point cloud feature sampling layer in each edge feature extraction unit is a point cloud feature keeping sampling layer $f_{K_b}$. Taking 3 units as an example, i.e. compositing b ∈ {1, 2, 3}, the point cloud edge fusion retention feature is obtained through the following steps:

Step 1: take the point cloud edge fusion expansion feature $R^{E_a}$ output by the expansion extraction feature module as input, read the feature vector, and use the point cloud feature keeping sampling layer to keep the feature size, obtaining the conversion-type point cloud feature

$$R_F^{K_1} = f_{K_1}(R^{E_a})$$

Step 2: in the point cloud feature keeping branch, take the conversion-type point cloud feature $R_F^{K_1}$ as the feature vector input and apply a 1-dimensional convolution with one kernel of size 1, outputting the point cloud keeping feature

$$R_B^{K_1} = \varphi_{1\times 1,\,s=1}(R_F^{K_1})$$

where $R_F^{K_1}$ is the feature vector input of the keeping branch. Meanwhile, in the point cloud feature extraction branch, the same conversion-type point cloud feature $R_F^{K_1}$ is passed through the extraction-type multilayer perceptron to obtain the point cloud extraction feature

$$R_T^{K_1} = \mathrm{MLP}(R_F^{K_1})$$

Step 3: fuse the point cloud keeping feature and the point cloud extraction feature to obtain the point cloud edge fusion retention feature

$$R^{K_1} = R_B^{K_1} + R_T^{K_1}$$

The output of the 1st edge feature extraction unit serves as the input of the 2nd edge feature extraction unit, and so on, yielding the b-th point cloud edge fusion retention feature. For the 3-unit example, the same steps give: when b = 2 the output is $R^{K_2}$, and when b = 3, $R^{K_3}$.
Furthermore, the reduction extraction feature module comprises c edge feature extraction units, and the point cloud feature sampling layer in each edge feature extraction unit is a point cloud feature reduction sampling layer $f_{D_c}$. Taking 4 units as an example, i.e. compositing c ∈ {1, 2, 3, 4}, the point cloud edge fusion reduction feature is obtained through the following steps:

Step 1: take the point cloud edge fusion retention feature $R^{K_b}$ output by the retention extraction feature module as input, read the feature vector, and use the point cloud feature reduction sampling layer to restore the feature size, obtaining the conversion-type point cloud feature

$$R_F^{D_1} = f_{D_1}(R^{K_b})$$

Step 2: in the point cloud feature keeping branch, take the conversion-type point cloud feature $R_F^{D_1}$ as the feature vector input and apply a 1-dimensional convolution with one kernel of size 1, outputting the point cloud keeping feature

$$R_B^{D_1} = \varphi_{1\times 1,\,s=1}(R_F^{D_1})$$

where $R_F^{D_1}$ is the feature vector input of the keeping branch. Meanwhile, in the point cloud feature extraction branch, the same conversion-type point cloud feature $R_F^{D_1}$ is passed through the extraction-type multilayer perceptron to obtain the point cloud extraction feature

$$R_T^{D_1} = \mathrm{MLP}(R_F^{D_1})$$

Step 3: fuse the point cloud keeping feature and the point cloud extraction feature to obtain the point cloud edge fusion reduction feature

$$R^{D_1} = R_B^{D_1} + R_T^{D_1}$$

The output of the 1st edge feature extraction unit serves as the input of the 2nd edge feature extraction unit, and so on, yielding the c-th point cloud edge fusion reduction feature. For the 4-unit example, the same steps give: when c = 2 the output is $R^{D_2}$; when c = 3, $R^{D_3}$; and when c = 4, $R^{D_4}$.
Due to the adoption of the above technical scheme, the invention can obtain the following technical effects:
(1) Suitable for obtaining point cloud features through edge features. The three-dimensional image takes the original point cloud features as input, and edge feature information is fused while the multilayer perceptron structure extracts the point cloud features, so the obtained point cloud features carry a dual expression of the original point cloud features and the edge features. This enhances the expression capability of the point cloud features and suits situations where point cloud features are obtained through edge features.
(2) Suitable for point cloud segmentation tasks. Through the edge feature extraction composite structure, the three-dimensional image yields point cloud edge fusion retention features with stronger expression capability; the point cloud features and point cloud keeping features of the target in the three-dimensional image can be fused after cascading and used as the target feature input, producing an accurate point cloud segmentation result.
(3) Suitable for target detection tasks. The method effectively improves the performance of the point cloud segmentation model on three-dimensional images and, for scenes with relatively simple targets, actions and attributes, can be well applied to target detection tasks.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments described in the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an edge feature extraction unit;
FIG. 2 is a schematic diagram of the expansion extraction feature module composed of edge feature extraction units;
FIG. 3 is a schematic diagram of the retention extraction feature module composed of edge feature extraction units;
FIG. 4 is a schematic diagram of the reduction extraction feature module composed of edge feature extraction units;
FIG. 5 is a schematic diagram of the overall framework of the present system (edge feature extraction composite);
FIG. 6 is a schematic view showing the environment in which the automatic driving recognition in example 1 is carried out;
FIG. 7 is a schematic diagram of a railway scene detection scenario in example 2;
fig. 8 is a schematic diagram of the case where the robot grips an object in example 3.
Detailed Description
The embodiments of the present invention are implemented on the premise of the technical solution of the present invention, and detailed embodiments and specific operation procedures are given, but the scope of the present invention is not limited to the following embodiments.
The embodiment provides a 3D point cloud segmentation target detection system based on edge feature fusion, which processes the original point cloud features of a three-dimensional image through an edge feature extraction composite structure to obtain point cloud edge fusion reduction features that contain edge feature information with strong expression capability. The composite structure comprises three modules, namely the expansion extraction feature module, the retention extraction feature module and the reduction extraction feature module; its basic building block is the edge feature extraction unit, which comprises a point cloud feature sampling layer, a point cloud feature keeping branch and a point cloud feature extraction branch. The point cloud feature sampling layer changes the feature size of the input point cloud; the point cloud feature keeping branch further processes the size-changed point cloud features with a 1-dimensional convolution, guaranteeing the consistency of the point cloud features before and after the convolution processing; and the point cloud feature extraction branch extracts edge features through the extraction-type multilayer perceptron and then fuses them with the consistent point cloud features, so that the point cloud edge fusion feature content is more diverse and the point cloud target feature information is fully and accurately represented.
The edge feature extraction unit acquires the hidden-layer features produced during the processing of the extraction-type multilayer perceptron and then fuses them with the point cloud features, enriching the diversity of the feature content. Multiplexing the edge feature extraction unit yields the expansion extraction feature module, the retention extraction feature module and the reduction extraction feature module, which acquire features that jointly express the point cloud features and the edge features and enhance the expression capability of the point cloud features. The point cloud keeping feature obtained by the point cloud feature keeping branch and the point cloud extraction feature obtained by the point cloud feature extraction branch are fused and serve as the input of the next unit. In the expansion extraction feature module, the sampling layer of each edge feature extraction unit is a point cloud feature expansion sampling layer; in the retention extraction feature module it is a point cloud feature keeping sampling layer; and in the reduction extraction feature module it is a point cloud feature reduction sampling layer. In each unit, the input point cloud features undergo feature size conversion.
The edge feature extraction composite structure comprising these three modules serves as the main multi-layer point cloud feature extraction structure; its output is then fed into a maximum pooling layer to extract the most dominant features of the whole structure.
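Continuing the hypothetical sketch above, the retention extraction feature module (three units at a constant width of 1024), the reduction extraction feature module (four units stepping back down to 64), and the final maximum pooling over the n points might look as follows; the pooling axis and batch layout are our assumptions:

    retention = nn.Sequential(*[EdgeFeatureUnit(1024, 1024) for _ in range(3)])  # b = 1..3
    reduction = nn.Sequential(
        EdgeFeatureUnit(1024, 512),  # c = 1: [n x 512]
        EdgeFeatureUnit(512, 256),   # c = 2: [n x 256]
        EdgeFeatureUnit(256, 128),   # c = 3: [n x 128]
        EdgeFeatureUnit(128, 64),    # c = 4: [n x 64]
    )
    fused = reduction(retention(expansion(points)))  # (1, 64, 2048)
    dominant = torch.max(fused, dim=2).values        # max pooling over the n points: (1, 64)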
The edge feature extraction unit takes the original point cloud coordinates of the three-dimensional image as input; the original point cloud features $R_{org}$ are first preprocessed by a clustering method such as KNN (K-nearest neighbors). The point cloud feature sampling layer then changes or keeps the feature size of the point cloud and outputs the conversion-type point cloud feature $R_F$, which serves as the input of both the point cloud feature keeping branch and the point cloud feature extraction branch. The keeping branch outputs the point cloud keeping feature $R_B$; the extraction branch processes $R_F$ with the extraction-type multilayer perceptron and outputs the point cloud extraction feature $R_T$; $R_B$ and $R_T$ are then fused, and the point cloud edge fusion feature $R$ is finally output. Specifically:
(1) Input the original point cloud features $R_{org}$ of the three-dimensional image, i.e., the feature vectors fed to the edge feature extraction unit.
(2) Output the point cloud keeping feature $R_B$ after processing, specifically: the original point cloud features input from the three-dimensional image are size-changed by the point cloud feature sampling layer to obtain the conversion-type point cloud feature $R_F$, and $R_F$ is processed by a 1-dimensional convolution with one kernel of size 1.
(3) Output the point cloud extraction feature $R_T$ after processing, specifically: the conversion-type point cloud feature $R_F$ is passed through the extraction-type multilayer perceptron.
(4) Point cloud feature fusion, specifically: add the point cloud keeping feature $R_B$ and the point cloud extraction feature $R_T$ to obtain the point cloud edge fusion feature $R$.
The point cloud feature keeping branch further processes the original point cloud features; with the feature size changed but the target attributes represented by the point cloud features unchanged, it improves the expression capability of the edge features while providing the original point cloud features to the point cloud feature extraction branch. In the point cloud feature extraction branch, adding the point cloud extraction features and the point cloud keeping features enriches the feature content and yields point cloud features with strong expression capability.
The edge feature extraction composite structure comprises the three modules used in series, namely the expansion extraction feature module, the retention extraction feature module and the reduction extraction feature module; the input of the composite is the original point cloud feature $R_{org}$ and the output is the point cloud edge fusion feature $R^{D_c}$. Specifically:
(1) Input the original point cloud features $R_{org}$ of the three-dimensional image, i.e., the feature vectors fed to the edge feature extraction composite structure.
(2) The expansion extraction feature module contains a edge feature extraction units whose point cloud feature sampling layers are point cloud feature expansion sampling layers; they enlarge the size of the input point cloud features, with output $R_F^{E_a}$. The retention extraction feature module contains b edge feature extraction units whose sampling layers are point cloud feature keeping sampling layers; they keep the input feature size unchanged, with output $R_F^{K_b}$. The reduction extraction feature module contains c edge feature extraction units whose sampling layers are point cloud feature reduction sampling layers; they restore the size of the input features, with output $R_F^{D_c}$.
(3) Point cloud keeping features $R_B$: in each point cloud feature keeping branch, the conversion-type point cloud feature $R_F$ is processed by a 1-dimensional convolution with one kernel of size 1. Specifically, the point cloud keeping feature is $R_B^{E_a}$ in the expansion extraction feature module, $R_B^{K_b}$ in the retention extraction feature module, and $R_B^{D_c}$ in the reduction extraction feature module.
(4) Point cloud extraction features $R_T$: the conversion-type point cloud features are extracted by the extraction-type multilayer perceptron and await fusion in the point cloud feature extraction branch; the point cloud extraction feature is $R_T^{E_a}$ in the expansion extraction feature module, $R_T^{K_b}$ in the retention extraction feature module, and $R_T^{D_c}$ in the reduction extraction feature module.
(5) Point cloud feature fusion: in the point cloud feature extraction branch, the point cloud keeping feature $R_B$ and the point cloud extraction feature $R_T$ are added, giving the fusion features $R^{E_a} = R_B^{E_a} + R_T^{E_a}$ in the expansion extraction feature module, $R^{K_b} = R_B^{K_b} + R_T^{K_b}$ in the retention extraction feature module, and $R^{D_c} = R_B^{D_c} + R_T^{D_c}$ in the reduction extraction feature module.
In the edge feature extraction composite structure, the input point cloud features are transformed by size enlargement, keeping, or reduction to obtain the conversion-type point cloud features. The point cloud feature keeping branch further processes the conversion-type point cloud features without changing their size. In the point cloud feature extraction branch, the point cloud extraction features and the point cloud keeping features are added, so the generated point cloud edge fusion feature content is more diverse and the point cloud target feature information in the three-dimensional image is fully and accurately represented.
In the expansion extraction feature module:
(1) When a = 1, the output point cloud edge fusion expansion feature size is [n × 64].
(2) When a = 2, the output point cloud edge fusion expansion feature size is [n × 128].
(3) When a = 3, the output point cloud edge fusion expansion feature size is [n × 256].
(4) When a = 4, the output point cloud edge fusion expansion feature size is [n × 512].
(5) When a = 5, the output point cloud edge fusion expansion feature size is [n × 1024].
In the retention extraction feature module:
(1) When b = 1, the output point cloud edge fusion retention feature size is [n × 1024].
(2) When b = 2, the output point cloud edge fusion retention feature size is [n × 1024].
(3) When b = 3, the output point cloud edge fusion retention feature size is [n × 1024].
In the reduction extraction feature module:
(1) When c = 1, the output point cloud edge fusion reduction feature size is [n × 512].
(2) When c = 2, the output point cloud edge fusion reduction feature size is [n × 256].
(3) When c = 3, the output point cloud edge fusion reduction feature size is [n × 128].
(4) When c = 4, the output point cloud edge fusion reduction feature size is [n × 64].
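Continuing the same hypothetical sketch, these per-stage sizes can be checked directly (PyTorch uses channels-first shapes, so [n × 64] appears as (1, 64, n)):

    x = torch.randn(1, 3, 2048)
    for module in (expansion, retention, reduction):
        for unit in module:
            x = unit(x)
            print(tuple(x.shape))  # (1, 64, 2048), ..., (1, 1024, 2048), ..., (1, 64, 2048)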
The foregoing descriptions of specific exemplary embodiments of the present invention have been presented for purposes of illustration and description. It is not intended to limit the invention to the precise form disclosed, and obviously many modifications and variations are possible in light of the above teaching. The exemplary embodiments were chosen and described in order to explain certain principles of the invention and its practical application to enable one skilled in the art to make and use various exemplary embodiments of the invention and various alternatives and modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the claims and their equivalents.

Claims (5)

1. A 3D point cloud segmentation target detection system based on edge feature fusion, characterized by comprising an expansion extraction feature module, a retention extraction feature module and a reduction extraction feature module which are used in series, wherein each module comprises a plurality of edge feature extraction units; each edge feature extraction unit comprises a point cloud feature sampling layer, a point cloud feature keeping branch and a point cloud feature extraction branch; the point cloud feature sampling layer changes the size of the point cloud features obtained from a three-dimensional image; the point cloud feature keeping branch further processes the size-changed point cloud features by a 1-dimensional convolution to ensure the consistency of the point cloud features before and after the convolution processing; and the point cloud feature extraction branch obtains edge features through an extraction-type multilayer perceptron and then fuses them with the consistent point cloud features.
2. The 3D point cloud segmentation target detection system based on edge feature fusion as claimed in claim 1, wherein the edge feature extraction unit obtains the point cloud edge fusion feature through the following steps (the equation images of the original publication are reconstructed in the same notation as the description):

step 1: reading the feature vector of the original point cloud in the three-dimensional image, $R_{org}$, as input, wherein the size is n × 3, n represents the number of points, and each point is represented by its 3-dimensional coordinates (x, y, z), the specific form being $R_{org} = \{p_1, p_2, \ldots, p_n\}$ with $p_i = (x_i, y_i, z_i)$;

step 2: processing the original point cloud feature $R_{org}$ with the point cloud feature sampling layer $f_S$ to obtain the size-changed conversion-type point cloud feature $R_F = f_S(R_{org})$;

step 3: in the point cloud feature keeping branch, taking the conversion-type point cloud feature $R_F$ as the feature vector input and applying a 1-dimensional convolution with one kernel of size 1 to output the point cloud keeping feature $R_B = \varphi_{1\times 1,\,s=1}(R_F)$, wherein $R_F$ is the feature vector input of the keeping branch, $v_{\sigma i}$ is the vector representation in the keeping branch, $\varphi_{1\times 1,\,s=1}$ is the one-dimensional convolution operation, i represents the point cloud feature dimension, the convolution kernel size is 1 × 1, and the stride of the convolution is s = 1; meanwhile, in the point cloud feature extraction branch, taking the same conversion-type point cloud feature $R_F$ as the feature vector input and extracting features through the extraction-type multilayer perceptron to obtain the point cloud extraction feature $R_T = \mathrm{MLP}(R_F)$, wherein each layer of the extraction-type multilayer perceptron applies a one-dimensional convolution $\phi_{1\times 1,\,s=1}$ (kernel size 1 × 1, stride s = 1) and adds the per-layer offset λ;

step 4: fusing the point cloud keeping feature and the point cloud extraction feature to obtain the point cloud edge fusion feature $R = R_B + R_T$;

performing the above operations on the original point cloud feature $R_{org}$ of the three-dimensional image, the finally output point cloud edge fusion feature is $R = \varphi_{1\times 1,\,s=1}(f_S(R_{org})) + \mathrm{MLP}(f_S(R_{org}))$.
3. The 3D point cloud segmentation target detection system based on edge feature fusion as claimed in claim 1, wherein the expansion extraction feature module comprises a edge feature extraction units, the point cloud feature sampling layer in each edge feature extraction unit is a point cloud feature expansion sampling layer $f_{E_a}$, and the point cloud edge fusion expansion feature is obtained through the following steps:

step 1: reading the feature vector of the original point cloud in the three-dimensional image, $R_{org}$, as input, wherein the size is n × 3, n represents the number of points, and each point is represented by its 3-dimensional coordinates (x, y, z), the specific form being $R_{org} = \{p_1, p_2, \ldots, p_n\}$ with $p_i = (x_i, y_i, z_i)$;

step 2: processing the original point cloud feature $R_{org}$ with the point cloud feature expansion sampling layer to obtain the size-changed conversion-type point cloud feature $R_F^{E_1} = f_{E_1}(R_{org})$;

step 3: in the point cloud feature keeping branch, taking the conversion-type point cloud feature $R_F^{E_1}$ as the feature vector input and applying a 1-dimensional convolution with one kernel of size 1 to output the point cloud keeping feature $R_B^{E_1} = \varphi_{1\times 1,\,s=1}(R_F^{E_1})$, wherein $R_F^{E_1}$ is the feature vector input of the keeping branch; meanwhile, in the point cloud feature extraction branch, taking the same conversion-type point cloud feature $R_F^{E_1}$ as input and extracting features through the extraction-type multilayer perceptron to obtain the point cloud extraction feature $R_T^{E_1} = \mathrm{MLP}(R_F^{E_1})$;

step 4: fusing the point cloud keeping feature and the point cloud extraction feature to obtain the point cloud edge fusion expansion feature $R^{E_1} = R_B^{E_1} + R_T^{E_1}$;

and the output of the 1st edge feature extraction unit serves as the input of the 2nd edge feature extraction unit, and so on, yielding the a-th point cloud edge fusion expansion feature $R^{E_a}$.
4. The 3D point cloud segmentation target detection system based on edge feature fusion as claimed in claim 1, wherein the retention extraction feature module comprises b edge feature extraction units, the point cloud feature sampling layer in each edge feature extraction unit is a point cloud feature keeping sampling layer $f_{K_b}$, and the point cloud edge fusion retention feature is obtained through the following steps:

step 1: taking the point cloud edge fusion expansion feature $R^{E_a}$ output by the expansion extraction feature module as input, reading the feature vector, and using the point cloud feature keeping sampling layer to keep the feature size, obtaining the conversion-type point cloud feature $R_F^{K_1} = f_{K_1}(R^{E_a})$;

step 2: in the point cloud feature keeping branch, taking the conversion-type point cloud feature $R_F^{K_1}$ as the feature vector input and applying a 1-dimensional convolution with one kernel of size 1 to output the point cloud keeping feature $R_B^{K_1} = \varphi_{1\times 1,\,s=1}(R_F^{K_1})$, wherein $R_F^{K_1}$ is the feature vector input of the keeping branch; meanwhile, in the point cloud feature extraction branch, taking the same conversion-type point cloud feature $R_F^{K_1}$ as input and extracting features through the extraction-type multilayer perceptron to obtain the point cloud extraction feature $R_T^{K_1} = \mathrm{MLP}(R_F^{K_1})$;

step 3: fusing the point cloud keeping feature and the point cloud extraction feature to obtain the point cloud edge fusion retention feature $R^{K_1} = R_B^{K_1} + R_T^{K_1}$;

and the output of the 1st edge feature extraction unit serves as the input of the 2nd edge feature extraction unit, and so on, yielding the b-th point cloud edge fusion retention feature $R^{K_b}$.
5. The 3D point cloud segmentation target detection system based on edge feature fusion as claimed in claim 1, wherein the reduction extraction feature module comprises c edge feature extraction units, the point cloud feature sampling layer in each edge feature extraction unit is a point cloud feature reduction sampling layer $f_{D_c}$, and the point cloud edge fusion reduction feature is obtained through the following steps:

step 1: taking the point cloud edge fusion retention feature $R^{K_b}$ output by the retention extraction feature module as input, reading the feature vector, and using the point cloud feature reduction sampling layer to restore the feature size, obtaining the conversion-type point cloud feature $R_F^{D_1} = f_{D_1}(R^{K_b})$;

step 2: in the point cloud feature keeping branch, taking the conversion-type point cloud feature $R_F^{D_1}$ as the feature vector input and applying a 1-dimensional convolution with one kernel of size 1 to output the point cloud keeping feature $R_B^{D_1} = \varphi_{1\times 1,\,s=1}(R_F^{D_1})$, wherein $R_F^{D_1}$ is the feature vector input of the keeping branch; meanwhile, in the point cloud feature extraction branch, taking the same conversion-type point cloud feature $R_F^{D_1}$ as input and extracting features through the extraction-type multilayer perceptron to obtain the point cloud extraction feature $R_T^{D_1} = \mathrm{MLP}(R_F^{D_1})$;

step 3: fusing the point cloud keeping feature and the point cloud extraction feature to obtain the point cloud edge fusion reduction feature $R^{D_1} = R_B^{D_1} + R_T^{D_1}$;

and the output of the 1st edge feature extraction unit serves as the input of the 2nd edge feature extraction unit, and so on, yielding the c-th point cloud edge fusion reduction feature $R^{D_c}$.
CN202110786257.4A 2021-07-12 2021-07-12 3D point cloud segmentation target detection system based on edge feature fusion Active CN113538474B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110786257.4A CN113538474B (en) 2021-07-12 2021-07-12 3D point cloud segmentation target detection system based on edge feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110786257.4A CN113538474B (en) 2021-07-12 2021-07-12 3D point cloud segmentation target detection system based on edge feature fusion

Publications (2)

Publication Number Publication Date
CN113538474A (en) 2021-10-22
CN113538474B (en) 2023-08-22

Family

ID=78098712

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110786257.4A Active CN113538474B (en) 2021-07-12 2021-07-12 3D point cloud segmentation target detection system based on edge feature fusion

Country Status (1)

Country Link
CN (1) CN113538474B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114299243A (en) * 2021-12-14 2022-04-08 中科视语(北京)科技有限公司 Point cloud feature enhancement method and device based on multi-scale fusion
CN114998890A (en) * 2022-05-27 2022-09-02 长春大学 Three-dimensional point cloud target detection algorithm based on graph neural network

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111489358A (en) * 2020-03-18 2020-08-04 华中科技大学 Three-dimensional point cloud semantic segmentation method based on deep learning
CN112052860A (en) * 2020-09-11 2020-12-08 中国人民解放军国防科技大学 Three-dimensional target detection method and system
CN112270249A (en) * 2020-10-26 2021-01-26 湖南大学 Target pose estimation method fusing RGB-D visual features
US20210042929A1 (en) * 2019-01-22 2021-02-11 Institute Of Automation, Chinese Academy Of Sciences Three-dimensional object detection method and system based on weighted channel features of a point cloud
CN112785611A (en) * 2021-01-29 2021-05-11 昆明理工大学 3D point cloud weak supervision semantic segmentation method and system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210042929A1 (en) * 2019-01-22 2021-02-11 Institute Of Automation, Chinese Academy Of Sciences Three-dimensional object detection method and system based on weighted channel features of a point cloud
CN111489358A (en) * 2020-03-18 2020-08-04 华中科技大学 Three-dimensional point cloud semantic segmentation method based on deep learning
CN112052860A (en) * 2020-09-11 2020-12-08 中国人民解放军国防科技大学 Three-dimensional target detection method and system
CN112270249A (en) * 2020-10-26 2021-01-26 湖南大学 Target pose estimation method fusing RGB-D visual features
CN112785611A (en) * 2021-01-29 2021-05-11 昆明理工大学 3D point cloud weak supervision semantic segmentation method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
FAN Li; LIU Jinhao; HUANG Qingqing: "Point cloud segmentation of understory environments based on feature fusion", Journal of Beijing Forestry University, no. 05

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114299243A (en) * 2021-12-14 2022-04-08 中科视语(北京)科技有限公司 Point cloud feature enhancement method and device based on multi-scale fusion
CN114998890A (en) * 2022-05-27 2022-09-02 长春大学 Three-dimensional point cloud target detection algorithm based on graph neural network
CN114998890B (en) * 2022-05-27 2023-03-10 长春大学 Three-dimensional point cloud target detection algorithm based on graph neural network

Also Published As

Publication number Publication date
CN113538474B (en) 2023-08-22

Similar Documents

Publication Publication Date Title
CN111179324B (en) Object six-degree-of-freedom pose estimation method based on color and depth information fusion
CN107563381B (en) Multi-feature fusion target detection method based on full convolution network
CN108647694B (en) Context-aware and adaptive response-based related filtering target tracking method
CN114255238A (en) Three-dimensional point cloud scene segmentation method and system fusing image features
CN109447979B (en) Target detection method based on deep learning and image processing algorithm
CN113538474A (en) 3D point cloud segmentation target detection system based on edge feature fusion
Shen et al. Vehicle detection in aerial images based on lightweight deep convolutional network and generative adversarial network
WO2023151237A1 (en) Face pose estimation method and apparatus, electronic device, and storage medium
CN114170410A (en) Point cloud part level segmentation method based on PointNet graph convolution and KNN search
CN107609509A (en) A kind of action identification method based on motion salient region detection
CN116129289A (en) Attention edge interaction optical remote sensing image saliency target detection method
CN114677357A (en) Model, method and equipment for detecting self-explosion defect of aerial photographing insulator and storage medium
CN111368733A (en) Three-dimensional hand posture estimation method based on label distribution learning, storage medium and terminal
CN112613478B (en) Data active selection method for robot grabbing
CN117351078A (en) Target size and 6D gesture estimation method based on shape priori
CN116912673A (en) Target detection method based on underwater optical image
CN116486089A (en) Point cloud segmentation network light-weight method, device and equipment based on knowledge distillation
CN114937153A (en) Neural network-based visual feature processing system and method under weak texture environment
CN115496859A (en) Three-dimensional scene motion trend estimation method based on scattered point cloud cross attention learning
CN114638866A (en) Point cloud registration method and system based on local feature learning
CN111797782A (en) Vehicle detection method and system based on image features
CN111401203A (en) Target identification method based on multi-dimensional image fusion
CN116486203B (en) Single-target tracking method based on twin network and online template updating
Abdulameer et al. Bird Image Dataset Classification using Deep Convolutional Neural Network Algorithm
Wang et al. Steel Coil Recognition Neural Networks for Intelligent Cranes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant