CN110210431B - Point cloud semantic labeling and optimization-based point cloud classification method - Google Patents

Point cloud semantic labeling and optimization-based point cloud classification method

Info

Publication number
CN110210431B
CN110210431B, CN201910492227.5A, CN201910492227A
Authority
CN
China
Prior art keywords
point cloud
graph
classification
classification result
optimization
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910492227.5A
Other languages
Chinese (zh)
Other versions
CN110210431A (en)
Inventor
黄荣
叶真
徐聿升
潘玥
顾振雄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Saihei Intelligent Technology Co ltd
Original Assignee
Shanghai Saihei Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Saihei Intelligent Technology Co ltd filed Critical Shanghai Saihei Intelligent Technology Co ltd
Priority to CN201910492227.5A
Publication of CN110210431A
Application granted
Publication of CN110210431B
Active legal status
Anticipated expiration legal status

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/35 Categorising the entire scene, e.g. birthday party or wedding scene
    • G06V20/38 Outdoor scenes
    • G06V20/39 Urban scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a point cloud classification method based on point cloud semantic labeling and optimization, which comprises the following steps: step 1: pre-classifying original point cloud data using PointNet++ to obtain a point cloud pre-classification result; step 2: optimizing the classification result using global spatial regularization on the pre-classification result to obtain a final point cloud classification result. Compared with the prior art, the invention provides a general framework for acquiring point cloud semantic labels and improving the classification result. Within the proposed general framework, the existing steps can be replaced with algorithms of the same kind. The initial labeling result of the three-dimensional point cloud is optimized with a method based on graph-structure regularization, which achieves spatial smoothness of the semantic labeling, and only a small amount of training data is required to reach the same point cloud classification quality.

Description

Point cloud semantic labeling and optimization-based point cloud classification method
Technical Field
The invention relates to the technical field of computers, in particular to a point cloud classification method based on point cloud semantic labeling and optimization.
Background
LiDAR techniques can easily acquire three-dimensional spatial information in urban scenes, represented as a three-dimensional point cloud. However, an unstructured set of points does not directly and unambiguously describe the semantic information of objects in the real world. In other words, there is a semantic gap between the practical applications of point clouds and the raw representation of the three-dimensional data. Therefore, assigning accurate semantic information to point clouds has become the basis of many three-dimensional applications. However, due to the complexity of the urban environment, the quality of the acquired point cloud may be affected in many ways, such as noise and outliers caused by scanning errors, uneven point density caused by the varying distances measured by the scanner, and occlusions and interference caused by limited viewing positions and dynamic objects. These factors make semantic scene analysis of point clouds in urban scenes a challenging task.
Typically, the goal of semantic scene analysis is to assign a semantic label to each point in the point cloud. Traditionally, semantic labeling comprises extracting various hand-designed features for each point in the point cloud and concatenating them into a feature vector, which is then used together with training samples to train a classifier. Classifiers used include AdaBoost, Support Vector Machines (SVMs), random forests, and the like. These supervised statistical methods are the most common methods applied to this task. However, although such supervised point-by-point classification approaches can produce good results with this simple procedure, because the hand-designed features are already very capable, the classification results may be inhomogeneous, especially in areas with low point density. Variations in point cloud density can lead to inadequate neighborhood selection and, in turn, to misaligned object class boundaries.
To enhance the regional smoothness of the semantic labeling result, context-based classification methods (e.g., Markov random fields or conditional random fields) have been proposed. In such methods, each point is classified taking into account not only the extracted features but also the labels of its surrounding points. With the increase in spatial smoothness, the classification results are undoubtedly improved, but at a high computational cost. With the availability of high-performance computing resources and large-scale datasets, deep learning techniques have developed rapidly in recent years and have become attractive tools in many areas (e.g., image classification, segmentation, and object detection and tracking). In three-dimensional point cloud classification, deep learning techniques derived from PointNet allow a three-dimensional point set to be used directly as the network input, combining the above-mentioned steps of feature extraction and supervised classification into an end-to-end classification strategy and greatly simplifying the semantic labeling pipeline. Meanwhile, PointNet learns both local and global features, improving the ability to take the local context of each point into account. However, for deep learning techniques such as PointNet, the classification result depends to some extent on the sampling and segmentation method used in pre-processing and on the interpolation used in post-processing, because the number of input samples fed into the network must be fixed. These steps introduce classification errors and artifacts at the boundaries of each split point set.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a point cloud classification method based on point cloud semantic labeling and optimization.
The purpose of the invention can be realized by the following technical scheme:
a point cloud classification method based on point cloud semantic labeling and optimization is characterized by comprising the following steps:
step 1: pre-classifying original point cloud data by using PointNet++ to obtain a point cloud pre-classification result;
step 2: optimizing the classification result by using global spatial regularization on the pre-classification result to obtain a final point cloud classification result.
Preferably, the step 1 specifically comprises: pre-classifying the original point cloud data using the auto-encoder part of PointNet++ to obtain a point cloud pre-classification result.
Preferably, the input of the auto-encoder part of PointNet++ is the ShapeNet dataset corresponding to the pre-trained model of the urban scene.
Preferably, the output of the auto-encoder part of PointNet++ is the classification probability of each category in the ShapeNet dataset corresponding to the pre-trained model and the urban scene.
Preferably, the step 2 comprises the following substeps:
step 21: subdividing the point cloud pre-classification result into a plurality of sub-point sets, and performing a graph-based regularization operation on each sub-point set to obtain a plurality of sub-point sets subjected to the graph-based regularization operation;
step 22: constructing a weighted graph for each sub-point set subjected to the graph-based regularization operation and performing a graph segmentation operation to obtain a plurality of graph models subjected to the graph segmentation operation;
step 23: solving the energy function of each graph model subjected to the graph segmentation operation to obtain the final point cloud classification result.
Preferably, the step 22 specifically comprises: constructing a weighted graph for each sub-point set subjected to the graph-based regularization operation, and performing a graph segmentation operation through the GraphCuts algorithm to obtain a plurality of graph models subjected to the graph segmentation operation.
Preferably, the step 23 specifically comprises: solving the energy function of each graph model subjected to the graph segmentation operation through an alpha expansion algorithm to obtain the final point cloud classification result.
Preferably, the solving process of the alpha expansion algorithm specifically comprises: separating all alpha-labeled and non-alpha-labeled nodes by a cut, changing the label alpha at each iteration, and inserting an intermediate node whenever two adjacent nodes do not share the same label during an iteration; the iterations loop over every possible value of alpha until convergence.
Compared with the prior art, the invention has the following advantages:
(1) The method provides a general framework for acquiring point cloud semantic labels and improving classification results. Within the proposed general framework, existing steps can be replaced with algorithms of the same kind.
(2) The method does not use hand-designed features as the input for classification and refinement; instead, it embeds the local context of each point into a high-dimensional feature space through an auto-encoder (PointNet++) and simultaneously obtains soft labels as the initial result for the subsequent refinement, thereby improving the classification accuracy.
(3) The method applies an optimization based on graph-structure regularization to the initial labeling result of the three-dimensional point cloud and achieves spatial smoothness of the semantic labeling. For the same point cloud classification quality, only a small amount of training data is required.
Drawings
FIG. 1 is a diagram illustrating the ground-truth classification of the raw data according to an embodiment of the present invention;
FIG. 2 is a diagram of the pre-classification result obtained using PointNet++ according to an embodiment of the present invention;
FIG. 3 is a diagram of classification results obtained after optimization using a graph model according to an embodiment of the present invention;
FIG. 4 is a schematic process diagram of the method of the present invention;
FIG. 5 is a schematic diagram of a graph model optimization process in the method of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, shall fall within the scope of protection of the present invention.
Examples
Fig. 4 is a schematic flow chart of the method of the present invention, which can be specifically summarized as follows:
step 1: pre-classifying original point cloud data by using PointNet++ to obtain a point cloud pre-classification result;
step 2: optimizing the classification result by using global spatial regularization on the pre-classification result to obtain a final point cloud classification result.
In the first step, an initial classification result with soft labels is obtained by feeding the subdivided point cloud into PointNet++. In the second step, the initial labels are optimized by building a weighted graph model for global regularization, taking into account the spatial correlation of points in the neighborhood and the initial labels.
(1) Pre-classification using PointNet++
In the semantic labeling task for point clouds, the goal is to obtain a unique label for each point in the point cloud. For this purpose, the auto-encoder part of PointNet++ is applied. For each input point and its neighborhood set containing 8096 points, the auto-encoder outputs a 1088-dimensional geometric feature for the point, consisting of a 64-dimensional local geometric feature and a 1024-dimensional global geometric feature. The points and their features are then classified by a fully connected network layer. Here, the classification result is a probability value for each point over the different categories, i.e., soft labels.
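As a minimal illustration of this classification step (not the patented network itself), the following sketch assumes per-point features of the stated sizes (64-dimensional local, 1024-dimensional global) and feeds their concatenation through a small, hypothetical fully connected head that outputs per-class probabilities; all layer widths, names and the number of classes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 8  # assumption: e.g. the eight urban categories listed in the experiments

class SoftLabelHead(nn.Module):
    """Toy head: concatenated 64-d local + 1024-d global feature -> per-class probabilities."""
    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(64 + 1024, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, local_feat: torch.Tensor, global_feat: torch.Tensor) -> torch.Tensor:
        # local_feat: (N, 64), global_feat: (N, 1024), one row per point
        x = torch.cat([local_feat, global_feat], dim=1)  # (N, 1088)
        return F.softmax(self.fc(x), dim=1)              # soft labels

# Stand-in features for a neighborhood set of 8096 points
head = SoftLabelHead()
soft_labels = head(torch.randn(8096, 64), torch.randn(8096, 1024))  # (8096, NUM_CLASSES)
```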
At the same time, because of the difference between the urban scene and the object-level inputs of the ShapeNet dataset used by the pre-trained model, the urban scene is subdivided into sub-point sets, which serve as the network input, using a voxelization strategy over the entire scene. The sub-point sets are a thinned, down-sampled version of the complete urban scene dataset; in other words, the whole scene is divided into groups that are fed into the PointNet++ network separately for classification, which improves computational efficiency and reduces running time. Details are given in the preprocessing step of the experimental part. In this step, soft labels are generated in the form of per-class classification probabilities for the subsequent regularization.
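A plain NumPy sketch of one possible voxelization-based subdivision is given below; the voxel size, array layout and function name are assumptions made for illustration and are not taken from the patent.

```python
import numpy as np

def voxel_subdivide(points: np.ndarray, voxel_size: float = 10.0) -> dict:
    """Group an (N, 3) point array into sub-point sets by a regular 3D voxel grid.

    Returns a dict mapping voxel index (i, j, k) -> indices of the points inside it,
    so each sub-point set can be fed to the network separately.
    """
    voxel_ids = np.floor((points - points.min(axis=0)) / voxel_size).astype(np.int64)
    groups: dict = {}
    for idx, key in enumerate(map(tuple, voxel_ids)):
        groups.setdefault(key, []).append(idx)
    return {key: np.asarray(ids) for key, ids in groups.items()}

# Example: a random 100k-point "scene" split into sub-point sets
points = np.random.rand(100_000, 3) * 100.0
subsets = voxel_subdivide(points, voxel_size=10.0)
print(len(subsets), "sub-point sets")
```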
(2) Classification result optimization using global spatial regularization
The refinement of the initial labeling is done by searching for the best labels with improved spatial regularity, taking the classification probabilities from the previous step as input. This step can be divided into three sub-steps: subdivision and refinement of the pre-classified point cloud, construction of weighted graphs, and global optimization using graph cuts.
a) Subdividing and refining
Due to the high point density and large data volume of complex urban scenes, graph-based regularization is practically infeasible if more than ten million labeled points are fed in at once. Many studies have proposed voxel- or supervoxel-based regularization methods to down-sample the points and reduce the number fed into the regularization step. Inspired by this down-sampling strategy, but keeping the original spatial resolution, the embodiment subdivides the pre-classified points into several sub-point sets. Because each subset is drawn by random sampling, the geometric context of each point does not change significantly and the main structure is preserved. Graph-based regularization can therefore be performed on each subset. After the optimization step, the subsets are merged to produce a classified point cloud with the same points as the original point cloud and the optimized labels.
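The sketch below shows one way such a random subdivision and the later merging of optimized labels could look in NumPy; the number of subsets and the helper names are illustrative assumptions.

```python
import numpy as np

def random_subdivide(num_points: int, num_subsets: int):
    """Randomly split point indices into roughly equal sub-point sets."""
    perm = np.random.permutation(num_points)
    return np.array_split(perm, num_subsets)

def merge_labels(num_points: int, subsets, subset_labels) -> np.ndarray:
    """Write the per-subset optimized labels back into one label array for the full cloud."""
    labels = np.empty(num_points, dtype=np.int64)
    for idx, lab in zip(subsets, subset_labels):
        labels[idx] = lab
    return labels

# Example: 1M points in 100 subsets; the "optimized" labels are placeholders here
N = 1_000_000
subsets = random_subdivide(N, 100)
subset_labels = [np.zeros(len(s), dtype=np.int64) for s in subsets]
full_labels = merge_labels(N, subsets, subset_labels)
```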
b) Construction of weighted graphs
A graph model consists of vertices and edges. Specifically, a graph model G = (V, E) is used to represent the data to be classified, where V and E are the sets of vertices and edges, respectively. If the edges have directions, the graph is called a directed graph; otherwise it is an undirected graph. Each edge carries a weight, and the weight varies according to the physical properties it encodes. In the GraphCuts algorithm the graph is slightly different from a general graph model: on top of an ordinary graph, it has two additional vertices, denoted S and T (see Fig. 5), which are collectively referred to as terminal vertices. All other vertices are connected to these two terminals, and these connections form part of the edge set.
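To make the terminal-augmented graph concrete, the sketch below builds such a graph for a toy two-label case (e.g. object vs. background) with networkx: t-links to S and T are derived from the classifier's soft labels, n-links connect k-nearest neighbors, and a single s-t minimum cut yields the labeling. The two-label restriction, the k-NN neighborhood and the weighting scheme are simplifications for illustration, not the construction claimed in the patent.

```python
import numpy as np
import networkx as nx
from scipy.spatial import cKDTree

def build_st_graph(points, prob_fg, k=5, mu=1.0, lam=0.5):
    """Toy terminal-augmented graph: S/T t-links from class probabilities, k-NN n-links."""
    G = nx.DiGraph()
    eps = 1e-9
    for i, p in enumerate(prob_fg):
        G.add_edge("S", i, capacity=mu * -np.log(max(1.0 - p, eps)))  # paid if i is labeled background
        G.add_edge(i, "T", capacity=mu * -np.log(max(p, eps)))        # paid if i is labeled foreground
    tree = cKDTree(points)
    _, nbrs = tree.query(points, k=k + 1)
    for i, row in enumerate(nbrs):
        for j in row[1:]:
            G.add_edge(i, int(j), capacity=lam)   # smoothness n-links, both directions
            G.add_edge(int(j), i, capacity=lam)
    return G

points = np.random.rand(200, 3)
prob_fg = np.random.rand(200)                      # stand-in soft labels from the classifier
G = build_st_graph(points, prob_fg)
cut_value, (s_side, t_side) = nx.minimum_cut(G, "S", "T")
labels = np.array([1 if i in s_side else 0 for i in range(len(points))])
```

After the cut, points that stay on the S side take the S label and all others take the T label, mirroring the terminal construction described above.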
c) Global optimization using graph cut, as shown in FIG. 5
The energy function constructed on the graph model can be solved with the alpha-expansion algorithm. The energy function used is as follows:
ε(L) = μ·R(L) + β(L)
where L denotes the labeling, i.e., the set of labels the points may take, R(L) is the region term of the energy, which applies the penalty required for assigning a particular label to each point and is weighted by μ, and β(L) is the boundary term, which weights the label smoothness of adjacent points: the weight is small when the labels of two adjacent points agree and large otherwise.
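A NumPy sketch of how this energy could be evaluated for a candidate labeling is given below, using negative log soft-label probabilities as the region term R(L) and a Potts-style penalty over a k-NN neighborhood as the boundary term β(L); the neighborhood definition and the Potts choice are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def labeling_energy(points, probs, labels, k=5, mu=1.0, beta_weight=0.5):
    """epsilon(L) = mu * R(L) + beta(L) for one candidate labeling L (illustrative)."""
    eps = 1e-9
    # Region term R(L): penalty of giving each point its current label,
    # taken as the negative log of the classifier's soft-label probability.
    region = -np.log(np.maximum(probs[np.arange(len(labels)), labels], eps)).sum()

    # Boundary term beta(L): Potts penalty on k-NN edges whose endpoints disagree.
    tree = cKDTree(points)
    _, nbrs = tree.query(points, k=k + 1)
    disagree = (labels[:, None] != labels[nbrs[:, 1:]]).sum()
    boundary = beta_weight * disagree

    return mu * region + boundary

# Example: 1000 points, 8 classes, labels taken as the per-point argmax of the soft labels
points = np.random.rand(1000, 3)
probs = np.random.dirichlet(np.ones(8), size=1000)
labels = probs.argmax(axis=1)
print(labeling_energy(points, probs, labels))
```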
Here, the alpha-expansion algorithm can be applied only if the boundary term is a metric. The general idea of the algorithm is to separate all alpha-labeled and non-alpha-labeled nodes with a "cut", changing the label alpha at each iteration. At each iteration, the region around the nodes labeled alpha is expanded and the graph weights are reset. If two neighboring nodes do not share the same label during an iteration, an intermediate node is inserted whose weight is linked to the distance to the node labeled alpha. The algorithm cycles through every possible value of alpha until it converges.
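For intuition only, the sketch below mimics the alpha-expansion control flow on a toy problem: it cycles over every label alpha and accepts the best expansion move (each point either keeps its current label or switches to alpha) until no move lowers the energy. Real implementations solve each move with a single graph cut; here the move is found by brute-force enumeration, which is feasible only for a handful of points.

```python
import itertools
import numpy as np

def energy(labels, unary, edges, mu=1.0, lam=0.5):
    """epsilon(L) = mu * sum_p unary[p, L_p] + lam * #{(p, q) in edges : L_p != L_q}."""
    data = unary[np.arange(len(labels)), labels].sum()
    smooth = sum(labels[p] != labels[q] for p, q in edges)
    return mu * data + lam * smooth

def alpha_expansion_bruteforce(unary, edges, num_labels, mu=1.0, lam=0.5):
    """Toy alpha-expansion: each move is solved by enumeration instead of a graph cut."""
    n = unary.shape[0]
    labels = unary.argmin(axis=1)              # start from the best unary label per point
    improved = True
    while improved:                            # cycle over alpha until no move helps
        improved = False
        for alpha in range(num_labels):
            best = labels.copy()
            best_e = energy(best, unary, edges, mu, lam)
            for switch in itertools.product([False, True], repeat=n):   # all binary moves
                cand = np.where(switch, alpha, labels)
                e = energy(cand, unary, edges, mu, lam)
                if e < best_e:
                    best, best_e = cand, e
            if not np.array_equal(best, labels):
                labels, improved = best, True
    return labels

# Tiny example: 6 points, 3 labels, a chain of neighbor edges
unary = np.random.rand(6, 3)                   # stand-in for -log soft-label probabilities
edges = [(i, i + 1) for i in range(5)]
print(alpha_expansion_bruteforce(unary, edges, num_labels=3))
```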
Fig. 1, Fig. 2, Fig. 3 and Table 1 show the results of the different stages of the method. Taken together, the three figures show that, on vehicle-mounted LiDAR point cloud data, the proposed algorithm improves the classification accuracy considerably compared with a traditional algorithm using random forests and multi-scale features. At the same time, compared with a conventional PointNet++-based deep learning point cloud classification algorithm, the classification accuracy is further improved after global optimization with the graph model.
TABLE 1 results of classification accuracy of the same point cloud data using different point cloud classification methods
Appendix:
Explanation of the English terms appearing in Fig. 1, Fig. 2, Fig. 3, Fig. 4, Fig. 5 and Table 1:
CLASS represents the object category;
HF + Random Forest represents the name of classification method 1;
PointNet2 represents the name of classification method 2;
PointNet2+ Global Regularization represents the classification method of the present invention;
man-made terrain, natural terrain, high vegetation, low vegetation, buildings, hard scape, scanning artifacts and cars represent the classification categories;
cutting represents the cut of the graph;
OA represents the overall accuracy;
AA represents the average precision;
kappa represents the kappa consistency coefficient;
set abstruction, PointNet, Sampling & Grouping, Skip link localization, Unit PointNet and interworking are all proprietary words in the PointNet + + network structure;
per-point scores represents pre-classification results;
subdivision & thinning represents thinning & thinning;
graph constraint represents Graph model construction;
graph Optimization represents Graph model Optimization;
grouping indicates combinations.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (7)

1. A point cloud classification method based on point cloud semantic labeling and optimization is characterized by comprising the following steps:
step 1: pre-classifying original point cloud data by using PointNet++ to obtain a point cloud pre-classification result;
step 2: performing classification result optimization on the pre-classification result by using global spatial regularization to obtain a final point cloud classification result;
the step 2 comprises the following sub-steps:
step 21: subdividing the point cloud pre-classification result into a plurality of sub-point sets, and performing a graph-based regularization operation on each sub-point set to obtain a plurality of sub-point sets subjected to the graph-based regularization operation;
step 22: constructing a weighted graph for each sub-point set subjected to the graph-based regularization operation and performing a graph segmentation operation to obtain a plurality of graph models subjected to the graph segmentation operation;
step 23: solving the energy function of each graph model subjected to the graph segmentation operation to obtain the final point cloud classification result.
2. The point cloud classification method based on point cloud semantic labeling and optimization according to claim 1, wherein the step 1 specifically comprises: pre-classifying the original point cloud data using the auto-encoder part of PointNet++ to obtain a point cloud pre-classification result.
3. The point cloud classification method based on point cloud semantic labeling and optimization according to claim 2, wherein the input of the auto-encoder part of PointNet++ is the ShapeNet dataset corresponding to the pre-trained model of the urban scene.
4. The point cloud classification method based on point cloud semantic labeling and optimization according to claim 2, wherein the output of the auto-encoder part of PointNet++ is the classification probability of each category in the ShapeNet dataset corresponding to the pre-trained model and the urban scene.
5. The point cloud classification method based on point cloud semantic labeling and optimization according to claim 1, wherein the step 22 specifically comprises: constructing a weighted graph for each sub-point set subjected to the graph-based regularization operation, and performing a graph segmentation operation through the GraphCuts algorithm to obtain a plurality of graph models subjected to the graph segmentation operation.
6. The point cloud classification method based on point cloud semantic labeling and optimization according to claim 1, wherein the step 23 specifically comprises: solving the energy function of each graph model subjected to the graph segmentation operation through an alpha expansion algorithm to obtain the final point cloud classification result.
7. The point cloud classification method based on point cloud semantic labeling and optimization according to claim 6, wherein the solving process of the alpha expansion algorithm specifically comprises: separating all alpha-labeled and non-alpha-labeled nodes by a cut, changing the label alpha at each iteration, and inserting an intermediate node whenever two adjacent nodes do not share the same label during an iteration; the iterations loop over every possible value of alpha until convergence.
CN201910492227.5A 2019-06-06 2019-06-06 Point cloud semantic labeling and optimization-based point cloud classification method Active CN110210431B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910492227.5A CN110210431B (en) 2019-06-06 2019-06-06 Point cloud semantic labeling and optimization-based point cloud classification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910492227.5A CN110210431B (en) 2019-06-06 2019-06-06 Point cloud semantic labeling and optimization-based point cloud classification method

Publications (2)

Publication Number Publication Date
CN110210431A CN110210431A (en) 2019-09-06
CN110210431B true CN110210431B (en) 2021-05-11

Family

ID=67791429

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910492227.5A Active CN110210431B (en) 2019-06-06 2019-06-06 Point cloud semantic labeling and optimization-based point cloud classification method

Country Status (1)

Country Link
CN (1) CN110210431B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110706238B (en) * 2019-09-12 2022-06-17 南京人工智能高等研究院有限公司 Method and device for segmenting point cloud data, storage medium and electronic equipment
CN110807774B (en) * 2019-09-30 2022-07-12 九天创新(广东)智能科技有限公司 Point cloud classification and semantic segmentation method
CN112085123B (en) * 2020-09-25 2022-04-12 北方民族大学 Point cloud data classification and segmentation method based on salient point sampling
CN112966775B (en) * 2021-03-24 2023-12-01 广州大学 Three-dimensional point cloud classification method, system and device for building components and storage medium
CN113901991A (en) * 2021-09-15 2022-01-07 天津大学 3D point cloud data semi-automatic labeling method and device based on pseudo label
CN115222988B (en) * 2022-07-17 2024-06-18 桂林理工大学 Fine classification method for urban ground object PointEFF based on laser radar point cloud data
CN116091777A (en) * 2023-02-27 2023-05-09 阿里巴巴达摩院(杭州)科技有限公司 Point Yun Quanjing segmentation and model training method thereof and electronic equipment

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101719286B (en) * 2009-12-09 2012-05-23 北京大学 Multiple viewpoints three-dimensional scene reconstructing method fusing single viewpoint scenario analysis and system thereof
GB2531585B8 (en) * 2014-10-23 2017-03-15 Toshiba Res Europe Limited Methods and systems for generating a three dimensional model of a subject
CN104599275B (en) * 2015-01-27 2018-06-12 浙江大学 The RGB-D scene understanding methods of imparametrization based on probability graph model
CN105046688B (en) * 2015-06-23 2017-10-10 北京工业大学 A kind of many plane automatic identifying methods in three-dimensional point cloud
CN107423730B (en) * 2017-09-20 2024-02-13 湖南师范大学 Human gait behavior active detection and recognition system and method based on semantic folding
CN108319957A (en) * 2018-02-09 2018-07-24 深圳市唯特视科技有限公司 A kind of large-scale point cloud semantic segmentation method based on overtrick figure
CN108389251B (en) * 2018-03-21 2020-04-17 南京大学 Projection full convolution network three-dimensional model segmentation method based on fusion of multi-view features
CN108876831A (en) * 2018-06-08 2018-11-23 西北工业大学 A kind of building three-dimensional point cloud method for registering based on deep learning
CN109086683B (en) * 2018-07-11 2020-09-15 清华大学 Human hand posture regression method and system based on point cloud semantic enhancement
CN109186550B (en) * 2018-07-20 2021-03-12 潘玥 Coding decoding and measuring method for codable close-range photogrammetric mark
CN109410238B (en) * 2018-09-20 2021-10-26 中国科学院合肥物质科学研究院 Wolfberry identification and counting method based on PointNet + + network
CN109753995B (en) * 2018-12-14 2021-01-01 中国科学院深圳先进技术研究院 Optimization method of 3D point cloud target classification and semantic segmentation network based on PointNet +

Also Published As

Publication number Publication date
CN110210431A (en) 2019-09-06

Similar Documents

Publication Publication Date Title
CN110210431B (en) Point cloud semantic labeling and optimization-based point cloud classification method
CN109614979B (en) Data augmentation method and image classification method based on selection and generation
US9558268B2 (en) Method for semantically labeling an image of a scene using recursive context propagation
CN104850633B (en) A kind of three-dimensional model searching system and method based on the segmentation of cartographical sketching component
CN109671102B (en) Comprehensive target tracking method based on depth feature fusion convolutional neural network
CN112288857A (en) Robot semantic map object recognition method based on deep learning
CN112949647B (en) Three-dimensional scene description method and device, electronic equipment and storage medium
CN112085072B (en) Cross-modal retrieval method of sketch retrieval three-dimensional model based on space-time characteristic information
CN109002755B (en) Age estimation model construction method and estimation method based on face image
CN106327506A (en) Probability-partition-merging-based three-dimensional model segmentation method
CN114332473B (en) Object detection method, device, computer apparatus, storage medium, and program product
CN105868706A (en) Method for identifying 3D model based on sparse coding
CN111428758A (en) Improved remote sensing image scene classification method based on unsupervised characterization learning
CN110737788B (en) Rapid three-dimensional model index establishing and retrieving method
CN113408584A (en) RGB-D multi-modal feature fusion 3D target detection method
Baraheem et al. Image synthesis: a review of methods, datasets, evaluation metrics, and future outlook
CN108537109B (en) OpenPose-based monocular camera sign language identification method
CN112891945A (en) Data processing method and device, electronic equipment and storage medium
CN113570573A (en) Pulmonary nodule false positive eliminating method, system and equipment based on mixed attention mechanism
CN108805280B (en) Image retrieval method and device
Zhang et al. A graph-voxel joint convolution neural network for ALS point cloud segmentation
Xu et al. Semantic segmentation of sparsely annotated 3D point clouds by pseudo-labelling
CN113408651B (en) Unsupervised three-dimensional object classification method based on local discriminant enhancement
CN105956604B (en) Action identification method based on two-layer space-time neighborhood characteristics
CN114155524A (en) Single-stage 3D point cloud target detection method and device, computer equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant