CN114862911A - Three-dimensional point cloud single-target tracking method based on graph convolution - Google Patents
- Publication number
- CN114862911A (application CN202210478289.2A)
- Authority
- CN
- China
- Prior art keywords
- template
- search area
- seed
- point cloud
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/761—Proximity, similarity or dissimilarity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/762—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- General Physics & Mathematics (AREA)
- Computing Systems (AREA)
- General Health & Medical Sciences (AREA)
- Software Systems (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Medical Informatics (AREA)
- Databases & Information Systems (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Molecular Biology (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a three-dimensional point cloud single-target tracking method based on graph convolution, belonging to the technical field of three-dimensional point cloud single-target tracking. The method applies graph convolution to the three-dimensional single-target tracking task for the first time and builds on the existing three-dimensional single-target tracker P2B. First, the template point cloud and the search point cloud are input into the network, the point clouds are down-sampled, and seed point features are extracted. Second, a graph convolution module performs global and local feature fusion, embedding template information into the search area. Finally, the seed points carrying template information are sent to a Hough voting module, which localizes the tracked object in the search area and generates its three-dimensional target box. By fusing global and local features through the graph convolution module, the invention improves tracking quality; evaluation on a real point cloud dataset shows improvements over an advanced baseline.
Description
Technical Field
The invention belongs to the technical field of three-dimensional point cloud single target tracking, and particularly relates to a three-dimensional point cloud single target tracking method based on graph convolution. In particular to a method for improving and optimizing a P2B target tracking method.
Background
Single-target tracking of three-dimensional point clouds is an important task in the field of computer vision. Its inputs are the point cloud of a tracked object scanned by a lidar and a sequence of search point clouds; the goal is to detect the position and size of the target in each frame of the search point cloud sequence. The task is widely applied in intelligent robot interaction systems and unmanned driving systems, and can also be used in fields such as aviation and the military.
With the rapid development of deep models, target tracking methods built on deep learning have brought clear improvements and perform well on tracking tasks. These methods generally embed template information into the search area through matching of target shapes, so that the search area can be judged more accurately. However, when considering shape matching, current target tracking methods perform either global or local feature learning, but not both. Furthermore, current tracking methods consider only feature similarity and ignore spatial matching. The invention therefore designs an effective global-and-local feature learning method for the matching problem between the template point cloud and the search-area point cloud, in order to improve target tracking performance.
Current target tracking methods suffer from the following problems: (1) only global matching is performed, i.e. each search-area point learns the features of all template points, ignoring the specificity between points and thereby embedding erroneous or redundant information; (2) only local matching is performed, i.e. each search-area point learns the features of a few highly similar template points, ignoring the template as important global information about the tracked object and thereby losing information; (3) only feature-level similarity is used when matching the template and the search area, ignoring spatial correspondence. The invention therefore improves the network structure of P2B for the three-dimensional point cloud target tracking problem, to achieve better results.
Disclosure of Invention
The invention mainly aims to overcome the defects of existing three-dimensional point cloud target tracking methods, and provides a three-dimensional point cloud single-target tracking method based on graph convolution. The method is designed as an improvement of the P2B method, with the following specific design characteristics:
unlike the conventional P2B target tracking method, the present invention fully learns both the global information and the local information of the tracked object. Unlike methods that consider similarity only on features, a distance metric is added; information at different levels is obtained with two metric scales and two feature learning modes and is fully mixed, thereby improving the performance of the target tracking method.
In order to achieve the purpose, the invention adopts the following technical scheme:
a three-dimensional point cloud single-target tracking method based on graph convolution comprises the following steps:
step S1: reading point cloud data, determining a tracking object, namely template point cloud, and determining a search area;
step S2: sending the template point cloud and the search area point cloud into a shared encoder to carry out point cloud down sampling and feature extraction to obtain three-dimensional coordinates of the seed points and feature vectors of the seed points;
step S3: using a global feature fusion module, sending the template seed points and the search area seed points into the global feature fusion module for global feature learning, embedding template global information into the search area seed point features, and updating the search area seed point features;
step S4: sending the template seed points and the search area seed points into a local feature fusion module by using a local feature fusion module, performing local feature learning, embedding template local information into the search area seed point features, and updating the search area seed point features;
step S5: and iterating the steps S3-S4 to complete feature fusion and obtain the seed points rich in template information.
Step S6: and sending the updated seed points of the search area into Hough voting, searching for a clustering center and voting, and determining the position of the target center and the deflection angle of the surrounding frame.
Compared with the prior art, the invention has the following beneficial effects:
the invention realizes the task of tracking the single target of the three-dimensional point cloud by utilizing the improved P2B network, and supplements the defects of the current target tracking method: 1) only global matching is performed, i.e. the points in each search area learn the characteristics of all points in the template, and the specificity between the points is ignored, so that error information or redundant information is embedded. (2) And only local matching is carried out, namely the points in each search area learn the characteristics of partial high-similarity points in the template, and the template is ignored as important global target information for tracking the target object, so that the information is lost. (3) Only the similarity of the feature level is used when considering template and search area matching, thus ignoring spatial correspondence. The tracking quality is improved by fusing a plurality of measurement scales and a plurality of modules fusing information; the present invention utilizes a real point cloud dataset for evaluation and observes improvements over an advanced baseline.
Drawings
Figure 1 is the overall structure of the design of the present invention.
FIG. 2 is a connection mode of global feature learning nodes designed by the present invention.
FIG. 3 is a connection mode of global feature learning node features designed by the present invention.
FIG. 4 is a local feature learning node connection mode designed by the present invention.
Detailed Description
The technical solution of the present invention will be further described with reference to the following specific embodiments and accompanying drawings.
A three-dimensional point cloud single-target tracking method for global and local feature learning based on graph convolution iteration comprises the following steps:
step S1: reading the point cloud data, determining the tracked object, namely the template point cloud, and determining the search area. The specific process is as follows:
step S11: reading the target box of the tracked object in the previous point cloud frame as the size of the target, and taking the points inside the target box as the tracked-object template, namely the template point cloud.
Step S12: selecting the search area of the current frame according to the position of the previous frame's target box in the world coordinate system: the previous frame's target box is enlarged to serve as the search area, and the points inside it form the search-area point cloud.
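Steps S11-S12 can be sketched in a few lines of numpy (an illustrative simplification: the crop below is axis-aligned and the margin value is an assumed hyperparameter, not a value from the patent, which enlarges the previous oriented target box):

```python
import numpy as np

def crop_search_area(points, prev_box_center, prev_box_size, margin=2.0):
    """Enlarge the previous frame's target box by `margin` (an assumed
    value) and keep the points inside it as the search-area point cloud."""
    half = np.asarray(prev_box_size, dtype=float) / 2.0 + margin
    lo = np.asarray(prev_box_center, dtype=float) - half
    hi = np.asarray(prev_box_center, dtype=float) + half
    mask = np.all((points >= lo) & (points <= hi), axis=1)
    return points[mask]
```

The template point cloud of step S11 is obtained the same way, with `margin=0` and the previous frame's box.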
Step S2: sending the template point cloud and the search-area point cloud into a shared encoder for point cloud down-sampling and feature extraction, obtaining the three-dimensional coordinates and feature vectors of the template seed points and of the search-area seed points. The specific process is as follows:
inputting the template point cloud and the search-area point cloud into a PointNet++ network for point cloud down-sampling via farthest point sampling, obtaining the respective seed points and performing feature learning; finally obtaining the seed point positions p_i ∈ R^(N_i×3) and the seed point features f_i ∈ R^(N_i×N), where i = t, s denotes the template point cloud and the search-area point cloud respectively, N_i is the number of seed points, and N is the dimension of the seed point features.
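The farthest point sampling used for down-sampling can be sketched as follows (a minimal numpy version; PointNet++ additionally performs local grouping and per-group feature learning, which is omitted here):

```python
import numpy as np

def farthest_point_sampling(points, m):
    """Iteratively pick the point farthest from the already-selected set.
    Returns the indices of m seed points."""
    n = points.shape[0]
    idx = np.zeros(m, dtype=int)
    dist = np.full(n, np.inf)   # distance of each point to the selected set
    idx[0] = 0                  # start from an arbitrary point
    for k in range(1, m):
        d = np.linalg.norm(points - points[idx[k - 1]], axis=1)
        dist = np.minimum(dist, d)
        idx[k] = int(np.argmax(dist))
    return idx
```

For example, sampling 3 seeds from 10 collinear points picks the two endpoints first and then a midpoint, spreading the seeds over the cloud.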
Step S3: sending the template seed points and the search-area seed points into the global feature fusion module for global feature learning, obtaining search-area seed points embedded with the template global information. Further, the step S3 specifically includes:
step S31: the cosine similarity between the template seed point feature obtained by step S2 and the search region seed point feature is calculated.
Step S32: constructing a global graph link, as shown in FIG. 2: and the template area and the search area seed points are used as graph convolution graph nodes, the edge structure from the template area nodes to the search area nodes is established, and each search area node is connected with all the nodes in the template area.
Step S33: performing feature fusion by using a global feature learning module, as shown in fig. 3, first connecting the coordinates of the template seed points with the features of the template seed points, and extending the connected feature tensor to the dimension of (B, N +3, Nt, Ns) and connecting the dimension of (B, N +3+1, Nt, Ns) with the cosine similarity obtained in step S31 to obtain the tensor of (B, N +3+1, Nt, Ns); the tensor sequentially passes through a layer of MLP and a layer of maximal pooling, and the feature tensor after pooling is (B, N, Ns).
Step S34: and connecting the pooled feature tensor with the search area seed point features in feature dimensions, embedding the learned template global information into the search area through a layer of MLP (Multi level processing), obtaining the search area seed point features embedded with the template global information, and finishing the global updating of the search area seed point features.
Step S4: sending the template seed points and the search-area seed points into the local feature fusion module for local feature learning, obtaining search-area seed points embedded with the template local information. Further, the step S4 specifically includes:
step S41: calculating cosine similarity between the template seed point features obtained in the step S2 and the search area seed point features updated in the step S3 to obtain a cosine similarity graph;
step S42: calculating the pairwise distances between the template seed point coordinates obtained in step S2 to obtain a distance map;
step S43: constructing a local graph link, as shown in FIG. 4: the template area and the search area seed points are used as graph convolution graph nodes, firstly, edge structures from the template area nodes to the search area nodes are established, and connection is established when the cosine similarity between the search area nodes and the template area nodes is greater than a set threshold value according to the cosine similarity graph obtained in the step S41; then, the edge structure between the nodes of the template area is established, and according to the distance map obtained in step S42, if the distance between the nodes of the template area is smaller than the predetermined threshold, the connection is established. And finishing the construction of the graph structure of the graph convolution.
Step S44: connecting the template seed point features with the updated search area seed point features of S3 in a numerical dimension, inputting cosine similarity or distance between connecting nodes as weight and feature tensor and a graph structure established in S43 into a graph convolution network, embedding local information of the template into the search area seed point features, and completing local updating of the search area seed point features.
Step S5: iterating steps S3-S4 to complete the feature fusion and obtain seed points rich in template information. Further, the step S5 specifically includes:
step S51: the updated search region seed point feature is sent to step S3 to learn the template global information again.
Step S52: the search area seed point feature after updating the template global information again is sent to step S4 to learn the template local information again.
Step S6: sending the search-area seed points, after multiple updates, into Hough voting, finding the cluster centers and voting, and determining the position of the target center and the target box. Further, the step S6 specifically includes:
step S61: sending the obtained seed point features and three-dimensional coordinates of the search area, now embedded with global and local template information, into Hough voting.
Step S62: applying MLP regression classification score S ═ S to search region seed points 1 ,s 2 ...s j Judging whether the seed point is a target point or a non-target point; where j represents the index of the seed point.
Step S63: voting network uses MLP to regress each seed point into the targetDeviation of the heartEach seed point p s,j Corresponding to a potential target center C j (ii) a At the same time, the residual error from the seed point characteristic to the potential target center characteristic is predictedThe location and characteristics of the potential target center are respectively expressed as:
step S64: applying ball clustering to all potential target centers to obtain K cluster centers, each cluster being the set of potential target centers within a ball neighborhood, characterized by {(s_j, c_j, f_j)}, where j denotes the index of the seed point, and s_j, c_j, f_j respectively denote the classification score, three-dimensional coordinates, and feature vector of the potential target center corresponding to the j-th seed point.
Step S65: and applying MLP-Maxpool-MLP to the K clustering clusters to obtain a target proposal, the offset from the potential target center to the target center, the rotation offset of the target bounding box and the confidence score of the target.
Step S66: selecting the result with the highest confidence score to obtain the position of the center of the target tracking object in the current frame, applying the bounding box comparison center point of the template point cloud tracking object to the current frame, and offsetting the bounding box by using the predicted rotation offset of the target bounding box to obtain the position of the tracking object of the current frame and the bounding box.
It is to be understood that the invention is not limited to the precise embodiments disclosed herein, and that various changes and modifications may be effected therein without departing from the scope of the invention.
Claims (8)
1. A three-dimensional point cloud single-target tracking method based on graph convolution is characterized by comprising the following steps:
step S1: reading point cloud data, determining a tracking object, namely template point cloud, and determining a search area;
step S2: sending the template point cloud and the search area point cloud into a shared encoder to carry out point cloud down sampling and feature extraction to obtain three-dimensional coordinates of the seed points and feature vectors of the seed points;
step S3: using a global feature fusion module, sending the template seed points and the search area seed points into the global feature fusion module for global feature learning, embedding template global information into the search area seed point features, and updating the search area seed point features;
step S4: sending the template seed points and the search area seed points into a local feature fusion module by using a local feature fusion module, performing local feature learning, embedding template local information into the search area seed point features, and updating the search area seed point features;
step S5: iterating the steps S3-S4 to complete feature fusion to obtain seed points rich in template information;
step S6: and sending the updated seed points of the search area into Hough voting, searching for a clustering center and voting, and determining the position of the target center and the deflection angle of the surrounding frame.
2. The method for tracking the single target of the three-dimensional point cloud based on the graph convolution as claimed in claim 1, wherein the specific process of the step S1 is as follows:
step S11: reading the target box of the tracked object in the previous point cloud frame as the size of the target, and taking the points inside the target box as the tracked-object template, namely the template point cloud;
step S12: selecting the search area of the current frame according to the position of the previous frame's target box in the world coordinate system: the previous frame's target box is enlarged to serve as the search area, and the points inside it form the search-area point cloud.
3. The method for tracking the single target of the three-dimensional point cloud based on the graph convolution as claimed in claim 1 or 2, wherein the step S2 specifically comprises:
inputting the template point cloud and the search-area point cloud into a PointNet++ network for point cloud down-sampling via farthest point sampling, obtaining the respective seed points and performing feature learning; finally obtaining the seed point positions p_i ∈ R^(N_i×3) and the seed point features f_i ∈ R^(N_i×N), where i = t, s denotes the template point cloud and the search-area point cloud respectively, N_i is the number of seed points, and N is the dimension of the seed point features.
4. The method for tracking the single target of the three-dimensional point cloud based on the graph convolution as claimed in claim 1 or 2, wherein the step S3 specifically comprises:
step S31: calculating cosine similarity between the template seed point features obtained in step S2 and the search region seed point features;
step S32: constructing a global graph link: the template area and the search area seed points are used as graph convolution graph nodes, an edge structure from the template area nodes to the search area nodes is established, and each search area node is connected with all the nodes of the template area;
step S33: performing feature fusion with the global feature learning module: first, the coordinates of the template seed points are concatenated with the template seed point features, and the concatenated feature tensor is expanded to shape (B, N+3, Nt, Ns); it is then concatenated with the cosine similarity obtained in step S31 to obtain a tensor of shape (B, N+3+1, Nt, Ns); this tensor passes through one MLP layer and one max-pooling layer in turn, and the pooled feature tensor has shape (B, N, Ns);
step S34: concatenating the pooled feature tensor with the search-area seed point features along the feature dimension, and embedding the learned template global information into the search area through one MLP (multi-layer perceptron) layer, obtaining search-area seed point features embedded with the template global information and completing the global update of the search-area seed point features.
5. The method for tracking the single target of the three-dimensional point cloud based on the graph convolution as claimed in claim 3, wherein the step S3 specifically comprises:
step S31: calculating cosine similarity between the template seed point features obtained in step S2 and the search region seed point features;
step S32: constructing a global graph link: the template area and the search area seed points are used as graph convolution graph nodes, an edge structure from the template area nodes to the search area nodes is established, and each search area node is connected with all the nodes of the template area;
step S33: performing feature fusion with the global feature learning module: first, the coordinates of the template seed points are concatenated with the template seed point features, and the concatenated feature tensor is expanded to shape (B, N+3, Nt, Ns); it is then concatenated with the cosine similarity obtained in step S31 to obtain a tensor of shape (B, N+3+1, Nt, Ns); this tensor passes through one MLP layer and one max-pooling layer in turn, and the pooled feature tensor has shape (B, N, Ns);
step S34: concatenating the pooled feature tensor with the search-area seed point features along the feature dimension, and embedding the learned template global information into the search area through one MLP (multi-layer perceptron) layer, obtaining search-area seed point features embedded with the template global information and completing the global update of the search-area seed point features.
6. The method for tracking the single target of the three-dimensional point cloud based on the graph convolution as claimed in claim 1, 2 or 5, wherein the step S4 is specifically as follows:
step S41: calculating cosine similarity between the template seed point features obtained in the step S2 and the search area seed point features updated in the step S3 to obtain a cosine similarity graph;
step S42: calculating the pairwise distances between the template seed point coordinates obtained in step S2 to obtain a distance map;
step S43: constructing local graph links: the template area and the search area seed points are used as graph convolution graph nodes, firstly, edge structures from the template area nodes to the search area nodes are established, and connection is established when the cosine similarity between the search area nodes and the template area nodes is greater than a set threshold value according to the cosine similarity graph obtained in the step S41; secondly, establishing an edge structure between the nodes of the template areas, and establishing connection when the distance between the nodes of the template areas is smaller than a set threshold value according to the distance graph obtained in the step S42; completing the construction of the graph structure of the graph convolution;
step S44: concatenating the template seed point features with the search-area seed point features updated in S3 along the feature dimension; the cosine similarity or distance between connected nodes is used as the edge weight, and this weight, the feature tensor, and the graph structure built in S43 are input into a graph convolution network, embedding the template's local information into the search-area seed point features and completing the local update of the search-area seed point features.
7. The method for tracking the single target of the three-dimensional point cloud based on the graph convolution as claimed in claim 1, 2 or 5, wherein the step S5 is specifically as follows:
step S51: the updated seed point characteristics of the search area are sent to step S3 to learn the global information of the template again;
step S52: the search area seed point feature after updating the template global information again is sent to step S4 to learn the template local information again.
8. The method for tracking the single target of the three-dimensional point cloud based on the graph convolution as claimed in claim 1, 2 or 5, wherein the step S6 is specifically as follows:
step S61: sending the seed point characteristics and the three-dimensional coordinates of the obtained search area embedded with the global and local template information into Hough voting;
step S62: applying an MLP to the search-area seed points to regress classification scores S = {s_1, s_2, ..., s_j}, judging whether each seed point is a target point or a non-target point; wherein j represents the index of the seed point;
step S63: the voting network uses an MLP to regress, for each seed point, its offset to the target center; each seed point p_{s,j} corresponds to a potential target center c_j; at the same time, the residual from the seed point feature to the potential-target-center feature is predicted; the position and feature of the potential target center are expressed as c_j = p_{s,j} + Δp_j and f_j = f_{s,j} + Δf_j, respectively;
step S64: applying ball clustering to all the potential target centers to obtain K cluster centers, each cluster being the set of potential target centers within a ball neighborhood, characterized by {(s_j, c_j, f_j)}, where j denotes the index of the seed point, and s_j, c_j, f_j respectively represent the classification score, three-dimensional coordinates, and feature vector of the potential target center corresponding to the j-th seed point;
step S65: applying MLP-Maxpool-MLP to the K clusters to obtain target proposals: the offset from the potential target center to the target center, the rotation offset of the target bounding box, and the confidence score of the target;
step S66: selecting the result with the highest confidence score to obtain the position of the tracked object's center in the current frame; the bounding box of the template point cloud's tracked object is placed at this center point in the current frame and rotated by the predicted rotation offset, yielding the position and bounding box of the tracked object in the current frame.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210478289.2A CN114862911A (en) | 2022-05-05 | 2022-05-05 | Three-dimensional point cloud single-target tracking method based on graph convolution |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210478289.2A CN114862911A (en) | 2022-05-05 | 2022-05-05 | Three-dimensional point cloud single-target tracking method based on graph convolution |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114862911A true CN114862911A (en) | 2022-08-05 |
Family
ID=82636368
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210478289.2A Pending CN114862911A (en) | 2022-05-05 | 2022-05-05 | Three-dimensional point cloud single-target tracking method based on graph convolution |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114862911A (en) |
- 2022-05-05: application CN202210478289.2A filed in China (CN); published as CN114862911A, status Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108960140B (en) | Pedestrian re-identification method based on multi-region feature extraction and fusion | |
CN111179314B (en) | Target tracking method based on residual intensive twin network | |
CN106909877B (en) | Visual simultaneous mapping and positioning method based on dotted line comprehensive characteristics | |
CN111144364B (en) | Twin network target tracking method based on channel attention updating mechanism | |
CN110188228B (en) | Cross-modal retrieval method based on sketch retrieval three-dimensional model | |
CN112258554B (en) | Double-current hierarchical twin network target tracking method based on attention mechanism | |
WO2021203807A1 (en) | Three-dimensional object detection framework based on multi-source data knowledge transfer | |
CN110110763B (en) | Grid map fusion method based on maximum public subgraph | |
Engel et al. | Deeplocalization: Landmark-based self-localization with deep neural networks | |
CN111427047A (en) | Autonomous mobile robot S L AM method in large scene | |
CN109035329A (en) | Camera Attitude estimation optimization method based on depth characteristic | |
CN113706710A (en) | Virtual point multi-source point cloud fusion method and system based on FPFH (field programmable gate flash) feature difference | |
CN111505662A (en) | Unmanned vehicle positioning method and system | |
CN113947636B (en) | Laser SLAM positioning system and method based on deep learning | |
CN112396655A (en) | Point cloud data-based ship target 6D pose estimation method | |
Yin et al. | Pse-match: A viewpoint-free place recognition method with parallel semantic embedding | |
CN111275748A (en) | Point cloud registration method based on laser radar in dynamic environment | |
CN114862911A (en) | Three-dimensional point cloud single-target tracking method based on graph convolution | |
CN113724325B (en) | Multi-scene monocular camera pose regression method based on graph convolution network | |
CN113888603A (en) | Loop detection and visual SLAM method based on optical flow tracking and feature matching | |
CN113985435A (en) | Mapping method and system fusing multiple laser radars | |
CN113963374A (en) | Pedestrian attribute identification method based on multi-level features and identity information assistance | |
CN115375731B (en) | 3D point cloud single-target tracking method for association points and voxels and related device | |
CN110634151A (en) | Single-target tracking method | |
CN111738306B (en) | Multi-view three-dimensional model retrieval method based on block convolution neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||