CN115116013A - Online dense point cloud semantic segmentation system and method integrating time sequence features

Online dense point cloud semantic segmentation system and method integrating time sequence features

Info

Publication number
CN115116013A
CN115116013A
Authority
CN
China
Prior art keywords
frame
key
feature
point cloud
features
Prior art date
Legal status
Pending
Application number
CN202110305389.0A
Other languages
Chinese (zh)
Inventor
朱弘恣 (Zhu Hongzi)
周云松 (Zhou Yunsong)
李淳钦 (Li Chunqin)
崔天凯 (Cui Tiankai)
过敏意 (Guo Minyi)
Current Assignee
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN202110305389.0A priority Critical patent/CN115116013A/en
Publication of CN115116013A publication Critical patent/CN115116013A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 — Computing arrangements based on biological models
    • G06N3/02 — Neural networks
    • G06N3/04 — Architecture, e.g. interconnection topology


Abstract

An online dense point cloud semantic segmentation system and method fusing time sequence features comprises an adaptive frame scheduler, a static segmentation module, a time sequence feature aggregation network, and a partial feature update network. The adaptive frame scheduler decides whether the next frame is treated as a key frame or a non-key frame according to the ratio of updated to non-updated information reported by the partial feature update network. The static segmentation module extracts features from the raw point cloud of a key frame through its backbone network by static point cloud segmentation to obtain a single-frame semantic segmentation result. The time sequence feature aggregation network exploits the temporal correlation of two adjacent frames and refines the semantic segmentation result by aggregating features on the key frames. The partial feature update network rapidly updates the semantic segmentation result by selectively updating inherited features at the locally important key points of the non-key frame of the two adjacent frames. By weighing feature consistency against computation cost, the method aggregates features using the temporal correlation between consecutive point cloud frames; the aggregated per-frame features are enhanced, so the segmentation is more accurate, features are consistent across frames, and flicker when processing a point cloud series is eliminated.

Description

Online dense point cloud semantic segmentation system and method fusing time sequence features
Technical Field
The invention relates to a technology in the field of information processing, in particular to an online dense point cloud semantic segmentation system and method integrating time sequence characteristics.
Background
To better perceive the driving environment, most autonomous vehicles are equipped with a lidar sensor that continuously acquires point cloud data, and the performance of the point cloud semantic segmentation algorithm is key to the vehicle making correct decisions in real time. Because point cloud data are discrete and irregularly distributed, point cloud semantic segmentation is a harder task than image semantic segmentation. A practical point cloud semantic segmentation method should meet two requirements. First, the segmentation results must be accurate, so that the autonomous vehicle can make correct driving decisions based on them. Second, the method must be able to process the time series of point clouds in real time.
Disclosure of Invention
Aiming at the problems that existing point cloud semantic segmentation techniques focus mainly on single, static point clouds, easily produce inconsistent segmentation results when processing point clouds of consecutive frames, and incur high computation cost, the invention provides an online dense point cloud semantic segmentation system and method fusing time sequence features.
The invention is realized by the following technical scheme:
the invention relates to an online dense point cloud semantic segmentation system fusing time sequence characteristics, which comprises: the system comprises an adaptive frame scheduler, a static segmentation module, a time sequence feature aggregation network and a partial feature update network, wherein: the adaptive frame scheduler updates the information from the network for the partial features to determine whether the next frame is considered a key frame or a non-key frame as the ratio of the updated portions to the non-updated portions; the static segmentation module performs feature extraction on the key frame original point cloud through static point cloud segmentation through a backbone network of the static segmentation module to obtain a semantic segmentation result of a single frame; the time sequence feature aggregation network utilizes the time sequence correlation of two adjacent frames and carries out accurate optimization on the semantic segmentation result by aggregating the features on the key frames; and the partial feature updating network rapidly updates the semantic segmentation result by selectively updating the inheritance feature according to the local important key points of the non-key frames in the two adjacent frames.
The invention also relates to an online dense point cloud semantic segmentation method fusing time sequence features based on the above system: complete feature extraction and aggregation are performed on key frames; the aggregated, enhanced features are propagated to non-key frames; and when a lightweight difference evaluation detects that a non-key frame contains non-negligible information, the non-key frame is further partially updated.
Technical effects
The invention as a whole overcomes the defects of the prior art that the accuracy of semantic segmentation results cannot meet the requirements of automatic driving decisions and that the point cloud time series cannot be processed in real time.
Compared with the prior art, the online point cloud semantic segmentation framework (TempNet) and the time sequence feature aggregation network of the invention are lightweight and easy to deploy while enhancing point cloud features with the motion information between time-sequence frames, which improves semantic segmentation accuracy; the motion-continuity information across multiple frames further reduces computation and increases segmentation speed.
Drawings
FIG. 1 is a schematic diagram of the system of the present invention;
FIG. 2 is a schematic diagram of an embodiment of static point cloud segmentation;
in the figure: (a) extracting and aggregating the full features of the key frames, (b) updating and aggregating the partial features of the non-key frames;
FIG. 3 is a schematic diagram of an exemplary timing feature aggregation network;
in the figure: the key points of two consecutive frames i and j are taken as input; for each key point of frame j, its position is used to find the neighboring key points in the space of the previous frame i, and a shared convolutional network encodes the collected motion information and computes the residual for feature aggregation;
FIG. 4 is a schematic diagram of a partial feature update network in an embodiment;
in the figure: all points P of the current non-key frame j j And the key point of the previous frame i
Figure BDA0002983420410000021
As input, the weight-sharing convolutional network is used to encode the collected spatial information and calculate a consistency estimator;
fig. 5 and 6 are schematic diagrams illustrating effects of the embodiment.
Detailed Description
As shown in fig. 1, the present embodiment relates to an online dense point cloud semantic segmentation system fusing time sequence features, which includes an adaptive frame scheduler (AFS), a static segmentation module, a time sequence feature aggregation network (TFA), and a partial feature update network (PFU), wherein:
the adaptive frame scheduler updates the information from the network for the partial features to determine whether the next frame is considered a key frame or a non-key frame as the ratio of the updated portions to the non-updated portions; the static segmentation module performs feature extraction on the key frame original point cloud through static point cloud segmentation through a backbone network of the static segmentation module to obtain a semantic segmentation result of a single frame; the time sequence feature aggregation network utilizes the time sequence correlation of two adjacent frames and carries out accurate optimization on the semantic segmentation result by aggregating the features on the key frames; and the partial feature updating network rapidly updates the semantic segmentation result by selectively updating the inheritance feature according to the local important key points of the non-key frames in the two adjacent frames.
The adaptive frame scheduler determines key frames and non-key frames at equal time intervals and dynamically adjusts the number of key frames according to the recently observed degree of difference of the non-key frames.
The adaptive frame scheduler calculates the ratio of the updated part to the non-updated part in the partial feature update network: when the ratio is large, most points have been updated and the next frame differs significantly from the current frame, so it should be treated as a key frame; otherwise it is a non-key frame.
In this embodiment, the RandLA-Net point cloud semantic segmentation method is adopted as the static segmentation module. More generally, the static segmentation module may be implemented by, but is not limited to, the SqueezeSegV2 point cloud semantic segmentation model, which takes raw point cloud data as input, extracts the proximity relations between points, encodes the local spatial geometric structure, and finally obtains the semantic label of each point through neural network processing.
As shown in fig. 2, the static point cloud segmentation includes (a) feature extraction through a pre-trained backbone network and (b) semantic segmentation through a detection network composed of a plurality of output branches. In the figure, arrows indicate feature aggregation flows; fig. 2(a) shows all key frame features, and fig. 2(b) shows, in sequence, key frame features, partially updated non-key frame features, and inherited features.
The aggregation is as follows: the position and feature differences between two consecutive frames caused by motion are first measured and then used to compute an attention score. This attention score serves as the aggregation weight for key points sampled from both features, so that key points with consistent motion contribute more to the aggregate. Specifically, for two consecutive frames i and j, the predicted key frame feature of frame j is

$$\hat{H}_j = W_{i\to j} \odot \mathcal{F}_{\mathrm{TFA}}(H_i, H_j) + W_{j\to j} \odot H_j$$

where $\odot$ denotes element-wise multiplication, $\mathcal{F}_{\mathrm{TFA}}$ is the time sequence feature aggregation network, and $W_{i\to j}$ and $W_{j\to j}$ are regularization weights. The predicted key frame feature $\hat{H}_j$ recursively aggregates historical and current features, where H is the point cloud feature vector space (matrix) and W is a weighting coefficient.
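As an illustrative aid (not part of the patent disclosure), the following Python sketch shows the recursive aggregation rule above; the function name, tensor shapes, and the signature assumed for the TFA module are all assumptions for illustration:

```python
import torch
from torch import nn

def aggregate_key_frame(h_i: torch.Tensor, h_j: torch.Tensor,
                        tfa: nn.Module,
                        w_ij: torch.Tensor, w_jj: torch.Tensor) -> torch.Tensor:
    """Illustrative sketch: H^_j = W_{i->j} (*) F_TFA(H_i, H_j) + W_{j->j} (*) H_j.

    h_i, h_j   : (N, C) key-point features of frames i and j
    tfa        : the time sequence feature aggregation network (assumed callable on a pair)
    w_ij, w_jj : (N, C) or broadcastable regularization weights
    """
    return w_ij * tfa(h_i, h_j) + w_jj * h_j
```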
As shown in fig. 3, the time sequence feature aggregation specifically includes:
coding the position difference between two frames by a differential position matrix M pos (p j )=mlp(concat(p i,l ,p j ) Whereinsaid:
Figure BDA0002983420410000034
for collecting the key points p by KNN algorithm j Point cloud P of i I, j are two consecutive frames, mlp means: a multilayer sensor.
The KNN searching radius is 1.6, and the maximum sampling point is 64.
Second, the feature difference between the two frames is encoded by the differential feature matrix $M_{\mathrm{fea}}(p_j) = \mathrm{concat}(h_{i,l},\,h_j)$, where $h_j \in H_j$ and $h_{i,l} \in H_i$. For each neighboring key point, the encoded relative point position is concatenated with the corresponding point feature to obtain an enhanced feature vector.
Finally, the two matrices are concatenated to obtain the motion difference matrix $M_{\mathrm{diff}}(p_j) = \mathrm{concat}(M_{\mathrm{pos}}(p_j),\,M_{\mathrm{fea}}(p_j))$, where concat(·) is the matrix splicing function; the motion difference matrix $M_{\mathrm{diff}}(p_j)$ achieves feature enhancement of the key points.
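A minimal sketch of this motion-difference encoding is given below, assuming NumPy arrays, a brute-force KNN, and an abstract mlp callable; every name here is an illustrative assumption, with the search radius 1.6 and the 64-point cap taken from the embodiment above:

```python
import numpy as np

def knn(query, points, k=64, radius=1.6):
    """Indices of the (at most k) points within `radius` of `query`, nearest first."""
    d = np.linalg.norm(points - query, axis=1)
    idx = np.argsort(d)[:k]
    return idx[d[idx] <= radius]

def motion_difference(p_j, h_j, P_i, H_i, mlp):
    """Sketch: build M_diff(p_j) = concat(M_pos(p_j), M_fea(p_j)) for one key point of frame j."""
    idx = knn(p_j, P_i)                                        # neighbors of p_j in frame i
    rep = lambda v: np.repeat(v[None, :], len(idx), axis=0)    # tile p_j / h_j per neighbor
    m_pos = mlp(np.concatenate([P_i[idx], rep(p_j)], axis=1))  # differential position matrix
    m_fea = np.concatenate([H_i[idx], rep(h_j)], axis=1)       # differential feature matrix
    return np.concatenate([m_pos, m_fea], axis=1)              # motion difference matrix
```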
In this embodiment, an attention mechanism is used to further determine which neighboring points in the motion difference matrix have a greater influence on the current key point, and the specific steps are as follows:
1) Attention score computation: from the motion difference matrix $M_{\mathrm{diff}}(p_j)$, a unique attention score is learned for each neighbor through a shared scoring function $g(\cdot)$: $s_{j,l} = g(m_{j,l}, W)$, where H, M, P, S are sets of vectors and h, m, p, s are individual vectors in the corresponding sets.

2) Weighted summation of attention scores: the updated representation vector of the key point is $\hat{h}_j = \sum_{l=1}^{K} s_{j,l} \odot m_{j,l}$, where $m_{j,l} \in M_{\mathrm{diff}}(p_j)$.
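This attentive pooling step can be sketched in PyTorch as follows; the use of a single linear layer for $g(\cdot)$ and softmax normalization over the K neighbors are assumptions, since the text only specifies a shared scoring function:

```python
import torch
from torch import nn

class AttentivePooling(nn.Module):
    """Sketch: per-neighbor attention scores over M_diff, then a weighted sum per key point."""
    def __init__(self, dim: int):
        super().__init__()
        self.g = nn.Linear(dim, dim, bias=False)  # shared scoring function g(., W)

    def forward(self, m_diff: torch.Tensor) -> torch.Tensor:
        # m_diff: (N, K, dim) motion-difference vectors for K neighbors of N key points
        scores = torch.softmax(self.g(m_diff), dim=1)  # unique score per neighbor
        return (scores * m_diff).sum(dim=1)            # (N, dim) updated key-point features
```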
The partial feature update network computes the spatial consistency index $Q_{i\to j}$ to judge whether the feature $H_{i\to j}$ delivered from the previous frame i is a good approximation of frame j, and thereby selectively updates the inherited features.

The spatial consistency index is $Q_{i\to j} = \mathcal{F}_{\mathrm{SC}}(P_i, X_i, X_j)$, where $\mathcal{F}_{\mathrm{SC}}$ is the spatial correlation detection network realized by the partial feature update network; $P_i$ is the set of key points of frame i, and $X_i$, $X_j$ are the point cloud data of frames i and j, respectively. For each $p_i \in P_i$, $Q_{i\to j}(p_i)$ checks the similarity of local spatial features between the frames; when the spatial consistency index does not exceed the consistency threshold, i.e. $Q_{i\to j}(p_i) \le \tau$, the aggregated feature $h_i$ is judged inconsistent with the current feature $h_j$, meaning $h_i$ should be updated with the feature $h_j$.
As shown in fig. 4, the partial feature update determines the portion to be updated through the following steps:

i) Given all points $X_i$, $X_j$ of two adjacent frames i, j and the key points $P_i$ of the previous frame i, for each key point $p_i \in P_i$, neighbors $x_{i,l} \in \mathcal{N}(p_i, X_i)$ are searched in the point cloud by KNN and spliced into the adjacency matrix $M_i(p_i) = \mathrm{concat}(x_{i,l},\,p_i)$, where X is the set of geometric coordinates of the input point cloud (x-y-z three-dimensional coordinates), P is the set of attribute features of the input point cloud (additional attributes such as reflectivity, density, and distance), and H is the set of characterization vectors of the point cloud after feature extraction.

The adjacency matrix $M_i(p_i)$ is the local spatial information encoding matrix; for a key point $p_i$ in frame i, it encodes the spatial relationship between the key point and its neighbors.
ii) Correspondingly, a local spatial information encoding matrix $M_j(p_i)$ is constructed for each key point with respect to frame j.
iii) From the local spatial information encoding matrices $M_i(p_i)$ and $M_j(p_i)$, a convolution-fully-connected layer (ConvFC) is constructed to measure consistency: $Q_{i\to j}(p_i) = \mathrm{ConvFC}(\mathrm{concat}(M_i(p_i),\,M_j(p_i)))$, where $Q_{i\to j}$ generates the feature update mask $U_{i\to j} = I(Q_{i\to j} \le \tau)$ through the indicator function $I(\cdot)$. When $Q_{i\to j}(p_i)$ meets the threshold requirement, the current frame inherits the key point $p_i \in P_i$ and its feature vector $h_i \in H_i$; otherwise, the feature point is discarded.
The convolution-fully-connected layer comprises two 3 × 3 two-dimensional convolution layers, each followed by a pooling layer that halves the feature map size. Two FC layers are then used to predict the feature consistency index, which is restricted to [0, 1] by regularization. The mask threshold may be specified in (0, 1), and each key point retains its update mask information. When the threshold is set to 1, no features are inherited and the entire feature vector must be recomputed by the feature extraction network.
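A hedged sketch of such a ConvFC consistency estimator is shown below; the channel widths, the 8 × 8 input resolution, and the sigmoid used to bound the output to [0, 1] are assumptions, while the two 3 × 3 convolutions each followed by a size-halving pooling layer and the two FC layers follow the description above:

```python
import torch
from torch import nn

class ConvFC(nn.Module):
    """Sketch: consistency estimator Q in [0, 1] from the two local spatial encoding matrices."""
    def __init__(self, side: int = 8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        flat = 32 * (side // 4) ** 2
        self.fc = nn.Sequential(nn.Linear(flat, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, m_i: torch.Tensor, m_j: torch.Tensor) -> torch.Tensor:
        # m_i, m_j: (B, 1, side, side) encoding matrices for frames i and j
        q = self.conv(torch.cat([m_i, m_j], dim=1)).flatten(1)
        return torch.sigmoid(self.fc(q))  # bounded consistency index Q
```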
iv) The feature extraction network $\mathcal{F}_{\mathrm{feat}}$ re-processes the discarded feature points in the current frame and collects local spatial features to supplement the key points, i.e., it encodes the spatial geometric structure in the local space of the point cloud so that it can be processed by the neural network to obtain semantic segmentation labels.
The feature extraction network is implemented by, but not limited to, RandLA-Net.
The partial feature update propagation mechanism satisfies

$$\hat{h}_j(p) = \begin{cases} h_{i\to j}(p), & U_{i\to j}(p) = 0 \\ \mathcal{F}_{\mathrm{feat}}(p), & U_{i\to j}(p) = 1 \end{cases}$$

where $U_{i\to j}$ is a 0-1 binary variable that equals 1 when the consistency check against the preset threshold fails (the feature is inconsistent) and 0 otherwise; when U is 0 the model inherits the feature from the previous frame, and when U is 1 the model re-extracts the feature at the current point; $\hat{h}$ denotes the predicted value.
The number of key frames in the adaptive frame scheduler is dynamically determined by the consistency estimates in the most recent non-key frames. Specifically, to determine whether the current frame i should be treated as a key frame, the ratio of the number of updated key points to the total number of key points is used: $r_{k\to i} = \frac{1}{N_i}\sum_{p \in P_i} U_{k\to i}(p)$, where frame k is the last key frame and $N_i$ is the number of key points in the current frame i. When $r_{k\to i}$ is greater than the threshold $\eta$, the key frame interval is reduced (key frames are scheduled more often); otherwise the interval is increased, thereby saving computation cost.
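The scheduling policy can be sketched as plain Python; the interval bounds and the exact adjustment step are assumptions, since the text only specifies the direction of the adjustment against the threshold η:

```python
class AdaptiveFrameScheduler:
    """Sketch: adjust the key-frame interval from the update ratio r of recent non-key frames."""
    def __init__(self, eta: float = 0.3, min_interval: int = 1, max_interval: int = 10):
        self.eta = eta                    # threshold on the update ratio
        self.min_interval = min_interval
        self.max_interval = max_interval
        self.interval = max_interval      # current key-frame interval (frames)
        self.since_key = 0

    def observe(self, n_updated: int, n_total: int) -> None:
        r = n_updated / max(n_total, 1)   # r_{k->i}: updated key points / total key points
        if r > self.eta:                  # large difference: key frames more often
            self.interval = max(self.min_interval, self.interval - 1)
        else:                             # stable scene: key frames can be sparser
            self.interval = min(self.max_interval, self.interval + 1)

    def next_is_key(self) -> bool:
        self.since_key += 1
        if self.since_key >= self.interval:
            self.since_key = 0
            return True
        return False
```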
FIG. 5 shows the quantitative results of the TempNet model of the invention and of SqueezeSegV2 on consecutive frames; the semantic segmentation effect of the invention is better.
FIG. 6 compares TempNet with two time-sequence processing algorithms (DFA, which aggregates information from all frames; DFP, direct feature prediction, which processes the next frame directly from the features of the previous frame). The invention lies toward the upper-right corner, which shows that feature aggregation enhancement is achieved both accurately and quickly.
In conclusion, the online point cloud series semantic segmentation framework TempNet is lightweight and easy to implement on top of existing single-frame segmentation schemes, and its time sequence feature aggregation network effectively aggregates two point cloud frames in motion using motion continuity and attention pooling.
The foregoing embodiments may be modified in many different ways by those skilled in the art without departing from the spirit and scope of the invention, which is defined by the appended claims rather than by the preceding embodiments; all embodiments falling within the scope of the claims are covered by the invention.

Claims (10)

1. An online dense point cloud semantic segmentation system fusing time sequence features, characterized by comprising an adaptive frame scheduler, a static segmentation module, a time sequence feature aggregation network, and a partial feature update network, wherein: the adaptive frame scheduler determines whether the next frame is treated as a key frame or a non-key frame according to the ratio of updated to non-updated portions reported by the partial feature update network; the static segmentation module extracts features from the raw point cloud of a key frame through its backbone network by static point cloud segmentation to obtain the semantic segmentation result of a single frame; the time sequence feature aggregation network exploits the temporal correlation of two adjacent frames and refines the semantic segmentation result by aggregating features on the key frames; and the partial feature update network rapidly updates the semantic segmentation result by selectively updating inherited features at the locally important key points of the non-key frame of the two adjacent frames;
the adaptive frame scheduler determines key frames and non-key frames at equal time intervals and dynamically adjusts the number of key frames according to the recently observed degree of difference of the non-key frames.
2. The system according to claim 1, wherein the static point cloud segmentation comprises (a) feature extraction through a pre-trained backbone network and (b) semantic segmentation through a detection network composed of a plurality of output branches.
3. The system according to claim 1, wherein the aggregation is: the position and feature differences between two consecutive frames caused by motion are first measured and then used to compute an attention score; this attention score serves as the aggregation weight for key points sampled from the two features, so that key points with consistent motion contribute more to the aggregate; specifically, for two consecutive frames i and j, the predicted key frame feature of frame j is $\hat{H}_j = W_{i\to j} \odot \mathcal{F}_{\mathrm{TFA}}(H_i, H_j) + W_{j\to j} \odot H_j$, where $\odot$ denotes element-wise multiplication, $\mathcal{F}_{\mathrm{TFA}}$ is the time sequence feature aggregation network, and $W_{i\to j}$ and $W_{j\to j}$ are regularization weights; the predicted key frame feature $\hat{H}_j$ recursively aggregates historical and current features, H is the point cloud feature vector space (matrix), and W is a weighting coefficient.
4. The system according to claim 1, wherein the time sequence feature aggregation specifically comprises:
encoding the position difference between the two frames by a differential position matrix $M_{\mathrm{pos}}(p_j) = \mathrm{mlp}(\mathrm{concat}(p_{i,l},\,p_j))$, where $p_j \in P_j$ and $p_{i,l} \in \mathcal{N}(p_j, P_i)$ are the neighbors of key point $p_j$ collected from the point cloud $P_i$ by the KNN algorithm, i, j are two consecutive frames, and mlp denotes a multilayer perceptron;
encoding the feature difference between the two frames by a differential feature matrix $M_{\mathrm{fea}}(p_j) = \mathrm{concat}(h_{i,l},\,h_j)$, where $h_j \in H_j$, $h_{i,l} \in H_i$; for each neighboring key point, the encoded relative point position is concatenated with the corresponding point feature to obtain an enhanced feature vector;
concatenating the two matrices to obtain the motion difference matrix $M_{\mathrm{diff}}(p_j) = \mathrm{concat}(M_{\mathrm{pos}}(p_j),\,M_{\mathrm{fea}}(p_j))$, where concat(·) is the matrix splicing function; the motion difference matrix $M_{\mathrm{diff}}(p_j)$ achieves feature enhancement of the key points.
5. The system according to claim 1, wherein an attention mechanism is adopted to further judge which neighboring points in the motion difference matrix have a greater influence on the current key point, specifically:
1) attention score computation: from the motion difference matrix $M_{\mathrm{diff}}(p_j)$, a unique attention score is learned for each neighbor through a shared scoring function $g(\cdot)$: $s_{j,l} = g(m_{j,l}, W)$, where H, M, P, S are sets of vectors and h, m, p, s are individual vectors in the corresponding sets;
2) weighted summation of attention scores: the updated representation vector of the key point is $\hat{h}_j = \sum_{l=1}^{K} s_{j,l} \odot m_{j,l}$, where $m_{j,l} \in M_{\mathrm{diff}}(p_j)$.
6. The system according to claim 1, wherein the partial feature update network computes the spatial consistency index $Q_{i\to j}$ to judge whether the feature $H_{i\to j}$ delivered from the previous frame i is a good approximation of frame j, and thereby selectively updates the inherited features.
7. The system according to claim 1, wherein the spatial consistency index is $Q_{i\to j} = \mathcal{F}_{\mathrm{SC}}(P_i, X_i, X_j)$, where $\mathcal{F}_{\mathrm{SC}}$ is the spatial correlation detection network realized by the partial feature update network, $P_i$ is the set of key points of frame i, and $X_i$, $X_j$ are the point cloud data of frames i and j, respectively; for each $p_i \in P_i$, $Q_{i\to j}(p_i)$ checks the similarity of local spatial features between the frames; when the spatial consistency index does not exceed the consistency threshold, i.e. $Q_{i\to j}(p_i) \le \tau$, the aggregated feature $h_i$ is judged inconsistent with the current feature $h_j$, meaning $h_i$ should be updated with the feature $h_j$.
8. The system according to claim 1, wherein the partial feature update determines the portion to be updated through the following steps:
i) given all points $X_i$, $X_j$ of two adjacent frames i, j and the key points $P_i$ of the previous frame i, for each key point $p_i \in P_i$, searching the point cloud for neighbors $x_{i,l} \in \mathcal{N}(p_i, X_i)$ by KNN and splicing them into the adjacency matrix $M_i(p_i) = \mathrm{concat}(x_{i,l},\,p_i)$, where X is the set of geometric coordinates of the input point cloud (x-y-z three-dimensional coordinates), P is the set of attribute features of the input point cloud (additional attributes such as reflectivity, density, and distance), and H is the set of characterization vectors of the point cloud after feature extraction;
ii) correspondingly, constructing for each key point a local spatial information encoding matrix $M_j(p_i)$ with respect to frame j;
iii) from the local spatial information encoding matrices $M_i(p_i)$ and $M_j(p_i)$, constructing a convolution-fully-connected layer (ConvFC) to measure consistency: $Q_{i\to j}(p_i) = \mathrm{ConvFC}(\mathrm{concat}(M_i(p_i),\,M_j(p_i)))$, where $Q_{i\to j}$ generates the feature update mask $U_{i\to j} = I(Q_{i\to j} \le \tau)$ through the indicator function $I(\cdot)$; when $Q_{i\to j}(p_i)$ meets the threshold requirement, the current frame inherits the key point $p_i \in P_i$ and its feature vector $h_i \in H_i$; otherwise, the feature point is discarded;
iv) the feature extraction network $\mathcal{F}_{\mathrm{feat}}$ re-processes the discarded feature points in the current frame and collects local spatial features to supplement the key points, i.e., it encodes the spatial geometric structure in the local space of the point cloud so that it can be processed by the neural network to obtain semantic segmentation labels.
9. The system according to claim 1, wherein the partial feature update propagation mechanism satisfies $\hat{h}_j(p) = h_{i\to j}(p)$ when $U_{i\to j}(p) = 0$ and $\hat{h}_j(p) = \mathcal{F}_{\mathrm{feat}}(p)$ when $U_{i\to j}(p) = 1$, where $U_{i\to j}$ is a 0-1 binary variable that equals 1 when the consistency check against the preset threshold fails and 0 otherwise; when U is 0 the model inherits the feature from the previous frame, and when U is 1 the model re-extracts the feature at the current point; $\hat{h}$ denotes the predicted value.
10. The system according to claim 1, wherein the number of key frames in the adaptive frame scheduler is dynamically determined by the consistency estimates in the most recent non-key frames, specifically: to determine whether the current frame i should be treated as a key frame, the ratio of the number of updated key points to the total number of key points is used: $r_{k\to i} = \frac{1}{N_i}\sum_{p \in P_i} U_{k\to i}(p)$, where frame k is the last key frame and $N_i$ is the number of key points in the current frame i; when $r_{k\to i}$ is greater than the threshold $\eta$, the key frame interval is reduced (key frames are scheduled more often); otherwise the interval is increased, thereby saving computation cost.
CN202110305389.0A 2021-03-19 2021-03-19 Online dense point cloud semantic segmentation system and method integrating time sequence features Pending CN115116013A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110305389.0A CN115116013A (en) 2021-03-19 2021-03-19 Online dense point cloud semantic segmentation system and method integrating time sequence features


Publications (1)

Publication Number Publication Date
CN115116013A true CN115116013A (en) 2022-09-27

Family

ID=83324289


Country Status (1)

Country Link
CN (1) CN115116013A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11954892B1 (en) * 2023-06-22 2024-04-09 Illuscio, Inc. Systems and methods for compressing motion in a point cloud



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination