CN113988203A - Track sequence clustering method based on deep learning - Google Patents

Track sequence clustering method based on deep learning

Info

Publication number
CN113988203A
CN113988203A (Application CN202111298174.7A)
Authority
CN
China
Prior art keywords
clustering
sequence
track
trajectory
cluster
Prior art date
Legal status (assumed, not a legal conclusion)
Pending
Application number
CN202111298174.7A
Other languages
Chinese (zh)
Inventor
王超
汪愿愿
罗实
王永恒
傅四维
董子铭
Current Assignee (listed assignees may be inaccurate)
Zhejiang University ZJU
Zhejiang Lab
Original Assignee
Zhejiang University ZJU
Zhejiang Lab
Priority date (assumed, not a legal conclusion)
Filing date
Publication date
Application filed by Zhejiang University ZJU and Zhejiang Lab
Priority to CN202111298174.7A
Publication of CN113988203A
Legal status: Pending

Classifications

    • G06F18/22 Pattern recognition; matching criteria, e.g. proximity measures
    • G06F18/23213 Pattern recognition; non-hierarchical clustering techniques using statistics or function optimisation with a fixed number of clusters, e.g. K-means clustering
    • G06F18/2415 Pattern recognition; classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate


Abstract

The invention relates to the field of data mining, and in particular to a track sequence clustering method based on deep learning, comprising the following steps. Step 1, pre-training layer: learn a low-dimensional feature representation of the trajectory data using a sequence-to-sequence autoencoder model. Step 2, initial clustering layer: run the K-Means clustering algorithm multiple times on the trajectory feature representations obtained from the pre-training layer, and select the cluster centers of the best clustering result as the initial cluster centers. Step 3, joint training optimization layer: a method combining trajectory clustering with deep feature extraction, which optimizes a loss function combining the sequence-to-sequence autoencoder reconstruction error and the clustering error, mapping the trajectory feature representations into a feature space better suited to clustering.

Description

Track sequence clustering method based on deep learning
Technical Field
The invention relates to the field of data mining, in particular to a track sequence clustering method based on deep learning.
Background
Similarity measurement between trajectories is the basis of spatio-temporal trajectory clustering methods. Most trajectory clustering algorithms divide complete trajectories into segments or groups, compare similarity between trajectories by point matching or hand-crafted strategies, and then gather similar trajectory objects into clusters with a widely used clustering algorithm; the accuracy of this approach leaves room for improvement. Advances in deep learning make it possible to learn feature representations of complex input sequences, and this capability can be applied to trajectory clustering to learn nonlinear feature representations better suited to clustering and thereby obtain more accurate clustering results.
Disclosure of Invention
To solve the above technical problems in the prior art, the invention provides a track sequence clustering method based on deep learning, with the following specific technical scheme:
a track sequence clustering method based on deep learning comprises the following steps:
step 1, pre-training layer: learn a low-dimensional feature representation of the trajectory data using a sequence-to-sequence autoencoder model;
step 2, initial clustering layer: run the K-Means clustering algorithm multiple times on the trajectory feature representations obtained from the pre-training layer, and select the cluster centers of the best clustering result as the initial cluster centers;
step 3, joint training optimization layer: a method combining trajectory clustering with deep feature extraction, which optimizes a loss function combining the sequence-to-sequence autoencoder reconstruction error and the clustering error, maps the trajectory feature representations into a feature space better suited to clustering, and obtains the clustering result end to end.
Further, the step 1 specifically includes the following steps:
step 1.1, first map the trajectory data points into equal-size spatial grid cells, and treat each cell as a discrete token;
step 1.2, then embed the trajectory sequence, using a sequence-to-sequence autoencoder model, into a feature space that reflects its latent path information, and extract a low-dimensional vector representing the real path of the trajectory data; this vector-learning approach is robust to trajectory datasets that are non-uniform, low-sampling-rate, or noisy.
Further, step 1.1 is specifically: divide the study area into equal-size spatial grid cells and treat each cell as a discrete token; trajectory points falling into the same cell are represented by the same token. The cells are treated like tokens in natural language processing: each cell has a unique token, and the set of all cells forms the vocabulary V.
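As an illustrative sketch (outside the patent text, with function and parameter names of our own choosing), step 1.1 can be realized by a tokenizer that maps each (lon, lat) point to the id of the equal-size grid cell containing it; all points in one cell share one token, and the cell ids form the vocabulary V:

```python
import math

def build_grid_tokenizer(min_lon, min_lat, cell_size_deg, n_cols):
    """Map a (lon, lat) point to a discrete grid-cell token id.

    Each equal-size cell of the study area is one 'word' in the
    vocabulary V; all points falling in the same cell share a token.
    (Names and the degree-based cell size are illustrative choices,
    not taken from the patent.)
    """
    def to_token(lon, lat):
        col = int(math.floor((lon - min_lon) / cell_size_deg))
        row = int(math.floor((lat - min_lat) / cell_size_deg))
        return row * n_cols + col  # unique id per cell

    return to_token

# A raw trajectory becomes a token sequence:
tok = build_grid_tokenizer(min_lon=120.0, min_lat=30.0,
                           cell_size_deg=0.01, n_cols=100)
trajectory = [(120.005, 30.002), (120.012, 30.002), (120.013, 30.011)]
tokens = [tok(lon, lat) for lon, lat in trajectory]
```

The first two points fall in adjacent cells of the same row, the third one row up, so nearby points collapse onto few tokens, which is what makes the sequence model robust to sampling-rate variation.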
Further, step 1.2 is specifically: the pre-training layer learns a low-dimensional feature representation of the trajectory data using a sequence-to-sequence autoencoder model whose training is equivalent to minimizing the KL divergence between the reconstructed trajectory feature distribution P_y and the original trajectory distribution P_r, i.e. KL(P_r||P_y). For a given trajectory, the training objective function is as follows:

    L(x) = Σ_{t=1}^{|r|} KL( P_r(· | r_t) ‖ P(y_t = · | y_{<t}, x) )    (1)

where P(y_t = g | y_{<t}, x) is the distribution of the reconstructed trajectory feature y_t after the trajectory is input into the model, P_r(g | r_t) is the spatial proximity distribution of the original trajectory point r_t used in decoding y_t, ‖·‖₂ denotes the Euclidean distance between grid-cell centroid coordinates, and θ is a distance scale parameter controlling the distribution around the original trajectory r;
thus, for a given dataset, the total reconstruction loss is the cumulative sum, over all trajectory objects in the dataset, of the errors of equation (2), which expands equation (1) over the K grid cells N_K(r_t) nearest each r_t with weights w(g, r_t) ∝ exp(−‖g − r_t‖₂/θ); it is denoted

    L_r = Σ_{i=1}^{N} L(x⁽ⁱ⁾)

where N is the size of the dataset.
Further, step 2 is specifically:
the loss function of the K-Means clustering algorithm is expressed as:

    L = Σ_{i=1}^{N} Σ_{k=1}^{K} s_ik ‖z_i − μ_k‖₂²    (3)

where z_i is a trajectory feature learned in the pre-training phase, μ_k is a cluster center, and s_ik is a Boolean variable: if μ_k is the cluster center nearest to z_i, then s_ik is 1, otherwise s_ik is 0. The softmax function is chosen to give equation (3) a continuous representation; for a given feature z_i, the clustering loss function takes the following form, in which all parameters are differentiable:

    L(z_i) = Σ_{k=1}^{K} [ exp(−σ‖z_i − μ_k‖₂²) / Σ_{k′=1}^{K} exp(−σ‖z_i − μ_{k′}‖₂²) ] ‖z_i − μ_k‖₂²    (4)

where ‖·‖₂ denotes the Euclidean distance and σ determines whether the clustering is a hard or a soft assignment: when σ = 0, z_i is weighted equally toward all cluster centers, i.e. soft-assignment clustering; when σ = +∞, it is equivalent to running the K-Means algorithm in the embedding space, i.e. hard-assignment clustering. Considering that cluster centers should keep a certain distance from one another, a cluster-center distance loss function is proposed, defined as:

    [Equation (5): a cluster-center distance loss L_d that penalizes pairs of cluster centers μ_i, μ_j lying close together; the equation image is not reproduced here.]

where μ_i and μ_j denote different cluster centers, usually computed on normalized values;
thus, the final clustering loss function over all trajectory data in the dataset is:

    L_c = Σ_{i=1}^{N} L(z_i) + γ L_d    (6)

i.e. the sum of the errors of equations (4) and (5) weighted by the parameter γ, where N is the total number of trajectories in the dataset.
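The hard/soft behaviour controlled by σ can be sketched as follows (an illustrative sketch, not the patent's code; `soft_kmeans_loss` and its arguments are names of our own choosing). The per-feature loss weights each squared center distance by a softmax over −σ·distance², so σ = 0 averages over all centers while a large σ recovers the nearest-center K-Means assignment:

```python
import numpy as np

def soft_kmeans_loss(z, mu, sigma):
    """Soft-assignment K-Means loss for one feature vector z.

    Weights each squared distance to a center by a softmax over
    -sigma * distance^2, as described for equation (4): sigma = 0
    weights all centers equally; sigma -> infinity approaches hard
    (nearest-center) K-Means assignment.
    """
    d2 = np.sum((mu - z) ** 2, axis=1)   # squared distances to each center
    logits = -sigma * d2
    w = np.exp(logits - logits.max())    # stable softmax
    w /= w.sum()
    return float(np.dot(w, d2))

z = np.array([0.0, 0.0])
mu = np.array([[0.0, 1.0], [0.0, 3.0]])  # squared distances 1 and 9

# sigma = 0: equal weights, loss = (1 + 9) / 2 = 5
# large sigma: weight collapses onto the nearest center, loss -> 1
```

Because every term is a smooth function of z and μ, the loss is differentiable in all parameters, which is exactly what the joint training in step 3 requires.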
Further, the objective function of the joint training optimization in step 3 is:
L = α·L_r + β·L_c    (7)
where L_r is the error between the reconstructed trajectory features output by the sequence-to-sequence autoencoder model and the original trajectory data, L_c is the K-Means clustering loss in the embedding space, and α and β are scale factors balancing the reconstruction error against the clustering error, determining whether the learned trajectory feature representation stays closer to the original trajectory data or becomes better suited to clustering.
Drawings
FIG. 1 is a schematic overall flow chart of the track sequence clustering method based on deep learning of the present invention;
FIG. 2 is a pseudo code diagram of step 3 of the deep learning-based trajectory sequence clustering method of the present invention;
FIGS. 3(a)-3(c) are raw data graphs used to demonstrate the effectiveness of the deep learning-based trajectory sequence clustering method of the present invention;
FIG. 4 is a comparison graph of clustering results of the deep learning-based trajectory sequence clustering method and the correlation method of the present invention.
Detailed Description
In order to make the objects, technical solutions and technical effects of the present invention more clearly apparent, the present invention is further described in detail below with reference to the accompanying drawings and examples.
As shown in fig. 1, the track sequence clustering method based on deep learning of the present invention uses the nonlinear feature-extraction capability of deep learning on sequence data to learn a feature representation of the trajectory data as the clustering object, with no need for pairwise point matching to compute inter-trajectory similarity. It thereby obtains fixed-length trajectory feature representations suitable for clustering and, within the same framework, produces the clustering result end to end. Specifically, the method comprises the following steps:
Step 1: first map the trajectory data points into spatial grid cells, treat the cells as discrete tokens for a sequence-to-sequence autoencoder model, and convert the tokens into vectors through an embedding layer; then use the sequence-to-sequence autoencoder model to embed the trajectory sequence into a feature space that reflects its latent path information.
Specifically, the study area is first divided into equal-size spatial grid cells, each cell is treated as a discrete token, and trajectory points falling into the same cell are represented by the same token. These cells are treated like tokens in natural language processing: each cell has a unique identifier, and the set of all cells constitutes the vocabulary V.
Next, a low-dimensional feature representation of the trajectory data is learned with the sequence-to-sequence autoencoder model. For a given trajectory x, the model should maximize the conditional probability P(r|x) in order to find its most likely true path r, and can thereby learn feature representations of low-sampling-rate and noisy trajectories.
The invention uses a high-sampling-rate trajectory as a stand-in for the real trajectory and takes a low-sampling-rate trajectory as the model input. Specifically, suppose x_a and x_b are two sampled versions of the true trajectory r, where x_a has a lower sampling rate and x_b a relatively high one; the higher-rate trajectory x_b is closer to the true trajectory r. The goal of maximizing P(r|x) can therefore be replaced by maximizing P(x_b|x_a): the sequence-to-sequence autoencoder uses an encoder to learn a feature vector v of x_a, and then a decoder attempts to recover the corresponding higher-sampling-rate trajectory x_b from v. Based on this analysis, given a set of collected sample trajectories {x_b⁽ⁱ⁾}, each sampled trajectory x_b is downsampled to create pairs {x_a, x_b}, and the sequence-to-sequence autoencoder maximizes the joint likelihood of all {x_a, x_b} pairs:

    max Σ_{i=1}^{N} log P(x_b⁽ⁱ⁾ | x_a⁽ⁱ⁾)
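The pair-construction step above can be sketched as follows (an illustrative sketch, not the patent's code; the drop rate, the endpoint-keeping policy, and all names are our own assumptions). Each collected trajectory serves as the higher-rate target x_b, and a randomly thinned copy serves as the model input x_a:

```python
import random

def make_pair(tokens, drop_rate, seed=None):
    """Create a training pair (x_a, x_b) from one collected trajectory.

    x_b is the trajectory as collected (the higher-sampling-rate proxy
    for the true path); x_a is a randomly downsampled copy of it.
    """
    rng = random.Random(seed)
    x_b = list(tokens)
    # Always keep the endpoints so the downsampled path stays anchored.
    x_a = ([x_b[0]]
           + [t for t in x_b[1:-1] if rng.random() > drop_rate]
           + [x_b[-1]])
    return x_a, x_b

x_a, x_b = make_pair([5, 9, 9, 12, 40, 41], drop_rate=0.5, seed=0)
```

Training the encoder-decoder to map each x_a back to its x_b is what pushes the learned feature vector v to encode the underlying path rather than the particular sampling of it.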
since the KL divergence function can represent the difference between two probability distributions, the present invention uses KL divergence to compare the difference between the reconstructed trajectory feature y and the true trajectory r. Training of the pre-training layer sequence-to-sequence based self-encoder model may be equivalent to minimizing the reconstructed trajectory feature distribution PyAnd original trajectory distribution PrKL divergence in between, i.e. KL (P)r||Fp). For a given trajectory x, the trained objective function is as follows:
Figure BDA0003331104600000043
wherein the content of the first and second substances,
Figure BDA0003331104600000051
is the reconstructed trajectory feature y after trajectory x is input into the modeltThe distribution of (a) to (b) is,
Figure BDA0003331104600000052
is rtSpatial proximity distribution of (a) for ytThe decoding process of (1). Suppose that grid g belongs to vocabulary V, its weight and its weight to target grid ytIs inversely proportional. Thus, the closer to ytThe grid of (2) is given greater weight. Furthermore, since most grids are far from rtAre far away and have smaller weight, so only need calculate from r in advancetWeights of the nearest K grids to reduce the cost of network training are denoted as NK(rt)。||·||2Representing the euclidean distance between the coordinates of the grid centroids, and θ is a distance scale parameter that controls the r distribution. For a given data set, the total reconstruction loss is the cumulative sum of the errors in equation (2) for all the trace objects in the data set, and is noted
Figure BDA0003331104600000053
Where N is the size of the data set.
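The K-nearest-cell proximity weighting can be sketched as follows (an illustrative sketch under our own naming; `proximity_weights` and its arguments are not from the patent). Only the K cells closest to the target cell receive non-zero weight, decaying with centroid distance scaled by θ:

```python
import numpy as np

def proximity_weights(target_xy, grid_xy, K, theta):
    """Spatial-proximity weights over the K grid cells nearest a target.

    Implements the idea behind N_K(r_t): only the K cells closest to
    the true cell r_t get weight, decaying exponentially with the
    Euclidean centroid distance scaled by theta, then normalized.
    """
    d = np.linalg.norm(grid_xy - target_xy, axis=1)
    nearest = np.argsort(d)[:K]          # indices of the K nearest cells
    w = np.exp(-d[nearest] / theta)
    w /= w.sum()                         # normalize to a distribution
    return nearest, w

# Four cell centroids; the target sits exactly on the first one.
grid_xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
nearest, w = proximity_weights(np.array([0.0, 0.0]), grid_xy, K=3, theta=1.0)
```

Restricting the loss to these K cells is what keeps the per-step target distribution cheap to evaluate even when the vocabulary V is large.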
Step 2: run the K-Means clustering algorithm multiple times on the trajectory features obtained from the pre-training layer, and select the cluster centers of the best clustering result as the initial cluster centers. The loss function of the K-Means clustering algorithm is expressed as:

    L = Σ_{i=1}^{N} Σ_{k=1}^{K} s_ik ‖z_i − μ_k‖₂²    (3)

where z_i is a trajectory feature learned in the pre-training phase, μ_k is a cluster center, and s_ik is a Boolean variable: if μ_k is the cluster center nearest to z_i, then s_ik is 1, otherwise s_ik is 0. The invention chooses the softmax function to give equation (3) a continuous representation. For a given feature z_i, the clustering loss can be written in the following form, in which all parameters are differentiable:

    L(z_i) = Σ_{k=1}^{K} [ exp(−σ‖z_i − μ_k‖₂²) / Σ_{k′=1}^{K} exp(−σ‖z_i − μ_{k′}‖₂²) ] ‖z_i − μ_k‖₂²    (4)

where ‖·‖₂ denotes the Euclidean distance and σ determines whether the clustering is a hard or a soft assignment. Specifically, when σ = 0, z_i is weighted equally toward all cluster centers, which is soft-assignment clustering; when σ = +∞, it is equivalent to running the K-Means algorithm in the embedding space, which is hard-assignment clustering. Considering that cluster centers should keep a certain distance from one another, the invention proposes a cluster-center distance loss function, defined as:

    [Equation (5): a cluster-center distance loss L_d that penalizes pairs of cluster centers μ_i, μ_j lying close together; the equation image is not reproduced here.]

where μ_i and μ_j denote different cluster centers, usually computed on normalized values. The final clustering loss function over all trajectory data in the dataset, shown in equation (6), is the sum of the errors of equations (4) and (5) weighted by the parameter γ, where N is the total number of trajectories in the dataset:

    L_c = Σ_{i=1}^{N} L(z_i) + γ L_d    (6)
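The patent only states that equation (5) should grow when cluster centers crowd together and shrink as they separate; one plausible form, chosen by us purely for illustration (an exponential of negative pairwise center distance, not the patented equation), behaves as required:

```python
import numpy as np

def center_separation_penalty(mu):
    """A center-distance penalty that shrinks as centers move apart.

    The patent states only that cluster centers should keep some
    distance from each other and that normalized values are used, so
    this exponential-of-negative-distance form is an illustrative
    choice, not the patented equation (5).
    """
    K = mu.shape[0]
    pen = 0.0
    for i in range(K):
        for j in range(i + 1, K):
            # Close pairs contribute near 1, far pairs near 0.
            pen += float(np.exp(-np.linalg.norm(mu[i] - mu[j])))
    return pen

near = np.array([[0.0, 0.0], [0.1, 0.0]])   # crowded centers
far = np.array([[0.0, 0.0], [10.0, 0.0]])   # well-separated centers
```

Any smooth penalty with this monotonicity can be added to the assignment loss with weight γ, giving the combined clustering loss of equation (6).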
Step 3: using the ability of deep learning to extract feature representations from complex sequence data, and combining the strengths of the sequence-to-sequence autoencoder model and the K-Means clustering algorithm, the initial trajectory features obtained in the pre-training stage are further optimized so as to learn trajectory feature representations better suited to clustering. The objective function of the joint training optimization is defined as:

    L = α·L_r + β·L_c    (7)

where L_r is the error between the reconstructed trajectory features output by the sequence-to-sequence autoencoder model and the original trajectory data, L_c is the K-Means clustering loss in the embedding space, and α and β are scale factors balancing the reconstruction error against the clustering error, determining whether the learned trajectory feature representation stays closer to the original trajectory data or becomes better suited to clustering. The backpropagation algorithm is used during training to solve the optimization problem efficiently. After training, fixed-length trajectory feature representations better suited to clustering, together with the corresponding cluster centers, are obtained.
The pseudo code for the joint training optimization is shown in FIG. 2. The inputs of the step-3 algorithm are: the weights of the sequence-to-sequence autoencoder network obtained in the pre-training phase, i.e. the initial parameters w₀ of the autoencoder network for joint training; the initial cluster centers μ₀, taken from the cluster centers produced by running the K-Means clustering algorithm on the trajectory feature vectors learned in the pre-training stage; the number of training iterations (epochs) M; and the mini-batch size N for stochastic gradient descent. The outputs of the algorithm are: the trained sequence-to-sequence autoencoder model weights w, the cluster centers μ, and the cluster assignments.
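One step of the joint objective can be sketched as follows (an illustrative sketch, not the patent's pseudo code; in the real method both loss terms are minimized by backpropagation through the autoencoder, whereas here the reconstruction error L_r is simply passed in as a number and the combined loss and soft assignments are evaluated):

```python
import numpy as np

def joint_training_step(z, mu, alpha, beta, sigma, recon_loss):
    """Evaluate the joint objective L = alpha*L_r + beta*L_c once.

    z: N x D trajectory features; mu: K x D cluster centers;
    recon_loss: the autoencoder reconstruction error L_r.
    Returns the combined loss and the current hard cluster labels.
    """
    d2 = ((z[:, None, :] - mu[None, :, :]) ** 2).sum(-1)   # N x K distances
    logits = -sigma * d2
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                      # soft assignments
    cluster_loss = float((w * d2).sum())                   # L_c assignment term
    total = alpha * recon_loss + beta * cluster_loss
    labels = w.argmax(axis=1)                              # end-to-end output
    return total, labels

z = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
mu = np.array([[0.0, 0.0], [5.0, 5.0]])
total, assign = joint_training_step(z, mu, alpha=1.0, beta=1.0,
                                    sigma=10.0, recon_loss=0.5)
```

Because the cluster labels fall directly out of the soft-assignment weights, the clustering result is available as soon as training converges, which is the end-to-end property the method claims.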
The invention verifies the effectiveness of the proposed deep trajectory clustering method on three datasets: a simulated dataset D1, shown in FIG. 3(a); dataset D2, the public transport intersection trajectory data from the Computer Vision Research (CVRR) dataset, shown in FIG. 3(b); and dataset D3, the human walking trajectory data from the CVRR dataset, shown in FIG. 3(c).
To quantitatively compare the quality of the clustering results of the proposed method against other algorithms, two indices are used for evaluation: Normalized Mutual Information (NMI) and Adjusted Rand Index (ARI). Both indices approach 1 for a perfect clustering; the closer to 1, the more accurate the clustering result. To verify the effectiveness of the proposed algorithm, a representative deep trajectory-feature extraction algorithm, T2VEC, and the widely used traditional trajectory clustering methods LCSS, EDR and DTW are selected as comparison models. The K-Means clustering algorithm is uniformly run 10 times on the obtained trajectory similarity matrices or learned trajectory features, and the mean and standard deviation of the NMI and ARI indices are computed. For the proposed method, the clustering result is obtained directly, end to end, once network training finishes. As shown in Table 1, the method proposed by the invention achieves the highest NMI and ARI indices, and hence the highest clustering quality, on all three datasets.
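The ARI used in the evaluation can be computed from the pair-counting contingency table; the following self-contained sketch (our own implementation, not the patent's evaluation code) shows the index reaching 1 for identical partitions regardless of label naming:

```python
from collections import Counter
from math import comb

def adjusted_rand_index(labels_true, labels_pred):
    """Adjusted Rand Index; 1 means the two partitions are identical.

    Computed from the pair-counting contingency table: pairs of items
    grouped together in both partitions, corrected for chance.
    (NMI, the other index used in the evaluation, is computed from
    the same contingency table via mutual information.)
    """
    n = len(labels_true)
    pairs = Counter(zip(labels_true, labels_pred))
    a = Counter(labels_true)
    b = Counter(labels_pred)
    sum_ij = sum(comb(c, 2) for c in pairs.values())
    sum_a = sum(comb(c, 2) for c in a.values())
    sum_b = sum(comb(c, 2) for c in b.values())
    expected = sum_a * sum_b / comb(n, 2)       # chance agreement
    max_index = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_index - expected)

# Relabeled but structurally identical partitions score 1.
perfect = adjusted_rand_index([0, 0, 1, 1], [1, 1, 0, 0])
```

Unlike raw accuracy, ARI is invariant to permutations of cluster labels, which is why it is a standard choice for scoring clusterings against ground truth.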
TABLE 1. Clustering results (NMI and ARI, mean and standard deviation) of the method proposed by the invention and the comparison methods T2VEC, LCSS, EDR and DTW on datasets D1-D3. [Table image not reproduced; per the text, the proposed method scores highest on all three datasets.]
Taking dataset D1 as an example, the difference between the clustering results of the method proposed by the invention and those of the comparison methods is illustrated in FIG. 4; it can be seen that the proposed method distinguishes the 10 clusters more accurately.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention in any way. Although the foregoing has described the practice of the present invention in detail, it will be apparent to those skilled in the art that modifications may be made to the practice of the invention as described in the foregoing examples, or that certain features may be substituted in the practice of the invention. All changes, equivalents and modifications which come within the spirit and scope of the invention are desired to be protected.

Claims (6)

1. A track sequence clustering method based on deep learning is characterized by comprising the following steps:
step 1, pre-training layer: learn a low-dimensional feature representation of the trajectory data using a sequence-to-sequence autoencoder model;
step 2, initial clustering layer: run the K-Means clustering algorithm multiple times on the trajectory feature representations obtained from the pre-training layer, and select the cluster centers of the best clustering result as the initial cluster centers;
step 3, joint training optimization layer: a method combining trajectory clustering with deep feature extraction, which optimizes a loss function combining the sequence-to-sequence autoencoder reconstruction error and the clustering error, maps the trajectory feature representations into a feature space better suited to clustering, and obtains the clustering result end to end.
2. The track sequence clustering method based on deep learning of claim 1, wherein the step 1 specifically comprises the following steps:
step 1.1, first map the trajectory data points into equal-size spatial grid cells, and treat each cell as a discrete token;
step 1.2, embed the trajectory sequence, using a sequence-to-sequence autoencoder model, into a feature space that reflects its latent path information, and extract a low-dimensional vector representing the real path of the trajectory data.
3. The track sequence clustering method based on deep learning according to claim 2, wherein step 1.1 is specifically: divide the study area into equal-size spatial grid cells and treat each cell as a discrete token; trajectory points falling into the same cell are represented by the same token; the cells are treated like tokens in natural language processing, each cell has a unique token, and the set of all cells forms the vocabulary V.
4. The track sequence clustering method based on deep learning according to claim 2, wherein step 1.2 is specifically: the pre-training layer learns a low-dimensional feature representation of the trajectory data using a sequence-to-sequence autoencoder model whose training is equivalent to minimizing the KL divergence between the reconstructed trajectory feature distribution P_y and the original trajectory distribution P_r, i.e. KL(P_r||P_y); for a given trajectory, the training objective function is as follows:

    L(x) = Σ_{t=1}^{|r|} KL( P_r(· | r_t) ‖ P(y_t = · | y_{<t}, x) )    (1)

where P(y_t = g | y_{<t}, x) is the distribution of the reconstructed trajectory feature y_t after the trajectory is input into the model, P_r(g | r_t) is the spatial proximity distribution of the original trajectory point r_t used in decoding y_t, ‖·‖₂ denotes the Euclidean distance between grid-cell centroid coordinates, and θ is a distance scale parameter controlling the distribution around the original trajectory r;
thus, for a given dataset, the total reconstruction loss is the cumulative sum, over all trajectory objects in the dataset, of the errors of equation (2), which expands equation (1) over the K grid cells N_K(r_t) nearest each r_t with weights w(g, r_t) ∝ exp(−‖g − r_t‖₂/θ); it is denoted

    L_r = Σ_{i=1}^{N} L(x⁽ⁱ⁾)

where N is the size of the dataset.
5. The track sequence clustering method based on deep learning of claim 4, wherein step 2 is specifically:
the loss function of the K-Means clustering algorithm is expressed as:

    L = Σ_{i=1}^{N} Σ_{k=1}^{K} s_ik ‖z_i − μ_k‖₂²    (3)

where z_i is a trajectory feature learned in the pre-training phase, μ_k is a cluster center, and s_ik is a Boolean variable: if μ_k is the cluster center nearest to z_i, then s_ik is 1, otherwise s_ik is 0; the softmax function is chosen to give equation (3) a continuous representation, and for a given feature z_i the clustering loss function takes the following form, in which all parameters are differentiable:

    L(z_i) = Σ_{k=1}^{K} [ exp(−σ‖z_i − μ_k‖₂²) / Σ_{k′=1}^{K} exp(−σ‖z_i − μ_{k′}‖₂²) ] ‖z_i − μ_k‖₂²    (4)

where ‖·‖₂ denotes the Euclidean distance and σ determines whether the clustering is a hard or a soft assignment; specifically, when σ = 0, z_i is weighted equally toward all cluster centers, i.e. soft-assignment clustering, and when σ = +∞, it is equivalent to running the K-Means algorithm in the embedding space, i.e. hard-assignment clustering; considering that cluster centers should keep a certain distance from one another, a cluster-center distance loss function is proposed, defined as:

    [Equation (5): a cluster-center distance loss L_d that penalizes pairs of cluster centers μ_i, μ_j lying close together; the equation image is not reproduced here.]

where μ_i and μ_j denote different cluster centers, usually computed on normalized values;
thus, the final clustering loss function for all trajectory data in the dataset is:

    L_c = Σ_{i=1}^{N} L(z_i) + γ L_d    (6)

i.e. the sum of the errors of equations (4) and (5) weighted by the parameter γ, where N is the total number of trajectories in the dataset.
6. The deep learning-based track sequence clustering method according to claim 1, wherein the objective function of the joint training optimization in step 3 is:
L = α·L_r + β·L_c    (7)
where L_r is the error between the reconstructed trajectory features output by the sequence-to-sequence autoencoder model and the original trajectory data, L_c is the K-Means clustering loss in the embedding space, and α and β are scale factors balancing the reconstruction error against the clustering error, determining whether the learned trajectory feature representation stays closer to the original trajectory data or becomes better suited to clustering.
CN202111298174.7A 2021-11-01 2021-11-01 Track sequence clustering method based on deep learning Pending CN113988203A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111298174.7A CN113988203A (en) 2021-11-01 2021-11-01 Track sequence clustering method based on deep learning

Publications (1)

Publication Number Publication Date
CN113988203A (en) 2022-01-28

Family

ID=79746352

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111298174.7A Pending CN113988203A (en) 2021-11-01 2021-11-01 Track sequence clustering method based on deep learning

Country Status (1)

Country Link
CN (1) CN113988203A (en)


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023029461A1 (en) * 2021-08-31 2023-03-09 西南电子技术研究所(中国电子科技集团公司第十研究所) Massive high-dimensional ais trajectory data clustering method
CN114462548B (en) * 2022-02-23 2023-07-18 曲阜师范大学 Method for improving accuracy of single-cell deep clustering algorithm
CN114637931A (en) * 2022-03-29 2022-06-17 北京工业大学 Travel mode detection method based on manifold upper sequence subspace clustering
CN114637931B (en) * 2022-03-29 2024-04-02 北京工业大学 Travel mode detection method based on manifold sequence subspace clustering
CN115952364A (en) * 2023-03-07 2023-04-11 之江实验室 Route recommendation method and device, storage medium and electronic equipment
CN115952364B (en) * 2023-03-07 2023-05-23 之江实验室 Route recommendation method and device, storage medium and electronic equipment
CN117688257A (en) * 2024-01-29 2024-03-12 东北大学 Long-term track prediction method for heterogeneous user behavior mode

Similar Documents

Publication Publication Date Title
CN113988203A (en) Track sequence clustering method based on deep learning
CN109145939B (en) Semantic segmentation method for small-target sensitive dual-channel convolutional neural network
CN112257341B (en) Customized product performance prediction method based on heterogeneous data difference compensation fusion
CN105488528B (en) Neural network image classification method based on improving expert inquiry method
CN107229904A (en) A kind of object detection and recognition method based on deep learning
CN111325750B (en) Medical image segmentation method based on multi-scale fusion U-shaped chain neural network
CN107705556A (en) A kind of traffic flow forecasting method combined based on SVMs and BP neural network
CN111860528B (en) Image segmentation model based on improved U-Net network and training method
CN106600595A (en) Human body characteristic dimension automatic measuring method based on artificial intelligence algorithm
CN112464004A (en) Multi-view depth generation image clustering method
CN114841257A (en) Small sample target detection method based on self-supervision contrast constraint
CN109783887A (en) A kind of intelligent recognition and search method towards Three-dimension process feature
CN113032613B (en) Three-dimensional model retrieval method based on interactive attention convolution neural network
CN105678790B (en) High-resolution remote sensing image supervised segmentation method based on variable gauss hybrid models
CN113344113A (en) Yolov3 anchor frame determination method based on improved k-means clustering
CN115311502A (en) Remote sensing image small sample scene classification method based on multi-scale double-flow architecture
CN115393632A (en) Image classification method based on evolutionary multi-target neural network architecture structure
CN107578448A (en) Blending surfaces number recognition methods is included without demarcation curved surface based on CNN
CN117034060A (en) AE-RCNN-based flood classification intelligent forecasting method
CN113128446A (en) Human body posture estimation method based on belief map enhanced network
CN107133348A (en) Extensive picture concentrates the proximity search method based on semantic consistency
CN116523877A (en) Brain MRI image tumor block segmentation method based on convolutional neural network
CN112101461B (en) HRTF-PSO-FCM-based unmanned aerial vehicle reconnaissance visual information audibility method
CN113469270B (en) Semi-supervised intuitive clustering method based on decomposition multi-target differential evolution superpixel
CN115438871A (en) Ice and snow scenic spot recommendation method and system integrating preference and eliminating popularity deviation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination