WO2023020373A1 - Face image clustering method and system based on localized simple multiple kernel k-means - Google Patents

Face image clustering method and system based on localized simple multiple kernel k-means

Info

Publication number
WO2023020373A1
WO2023020373A1 PCT/CN2022/112016 CN2022112016W WO2023020373A1 WO 2023020373 A1 WO2023020373 A1 WO 2023020373A1 CN 2022112016 W CN2022112016 W CN 2022112016W WO 2023020373 A1 WO2023020373 A1 WO 2023020373A1
Authority
WO
WIPO (PCT)
Prior art keywords
localized
clustering
matrix
kernel
objective function
Prior art date
Application number
PCT/CN2022/112016
Other languages
English (en)
French (fr)
Inventor
朱信忠
徐慧英
李苗苗
张毅
殷建平
Original Assignee
浙江师范大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 浙江师范大学 filed Critical 浙江师范大学
Publication of WO2023020373A1
Priority to ZA2024/01817A (publication ZA202401817B)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning

Definitions

  • This application relates to the field of machine learning technology for face image processing, in particular to a face image clustering method and system based on localized simple multi-kernel k-means.
  • Face clustering usually groups the face images in a database into different subclasses, so that the similarity between subclasses is as small as possible and the similarity within subclasses is as large as possible.
  • At retrieval time, only the subclasses with high similarity to the query need to be examined one by one, and the several most similar records are retrieved.
  • K-means clustering is the most widely used method, and kernel k-means clustering has been widely studied because it can learn non-linear information of samples.
  • Multikernel clustering provides an elegant framework to group samples into distinct categories by extracting complementary information from multiple sources. Through efficient and high-quality clustering, the efficiency of data analysis can be greatly improved and labor costs can be saved.
  • Simple multi-kernel k-means (SimpleMKKM) is a recently proposed representative multi-view clustering method.
  • Although SimpleMKKM has the aforementioned advantages, it is observed that it strictly aligns the combined kernel matrix with the "ideal" similarity generated globally by the cluster partition matrix. This indiscriminately forces all pairs of samples to be aligned equally to the same ideal similarity. As a result, it cannot effectively handle the relationships between samples, ignores local structure, and may lead to unsatisfactory clustering performance.
  • The purpose of this application is to provide a face image clustering method and system based on localized simple multi-kernel k-means that addresses the defects of the prior art.
  • A face image clustering method based on localized simple multi-kernel k-means, including the steps: S1. collecting face images and preprocessing them to obtain the average kernel matrix of the views; S2. computing n (τ×n)-nearest-neighbor matrices from the average kernel matrix; S3. computing the localized kernel matrix of each view from the neighbor matrices; S4. constructing the localized simple multi-kernel k-means clustering objective function from the localized kernel matrices; S5. solving for the minimum of the constructed objective function with the reduced gradient descent method to obtain the optimal clustering partition matrix; S6. performing k-means clustering on the obtained partition matrix to realize clustering.
  • The localized kernel matrix of each view is computed as K̂_p = Σ_{i=1}^{n} A^(i) ⊙ K_p, where:
  • A^(i) represents the n (τ×n)-nearest-neighbor matrices;
  • K_p represents the p-th given kernel matrix;
  • n represents the number of samples;
  • ⊙ represents element-wise multiplication.
  • The simple multi-kernel k-means clustering objective function is min_γ max_H Tr(H^T K_γ H), s.t. H^T H = I_k, where:
  • γ represents the coefficient vector;
  • H represents the partition matrix;
  • H^T represents the transpose of the partition matrix;
  • K_γ represents the combined kernel matrix of the K_p generated by γ;
  • I_k represents the k-order identity matrix.
  • The localized simple multi-kernel k-means clustering objective function is min_γ max_H Tr(H^T K̂_γ H), s.t. H^T H = I_k, γ ∈ Δ, where K̂_γ = Σ_{p=1}^{m} γ_p² K̂_p and Δ = {γ ∈ R^m : Σ_p γ_p = 1, γ_p ≥ 0};
  • R^m represents the m-dimensional real vector space;
  • γ_p represents the p-th component of γ.
  • Solving for the minimum of the constructed objective function in step S5 specifically reduces the localized objective to a minimization over γ of a kernel k-means optimal value function.
  • The minimum of the constructed objective function is solved with the reduced gradient descent method, specifically:
  • the reduced gradient of the objective function is computed at the optimal partition matrix for the current γ, and the positivity constraint on γ is enforced through the descent direction, where:
  • d_p represents the descent direction.
  • A face image clustering system based on localized simple multi-kernel k-means, including:
  • a collection module, used to collect face images and preprocess the collected face images to obtain the average kernel matrix of the views;
  • a first calculation module, used to compute n (τ×n)-nearest-neighbor matrices from the obtained average kernel matrix;
  • a second calculation module, used to compute the localized kernel matrix of each view from the neighbor matrices;
  • a construction module, used to construct the localized simple multi-kernel k-means clustering objective function from the computed localized kernel matrices of the views;
  • a solution module, used to solve for the minimum of the constructed objective function with the reduced gradient descent method to obtain the optimal clustering partition matrix;
  • a clustering module, used to perform k-means clustering on the obtained clustering partition matrix to realize clustering.
  • The localized kernel matrix of each view computed in the second calculation module is K̂_p = Σ_{i=1}^{n} A^(i) ⊙ K_p, where:
  • A^(i) represents the n (τ×n)-nearest-neighbor matrices;
  • K_p represents the p-th given kernel matrix;
  • n represents the number of samples;
  • ⊙ represents element-wise multiplication.
  • The simple multi-kernel k-means clustering objective function in the construction module is min_γ max_H Tr(H^T K_γ H), s.t. H^T H = I_k, where:
  • γ represents the coefficient vector;
  • H represents the partition matrix;
  • H^T represents the transpose of the partition matrix;
  • K_γ represents the combined kernel matrix of the K_p generated by γ;
  • I_k represents the k-order identity matrix.
  • The localized simple multi-kernel k-means clustering objective function in the construction module is min_γ max_H Tr(H^T K̂_γ H), s.t. H^T H = I_k, γ ∈ Δ, where K̂_γ = Σ_{p=1}^{m} γ_p² K̂_p and Δ = {γ ∈ R^m : Σ_p γ_p = 1, γ_p ≥ 0};
  • R^m represents the m-dimensional real vector space;
  • γ_p represents the p-th component of γ.
  • This application proposes a novel localized simple multi-kernel k-means clustering machine learning method, which includes modules for localized kernel alignment and for optimizing the objective function to obtain the optimal combination coefficients γ and the corresponding partition matrix H.
  • By optimizing the objective function, this application enables the optimized kernel combination to represent the information of each individual view while also better serving view fusion, thereby improving the clustering effect.
  • Moreover, this application localizes each view to strengthen local information.
  • MKKM-MM is the first attempt to improve MKKM with min-max learning, and it does improve MKKM, but to a limited extent.
  • the proposed localized SimpleMKKM outperforms MKKM-MM significantly. This again demonstrates the strength of our formulation and associated optimization strategy. Localized SimpleMKKM consistently and significantly outperforms SimpleMKKM.
  • Fig. 1 is the flow chart of the face image clustering method based on localized simple multi-kernel k-means that embodiment one provides;
  • Fig. 2 is the algorithm flowchart provided by embodiment one;
  • Fig. 3 is a schematic diagram of the kernel coefficients learned by different algorithms, provided in Embodiment 2;
  • Fig. 4 is a schematic diagram of the clustering performance of localized SimpleMKKM as H is learned iteratively on 6 benchmark datasets, provided in Embodiment 2;
  • Fig. 5 is a schematic diagram of the variation of the objective function value of localized SimpleMKKM with the number of iterations, provided in Embodiment 2;
  • Fig. 6 is a schematic diagram comparing the running time of different algorithms on all benchmark datasets, provided in Embodiment 2;
  • Fig. 7 is a schematic diagram of the influence of the neighbor ratio τ on the clustering performance on six representative datasets, provided in Embodiment 2.
  • The purpose of this application is to provide a face image clustering method and system based on localized simple multi-kernel k-means that addresses the defects of the prior art.
  • This embodiment provides a face image clustering method based on localized simple multi-kernel k-means, as shown in Fig. 1, including steps:
  • The kernel k-means clustering process is as follows: let {x_i}_{i=1}^{n} ⊆ X be a data set consisting of n samples, and let φ(·) be the feature map projecting a sample x into a reproducing kernel Hilbert space H. The goal of kernel k-means clustering is to minimize the sum of squared errors based on the partition matrix B ∈ {0,1}^{n×k}, as shown in the following formula: min_B Σ_{i=1}^{n} Σ_{c=1}^{k} B_ic ||φ(x_i) − μ_c||², where μ_c is the centroid of the c-th cluster.
  • In step S3, the localized kernel matrix of each view is computed from the neighbor matrices.
  • The localized kernel matrix of each view is expressed as K̂_p = Σ_{i=1}^{n} A^(i) ⊙ K_p, where:
  • A^(i) represents the n (τ×n)-nearest-neighbor matrices;
  • K_p represents the p-th given kernel matrix;
  • n represents the number of samples.
  • In step S4, the localized simple multi-kernel k-means clustering objective function is constructed from the computed localized kernel matrices of the views.
  • The simple multi-kernel k-means clustering objective is min_γ max_H Tr(H^T K_γ H), s.t. H^T H = I_k, where:
  • γ represents the coefficient vector;
  • H represents the partition matrix;
  • H^T represents the transpose of the partition matrix;
  • K_γ represents the combined kernel matrix of the K_p generated by γ;
  • I_k represents the k-order identity matrix.
  • S^(i) ∈ {0, 1}^{n×round(τ×n)} represents the (τ×n)-nearest-neighbor indicator matrix of the i-th sample, and round(·) is a rounding function.
  • This embodiment defines the local alignment of the i-th sample as Tr((S^(i))^T K_γ S^(i) · (S^(i))^T H H^T S^(i)).
  • Accumulating the local alignments of all samples gives the localized simple multi-kernel k-means clustering objective function min_γ max_H Tr(H^T K̂_γ H), s.t. H^T H = I_k, γ ∈ Δ.
  • In step S5, the minimum of the constructed objective function is solved with the reduced gradient descent method to obtain the optimal clustering partition matrix.
  • The min-max optimization is transformed into a min optimization, where the objective is a kernel k-means optimal value function.
  • Note that S^(i) ∈ {0, 1}^{n×round(τ×n)}, and the neighbor mask matrix A^(i) built from it is positive semi-definite.
  • The element-wise product of two positive semi-definite matrices is still positive semi-definite, so each localized kernel matrix K̂_p is a positive semi-definite matrix.
  • The reduced gradient descent method computes the gradient of the objective function through the optimal partition matrix obtained at the current γ.
  • Let u be the index of the largest component of the vector γ, which is believed to provide better numerical stability.
  • The positivity constraint on γ is taken into account in the descent direction, which means:
  • d_p, the p-th component of the descent direction, is set to zero whenever γ_p = 0 and the corresponding reduced-gradient entry is positive; otherwise it follows the negative reduced gradient, with the u-th component chosen so that the components of the direction sum to zero;
  • γ can then be updated as γ ← γ + αd, where α is the optimal step size. It can be selected by a one-dimensional line search strategy, such as the Armijo criterion.
  • This embodiment discusses the computational complexity of the proposed localized SimpleMKKM.
  • At each iteration, localized SimpleMKKM needs to solve a kernel k-means problem, compute the descent direction, and search for an optimal step size; the per-iteration cost is therefore dominated by these three operations, where n_0 is the maximum number of operations required to find the optimal step size.
  • As a result, localized SimpleMKKM does not significantly increase the computational complexity of the existing MKKM and SimpleMKKM algorithms.
  • The convergence of localized SimpleMKKM is briefly discussed. Note that with a given γ, the inner problem becomes conventional kernel k-means, which has a global optimum. Under this condition, the gradient computed in step (3) is exact, the algorithm of the present embodiment performs reduced gradient descent on the feasible domain, and the objective converges to a minimum.
  • Figure 2 shows the algorithm flow chart
  • This embodiment proposes a novel localized simple multi-kernel k-means clustering machine learning method, which includes modules such as localized kernel alignment and optimization of the objective function to obtain the optimal combination coefficients γ and the corresponding partition matrix H.
  • this embodiment enables the optimized kernel combination to represent the information of a single view, and can also better serve view fusion, achieving the purpose of improving the clustering effect.
  • this embodiment performs localization processing on each view to strengthen local information.
  • MKKM-MM is the first attempt to improve MKKM with min-max learning, and it does improve MKKM, but to a limited extent.
  • the proposed localized SimpleMKKM outperforms MKKM-MM significantly. This again demonstrates the advantages of the formulas and associated optimization strategies of this example. Localized SimpleMKKM consistently and significantly outperforms SimpleMKKM.
  • A face image clustering system based on localized simple multi-kernel k-means, including:
  • a collection module, used to collect face images and preprocess the collected face images to obtain the average kernel matrix of the views;
  • a first calculation module, used to compute n (τ×n)-nearest-neighbor matrices from the obtained average kernel matrix;
  • a second calculation module, used to compute the localized kernel matrix of each view from the neighbor matrices;
  • a construction module, used to construct the localized simple multi-kernel k-means clustering objective function from the computed localized kernel matrices of the views;
  • a solution module, used to solve for the minimum of the constructed objective function with the reduced gradient descent method to obtain the optimal clustering partition matrix;
  • a clustering module, used to perform k-means clustering on the obtained clustering partition matrix to realize clustering.
  • The localized kernel matrix of each view computed in the second calculation module is K̂_p = Σ_{i=1}^{n} A^(i) ⊙ K_p, where:
  • A^(i) represents the n (τ×n)-nearest-neighbor matrices;
  • K_p represents the p-th given kernel matrix;
  • n represents the number of samples;
  • ⊙ represents element-wise multiplication.
  • The simple multi-kernel k-means clustering objective function in the construction module is min_γ max_H Tr(H^T K_γ H), s.t. H^T H = I_k, where:
  • γ represents the coefficient vector;
  • H represents the partition matrix;
  • H^T represents the transpose of the partition matrix;
  • K_γ represents the combined kernel matrix of the K_p generated by γ;
  • I_k represents the k-order identity matrix.
  • The localized simple multi-kernel k-means clustering objective function in the construction module is min_γ max_H Tr(H^T K̂_γ H), s.t. H^T H = I_k, γ ∈ Δ, where K̂_γ = Σ_{p=1}^{m} γ_p² K̂_p and Δ = {γ ∈ R^m : Σ_p γ_p = 1, γ_p ≥ 0};
  • R^m represents the m-dimensional real vector space;
  • γ_p represents the p-th component of γ.
  • Embodiment 2: the face image clustering method based on localized simple multi-kernel k-means provided in this embodiment differs from Embodiment 1 in that:
  • the clustering performance of the method is tested on eight MKKM benchmark datasets: MSRA, Still, Cal-7, PFold, Nonpl, Flo17, Flo102 and Reuters.
  • This embodiment compares against the average multi-kernel k-means algorithm (A-MKKM), multi-kernel k-means clustering (MKKM), localized multi-kernel k-means clustering (LMKKM), robust min-max multi-kernel clustering (MKKM-MM), multi-kernel k-means clustering with matrix-induced regularization (MKKM-MR), optimal neighborhood kernel clustering (ONKC), late-fusion-based maximum-alignment multi-view clustering (MVC-LFA), and multi-kernel clustering with local kernel alignment maximization (LKAM).
  • This embodiment uses the common metrics of clustering accuracy (ACC), normalized mutual information (NMI) and the Rand index (RI) to report the clustering performance of each method. All methods are randomly initialized and repeated 50 times, and the best result is reported, to reduce the randomness caused by k-means.
  • Table 2 shows the clustering results of the proposed method and the comparison algorithms on all datasets. From the table it can be observed that: 1. MKKM-MM is the first attempt to improve MKKM through min-max learning. As observed, it does improve MKKM, but the performance improvement over MKKM is limited on all datasets; meanwhile, the proposed localized SimpleMKKM significantly outperforms MKKM-MM, which again demonstrates the advantages of the method and the associated optimization strategy of this embodiment. 2. Apart from the localized SimpleMKKM of this method, SimpleMKKM also achieves clustering performance comparable to or better than the above algorithms on all benchmark datasets; this superiority is attributed to its new formulation and new optimization algorithm. 3. The proposed localized SimpleMKKM is consistently and significantly better than SimpleMKKM.
  • In terms of ACC, it outperforms the SimpleMKKM algorithm by 4.7%, 5.2%, 8.3%, 1.2%, 17.3%, 17.3%, 1.8%, 1.5% and 1.1% on the benchmark datasets, and the improvements on the other criteria are similar. These results demonstrate the benefit the proposed localized SimpleMKKM draws from exploring and extracting the local information of the kernel matrices.
  • Figure 3 shows the kernel coefficients learned by different algorithms.
  • Figure 4 is the clustering performance of localized SimpleMKKM learning H iteratively on 6 benchmark datasets.
  • Figure 5 is the variation of the objective function value of localized SimpleMKKM with the number of iterations.
  • Figure 6 is a comparison of the running time of different algorithms on all benchmark datasets (unit: logarithm of seconds), where the histograms under each dataset are Avg-KKM, MKKM, LMKKM, ONKC, MKKM-MiR, LKAM, LF-MVC, MKKM-MM, SimpleMKKM, LSMKKM.
  • Figure 7 shows the effect of the size of the neighbor ratio ⁇ on the clustering performance on six representative data sets.
  • This embodiment proposes a novel localized simple multi-kernel k-means clustering machine learning method, which includes modules such as localized kernel alignment and optimization of the objective function to obtain the optimal combination coefficients γ and the corresponding partition matrix H.
  • this embodiment enables the optimized kernel combination to represent the information of a single view, and can also better serve view fusion, achieving the purpose of improving the clustering effect.
  • this embodiment performs localization processing on each view to strengthen local information.
  • MKKM-MM is the first attempt to improve MKKM with min-max learning, and it does improve MKKM, but to a limited extent.
  • the proposed localized SimpleMKKM outperforms MKKM-MM significantly. This again demonstrates the advantages of the formulas and associated optimization strategies of this example. Localized SimpleMKKM consistently and significantly outperforms SimpleMKKM.

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Image Processing (AREA)

Abstract

This application discloses a face image clustering method and system based on localized simple multi-kernel k-means. The face image clustering method based on localized simple multi-kernel k-means includes the steps: S1. collecting face images, and preprocessing the collected face images to obtain the average kernel matrix of the views; S2. computing n (τ×n)-nearest-neighbor matrices from the obtained average kernel matrix; S3. computing the localized kernel matrix of each view from the neighbor matrices; S4. constructing the localized simple multi-kernel k-means clustering objective function from the computed localized kernel matrices of the views; S5. solving for the minimum of the constructed objective function with the reduced gradient descent method to obtain the optimal clustering partition matrix; S6. performing k-means clustering on the obtained clustering partition matrix to realize clustering.

Description

Face image clustering method and system based on localized simple multi-kernel k-means
Technical Field
This application relates to the technical field of machine learning for face image processing, and in particular to a face image clustering method and system based on localized simple multi-kernel k-means.
Background Art
With the spread of face recognition and retrieval systems, the amount of face image data in such systems has grown rapidly, and face clustering has become an important basis for improving retrieval efficiency. Face clustering usually groups the face images in a database into different subclasses, so that the similarity between subclasses is as small as possible and the similarity within subclasses is as large as possible; at retrieval time, only the subclasses with high similarity to the query need to be examined one by one, and the several most similar records are retrieved.
k-means clustering is the most widely used method, and kernel k-means clustering has been widely studied because it can learn the non-linear information of samples. Multi-kernel clustering provides an elegant framework for grouping samples into different categories by extracting complementary information from multiple sources. Efficient, high-quality clustering can greatly improve the efficiency of data analysis and save labor costs.
By fully considering the relationships between samples, a local kernel alignment variant has been developed, and experiments verify that it can improve clustering performance. By assuming that an optimal kernel lies in the neighborhood of the combined kernel, an optimal neighborhood multi-kernel clustering algorithm has been proposed, which improves clustering performance by improving the representability of the learned optimal kernel. Maximally aligning multiple base partitions with a consensus partition yields considerable algorithmic speed-up and satisfactory clustering performance; on this basis, an effective late-fusion-based algorithm has been proposed to handle incomplete multi-view data.
As a representative of multi-view clustering, a new simple multi-kernel k-means (SimpleMKKM) was recently proposed. Instead of jointly minimizing over the kernel weights and the clustering partition matrix, SimpleMKKM minimizes over the kernel weights while maximizing over the clustering partition matrix, leading to an intractable min-max optimization. The problem can be transformed equivalently into a minimization problem, for which a reduced gradient algorithm is designed. The algorithm has been shown to optimize effectively and to be robust to noisy views, and it has attracted wide attention from researchers.
Although the recently proposed SimpleMKKM has the above advantages, it is observed that it strictly aligns the combined kernel matrix with the "ideal" similarity generated globally by the clustering partition matrix. This indiscriminately forces all sample pairs to be aligned equally with the same ideal similarity. Consequently, it cannot effectively handle the relationships between samples, ignores local structure, and may lead to unsatisfactory clustering performance.
Summary of the Invention
The purpose of this application is to provide a face image clustering method and system based on localized simple multi-kernel k-means that addresses the defects of the prior art.
To achieve the above purpose, this application adopts the following technical solution:
A face image clustering method based on localized simple multi-kernel k-means, including the steps:
S1. collecting face images, and preprocessing the collected face images to obtain the average kernel matrix of the views;
S2. computing n (τ×n)-nearest-neighbor matrices from the obtained average kernel matrix;
S3. computing the localized kernel matrix of each view from the neighbor matrices;
S4. constructing the localized simple multi-kernel k-means clustering objective function from the computed localized kernel matrices of the views;
S5. solving for the minimum of the constructed objective function with the reduced gradient descent method to obtain the optimal clustering partition matrix;
S6. performing k-means clustering on the obtained clustering partition matrix to realize clustering.
Further, the localized kernel matrix of each view in step S3 is computed as:
K̂_p = Σ_{i=1}^{n} A^(i) ⊙ K_p,
where K̂_p denotes the localized kernel matrix of each view; A^(i) denotes the n (τ×n)-nearest-neighbor matrices; K_p denotes the p-th given kernel matrix; n denotes the number of samples; and ⊙ denotes element-wise multiplication.
Further, the simple multi-kernel k-means clustering objective function in step S4 is:
min_γ max_H Tr(H^T K_γ H), s.t. H^T H = I_k,
where γ denotes the coefficient vector; H denotes the partition matrix; H^T denotes the transpose of the partition matrix; K_γ denotes the combined kernel matrix of the K_p generated by γ; and I_k denotes the k-order identity matrix.
Further, the localized simple multi-kernel k-means clustering objective function in step S4 is:
min_γ max_H Tr(H^T K̂_γ H), s.t. H^T H = I_k, γ ∈ Δ,
where K̂_γ = Σ_{p=1}^{m} γ_p² K̂_p and Δ = {γ ∈ R^m : Σ_{p=1}^{m} γ_p = 1, γ_p ≥ 0}; R^m denotes the m-dimensional real vector space; and γ_p denotes the p-th component of γ.
Further, solving for the minimum of the constructed objective function in step S5 is specifically:
the localized simple multi-kernel k-means clustering objective function is reduced to a simple multi-kernel k-means clustering objective function of the form
min_γ max_H Tr(H^T (Σ_{p=1}^{m} γ_p² K̂_p) H), s.t. H^T H = I_k,
where ⊙ denotes element-wise multiplication and K̂_p denotes the normalized localized kernel matrix;
when all elements of A^(i) are set to 1, the simple multi-kernel k-means clustering objective function becomes
min_{γ∈Δ} J(γ),
where J(γ) = max_{H: H^T H = I_k} Tr(H^T K̂_γ H) denotes the optimal value function.
Further, using the reduced gradient descent method to solve for the minimum of the constructed objective function in step S5 is specifically:
the gradient of the objective function used by the gradient descent method is computed as
∂J(γ)/∂γ_p = 2 γ_p Tr((H*)^T K̂_p H*),
where H* denotes the optimal partition matrix at the current γ.
Let u be the index of the largest component of the vector γ. The positivity constraint of γ is expressed through the descent direction: the p-th component is set to zero whenever γ_p = 0 and its reduced-gradient entry is positive, it otherwise follows the negative reduced gradient, and the u-th component is chosen so that the components of the direction sum to zero, where d_p denotes the descent direction.
Correspondingly, a face image clustering system based on localized simple multi-kernel k-means is also provided, including:
a collection module, used to collect face images and preprocess the collected face images to obtain the average kernel matrix of the views;
a first calculation module, used to compute n (τ×n)-nearest-neighbor matrices from the obtained average kernel matrix;
a second calculation module, used to compute the localized kernel matrix of each view from the neighbor matrices;
a construction module, used to construct the localized simple multi-kernel k-means clustering objective function from the computed localized kernel matrices of the views;
a solution module, used to solve for the minimum of the constructed objective function with the reduced gradient descent method to obtain the optimal clustering partition matrix;
a clustering module, used to perform k-means clustering on the obtained clustering partition matrix to realize clustering.
Further, the localized kernel matrix of each view computed in the second calculation module is:
K̂_p = Σ_{i=1}^{n} A^(i) ⊙ K_p,
where K̂_p denotes the localized kernel matrix of each view; A^(i) denotes the n (τ×n)-nearest-neighbor matrices; K_p denotes the p-th given kernel matrix; n denotes the number of samples; and ⊙ denotes element-wise multiplication.
Further, the simple multi-kernel k-means clustering objective function in the construction module is:
min_γ max_H Tr(H^T K_γ H), s.t. H^T H = I_k,
where γ denotes the coefficient vector; H denotes the partition matrix; H^T denotes the transpose of the partition matrix; K_γ denotes the combined kernel matrix of the K_p generated by γ; and I_k denotes the k-order identity matrix.
Further, the localized simple multi-kernel k-means clustering objective function in the construction module is:
min_γ max_H Tr(H^T K̂_γ H), s.t. H^T H = I_k, γ ∈ Δ,
where K̂_γ = Σ_{p=1}^{m} γ_p² K̂_p and Δ = {γ ∈ R^m : Σ_{p=1}^{m} γ_p = 1, γ_p ≥ 0}; R^m denotes the m-dimensional real vector space; and γ_p denotes the p-th component of γ.
Compared with the prior art, this application proposes a novel localized simple multi-kernel k-means clustering machine learning method, which includes modules for localized kernel alignment and for optimizing the objective function to obtain the optimal combination coefficients γ and the corresponding partition matrix H. By optimizing the objective function, this application enables the optimized kernel combination to represent the information of each individual view while also better serving view fusion, thereby improving the clustering effect. Moreover, this application localizes each view to strengthen local information. MKKM-MM is the first attempt to improve MKKM through min-max learning; it does improve MKKM, but to a limited extent. The proposed localized SimpleMKKM significantly outperforms MKKM-MM, which again demonstrates the advantage of our formulation and the associated optimization strategy. Localized SimpleMKKM consistently and significantly outperforms SimpleMKKM.
Description of the Drawings
Fig. 1 is the flow chart of the face image clustering method based on localized simple multi-kernel k-means provided in Embodiment 1;
Fig. 2 is the algorithm flow chart provided in Embodiment 1;
Fig. 3 is a schematic diagram of the kernel coefficients learned by different algorithms, provided in Embodiment 2;
Fig. 4 is a schematic diagram of the clustering performance of localized SimpleMKKM as H is learned iteratively on 6 benchmark datasets, provided in Embodiment 2;
Fig. 5 is a schematic diagram of the variation of the objective function value of localized SimpleMKKM with the number of iterations, provided in Embodiment 2;
Fig. 6 is a schematic diagram comparing the running time of different algorithms on all benchmark datasets, provided in Embodiment 2;
Fig. 7 is a schematic diagram of the influence of the neighbor ratio τ on the clustering performance on six representative datasets, provided in Embodiment 2.
Detailed Description of the Embodiments
The embodiments of this application are described below through specific examples, and those skilled in the art can easily understand other advantages and effects of this application from the contents disclosed in this specification. This application can also be implemented or applied through other different specific embodiments, and the details in this specification can be modified or changed in various ways based on different viewpoints and applications without departing from the spirit of this application. It should be noted that, where there is no conflict, the following embodiments and the features in the embodiments can be combined with each other.
The purpose of this application is to provide a face image clustering method and system based on localized simple multi-kernel k-means that addresses the defects of the prior art.
Embodiment 1
This embodiment provides a face image clustering method based on localized simple multi-kernel k-means, as shown in Fig. 1, including the steps:
S1. collecting face images, and preprocessing the collected face images to obtain the average kernel matrix of the views;
S2. computing n (τ×n)-nearest-neighbor matrices from the obtained average kernel matrix;
S3. computing the localized kernel matrix of each view from the neighbor matrices;
S4. constructing the localized simple multi-kernel k-means clustering objective function from the computed localized kernel matrices of the views;
S5. solving for the minimum of the constructed objective function with the reduced gradient descent method to obtain the optimal clustering partition matrix;
S6. performing k-means clustering on the obtained clustering partition matrix to realize clustering.
The kernel k-means clustering process is as follows. Let {x_i}_{i=1}^{n} ⊆ X be a data set consisting of n samples, and let φ(·): x ∈ X → H be the feature map projecting a sample x into a reproducing kernel Hilbert space H. The goal of kernel k-means clustering is to minimize the sum of squared errors based on the partition matrix B ∈ {0,1}^{n×k}, as shown in the following formula:
min_B Σ_{i=1}^{n} Σ_{c=1}^{k} B_ic ||φ(x_i) − μ_c||², s.t. Σ_{c=1}^{k} B_ic = 1,
where μ_c = (1/n_c) Σ_{i=1}^{n} B_ic φ(x_i) and n_c = Σ_{i=1}^{n} B_ic denotes the number of samples belonging to the c-th cluster (1 ≤ c ≤ k). The above formula can be rewritten as:
min_B Tr(K) − Tr(L^{1/2} B^T K B L^{1/2}),
where K is a kernel matrix with elements K_ij = φ(x_i)^T φ(x_j), L = diag([n_1^{-1}, n_2^{-1}, …, n_k^{-1}]), and 1 denotes a vector whose elements are all 1.
Since the variable B in the above formula is discrete, optimization is difficult. Let H = B L^{1/2} and relax the discrete constraint to the real-valued orthogonality constraint H^T H = I_k. The objective can then be transformed into:
max_H Tr(H^T K H), s.t. H^T H = I_k,
whose closed-form solution consists of the eigenvectors corresponding to the k largest eigenvalues of the kernel matrix K, which can be obtained through the eigendecomposition of K.
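By way of illustration, the relaxed kernel k-means step described above could be sketched in a few lines of Python (NumPy and scikit-learn); the function name and the final k-means discretization of the rows of H are illustrative assumptions rather than the reference implementation of this application:

    # A minimal sketch of the spectral relaxation above: H is formed from the
    # eigenvectors of K associated with its k largest eigenvalues, then discrete
    # labels are recovered by running ordinary k-means on the rows of H.
    import numpy as np
    from sklearn.cluster import KMeans

    def relaxed_kernel_kmeans(K, k, random_state=0):
        """Solve max_H Tr(H^T K H) s.t. H^T H = I_k, then discretize with k-means."""
        eigvals, eigvecs = np.linalg.eigh(K)     # eigenvalues in ascending order
        H = eigvecs[:, -k:]                      # top-k eigenvectors, n x k, H^T H = I_k
        labels = KMeans(n_clusters=k, n_init=10,
                        random_state=random_state).fit_predict(H)
        return H, labels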
In step S3, the localized kernel matrix of each view is computed from the neighbor matrices.
The localized kernel matrix of each view is expressed as:
K̂_p = Σ_{i=1}^{n} A^(i) ⊙ K_p,
where K̂_p denotes the localized kernel matrix of each view; A^(i) denotes the n (τ×n)-nearest-neighbor matrices; K_p denotes the p-th given kernel matrix; n denotes the number of samples; and ⊙ denotes element-wise multiplication.
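The following small sketch illustrates steps S2-S3 for a single sample: it builds the (τ×n)-nearest-neighbor indicator S^(i) and the corresponding 0/1 mask A^(i) from a toy kernel, and checks numerically that the element-wise product A^(i) ⊙ K keeps exactly the block of K indexed by the neighbors of sample i. Choosing neighbors by row similarity of the average kernel, and all helper names, are assumptions made for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    n, tau = 12, 0.25
    r = max(1, round(tau * n))                    # round(tau * n) neighbors per sample

    X = rng.normal(size=(n, 5))
    K = X @ X.T                                   # a toy (linear) kernel matrix
    K_avg = K                                     # stands in for the average kernel

    i = 3
    nbrs = np.argsort(-K_avg[i])[:r]              # indices of i's r most similar samples
    S_i = np.zeros((n, r)); S_i[nbrs, np.arange(r)] = 1.0   # neighbor indicator matrix
    d_i = np.zeros(n); d_i[nbrs] = 1.0
    A_i = np.outer(d_i, d_i)                      # 0/1 neighbor mask matrix for sample i

    masked = A_i * K                              # element-wise localization of K
    restricted = S_i @ (S_i.T @ K @ S_i) @ S_i.T  # restrict K to the neighbor block, re-embed
    assert np.allclose(masked, restricted)        # both localizations coincide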
In step S4, the localized simple multi-kernel k-means clustering objective function is constructed from the computed localized kernel matrices of the views.
The simple multi-kernel k-means clustering objective function is expressed as:
min_γ max_H Tr(H^T K_γ H), s.t. H^T H = I_k,
where γ denotes the coefficient vector; H denotes the partition matrix; H^T denotes the transpose of the partition matrix; K_γ denotes the combined kernel matrix of the K_p generated by γ; and I_k denotes the k-order identity matrix.
S^(i) ∈ {0,1}^{n×round(τ×n)} denotes the (τ×n)-nearest-neighbor indicator matrix of the i-th sample, and round(·) is a rounding function. This embodiment defines the local alignment of the i-th sample as:
Tr((S^(i))^T K_γ S^(i) · (S^(i))^T H H^T S^(i)),
where (S^(i))^T K_γ S^(i) extracts the neighbor elements of the i-th sample. This local alignment only requires the more reliable sample pairs to stay together, which allows the variation among the kernel matrices to be better exploited for clustering. The local alignments of all samples are then accumulated.
The localized simple multi-kernel k-means clustering objective function is expressed as:
min_γ max_H Tr(H^T K̂_γ H), s.t. H^T H = I_k, γ ∈ Δ,
where Δ = {γ ∈ R^m : Σ_{p=1}^{m} γ_p = 1, γ_p ≥ 0}; R^m denotes the m-dimensional real vector space; γ_p denotes the p-th component of γ; K̂_γ = Σ_{p=1}^{m} γ_p² K̂_p; and A^(i) is the neighbor mask matrix.
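As a small illustration of the constructed objective, the sketch below evaluates it for a fixed coefficient vector, using the fact that the inner maximum over H with H^T H = I_k equals the sum of the k largest eigenvalues of the combined localized kernel; K_hat_list is assumed to hold the localized kernel matrices from step S3, and the function names are illustrative:

    import numpy as np

    def combined_localized_kernel(K_hat_list, gamma):
        # K_gamma = sum_p gamma_p^2 * K_hat_p
        return sum((g ** 2) * K for g, K in zip(gamma, K_hat_list))

    def localized_objective(K_hat_list, gamma, k):
        """Return J(gamma) = max_{H^T H = I_k} Tr(H^T K_gamma H) and the maximizer H*."""
        vals, vecs = np.linalg.eigh(combined_localized_kernel(K_hat_list, gamma))
        return vals[-k:].sum(), vecs[:, -k:]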
In step S5, the reduced gradient descent method is used to solve for the minimum of the constructed objective function, obtaining the optimal clustering partition matrix.
(1) The objective function of simple multi-kernel k-means clustering (SimpleMKKM) is a special case of the localized simple multi-kernel k-means clustering objective function above, which can be written as:
min_γ max_H Tr(H^T (Σ_{p=1}^{m} γ_p² K̂_p) H), s.t. H^T H = I_k, γ ∈ Δ,
where ⊙ denotes element-wise multiplication and K̂_p denotes the normalized localized kernel matrix. By applying such a normalization to each base kernel, it can clearly be seen that global kernel alignment is a special case of the local kernel alignment criterion.
From the above it can be seen that when all elements of the A^(i) are set to 1, the formula reduces to SimpleMKKM. In this case, every sample takes all the remaining samples as its neighbors. This means that SimpleMKKM is a special case of the formula above, which can therefore equally well be written as:
min_{γ∈Δ} J(γ),
where
J(γ) = max_{H: H^T H = I_k} Tr(H^T K̂_γ H).
In this way, the min-max optimization is transformed into a min optimization whose objective J(γ) is a kernel k-means optimal value function.
(2) After the above normalization, each K̂_p remains positive semi-definite (PSD).
This is illustrated by showing that each K̂_p is a positive semi-definite matrix.
Note that S^(i) ∈ {0,1}^{n×round(τ×n)}, and the neighbor mask matrix A^(i) constructed from it is a positive semi-definite matrix. Moreover, the element-wise product of two positive semi-definite matrices is still positive semi-definite. Therefore, each A^(i) ⊙ K_p, and hence each K̂_p, is a positive semi-definite matrix.
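The positive semi-definiteness fact used above (the Schur product theorem) can be checked numerically with a short script; the random construction below is purely for illustration:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 8

    B = rng.normal(size=(n, n))
    K = B @ B.T                                   # a PSD kernel-like matrix
    d = (rng.random(n) < 0.5).astype(float)
    A = np.outer(d, d)                            # rank-one 0/1 neighbor-style mask, PSD

    eigs = np.linalg.eigvalsh(A * K)              # Hadamard (element-wise) product
    assert eigs.min() > -1e-10                    # all eigenvalues are (numerically) >= 0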
Each K̂_p retains positive semi-definiteness under the above normalization, which guarantees the differentiability of the optimal value function J(γ). In the following, the differentiability of J(γ) is established, its gradient is derived, and the reduced gradient descent algorithm is used for the optimization.
(3) J(γ) is differentiable; this follows from the global uniqueness of the inner maximizer at a given γ.
The gradient used by the gradient descent method is computed as:
∂J(γ)/∂γ_p = 2 γ_p Tr((H*)^T K̂_p H*), p = 1, …, m,
where H* denotes the optimal partition matrix at the current γ.
Let u be the index of the largest component of the vector γ, which is believed to provide better numerical stability. This embodiment takes the positivity constraint of γ into account in the descent direction: the p-th component d_p is set to zero whenever γ_p = 0 and the corresponding entry of the reduced gradient (the difference between the p-th and the u-th partial derivative of J) is positive, it otherwise follows the negative reduced gradient, and the u-th component is chosen so that the components of the direction sum to zero, where d_p denotes the descent direction. γ can then be updated as γ ← γ + αd, where α is the optimal step size. It can be selected by a one-dimensional line search strategy, such as the Armijo criterion.
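A sketch of one reduced-gradient update of γ with Armijo backtracking is given below. The explicit descent-direction rules follow the standard SimpleMKL-style reduced gradient on the simplex; treating them as the exact formulas of this application is an assumption, and all function names and tolerances are illustrative:

    import numpy as np

    def inner_objective(K_hat_list, gamma, k):
        # J(gamma) and its maximizer H* (top-k eigenvectors of the combined kernel)
        K_gamma = sum((g ** 2) * K for g, K in zip(gamma, K_hat_list))
        vals, vecs = np.linalg.eigh(K_gamma)
        return vals[-k:].sum(), vecs[:, -k:]

    def reduced_gradient_step(K_hat_list, gamma, k, c1=1e-4, alpha0=1.0):
        m = len(gamma)
        J0, H = inner_objective(K_hat_list, gamma, k)
        grad = np.array([2.0 * g * np.trace(H.T @ K @ H)
                         for g, K in zip(gamma, K_hat_list)])
        u = int(np.argmax(gamma))                  # reference component (numerical stability)
        red = grad - grad[u]                       # reduced gradient under the sum-to-one constraint

        d = np.zeros(m)
        free = ~((gamma <= 1e-12) & (red > 0))     # positivity constraint: pin these at zero
        d[free] = -red[free]
        d[u] = -(d.sum() - d[u])                   # make the direction sum to zero
        if np.allclose(d, 0.0):
            return gamma, J0                       # stationary point reached

        slope = float(red @ d)                     # directional derivative (<= 0)
        alpha = alpha0
        while alpha > 1e-12:
            cand = np.clip(gamma + alpha * d, 0.0, None)
            cand /= cand.sum()                     # stay on the simplex
            J1, _ = inner_objective(K_hat_list, cand, k)
            if J1 <= J0 + c1 * alpha * slope:      # Armijo sufficient-decrease test
                return cand, J1
            alpha *= 0.5
        return gamma, J0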
This embodiment discusses the computational complexity of the proposed localized SimpleMKKM. In each iteration, localized SimpleMKKM needs to solve a kernel k-means problem, compute the descent direction, and search for the optimal step size; the per-iteration cost is therefore dominated by these three operations, where n_0 is the maximum number of operations required to find the optimal step size. As can be observed, localized SimpleMKKM does not significantly increase the computational complexity of the existing MKKM and SimpleMKKM algorithms. The convergence of localized SimpleMKKM is then briefly discussed. Note that with a given γ, the inner problem becomes conventional kernel k-means, which has a global optimum. Under this condition, the gradient computed in step (3) is exact; the algorithm of this embodiment performs reduced gradient descent over the feasible domain, and the objective converges to a minimum of J(γ).
The algorithm flow chart is shown in Fig. 2.
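As a companion to the flow chart of Fig. 2, the following condensed end-to-end sketch strings the steps together; for brevity, γ is updated here with a fixed-step projected descent instead of the Armijo-based reduced-gradient step sketched earlier, and the parameter defaults (for example the neighbor ratio tau) are illustrative assumptions rather than values prescribed by this application:

    import numpy as np
    from sklearn.cluster import KMeans

    def localized_simple_mkkm(kernels, k, tau=0.05, n_iter=50, step=0.1, seed=0):
        n = kernels[0].shape[0]
        m = len(kernels)

        # S1-S2: average kernel and accumulated per-sample neighbor masks
        K_avg = sum(kernels) / m
        r = max(1, round(tau * n))
        A = np.zeros((n, n))
        for i in range(n):
            d = np.zeros(n)
            d[np.argsort(-K_avg[i])[:r]] = 1.0
            A += np.outer(d, d)

        # S3: localized kernels (element-wise masking of each base kernel)
        K_hat = [A * K for K in kernels]

        # S4-S5: minimize J(gamma) = max_{H^T H = I} Tr(H^T K_gamma H) over the simplex
        gamma = np.full(m, 1.0 / m)
        for _ in range(n_iter):
            K_gamma = sum((g ** 2) * K for g, K in zip(gamma, K_hat))
            vals, vecs = np.linalg.eigh(K_gamma)
            H = vecs[:, -k:]
            grad = np.array([2.0 * g * np.trace(H.T @ K @ H)
                             for g, K in zip(gamma, K_hat)])
            g_step = grad / (np.abs(grad).max() + 1e-12)   # scale-free gradient step
            gamma = np.maximum(gamma - step * g_step, 1e-8)
            gamma /= gamma.sum()                           # project back onto the simplex

        # Recompute the optimal partition matrix for the final gamma
        K_gamma = sum((g ** 2) * K for g, K in zip(gamma, K_hat))
        H = np.linalg.eigh(K_gamma)[1][:, -k:]

        # S6: final k-means on the rows of the optimal partition matrix
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(H)
        return labels, gamma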
This embodiment proposes a novel localized simple multi-kernel k-means clustering machine learning method, which includes modules for localized kernel alignment and for optimizing the objective function to obtain the optimal combination coefficients γ and the corresponding partition matrix H. By optimizing the objective function, this embodiment enables the optimized kernel combination to represent the information of each individual view while also better serving view fusion, thereby improving the clustering effect. Moreover, this embodiment localizes each view to strengthen local information. MKKM-MM is the first attempt to improve MKKM through min-max learning; it does improve MKKM, but to a limited extent. The proposed localized SimpleMKKM significantly outperforms MKKM-MM, which again demonstrates the advantage of the formulation and the associated optimization strategy of this embodiment. Localized SimpleMKKM consistently and significantly outperforms SimpleMKKM.
Correspondingly, a face image clustering system based on localized simple multi-kernel k-means is also provided, including:
a collection module, used to collect face images and preprocess the collected face images to obtain the average kernel matrix of the views;
a first calculation module, used to compute n (τ×n)-nearest-neighbor matrices from the obtained average kernel matrix;
a second calculation module, used to compute the localized kernel matrix of each view from the neighbor matrices;
a construction module, used to construct the localized simple multi-kernel k-means clustering objective function from the computed localized kernel matrices of the views;
a solution module, used to solve for the minimum of the constructed objective function with the reduced gradient descent method to obtain the optimal clustering partition matrix;
a clustering module, used to perform k-means clustering on the obtained clustering partition matrix to realize clustering.
Further, the localized kernel matrix of each view computed in the second calculation module is:
K̂_p = Σ_{i=1}^{n} A^(i) ⊙ K_p,
where K̂_p denotes the localized kernel matrix of each view; A^(i) denotes the n (τ×n)-nearest-neighbor matrices; K_p denotes the p-th given kernel matrix; n denotes the number of samples; and ⊙ denotes element-wise multiplication.
Further, the simple multi-kernel k-means clustering objective function in the construction module is:
min_γ max_H Tr(H^T K_γ H), s.t. H^T H = I_k,
where γ denotes the coefficient vector; H denotes the partition matrix; H^T denotes the transpose of the partition matrix; K_γ denotes the combined kernel matrix of the K_p generated by γ; and I_k denotes the k-order identity matrix.
Further, the localized simple multi-kernel k-means clustering objective function in the construction module is:
min_γ max_H Tr(H^T K̂_γ H), s.t. H^T H = I_k, γ ∈ Δ,
where K̂_γ = Σ_{p=1}^{m} γ_p² K̂_p and Δ = {γ ∈ R^m : Σ_{p=1}^{m} γ_p = 1, γ_p ≥ 0}; R^m denotes the m-dimensional real vector space; and γ_p denotes the p-th component of γ.
Embodiment 2
The face image clustering method based on localized simple multi-kernel k-means provided in this embodiment differs from Embodiment 1 in the following respects:
This embodiment tests the clustering performance of the method on 8 MKKM benchmark datasets, namely MSRA, Still, Cal-7, PFold, Nonpl, Flo17, Flo102 and Reuters. Information about the datasets is given in Table 1.
Table 1. Datasets used
Dataset   Samples   Kernels   Clusters
MSRA         210       6         7
Still        467       3         6
Cal-7        441       6         7
PFD          694      12        27
Nonpl       2732      69         3
Flo17       1360       7        17
Flo102      8189       4       102
Reuters    18758       5         6
This embodiment compares against the average multi-kernel k-means algorithm (A-MKKM), multi-kernel k-means clustering (MKKM), localized multi-kernel k-means clustering (LMKKM), robust multi-kernel clustering (MKKM-MM), multi-kernel k-means clustering with matrix-induced regularization (MKKM-MR), optimal neighborhood kernel clustering (ONKC), late-fusion-based maximum-alignment multi-view clustering (MVC-LFA), and multi-kernel clustering with local kernel alignment maximization (LKAM). In all experiments, all base kernels are first centered and regularized. For all datasets, the number of categories is assumed to be known and is set as the number of clusters. In addition, this embodiment uses grid search for the parameters of RMKKM, MKKM-MR, ONKC and MVC-LFA.
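The kernel preprocessing mentioned above ("all base kernels are first centered and regularized") could be sketched as follows; the exact regularization used in the experiments is not specified here, so unit-diagonal normalization is shown as one common choice:

    import numpy as np

    def center_kernel(K):
        n = K.shape[0]
        one = np.ones((n, n)) / n
        return K - one @ K - K @ one + one @ K @ one   # double-centering in feature space

    def normalize_kernel(K, eps=1e-12):
        d = np.sqrt(np.clip(np.diag(K), eps, None))
        return K / np.outer(d, d)                      # K_ij / sqrt(K_ii * K_jj)

    def preprocess(kernels):
        return [normalize_kernel(center_kernel(K)) for K in kernels]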
This embodiment uses the common metrics of clustering accuracy (ACC), normalized mutual information (NMI) and the Rand index (RI) to report the clustering performance of each method. All methods are randomly initialized and repeated 50 times, and the best result is reported, to reduce the randomness caused by k-means.
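The three reported metrics can be computed with standard tools, as sketched below: NMI and RI come from scikit-learn, while clustering accuracy (ACC) uses the usual optimal cluster-to-class matching via the Hungarian algorithm in SciPy; this is a generic implementation, not code from this application:

    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from sklearn.metrics import normalized_mutual_info_score, rand_score

    def clustering_accuracy(y_true, y_pred):
        y_true = np.asarray(y_true); y_pred = np.asarray(y_pred)
        classes = np.unique(y_true); clusters = np.unique(y_pred)
        cost = np.zeros((clusters.size, classes.size))
        for i, c in enumerate(clusters):
            for j, t in enumerate(classes):
                cost[i, j] = -np.sum((y_pred == c) & (y_true == t))
        row, col = linear_sum_assignment(cost)        # best cluster-to-class matching
        return -cost[row, col].sum() / y_true.size

    def report(y_true, y_pred):
        return {"ACC": clustering_accuracy(y_true, y_pred),
                "NMI": normalized_mutual_info_score(y_true, y_pred),
                "RI": rand_score(y_true, y_pred)}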
Table 2. Clustering performance of the different algorithms on the eight datasets
Table 2 shows the clustering results of the above method and the comparison algorithms on all datasets. From the table it can be observed that: 1. MKKM-MM is the first attempt to improve MKKM through min-max learning. As observed, it does improve MKKM, but the performance improvement over MKKM is limited on all datasets. Meanwhile, the proposed localized SimpleMKKM significantly outperforms MKKM-MM, which again demonstrates the advantages of the method and the associated optimization strategy of this embodiment. 2. Apart from the localized SimpleMKKM of this method, SimpleMKKM also achieves clustering performance comparable to or better than the above algorithms on all benchmark datasets; this superiority is attributed to its new formulation and new optimization algorithm. 3. The proposed localized SimpleMKKM is consistently and significantly better than SimpleMKKM. For example, on the 8 benchmark datasets, its ACC exceeds that of the SimpleMKKM algorithm by 4.7%, 5.2%, 8.3%, 1.2%, 17.3%, 17.3%, 1.8%, 1.5% and 1.1%. The improvements on the other criteria are similar. These results demonstrate the benefit the proposed localized SimpleMKKM draws from exploring and extracting the local information of the kernel matrices.
Fig. 3 shows the kernel coefficients learned by the different algorithms. Fig. 4 shows the clustering performance of localized SimpleMKKM as H is learned iteratively on 6 benchmark datasets. Fig. 5 shows the variation of the objective function value of localized SimpleMKKM with the number of iterations. Fig. 6 compares the running time of the different algorithms on all benchmark datasets (unit: logarithm of seconds); for each dataset the bars are, from left to right, Avg-KKM, MKKM, LMKKM, ONKC, MKKM-MiR, LKAM, LF-MVC, MKKM-MM, SimpleMKKM and LSMKKM. Fig. 7 shows the influence of the neighbor ratio τ on the clustering performance on 6 representative datasets.
This embodiment proposes a novel localized simple multi-kernel k-means clustering machine learning method, which includes modules for localized kernel alignment and for optimizing the objective function to obtain the optimal combination coefficients γ and the corresponding partition matrix H. By optimizing the objective function, this embodiment enables the optimized kernel combination to represent the information of each individual view while also better serving view fusion, thereby improving the clustering effect. Moreover, this embodiment localizes each view to strengthen local information. MKKM-MM is the first attempt to improve MKKM through min-max learning; it does improve MKKM, but to a limited extent. The proposed localized SimpleMKKM significantly outperforms MKKM-MM, which again demonstrates the advantage of the formulation and the associated optimization strategy of this embodiment. Localized SimpleMKKM consistently and significantly outperforms SimpleMKKM.
Note that the above are only preferred embodiments of this application and the technical principles applied. Those skilled in the art will understand that this application is not limited to the specific embodiments described here, and that various obvious changes, readjustments and substitutions can be made without departing from the protection scope of this application. Therefore, although this application has been described in some detail through the above embodiments, it is not limited to them; without departing from the concept of this application, it may also include many other equivalent embodiments, and the scope of this application is determined by the scope of the appended claims.

Claims (10)

  1. A face image clustering method based on localized simple multi-kernel k-means, characterized by including the steps:
    S1. collecting face images, and preprocessing the collected face images to obtain the average kernel matrix of the views;
    S2. computing n (τ×n)-nearest-neighbor matrices from the obtained average kernel matrix;
    S3. computing the localized kernel matrix of each view from the neighbor matrices;
    S4. constructing the localized simple multi-kernel k-means clustering objective function from the computed localized kernel matrices of the views;
    S5. solving for the minimum of the constructed objective function with the reduced gradient descent method to obtain the optimal clustering partition matrix;
    S6. performing k-means clustering on the obtained clustering partition matrix to realize clustering.
  2. The face image clustering method based on localized simple multi-kernel k-means according to claim 1, characterized in that the localized kernel matrix of each view in step S3 is computed as:
    K̂_p = Σ_{i=1}^{n} A^(i) ⊙ K_p,
    where K̂_p denotes the localized kernel matrix of each view; A^(i) denotes the n (τ×n)-nearest-neighbor matrices; K_p denotes the p-th given kernel matrix; n denotes the number of samples; and ⊙ denotes element-wise multiplication.
  3. The face image clustering method based on localized simple multi-kernel k-means according to claim 2, characterized in that the simple multi-kernel k-means clustering objective function in step S4 is:
    min_γ max_H Tr(H^T K_γ H), s.t. H^T H = I_k,
    where γ denotes the coefficient vector; H denotes the partition matrix; H^T denotes the transpose of the partition matrix; K_γ denotes the combined kernel matrix of the K_p generated by γ; and I_k denotes the k-order identity matrix.
  4. The face image clustering method based on localized simple multi-kernel k-means according to claim 3, characterized in that the localized simple multi-kernel k-means clustering objective function in step S4 is:
    min_γ max_H Tr(H^T K̂_γ H), s.t. H^T H = I_k, γ ∈ Δ,
    where K̂_γ = Σ_{p=1}^{m} γ_p² K̂_p and Δ = {γ ∈ R^m : Σ_{p=1}^{m} γ_p = 1, γ_p ≥ 0}; R^m denotes the m-dimensional real vector space; and γ_p denotes the p-th component of γ.
  5. The face image clustering method based on localized simple multi-kernel k-means according to claim 4, characterized in that solving for the minimum of the constructed objective function in step S5 is specifically:
    the localized simple multi-kernel k-means clustering objective function is reduced to a simple multi-kernel k-means clustering objective function of the form
    min_γ max_H Tr(H^T (Σ_{p=1}^{m} γ_p² K̂_p) H), s.t. H^T H = I_k,
    where ⊙ denotes element-wise multiplication and K̂_p denotes the normalized kernel matrix;
    when all elements of A^(i) are set to 1, the simple multi-kernel k-means clustering objective function is expressed as
    min_{γ∈Δ} J(γ),
    where J(γ) = max_{H: H^T H = I_k} Tr(H^T K_γ H) denotes the optimal value function.
  6. The face image clustering method based on localized simple multi-kernel k-means according to claim 5, characterized in that using the reduced gradient descent method to solve for the minimum of the constructed objective function in step S5 is specifically:
    the gradient of the objective function used by the gradient descent method is computed as
    ∂J(γ)/∂γ_p = 2 γ_p Tr((H*)^T K̂_p H*),
    where H* denotes the optimal partition matrix at the current γ;
    let u be the index of the largest component of the vector γ; the positivity constraint of γ is expressed through the descent direction, whose p-th component is set to zero whenever γ_p = 0 and its reduced-gradient entry is positive, otherwise follows the negative reduced gradient, and whose u-th component is chosen so that the components of the direction sum to zero,
    where d_p denotes the descent direction.
  7. A face image clustering system based on localized simple multi-kernel k-means, characterized by including:
    a collection module, used to collect face images and preprocess the collected face images to obtain the average kernel matrix of the views;
    a first calculation module, used to compute n (τ×n)-nearest-neighbor matrices from the obtained average kernel matrix;
    a second calculation module, used to compute the localized kernel matrix of each view from the neighbor matrices;
    a construction module, used to construct the localized simple multi-kernel k-means clustering objective function from the computed localized kernel matrices of the views;
    a solution module, used to solve for the minimum of the constructed objective function with the reduced gradient descent method to obtain the optimal clustering partition matrix;
    a clustering module, used to perform k-means clustering on the obtained clustering partition matrix to realize clustering.
  8. The face image clustering system based on localized simple multi-kernel k-means according to claim 7, characterized in that the localized kernel matrix of each view computed in the second calculation module is:
    K̂_p = Σ_{i=1}^{n} A^(i) ⊙ K_p,
    where K̂_p denotes the localized kernel matrix of each view; A^(i) denotes the n (τ×n)-nearest-neighbor matrices; K_p denotes the p-th given kernel matrix; n denotes the number of samples; and ⊙ denotes element-wise multiplication.
  9. The face image clustering system based on localized simple multi-kernel k-means according to claim 8, characterized in that the simple multi-kernel k-means clustering objective function in the construction module is:
    min_γ max_H Tr(H^T K_γ H), s.t. H^T H = I_k,
    where γ denotes the coefficient vector; H denotes the partition matrix; H^T denotes the transpose of the partition matrix; K_γ denotes the combined kernel matrix of the K_p generated by γ; and I_k denotes the k-order identity matrix.
  10. The face image clustering system based on localized simple multi-kernel k-means according to claim 9, characterized in that the localized simple multi-kernel k-means clustering objective function in the construction module is:
    min_γ max_H Tr(H^T K̂_γ H), s.t. H^T H = I_k, γ ∈ Δ,
    where K̂_γ = Σ_{p=1}^{m} γ_p² K̂_p and Δ = {γ ∈ R^m : Σ_{p=1}^{m} γ_p = 1, γ_p ≥ 0}; R^m denotes the m-dimensional real vector space; and γ_p denotes the p-th component of γ.
PCT/CN2022/112016 2021-08-17 2022-08-12 Face image clustering method and system based on localized simple multiple kernel k-means WO2023020373A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
ZA2024/01817A ZA202401817B (en) 2021-08-17 2024-03-01 Face image clustering method and system based on localized simple multiple kernel k-means

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110940777.6A 2021-08-17 Face image clustering method and system based on localized simple multiple kernel k-means
CN202110940777.6 2021-08-17

Publications (1)

Publication Number Publication Date
WO2023020373A1 true WO2023020373A1 (zh) 2023-02-23

Family

ID=78789545

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/112016 WO2023020373A1 (zh) 2021-08-17 2022-08-12 基于局部化简单多核k-均值的人脸图像聚类方法及系统

Country Status (3)

Country Link
CN (1) CN113762354A (zh)
WO (1) WO2023020373A1 (zh)
ZA (1) ZA202401817B (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113762354A (zh) * 2021-08-17 2021-12-07 浙江师范大学 基于局部化简单多核k-均值的人脸图像聚类方法及系统

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180307901A1 (en) * 2016-03-30 2018-10-25 Shenzhen University Non-negative matrix factorization face recognition method and system based on kernel machine learning
CN113762354A (zh) * 2021-08-17 2021-12-07 浙江师范大学 基于局部化简单多核k-均值的人脸图像聚类方法及系统

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180307901A1 (en) * 2016-03-30 2018-10-25 Shenzhen University Non-negative matrix factorization face recognition method and system based on kernel machine learning
CN113762354A (zh) * 2021-08-17 2021-12-07 浙江师范大学 基于局部化简单多核k-均值的人脸图像聚类方法及系统

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CHEN LUEFENG; WANG KUANLIN; LI MIN; WU MIN; PEDRYCZ WITOLD; HIROTA KAORU: "K-Means Clustering-Based Kernel Canonical Correlation Analysis for Multimodal Emotion Recognition in Human–Robot Interaction", IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, IEEE SERVICE CENTER, PISCATAWAY, NJ., USA, vol. 70, no. 1, 14 February 2022 (2022-02-14), USA , pages 1016 - 1024, XP011918350, ISSN: 0278-0046, DOI: 10.1109/TIE.2022.3150097 *
MU XINLIANG: "A Fast KPCA Face Recognition Algorithm Based on Mixed Kernel Function", DIANZI KEJI =IT AGE, XI'AN DIANZI KEJI DAXUE, CN, vol. 28, no. 2, 15 February 2015 (2015-02-15), CN , pages 46 - 50, XP093036200, ISSN: 1007-7820, DOI: 10.16180/j.cnki.issn1007-7820.2015.02.013 *
REN SHIJIN, YANG MAOYUN, LIU XIAOPING, XU GUIYUN: "Kernel-induced space selection approach to LPKHDA dimensional reduction algorithm", JOURNAL OF FRONTIERS OF COMPUTER SCIENCE & TECHNOLOGY, vol. 7, no. 3, 1 March 2013 (2013-03-01), pages 272 - 281, XP093036194, ISSN: 1673-9418, DOI: 10.3778/j.issn.1673-9418.1205027 *

Also Published As

Publication number Publication date
ZA202401817B (en) 2024-05-30
CN113762354A (zh) 2021-12-07

Similar Documents

Publication Publication Date Title
WO2021120752A1 (zh) Domain-adaptive model training, image detection method and apparatus, device, and medium
US10885379B2 (en) Multi-view image clustering techniques using binary compression
WO2018054283A1 (zh) Face model training method and apparatus, and face authentication method and apparatus
Zhang et al. Learning a self-expressive network for subspace clustering
Wu et al. Set based discriminative ranking for recognition
Liu et al. Localized simple multiple kernel k-means
WO2022199432A1 (zh) Deep incomplete clustering machine learning method and system based on optimal transport
CN112836672A (zh) Unsupervised data dimensionality reduction method based on adaptive nearest-neighbor graph embedding
CN110781766B (zh) Grassmann manifold discriminant analysis image recognition method based on feature-spectrum regularization
WO2023020391A1 (zh) Text clustering method and system based on one-step late fusion of multiple views
WO2023020373A1 (zh) Face image clustering method and system based on localized simple multiple kernel k-means
WO2022267955A1 (zh) Late-fusion multi-view clustering method and system based on local maximum alignment
Levin et al. Out-of-sample extension of graph adjacency spectral embedding
CN113158955B (zh) Pedestrian re-identification method based on clustering guidance and pairwise-metric triplet loss
US20240143699A1 (en) Consensus graph learning-based multi-view clustering method
CN110516533A (zh) Pedestrian re-identification method based on deep metric learning
Goh et al. Unsupervised Riemannian clustering of probability density functions
WO2022227956A1 (zh) Optimal neighborhood multi-kernel clustering method and system based on local kernels
CN111241326A (zh) Image visual relationship referring-expression grounding method based on an attention pyramid graph network
CN108388869B (zh) Handwritten data classification method and system based on multiple manifolds
CN112001231B (zh) Three-dimensional face recognition method, system and medium based on weighted multi-task sparse representation
Zhang et al. A spectral clustering based method for hyperspectral urban image
CN111027514B (zh) Elasticity-preserving projection method based on matrix exponential and application thereof
Liu et al. Inconsistency distillation for consistency: Enhancing multi-view clustering via mutual contrastive teacher-student leaning
CN109978066B (zh) Fast spectral clustering method based on multi-scale data structure

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE