CN114742132A - Deep multi-view clustering method, system and device based on shared difference learning


Info

Publication number
CN114742132A
Authority
CN
China
Legal status
Pending
Application number
CN202210264054.3A
Other languages
Chinese (zh)
Inventor
李晓翠
张新玉
史庆宇
Current Assignee
Hunan University of Technology
Original Assignee
Hunan University of Technology
Application filed by Hunan University of Technology
Priority to CN202210264054.3A
Publication of CN114742132A

Classifications

    • G06F 18/23 — Pattern recognition; analysing; clustering techniques
    • G06F 18/24 — Pattern recognition; analysing; classification techniques
    • G06F 18/253 — Pattern recognition; fusion techniques of extracted features
    • G06N 3/045 — Neural networks; architecture; combinations of networks
    • G06N 3/048 — Neural networks; architecture; activation functions
    • G06N 3/08 — Neural networks; learning methods


Abstract

Embodiments of the present disclosure provide a deep multi-view clustering method, system and device based on shared difference learning, belonging to the technical field of data processing. The method specifically comprises the following steps: establishing a shared-difference deep multi-view feature learning network; connecting each view of the multi-view data to a common information extraction network and a difference information extraction network; inputting the common information extraction networks of all views into a common information learning module and training until convergence to obtain the consistency feature; inputting the common information extraction networks and difference information extraction networks of all views into a difference information learning module, and obtaining the complementary feature of each view through an orthogonality constraint; concatenating the consistency feature and all the complementary features into a multi-view fusion feature; and inputting the multi-view fusion feature into a KL-divergence-based clustering model for clustering. This scheme improves the clustering performance and adaptability on multi-view data whose initial features are severely imbalanced.

Description

Deep multi-view clustering method, system and device based on shared difference learning

Technical Field

Embodiments of the present disclosure relate to the technical field of data processing, and in particular to a deep multi-view clustering method, system and device based on shared difference learning.

Background

The basic idea of clustering is to divide the samples in a data set into several clusters according to the similarity between them, such that samples within the same cluster are more similar to each other than to samples in different clusters. Traditional clustering algorithms mainly target single-view data, where the data has only one set of features. When the data has multiple sets of features, it is called multi-view data. Multi-view data not only contains richer and more useful information, but also introduces redundant information between different views. Most existing multi-view clustering methods focus mainly on maximizing the information shared across views while ignoring the difference information of each individual view; that is, they do not fully mine the complementary information of multi-view data. When the initial features of the multi-view data are severely imbalanced, existing methods may also produce a "barrel effect": the shared information of all views is pulled toward the view with the worst initial features, the features of high-quality views are not fully utilized, and the point of describing the data from multiple views is lost.

It can be seen that a deep multi-view clustering method based on shared difference learning, with strong clustering performance and adaptability, is urgently needed.

Summary of the Invention

In view of this, embodiments of the present disclosure provide a deep multi-view clustering method, system and device based on shared difference learning, which at least partially solve the prior-art problems of limited clustering performance and poor adaptability in exploiting high-quality view features.

In a first aspect, an embodiment of the present disclosure provides a deep multi-view clustering method based on shared difference learning, comprising:

Step 1: establish a shared-difference deep multi-view feature learning network, wherein the network comprises a deep feature extraction module, a common information learning module and a difference information learning module, and the deep feature extraction module comprises a common information extraction network and a difference information extraction network;

Step 2: acquire multi-view data, and connect each view of the multi-view data to the common information extraction network and the difference information extraction network respectively;

Step 3: input the common information extraction networks of all views of the multi-view data into the common information learning module and train until convergence, obtaining the consistency feature of the multi-view data;

Step 4: input the common information extraction networks and difference information extraction networks of all views of the multi-view data into the difference information learning module, and obtain the complementary feature of each view through an orthogonality constraint;

Step 5: concatenate the consistency feature and all the complementary features to form a multi-view fusion feature;

Step 6: input the multi-view fusion feature into a KL-divergence-based clustering model for clustering.

According to a specific implementation of the embodiment of the present disclosure, the common information learning module comprises a generative adversarial network.

According to a specific implementation of the embodiment of the present disclosure, step 3 specifically comprises:

Step 3.1: the common information learning module takes the common information extraction network on each view as a generator G, yielding M generators;

Step 3.2: pass the feature data generated by the M generators into an M-class discriminator D;

Step 3.3: repeat steps 3.1 and 3.2 until the discriminator cannot distinguish which view each piece of feature data corresponds to, obtaining the consistency feature.

According to a specific implementation of the embodiment of the present disclosure, the concatenation in step 5 is

$h_i = \left[ h_{c,i};\; h_{s,i}^{(1)};\; \dots;\; h_{s,i}^{(M)} \right]$

where $h_i$ denotes the multi-view fusion feature of the i-th sample, $h_{c,i}$ denotes the common information shared by all views, and $h_{s,i}^{(m)}$ denotes the difference information extracted on view m.

According to a specific implementation of the embodiment of the present disclosure, step 6 specifically comprises:

inputting the multi-view fusion feature into the KL-divergence-based clustering model, iteratively training the shared-difference deep multi-view feature learning network and the clustering network, and completing the clustering of the multi-view data.

In a second aspect, an embodiment of the present disclosure provides a deep multi-view clustering system based on shared difference learning, comprising:

an establishing module, configured to establish a shared-difference deep multi-view feature learning network, wherein the network comprises a deep feature extraction module, a common information learning module and a difference information learning module, and the deep feature extraction module comprises a common information extraction network and a difference information extraction network;

an acquisition module, configured to acquire multi-view data and connect each view of the multi-view data to the common information extraction network and the difference information extraction network respectively;

a first learning module, configured to input the common information extraction networks of all views of the multi-view data into the common information learning module and train until convergence, obtaining the consistency feature of the multi-view data;

a second learning module, configured to input the common information extraction networks and difference information extraction networks of all views of the multi-view data into the difference information learning module, and obtain the complementary feature of each view through an orthogonality constraint;

a fusion module, configured to concatenate the consistency feature and all the complementary features to form a multi-view fusion feature;

a clustering module, configured to input the multi-view fusion feature into a KL-divergence-based clustering model for clustering.

In a third aspect, an embodiment of the present disclosure further provides an electronic device, comprising:

at least one processor; and,

a memory communicatively connected to the at least one processor; wherein,

the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the deep multi-view clustering method based on shared difference learning in the foregoing first aspect or any implementation of the first aspect.

The deep multi-view clustering scheme based on shared difference learning in the embodiments of the present disclosure comprises: step 1, establishing a shared-difference deep multi-view feature learning network, wherein the network comprises a deep feature extraction module, a common information learning module and a difference information learning module, and the deep feature extraction module comprises a common information extraction network and a difference information extraction network; step 2, acquiring multi-view data, and connecting each view of the multi-view data to the common information extraction network and the difference information extraction network respectively; step 3, inputting the common information extraction networks of all views into the common information learning module and training until convergence, obtaining the consistency feature of the multi-view data; step 4, inputting the common information extraction networks and difference information extraction networks of all views into the difference information learning module, and obtaining the complementary feature of each view through an orthogonality constraint; step 5, concatenating the consistency feature and all the complementary features to form a multi-view fusion feature; and step 6, inputting the multi-view fusion feature into a KL-divergence-based clustering model for clustering.

The beneficial effects of the embodiments of the present disclosure are as follows: with the scheme of the present disclosure, the multi-view feature learning and multi-view clustering strategy makes full use of the consistency and complementarity information of multi-view data while reducing the redundant information between different views, improving the clustering performance and adaptability.

Brief Description of the Drawings

To explain the technical solutions of the embodiments of the present disclosure more clearly, the accompanying drawings used in the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present disclosure; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.

FIG. 1 is a schematic flowchart of a deep multi-view clustering method based on shared difference learning according to an embodiment of the present disclosure;

FIG. 2 is a schematic flowchart of another deep multi-view clustering method based on shared difference learning according to an embodiment of the present disclosure;

FIG. 3 is a schematic structural diagram of a shared-difference deep multi-view feature learning network according to an embodiment of the present disclosure;

FIG. 4 is a schematic structural diagram of a deep feature extraction module according to an embodiment of the present disclosure;

FIG. 5 is a schematic structural diagram of a common information learning module according to an embodiment of the present disclosure;

FIG. 6 is a schematic structural diagram of a difference information learning module according to an embodiment of the present disclosure;

FIG. 7 is a schematic structural diagram of a deep multi-view clustering system based on shared difference learning according to an embodiment of the present disclosure;

FIG. 8 is a schematic diagram of an electronic device according to an embodiment of the present disclosure.

Detailed Description

Embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.

The embodiments of the present disclosure are described below through specific examples, and those skilled in the art can easily understand other advantages and effects of the present disclosure from the contents disclosed in this specification. Obviously, the described embodiments are only some, but not all, of the embodiments of the present disclosure. The present disclosure can also be implemented or applied through other different specific embodiments, and various details in this specification can be modified or changed based on different viewpoints and applications without departing from the spirit of the present disclosure. It should be noted that, in the absence of conflict, the following embodiments and the features in the embodiments may be combined with each other. Based on the embodiments in the present disclosure, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present disclosure.

It should be noted that various aspects of the embodiments within the scope of the appended claims are described below. It should be apparent that the aspects described herein may be embodied in a wide variety of forms, and any specific structure and/or function described herein is merely illustrative. Based on the present disclosure, those skilled in the art should appreciate that an aspect described herein may be implemented independently of any other aspect, and that two or more of these aspects may be combined in various ways. For example, a device may be implemented and/or a method may be practiced using any number of the aspects set forth herein. In addition, such a device may be implemented and/or such a method may be practiced using other structures and/or functionality in addition to one or more of the aspects set forth herein.

It should also be noted that the drawings provided in the following embodiments only illustrate the basic concept of the present disclosure in a schematic manner; the drawings show only the components related to the present disclosure rather than being drawn according to the number, shape and size of the components in actual implementation. In actual implementation, the type, quantity and proportion of each component may be changed at will, and the component layout may also be more complicated.

In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, those skilled in the art will understand that the described aspects may be practiced without these specific details.

Embodiments of the present disclosure provide a deep multi-view clustering method based on shared difference learning, which can be applied to multi-view data clustering analysis in data processing scenarios.

Referring to FIG. 1, which is a schematic flowchart of a deep multi-view clustering method based on shared difference learning according to an embodiment of the present disclosure, and as shown in FIG. 1 and FIG. 2, the method mainly includes the following steps:

Step 1: establish a shared-difference deep multi-view feature learning network, wherein the network comprises a deep feature extraction module, a common information learning module and a difference information learning module, and the deep feature extraction module comprises a common information extraction network and a difference information extraction network;

In specific implementation, the shared-difference deep multi-view feature learning network may be constructed first, so that it can make full use of the consistency and complementarity information of the multi-view data while reducing the redundant information between different views. The structure of the shared-difference deep multi-view feature learning network is shown in FIG. 3; the deep feature extraction module it comprises is shown in FIG. 4, the common information learning module is shown in FIG. 5, and the difference information learning module is shown in FIG. 6, wherein the deep feature extraction module comprises a common information extraction network and a difference information extraction network.

Step 2: acquire multi-view data, and connect each view of the multi-view data to the common information extraction network and the difference information extraction network respectively;

For example, let the multi-view data be $X = \{X^{(1)}, X^{(2)}, \dots, X^{(M)}\}$, where M denotes the total number of views, $X^{(m)} \in \mathbb{R}^{d_m \times N}$, $d_m$ denotes the feature dimension of the samples in the m-th view, and N denotes the total number of samples. Each view of the multi-view data can then be connected to the common information extraction network and the difference information extraction network respectively; the common information extraction network outputs the common information of each view, and the difference information extraction network outputs the difference information of each view.
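As a concrete illustration, a small multi-view dataset in this layout can be set up as follows (the number of views, the dimensions $d_m$ and the use of numpy are illustrative assumptions, not part of the patent):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                    # N: total number of samples
dims = [64, 32, 128]                       # d_m: feature dimension of each view
M = len(dims)                              # M: total number of views

# X = {X^(1), ..., X^(M)}, with each X^(m) of shape (d_m, N) as in the text
X = [rng.normal(size=(d_m, N)) for d_m in dims]

assert len(X) == M
assert all(X[m].shape == (dims[m], N) for m in range(M))
```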

Specifically, the deep feature extraction module comprises a common information extraction network and a difference information extraction network, used to extract the information shared by all views and the difference information contained in each view, respectively.

Suppose the common information extraction sub-network and the difference information extraction sub-network in each view each contain n+1 fully connected layers, and the k-th layer (k ∈ [0, n]) contains $p_{ck}$ and $p_{sk}$ units, respectively. Then, for a sample x in the m-th view, the output of the k-th layer of the common information network can be expressed as:

$h_{c,k}^{(m)} = \sigma\big(W_{c,k}^{(m)} h_{c,k-1}^{(m)} + b_{c,k}^{(m)}\big)$

where $W_{c,k}^{(m)}$ and $b_{c,k}^{(m)}$ denote the weight matrix and bias vector of the k-th layer in the common information extraction sub-network, respectively, and $\sigma(\cdot)$ is a nonlinear activation function, commonly sigmoid or tanh.

Similarly, for the sample x in the m-th view, the output of the k-th layer of the difference information network can be expressed as:

$h_{s,k}^{(m)} = \sigma\big(W_{s,k}^{(m)} h_{s,k-1}^{(m)} + b_{s,k}^{(m)}\big)$

where $W_{s,k}^{(m)}$ and $b_{s,k}^{(m)}$ denote the weight matrix and bias vector of the k-th layer in the difference information extraction sub-network, respectively.

Therefore, for the i-th sample $x_i^{(m)}$ in the m-th view, its corresponding common information and difference information can be obtained, denoted as $h_{c,i}^{(m)}$ (common information) and $h_{s,i}^{(m)}$ (difference information):

$h_{c,i}^{(m)} = f_c^{(m)}\big(x_i^{(m)}\big)$

$h_{s,i}^{(m)} = f_s^{(m)}\big(x_i^{(m)}\big)$

where $f_c^{(m)}(\cdot)$ and $f_s^{(m)}(\cdot)$ denote the complete common information extraction sub-network and difference information extraction sub-network on the m-th view, respectively.
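The two sub-network forward passes above can be sketched in a few lines of numpy; the layer widths, tanh activation and random weights below are toy assumptions used only to show the recurrence $h_k = \sigma(W_k h_{k-1} + b_k)$, not the patent's trained networks:

```python
import numpy as np

def mlp_forward(x, weights, biases, act=np.tanh):
    """Apply h_k = act(W_k @ h_{k-1} + b_k) through the fully connected layers."""
    h = x
    for W, b in zip(weights, biases):
        h = act(W @ h + b)
    return h

def init_layers(sizes, seed):
    """Random toy weights; sizes[k] plays the role of the p_ck / p_sk unit counts."""
    r = np.random.default_rng(seed)
    Ws = [r.normal(size=(sizes[k + 1], sizes[k])) for k in range(len(sizes) - 1)]
    bs = [np.zeros(sizes[k + 1]) for k in range(len(sizes) - 1)]
    return Ws, bs

rng = np.random.default_rng(0)
d_m = 6                                    # input feature dimension of view m
sizes = [d_m, 10, 4]                       # toy layer widths

Wc, bc = init_layers(sizes, seed=1)        # common information sub-network f_c
Wd, bd = init_layers(sizes, seed=2)        # difference information sub-network f_s

x = rng.normal(size=d_m)                   # one sample x_i^(m)
h_c = mlp_forward(x, Wc, bc)               # its common information
h_s = mlp_forward(x, Wd, bd)               # its difference information
assert h_c.shape == (4,) and h_s.shape == (4,)
```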

Step 3: input the common information extraction networks of all views of the multi-view data into the common information learning module and train until convergence, obtaining the consistency feature of the multi-view data;

Optionally, the common information learning module comprises a generative adversarial network.

Further, step 3 specifically comprises:

Step 3.1: the common information learning module takes the common information extraction network on each view as a generator G, yielding M generators;

Step 3.2: pass the feature data generated by the M generators into an M-class discriminator D;

Step 3.3: repeat steps 3.1 and 3.2 until the discriminator cannot distinguish which view each piece of feature data corresponds to, obtaining the consistency feature.

In specific implementation, the common information learning module takes the deep common information extraction sub-network on each view as a generator G, yielding M generators. The feature data generated by these M generators is passed into an M-class discriminator D. The goal of each G is to generate data with a similar distribution so that the discriminator D cannot tell which view the data comes from, while the goal of D is to distinguish as well as possible which view the incoming feature data comes from, i.e. which generator G produced it. Through adversarial learning, the common information extracted from each view eventually becomes similar enough to form the consistency feature; that is, the information shared across the different views is maximized. The objective function of this module is as follows:

$\min_{G_1,\dots,G_M}\;\max_{D}\;\sum_{m=1}^{M}\sum_{i=1}^{N}\log D_m\big(G_m(x_i^{(m)})\big)$

where $G_m$ denotes the generator (common information extraction network) on the m-th view, $\tilde z_i^{(m)} = G_m(x_i^{(m)})$ denotes the sample generated by the generator $G_m$, and $y_i^{(m)}$ denotes the true view label of the sample $x_i^{(m)}$. The output of $D(\tilde z_i^{(m)})$ is the probability that the generated sample originates from view m, namely:

$D_m\big(\tilde z_i^{(m)}\big) = P\big(y_i^{(m)} = m \mid \tilde z_i^{(m)}\big)$
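The adversarial interplay described above can be sketched with a toy stand-in: D is reduced to a linear softmax classifier over the M views, and the loss is the M-class cross-entropy on the true view labels. All names and shapes here are illustrative assumptions, not the patent's networks:

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the last axis
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
M, d = 3, 4                              # number of views, latent dimension (toy)
W_d = rng.normal(size=(d, M))            # toy linear discriminator D

# latent codes produced by the M generators (one sample per view here)
z = rng.normal(size=(M, d))
view_labels = np.arange(M)               # true view labels y_i^(m) = m

probs = softmax(z @ W_d)                 # D(z): probability over the M views
# discriminator objective: maximize the log-likelihood of the true view label;
# each generator is trained to defeat this, so at the adversarial optimum
# D outputs roughly 1/M for every view and the common codes become aligned
d_loss = -np.mean(np.log(probs[np.arange(M), view_labels]))

assert np.allclose(probs.sum(axis=1), 1.0)
assert d_loss > 0.0                      # cross-entropy is strictly positive here
```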

Step 4: input the common information extraction networks and difference information extraction networks of all views of the multi-view data into the difference information learning module, and obtain the complementary feature of each view through an orthogonality constraint;

In specific implementation, the common information extraction networks and difference information extraction networks of all views of the multi-view data can be input into the difference information learning module; through an orthogonality constraint, the correlation between the common information and the difference information is minimized, thereby obtaining the complementary feature of each view.
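One common way to realize such an orthogonality constraint is to penalize the squared Frobenius norm of the cross-correlation between the common and difference feature matrices; the loss form below is an assumption about the patent's unspecified constraint, not its verbatim formula:

```python
import numpy as np

def orthogonality_loss(H_c, H_s):
    """||H_c^T H_s||_F^2: zero exactly when every common feature direction is
    orthogonal to every difference feature direction across the batch."""
    return float(np.sum((H_c.T @ H_s) ** 2))

# rows are samples, columns are feature dimensions (toy values)
H_c = np.array([[1.0, 0.0],
                [1.0, 0.0]])
H_s = np.array([[0.0, 1.0],
                [0.0, -1.0]])

assert orthogonality_loss(H_c, H_s) == 0.0   # disjoint information: no penalty
assert orthogonality_loss(H_c, H_c) > 0.0    # overlapping information is penalized
```

Minimizing this penalty pushes the difference network away from re-encoding the shared information, which is what makes the per-view features complementary rather than redundant.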

Step 5: concatenate the consistency feature and all the complementary features to form a multi-view fusion feature.

Based on the above embodiment, the concatenation in step 5 is

$$h_i=\left[\,h_{c,i},\;z_{d,i}^{1},\;z_{d,i}^{2},\;\dots,\;z_{d,i}^{M}\,\right]$$

where $h_i$ denotes the multi-view fusion feature of the i-th sample, and $z_{c,i}^{m}$ and $z_{d,i}^{m}$ denote the common information and the difference information extracted on view m, respectively.

In a specific implementation, for the i-th sample $x_i^{m}$ in the m-th view, let $z_{c,i}^{m}$ and $z_{d,i}^{m}$ denote the common information and the difference information extracted on view m, respectively. The common-information vectors and difference-information vectors extracted on all views are then fused in the following manner to obtain the common-difference information $h_i$ of sample i as the multi-view fusion feature.

$$h_i=\left[\,h_{c,i},\;z_{d,i}^{1},\;\dots,\;z_{d,i}^{M}\,\right]$$

where $h_{c,i}$ denotes the common information of all views, calculated by the following formula:

$$h_{c,i}=\frac{1}{M}\sum_{m=1}^{M} z_{c,i}^{m}$$
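The fusion step above can be sketched in a few lines of NumPy. Note the hedge: averaging the per-view common features into a single consistency vector is an assumption consistent with the description (the adversarial module drives those features to be similar), and the function and variable names are illustrative, not the patent's own:

```python
import numpy as np

def fuse(common_per_view, diff_per_view):
    """Average the per-view common features into h_c, then concatenate h_c
    with every view's difference features to form the fusion feature h."""
    h_c = np.mean(np.stack(common_per_view, axis=0), axis=0)    # (n, d_c)
    return np.concatenate([h_c] + list(diff_per_view), axis=1)  # (n, d_c + M*d_d)

rng = np.random.default_rng(2)
M, n, dc, dd = 3, 10, 4, 6
commons = [rng.normal(size=(n, dc)) for _ in range(M)]  # z_c^m per view
diffs = [rng.normal(size=(n, dd)) for _ in range(M)]    # z_d^m per view
h = fuse(commons, diffs)   # shape (10, 4 + 3*6) = (10, 22)
```

The fused vector keeps one copy of the shared content plus every view's private content, which is what makes it both consistent and complementary.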

Step 6: input the multi-view fusion feature into a KL-divergence-based clustering model for clustering.

Optionally, step 6 specifically comprises:

inputting the multi-view fusion feature into a KL-divergence-based clustering model, iteratively training the common-difference deep multi-view feature learning network and the clustering network, and thereby clustering the multi-view data.

In a specific implementation, the common-difference features of the multiple views can be fed into the KL-divergence-based clustering model, and the common-difference information learning network and the clustering network are trained iteratively to cluster the multi-view data. The objective function of the deep multi-view clustering algorithm based on common-difference learning is:

$$L=L_{c}+\lambda_{1}L_{s}+\lambda_{2}L_{clu} \qquad (8)$$

where $\lambda_1$ and $\lambda_2$ are balance factors that adjust the weight of each loss term in the overall objective, and $L_{clu}$ is the clustering loss, calculated by:

$$L_{clu}=\mathrm{KL}(P\,\|\,Q)=\sum_{i}\sum_{j=1}^{K} p_{ij}\log\frac{p_{ij}}{q_{ij}}$$

where K is the number of clusters, $q_{ij}$ is the soft assignment probability that sample i belongs to cluster j, and $p_{ij}$ is the target probability that sample i belongs to cluster j.

$q_{ij}$ and $p_{ij}$ are calculated as follows:

$$q_{ij}=\frac{\left(1+\|h_i-u_j\|^{2}/\alpha\right)^{-\frac{\alpha+1}{2}}}{\sum_{j'}\left(1+\|h_i-u_{j'}\|^{2}/\alpha\right)^{-\frac{\alpha+1}{2}}}$$

where $u_j$ is the center of the j-th cluster and $\alpha$ is a degree-of-freedom parameter; to simplify the computation, its value is fixed to 1.

$$p_{ij}=\frac{q_{ij}^{2}/f_{j}}{\sum_{j'} q_{ij'}^{2}/f_{j'}}$$

where $f_j$ is the sum of the soft assignment probabilities of all samples belonging to the j-th cluster:

$$f_{j}=\sum_{i} q_{ij}$$
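The quantities $q_{ij}$, $p_{ij}$, $f_j$ and $L_{clu}$ follow the usual deep-embedded-clustering form and can be computed as in the following NumPy sketch (variable names are illustrative; $\alpha$ is fixed to 1 as in the text):

```python
import numpy as np

def soft_assign(h, centers, alpha=1.0):
    """Student's-t soft assignment q_ij: similarity of sample i to center u_j."""
    dist2 = ((h[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)  # (n, K)
    q = (1.0 + dist2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(axis=1, keepdims=True)

def target_distribution(q):
    """Sharpened target p_ij = (q_ij^2 / f_j) / sum_j' (q_ij'^2 / f_j'),
    with f_j = sum_i q_ij the soft cluster frequencies."""
    f = q.sum(axis=0)
    w = q ** 2 / f
    return w / w.sum(axis=1, keepdims=True)

def kl_clustering_loss(p, q):
    """L_clu = KL(P || Q) = sum_i sum_j p_ij log(p_ij / q_ij)."""
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(3)
h = rng.normal(size=(20, 5))        # fused features h_i
centers = rng.normal(size=(4, 5))   # K = 4 cluster centers u_j
q = soft_assign(h, centers)
p = target_distribution(q)
loss = kl_clustering_loss(p, q)
```

During training, P is typically held fixed for a number of iterations while the network and centers are updated to pull Q toward it, which sharpens the cluster assignments.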

In the deep multi-view clustering method based on common-difference learning provided by this embodiment, the deep feature extraction sub-module contains two networks: one for extracting common information and the other for extracting difference information. The common-information learning module incorporates GAN techniques so that the common information extracted on each view is as similar as possible; the difference-information learning module uses an orthogonality constraint to minimize the correlation between the common information and the difference information.

The common-difference deep multi-view feature learning network is then applied to multi-view clustering: the common-difference information it extracts is passed to the subsequent clustering network, and the common-difference information learning network and the clustering network are trained iteratively, clustering the multi-view data and improving both the clustering quality and the adaptability of the method.

Corresponding to the method embodiments above, and referring to FIG. 7, an embodiment of the present disclosure further provides a deep multi-view clustering system 70 based on common-difference learning, comprising:

an establishing module 701, configured to establish a common-difference deep multi-view feature learning network, wherein the common-difference deep multi-view feature learning network comprises a deep feature extraction module, a common-information learning module and a difference-information learning module, and the deep feature extraction module comprises a common-information extraction network and a difference-information extraction network;

an acquisition module 702, configured to acquire multi-view data and to connect each view of the multi-view data to the common-information extraction network and the difference-information extraction network respectively;

a first learning module 703, configured to input the common-information extraction networks of all views of the multi-view data into the common-information learning module and train until convergence, obtaining the consistency feature of the multi-view data;

a second learning module 704, configured to input the common-information extraction networks and difference-information extraction networks of all views of the multi-view data into the difference-information learning module, obtaining the complementary feature of each view of the multi-view data through an orthogonality constraint;

a fusion module 705, configured to concatenate the consistency feature and all the complementary features to form a multi-view fusion feature;

a clustering module 706, configured to input the multi-view fusion feature into a KL-divergence-based clustering model for clustering.

The system shown in FIG. 7 can correspondingly execute the content of the foregoing method embodiments. For parts not described in detail in this embodiment, refer to the content recorded in the foregoing method embodiments, which is not repeated here.

Referring to FIG. 8, an embodiment of the present disclosure further provides an electronic device 80, comprising: at least one processor, and a memory communicatively connected to the at least one processor. The memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can perform the deep multi-view clustering method based on common-difference learning in the foregoing method embodiments.

An embodiment of the present disclosure further provides a non-transitory computer-readable storage medium storing computer instructions, the computer instructions being used to cause a computer to perform the deep multi-view clustering method based on common-difference learning in the foregoing method embodiments.

An embodiment of the present disclosure further provides a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the deep multi-view clustering method based on common-difference learning in the foregoing method embodiments.

Referring now to FIG. 8, a schematic structural diagram of an electronic device 80 suitable for implementing an embodiment of the present disclosure is shown. Electronic devices in embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players) and vehicle-mounted terminals (e.g., vehicle-mounted navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in FIG. 8 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.

As shown in FIG. 8, the electronic device 80 may include a processing apparatus (e.g., a central processing unit, a graphics processor, etc.) 801, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage apparatus 808 into a random access memory (RAM) 803. The RAM 803 also stores various programs and data required for the operation of the electronic device 80. The processing apparatus 801, the ROM 802 and the RAM 803 are connected to one another through a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.

Typically, the following apparatuses may be connected to the I/O interface 805: an input apparatus 806 including, for example, a touch screen, a touch pad, a keyboard, a mouse, an image sensor, a microphone, an accelerometer and a gyroscope; an output apparatus 807 including, for example, a liquid crystal display (LCD), a speaker and a vibrator; a storage apparatus 808 including, for example, a magnetic tape and a hard disk; and a communication apparatus 809. The communication apparatus 809 may allow the electronic device 80 to communicate wirelessly or by wire with other devices to exchange data. Although the figure shows the electronic device 80 having various apparatuses, it should be understood that not all of the illustrated apparatuses are required to be implemented or provided; more or fewer apparatuses may alternatively be implemented or provided.

In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication apparatus 809, installed from the storage apparatus 808, or installed from the ROM 802. When the computer program is executed by the processing apparatus 801, the above-described functions defined in the methods of the embodiments of the present disclosure are performed.

It should be noted that the computer-readable medium described above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus or device. In the present disclosure, by contrast, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate or transmit a program for use by or in connection with an instruction execution system, apparatus or device. Program code contained on a computer-readable medium may be transmitted using any suitable medium, including but not limited to: an electrical wire, an optical cable, RF (radio frequency), etc., or any suitable combination of the foregoing.

The computer-readable medium described above may be included in the electronic device described above, or may exist separately without being assembled into the electronic device.

The computer-readable medium described above carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the relevant steps of the foregoing method embodiments.

Alternatively, the computer-readable medium described above carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the relevant steps of the foregoing method embodiments.

Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Python, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).

The flowcharts and block diagrams in the figures illustrate the possible architecture, functions and operations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code that contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the figures. For example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.

The units involved in the embodiments of the present disclosure may be implemented in software or in hardware.

It should be understood that parts of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof.

The above are only specific embodiments of the present disclosure, but the protection scope of the present disclosure is not limited thereto. Any change or substitution that can readily occur to a person skilled in the art within the technical scope disclosed herein shall be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (7)

1. A deep multi-view clustering method based on common difference learning is characterized by comprising the following steps:
step 1, establishing a common difference depth multi-view feature learning network, wherein the common difference depth multi-view feature learning network comprises a depth feature extraction module, a common information learning module and a difference information learning module, and the depth feature extraction module comprises a common information extraction network and a difference information extraction network;
step 2, obtaining multi-view data, and respectively connecting each view of the multi-view data with the common information extraction network and the difference information extraction network;
step 3, inputting the common information extraction networks of all views of the multi-view data into a common information learning module for training until convergence, and obtaining the consistency characteristic of the multi-view data;
step 4, inputting the common information extraction network and the difference information extraction network of all the views of the multi-view data into a difference information learning module, and obtaining the complementary characteristics of each view of the multi-view data through orthogonal constraint;
step 5, connecting the consistency features and all the complementary features in series to form a multi-view fusion feature;
step 6, inputting the multi-view fusion characteristics into a clustering model based on KL divergence for clustering.
2. The method of claim 1, wherein the common information learning module comprises a generative adversarial network.
3. The method according to claim 2, wherein the step 3 specifically comprises:
step 3.1, the common information learning module takes a common information extraction network on each view as a generator G to finally obtain M generators;
step 3.2, transmitting the feature data generated by the M generators into a discriminator D of M categories;
step 3.3, repeating step 3.1 and step 3.2 until the discriminator cannot distinguish which view the characteristic data corresponds to, obtaining the consistency characteristic.
4. The method according to claim 1, wherein the series connection of step 5 is

$$h_i=\left[\,h_{c,i},\;z_{d,i}^{1},\;\dots,\;z_{d,i}^{M}\,\right]$$

wherein $h_i$ represents the multi-view fusion feature of the i-th sample, and $z_{c,i}^{m}$ and $z_{d,i}^{m}$ respectively represent the common information and the difference information extracted on view m.
5. The method according to claim 1, wherein the step 6 specifically comprises:
inputting the multi-view fusion characteristics into a KL divergence-based clustering model to iteratively train the common difference depth multi-view characteristic learning network and the clustering network, and clustering the multi-view data.
6. A deep multi-view clustering system based on common difference learning, comprising:
the system comprises an establishing module, a judging module and a judging module, wherein the establishing module is used for establishing a common difference depth multi-view feature learning network, the common difference depth multi-view feature learning network comprises a depth feature extracting module, a common information learning module and a difference information learning module, and the depth feature extracting module comprises a common information extracting network and a difference information extracting network;
the acquisition module is used for acquiring multi-view data and respectively connecting each view of the multi-view data with the common information extraction network and the difference information extraction network;
the first learning module is used for inputting the common information extraction network of all the views of the multi-view data into the common information learning module for training until convergence to obtain the consistency characteristic of the multi-view data;
the second learning module is used for inputting the common information extraction network and the difference information extraction network of all the views of the multi-view data into the difference information learning module, and obtaining the complementary characteristics of each view of the multi-view data through orthogonal constraint;
a fusion module for concatenating the consistent features and all of the complementary features to form a multi-view fusion feature;
and the clustering module is used for inputting the multi-view fusion characteristics into a clustering model based on KL divergence for clustering.
7. An electronic device, characterized in that the electronic device comprises:
at least one processor; and the number of the first and second groups,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the common difference learning based deep multiview clustering method of any of the preceding claims 1-5.
CN202210264054.3A 2022-03-17 2022-03-17 Deep multi-view clustering method, system and device based on shared difference learning Pending CN114742132A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210264054.3A CN114742132A (en) 2022-03-17 2022-03-17 Deep multi-view clustering method, system and device based on shared difference learning


Publications (1)

Publication Number Publication Date
CN114742132A true CN114742132A (en) 2022-07-12

Family

ID=82276495

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210264054.3A Pending CN114742132A (en) 2022-03-17 2022-03-17 Deep multi-view clustering method, system and device based on shared difference learning

Country Status (1)

Country Link
CN (1) CN114742132A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107958216A (en) * 2017-11-27 2018-04-24 沈阳航空航天大学 Based on semi-supervised multi-modal deep learning sorting technique
CN112115781A (en) * 2020-08-11 2020-12-22 西安交通大学 Unsupervised pedestrian re-identification method based on anti-attack sample and multi-view clustering
CN112784902A (en) * 2021-01-25 2021-05-11 四川大学 Two-mode clustering method with missing data
CN113094566A (en) * 2021-04-16 2021-07-09 大连理工大学 Deep confrontation multi-mode data clustering method
CN113591879A (en) * 2021-07-22 2021-11-02 大连理工大学 Deep multi-view clustering method, network, device and storage medium based on self-supervision learning


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XU Huiying, "Multi-modal deep embedded clustering based on autoencoders", Journal of Zhejiang Normal University (Natural Sciences) *
ZHAO Qianli, "Research on several data clustering problems", China Doctoral Dissertations Full-text Database, Information Science and Technology *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220712