CN115545215B - Decentralizing federation cluster learning method, device, equipment and medium - Google Patents


Info

Publication number
CN115545215B
Authority
CN
China
Prior art keywords: cluster, center, determining, sample, clustering
Legal status: Active
Application number: CN202211274810.7A
Other languages: Chinese (zh)
Other versions: CN115545215A (en)
Inventors: 孙银银, 李仲平
Current Assignee: Shanghai Lingshuzhonghe Information Technology Co ltd
Original Assignee: Shanghai Lingshuzhonghe Information Technology Co ltd
Events:
Application filed by Shanghai Lingshuzhonghe Information Technology Co ltd
Priority to CN202211274810.7A
Publication of CN115545215A
Priority to PCT/CN2023/079371 (WO2024082515A1)
Application granted
Publication of CN115545215B

Classifications

    • G: Physics
    • G06: Computing; Calculating or Counting
    • G06N: Computing arrangements based on specific computational models
    • G06N20/00: Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a decentralized federated cluster learning method, device, equipment, and medium. The method is executed by a task initiator and comprises the following steps: interacting with at least two data parties based on a preset clustering algorithm and a preset encryption algorithm to determine an optimal initial cluster center and at least two initial clusters; during the iterative update of the optimal initial cluster center and the initial clusters, determining the distance from each sample in the joint data set to each cluster, obtaining the total distance of each sample relative to the current cluster centers via the preset encryption algorithm, and interacting with the at least two data parties on that basis to determine the three-party total distance; and determining from this total distance whether a preset iteration-termination condition is met, and if so, taking the cluster centers updated in the last iteration as the final cluster centers. Efficient joint cluster learning is thus achieved while preserving the data privacy of both the task initiator and the data parties.

Description

Decentralizing federation cluster learning method, device, equipment and medium
Technical Field
The invention relates to the field of federated learning, and in particular to a decentralized federated cluster learning method, device, equipment, and medium.
Background
Vertical (longitudinal) federated learning is a federated-learning technique that performs data mining over common samples that carry different features at different parties; a task initiator and one or more data parties use it to run cluster analysis on a fused data set, and it is applied in scenarios where data privacy must be protected. A typical application is a bank acting as task initiator and data parties holding different feature sets for the same samples performing vertical federated learning, so that data can be analyzed and fused without exposing either side's raw data.
How to improve the efficiency of vertical federated cluster learning and jointly cluster the data while guaranteeing data security is a problem that remains to be solved.
Disclosure of Invention
The invention provides a decentralized federated cluster learning method, device, equipment, and medium that achieve efficient joint cluster learning while guaranteeing the privacy of both the task initiator and the data parties.
According to one aspect of the present invention, there is provided a decentralized federated cluster learning method, executed by a task initiator, comprising:
interacting with at least two data parties based on a preset clustering algorithm and a preset encryption algorithm, and determining an optimal initial cluster center and at least two initial clusters, where each data party's local data set contains the same sample IDs as the task initiator's local data set but different sample features;
during the iterative update of the optimal initial cluster center and the initial clusters, determining the distance from each sample in the joint data set formed by the task initiator and the at least two data parties to each cluster, obtaining via the preset encryption algorithm the total distance of each sample in the joint data set relative to the current cluster centers, and generating the current clustering result from those total distances;
sending the current clustering result to the at least two data parties, instructing each data party to update its locally stored cluster centers according to the current clustering result and to calculate the distance between the currently updated cluster centers and the previous iteration's cluster centers;
obtaining, via the preset encryption algorithm, the distance calculated by each data party between its currently updated cluster centers and the previous iteration's cluster centers, and determining the total such distance corresponding to the joint data set; and
determining, from that total distance, whether a preset iteration-termination condition is met, and if so, taking the cluster centers updated in the last iteration as the final cluster centers.
According to another aspect of the present invention, there is provided a decentralized federated cluster learning apparatus, configured in a task initiator, comprising:
an initial determining module, configured to interact with at least two data parties based on a preset clustering algorithm and a preset encryption algorithm and to determine an optimal initial cluster center and at least two initial clusters, where each data party's local data set contains the same sample IDs as the task initiator's local data set but different sample features;
a generation module, configured to determine, during the iterative update of the optimal initial cluster center and the initial clusters, the distance from each sample in the joint data set formed by the task initiator and the at least two data parties to each cluster, to obtain via the preset encryption algorithm the total distance of each sample relative to the current cluster centers, and to generate the current clustering result from those total distances;
a sending module, configured to send the current clustering result to the at least two data parties, instructing each data party to update its locally stored cluster centers according to the current clustering result and to calculate the distance between the currently updated cluster centers and the previous iteration's cluster centers;
a determining module, configured to obtain, via the preset encryption algorithm, the distance calculated by each data party between its currently updated cluster centers and the previous iteration's cluster centers, and to determine the total such distance corresponding to the joint data set; and
a judging module, configured to determine, from that total distance, whether a preset iteration-termination condition is met, and if so, to take the cluster centers updated in the last iteration as the final cluster centers.
According to another aspect of the present invention, there is provided an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the decentralized federation cluster learning method according to any one of the embodiments of the present invention.
According to another aspect of the present invention, there is provided a computer readable storage medium storing computer instructions for causing a processor to implement the decentralised federation cluster learning method of any of the embodiments of the present invention when executed.
According to the above technical scheme, the task initiator interacts with at least two data parties based on a preset clustering algorithm and a preset encryption algorithm to determine an optimal initial cluster center and at least two initial clusters. During the iterative update of the optimal initial cluster center and the initial clusters, it determines the distance from each sample in the joint data set formed by the task initiator and the data parties to each cluster, obtains via the preset encryption algorithm the total distance of each sample relative to the current cluster centers, and generates the current clustering result from those totals. The clustering result is sent to the data parties, which update their locally stored cluster centers accordingly and calculate the distance between the updated centers and the previous iteration's centers. Via the preset encryption algorithm the task initiator then obtains those per-party distances and determines the corresponding total for the joint data set. Finally, it checks from that total whether a preset iteration-termination condition is met and, if so, takes the cluster centers updated in the last iteration as the final cluster centers.
Because cluster learning is realized through direct interaction between the task initiator and the data parties, the scheme is decentralized: no third party participates, so no third party can leak the data. Combining this with the preset encryption algorithm effectively guarantees the data privacy of both the task initiator and the data parties, and jointly determining the optimal initial cluster center improves the learning efficiency of federated clustering.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present invention; a person skilled in the art may derive other drawings from them without inventive effort.
Fig. 1 is a flowchart of a decentralized federated cluster learning method provided according to an embodiment of the present invention;
Fig. 2 is a flowchart of a decentralized federated cluster learning method according to a second embodiment of the present invention;
Fig. 3 is a block diagram of a decentralized federated cluster learning apparatus according to a third embodiment of the present invention;
Fig. 4 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, embodiments of the present invention. All other embodiments obtained by those skilled in the art based on these embodiments without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," "target," "candidate," and the like in the description and claims of the present invention and in the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In the related art, the initial cluster centers are acquired at random, which easily traps the clustering result in a local optimum. Moreover, the task initiator and the data parties are usually deployed as clients while a third party deployed on a server mediates their cluster learning; this server-client deployment carries a larger potential safety hazard. Furthermore, the interaction between the task initiator and the data parties is itself unprotected and cannot resist malicious attacks against either side. In view of these problems, the present scheme clusters through direct interaction between the task initiator and the data parties, avoiding any third party and achieving decentralization; letting the task initiator and the data parties jointly determine the optimal initial cluster center effectively improves cluster-learning efficiency; and by applying privacy computing, all interaction between the task initiator and the data parties runs over a preset encryption algorithm, effectively guaranteeing each party's data privacy. The invention thus provides a decentralized multiparty vertical federated learning scheme, suited to an end-to-end deployment mode and resistant to malicious participants; the specific implementation is detailed in the following embodiments.
Embodiment 1
Fig. 1 is a flowchart of a decentralized federated cluster learning method provided in an embodiment of the present invention. The embodiment suits scenarios where a task initiator interacts with data parties to perform federated cluster learning while guaranteeing every participant's data privacy. The method may be executed by a decentralized federated cluster learning apparatus, which can be implemented in software and/or hardware and integrated into an electronic device with the corresponding capability. As shown in Fig. 1, the method includes:
s101, interacting with at least two data parties based on a preset clustering algorithm and a preset encryption algorithm, and determining an optimal initial clustering center and at least two initial clusters.
The preset clustering algorithm may be the k-means++ clustering algorithm. The preset encryption algorithm may be a verifiable secret sharing (VSS) algorithm. The task initiator is the party that starts and executes the federated learning task; a data party is a party that provides the private data the task requires. Each data party's local data set contains the same sample IDs as the task initiator's local data set but different sample features. The optimal initial cluster centers are the initial cluster centers determined through interaction between the task initiator and the data parties; there are at least two of them, and each optimal initial cluster center corresponds to one initial cluster.
Optionally, interacting with at least two data parties based on the preset clustering algorithm and the preset encryption algorithm to determine the optimal initial cluster center and at least two initial clusters includes: randomly selecting one sample ID as the target ID according to the preset clustering algorithm and sending it to the at least two data parties, instructing each data party to treat the corresponding target sample as the first cluster center and to calculate the distance from each of its samples to that center; obtaining, via the preset encryption algorithm, the total distance from each sample in the joint data set to the first cluster center and selecting the farthest sample as the second cluster center; calculating the total distances of all samples in the joint data set to the second cluster center and determining the third cluster center; iterating this interaction with the data parties starting from the third cluster center and terminating once a preset number of clusters has been produced; and determining the optimal initial cluster center and at least two initial clusters from the total distances of the preset number of clusters.
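As a rough illustration of this initialization, the sketch below simulates the procedure in a single process with NumPy. The function name, the per-party feature slices, and the in-the-clear distance summation are assumptions made for readability; in the patented protocol the per-party distances are combined through the preset encryption algorithm rather than visible to any one party.

```python
import numpy as np

def init_centers(parts, n_centers, rng):
    """Single-process simulation of the joint initialization above.

    parts: per-party feature slices (same rows/sample IDs, different
    columns), e.g. [X_initiator, X_party_B, ...]. Distances are summed
    in the clear here purely for illustration.
    """
    n = parts[0].shape[0]
    centers = [int(rng.integers(n))]      # first center: a random sample ID
    while len(centers) < n_centers:
        # total squared distance of every sample to every chosen center
        dist = np.zeros((n, len(centers)))
        for X in parts:                   # each party contributes its slice
            for j, c in enumerate(centers):
                dist[:, j] += ((X - X[c]) ** 2).sum(axis=1)
        # row-wise min = distance to nearest center; the sample where
        # that minimum is largest becomes the next center
        centers.append(int(dist.min(axis=1).argmax()))
    return centers
```

A usage example: `init_centers([X_a, X_b, X_c], k, np.random.default_rng(0))` returns `k` sample IDs spread far apart in the joint feature space.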
Optionally, the iterative update starting from the third cluster center repeats the operations performed after the first and second cluster centers were determined: obtain, via the preset encryption algorithm, the total distance from each sample in the joint data set to the current cluster centers. If the total-distance matrix has n rows and 1 column, where n is the number of samples in the task initiator's local data set, the sample corresponding to its maximum entry becomes the new cluster center; if it has n rows and k columns, where k is the current number of cluster centers (2 ≤ k ≤ n), take the minimum of each row and let the sample whose row minimum is largest become the new cluster center.
Optionally, the process of obtaining the total distance from each sample in the joint data set to the newly added cluster center and determining the next cluster center is repeated until a preset number of clusters (e.g., 10) has been determined, at which point the iteration terminates.
Optionally, each of the clusterings so produced has a corresponding total distance; the clustering of the preset size with the minimum total distance can be selected as the optimal clustering, and its center points located, thereby determining the optimal initial cluster center and at least two initial clusters.
Optionally, determining the optimal initial cluster center and at least two initial clusters from the total distances of the preset number of clusters includes: plotting a curve with the number of clusters as the independent variable and each clustering's sum of squared distances as the dependent variable, taking the clusters at the curve's inflection point as the initial clusters, and taking their cluster centers as the optimal initial cluster centers.
Here the sum of squared distances of a clustering is the sum of squared errors (SSE) of every sample in the joint data set relative to its cluster center. The curve flattens when the cluster count exceeds the inflection point and drops sharply below it.
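The inflection-point choice can be sketched as a simple second-difference heuristic: pick the cluster count at which the drop in SSE slows down the most. This is an assumed stand-in for the visual "knee" reading described above, not the patent's exact rule, and the function name is illustrative.

```python
def elbow_point(sse):
    """Heuristic elbow finder for the SSE-vs-cluster-count curve.

    sse[i] is the total SSE when using i+1 clusters. Returns the
    cluster count where the curve bends most sharply (largest drop
    in slope), i.e. an approximation of the inflection point.
    """
    best_k, best_bend = 1, float("-inf")
    for i in range(1, len(sse) - 1):
        # bend = (drop before this point) - (drop after this point)
        bend = (sse[i - 1] - sse[i]) - (sse[i] - sse[i + 1])
        if bend > best_bend:
            best_k, best_bend = i + 1, bend
    return best_k
```

On a curve such as `[100, 40, 20, 18, 17]` the steep-then-flat transition happens early, so the heuristic picks a small cluster count.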
S102: during the iterative update of the optimal initial cluster center and the initial clusters, determine the distance from each sample in the joint data set formed by the task initiator and the at least two data parties to each cluster, obtain via the preset encryption algorithm the total distance of each sample relative to the current cluster centers, and generate the current clustering result from those totals.
Here the current cluster centers are those determined in the present round of the iteration; the total distance characterizes how far each sample in the joint data set lies from each current cluster center.
Optionally, determining the distance from each sample of the joint data set to each cluster and obtaining the total distances via the preset encryption algorithm includes: computing the squared-error (SSE) matrix of the local samples relative to the cluster centers, then interacting with the at least two data parties under the preset encryption algorithm to combine the per-party matrices into the total-distance matrix of each sample relative to the current cluster centers.
Optionally, after the total distances are determined, the nearest cluster center of each sample in the joint data set is identified from them and each sample is assigned to that center's cluster; this assignment is the clustering result generated from the total distances.
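A minimal sketch of generating the clustering result, assuming the per-party n-by-k distance matrices are already computed. They are added in the clear here, whereas the patent performs this sum through the preset encryption algorithm; the function name is illustrative.

```python
import numpy as np

def cluster_result(dist_parts):
    """Sum the per-party n-by-k distance matrices and assign each
    sample to the cluster whose center is nearest (row-wise argmin)."""
    dist_total = sum(dist_parts)          # n x k total distances
    labels = dist_total.argmin(axis=1)    # nearest center per sample
    return dist_total, labels
```

For two samples and two centers, the labels simply index whichever column of the summed matrix is smallest in each row.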
S103: send the current clustering result to the at least two data parties, instructing each data party to update its locally stored cluster centers according to the current clustering result and to calculate the distance between the currently updated cluster centers and the previous iteration's centers.
Optionally, after receiving the current clustering result from the task initiator, each data party updates the previous cluster centers it stores locally; specifically, per the clustering result, the mean of the samples in each cluster becomes the updated cluster center.
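The per-party center update can be sketched as follows for a single party's feature slice. The names are illustrative, and the empty-cluster guard is an assumption the patent text does not spell out.

```python
import numpy as np

def update_centers(X_local, labels, k):
    """Update a party's locally stored centers from the broadcast
    clustering result: the new center of cluster j is the mean of the
    local features of the samples assigned to j."""
    centers = np.zeros((k, X_local.shape[1]))
    for j in range(k):
        members = X_local[labels == j]
        if len(members):                  # guard against an empty cluster
            centers[j] = members.mean(axis=0)
    return centers
```

Each party runs this on its own feature columns, so the joint centers exist only as per-party slices.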
Optionally, for the first clustering round, an initial total-distance matrix of each sample in the joint data set (formed by the task initiator and the at least two data parties) relative to the optimal initial cluster centers is determined; an initial clustering result is generated from it and sent to each data party, instructing the parties to compute each cluster's sample mean from that result and to update their locally stored cluster centers.
S104: obtain, via the preset encryption algorithm, the distance calculated by each data party between its currently updated cluster centers and the previous iteration's centers, and determine the corresponding total distance for the joint data set.
Optionally, the task initiator computes the distance between its currently updated cluster centers and the previous iteration's centers, each data party computes the same for its local centers, and the sum of the local distance and the data parties' distances, obtained under the preset encryption algorithm, gives the total distance between the joint data set's updated centers and the previous iteration's centers.
It should be noted that, thanks to the preset encryption algorithm, the task initiator obtains only the total distance determined jointly with the data parties and never the specific distance value any single data party computed; the same holds for the data parties, which effectively guarantees every participant's data privacy.
S105: determine, from the total distance between the joint data set's currently updated cluster centers and the previous iteration's centers, whether a preset iteration-termination condition is met; if so, take the cluster centers updated in the last iteration as the final cluster centers.
Optionally, this check proceeds as follows: if the total distance between the joint data set's currently updated cluster centers and the previous iteration's centers is detected to be smaller than a preset distance threshold, or the iteration count exceeds a preset maximum, the preset iteration-termination condition is deemed met and the cluster centers updated in the last iteration become the final cluster centers.
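The termination test reduces to a small predicate. The threshold and iteration-budget defaults below are illustrative values, not figures taken from the patent.

```python
def should_stop(total_shift, iteration, eps=1e-4, max_iter=100):
    """Iteration-termination test: stop when the total movement of the
    cluster centers falls below a threshold or the iteration budget is
    exhausted (illustrative default values)."""
    return total_shift < eps or iteration > max_iter
```

In the protocol, `total_shift` would be the securely summed three-party distance between the updated and previous centers.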
Optionally, if the total distance shows that the preset iteration-termination condition is not met, the iterative update of S102-S104 continues until it is.
According to this technical scheme, the task initiator interacts with at least two data parties based on a preset clustering algorithm and a preset encryption algorithm to determine an optimal initial cluster center and at least two initial clusters; during the iterative update it determines the distance from each sample in the joint data set to each cluster, obtains via the preset encryption algorithm the total distance of each sample relative to the current cluster centers, and generates the current clustering result; the result is sent to the data parties, which update their locally stored cluster centers and calculate the distance between the updated centers and the previous iteration's centers; the task initiator then obtains those per-party distances via the preset encryption algorithm, determines the corresponding total for the joint data set, and checks the preset iteration-termination condition, taking the last-updated centers as final once it is met.
Because cluster learning is realized purely through interaction between the task initiator and the data parties, the scheme is decentralized: no third party participates, so no third party can leak the data. Combined with the preset encryption algorithm, this effectively guarantees the privacy of the task initiator and the data parties and deters malicious behavior by participants. In addition, determining the optimal initial cluster center via the preset k-means++-style initialization together with the preset encryption algorithm improves the learning efficiency of federated clustering, while recomputing each sample's total distance to the cluster centers and updating the centers in every iteration helps avoid local optima.
Embodiment 2
Fig. 2 is a flowchart of a decentralized federated cluster learning method according to a second embodiment of the present invention. Building on the previous embodiment, it gives a preferred example in which a task initiator A interacts with data parties B and C to realize federated cluster learning. As shown in Fig. 2, the method includes:
S1: task initiator A interacts with data parties B and C to determine the optimal initial cluster centers, the initial cluster count being k, and sends the sample IDs of the k optimal initial cluster centers to B and C.
The sample IDs of the optimal initial cluster centers may be stored in id_list, a list holding the IDs (identity documents, i.e., unique sample codes) of the initialized cluster centers. k may take any value of 2 or more.
Optionally, task initiator A may randomly pick one sample as a cluster center and send its ID to data parties B and C.
Specifically, S1 may include the following steps:
S1.1, the task initiator A randomly selects a sample with id=i as the cluster center and sends id=i to data parties B and C. A, B and C each compute the sum of squared errors (SSE) from their local samples to the cluster center, obtaining dist_A, dist_B and dist_C, respectively.
S1.2, the total distance dist_total from each sample to the cluster is computed using verifiable secret sharing (VSS) joint addition.
For example, suppose the sample with id=i has SSE values ai, bi and ci in dist_A, dist_B and dist_C, respectively. Using VSS joint addition, party A obtains the result ri = ai + bi + ci, where ri is the total distance of the i-th sample, while the SSE values computed locally by the three parties are not leaked.
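The joint addition above can be sketched with plain additive secret sharing — a simplified stand-in for full verifiable secret sharing (no verification step), with all party values and names illustrative:

```python
import random

PRIME = 2**61 - 1  # field modulus for the additive shares

def share(value, n_parties=3):
    """Split an integer into n additive shares that sum to value mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

# Each party holds one private SSE term for sample i.
ai, bi, ci = 40, 25, 35                    # local SSEs of A, B, C (illustrative)
all_shares = [share(v) for v in (ai, bi, ci)]

# Each party j sums the j-th share of every value; only these partial sums
# are exchanged, so no individual SSE is revealed.
partials = [sum(col) % PRIME for col in zip(*all_shares)]
ri = reconstruct(partials)
print(ri)  # 100 == ai + bi + ci, without revealing any single term
```

Only the partial sums cross the network; each party's local SSE stays private, matching the property described above.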
S1.3, A computes the total SSE from dist_total, selects the sample id with the largest value in dist_total as the center point c2 of the second cluster, and sends c2 to B and C, so that id_list = [c1, c2]. Illustratively, when k=1, the total SSE = r1 + r2 + … + rn.
S1.4, A, B and C obtain the cluster centers from the id_list, each compute the SSE from their samples to the cluster centers in the id_list, and obtain updated dist_A, dist_B and dist_C.
S1.5, according to the updated dist_A, dist_B and dist_C, the updated total distance dist_total from each sample to the clusters is computed using verifiable secret sharing joint addition. For each row (sample) of the updated dist_total, the minimum over the existing centers is taken; the sample whose id corresponds to the maximum of these minima in the column direction is taken as the center of the newly added cluster. The total SSE corresponding to the updated dist_total is computed, and steps S1.4 and S1.5 are executed cyclically until the preset number of clusters has been determined. A curve is then drawn with the number of clusters as the independent variable and the total SSE for each cluster count as the dependent variable.
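The row-minimum / column-maximum selection in S1.5 is the standard kmeans++ farthest-point rule. A plain-Python sketch over a total-distance matrix (all values illustrative):

```python
def next_center(dist_total):
    """dist_total[i][j]: total squared distance of sample i to center j.
    For each sample take the distance to its nearest existing center (row
    minimum), then pick the sample farthest from all centers (maximum over
    those row minima) as the new center."""
    row_min = [min(row) for row in dist_total]
    return max(range(len(row_min)), key=lambda i: row_min[i])

# 4 samples, 2 existing centers
dist_total = [
    [0.0, 9.0],
    [4.0, 1.0],
    [25.0, 16.0],
    [2.0, 3.0],
]
print(next_center(dist_total))  # 2: sample 2 is farthest from its nearest center
```

In the federated setting, dist_total is the secret-shared sum of the parties' local SSE matrices, so only the selected sample id needs to be broadcast.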
S1.6, the task initiator A takes the inflection point of the curve as the number of cluster centers, determines the optimal initial cluster centers and at least two initial clusters, and sends them to data parties B and C.
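The inflection ("elbow") point of the total-SSE curve can be located, for example, as the cluster count with the largest discrete second difference — a common heuristic; the patent does not fix a specific detection method:

```python
def elbow(total_sse):
    """total_sse[k-1] is the total SSE obtained with k clusters (k = 1, 2, ...).
    Return the k where the curve bends most sharply, measured by the largest
    discrete second difference."""
    best_k, best_bend = None, float("-inf")
    for k in range(2, len(total_sse)):  # interior points only
        bend = total_sse[k - 2] - 2 * total_sse[k - 1] + total_sse[k]
        if bend > best_bend:
            best_k, best_bend = k, bend
    return best_k

sse = [400.0, 150.0, 60.0, 50.0, 45.0]  # total SSE for k = 1..5 (illustrative)
print(elbow(sse))  # 2: the curve flattens sharply after k = 2
```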
S2, the task initiator A, the data party B and the data party C calculate Euclidean distances from the samples in the local data set to the optimal initial clustering center according to the optimal initial clustering center respectively.
S3, the task initiator A calculates the total Euclidean distance from each sample to the clustering center by using a verifiable secret sharing algorithm.
S4, the task initiator A computes the clustering result from the total Euclidean distances and sends it to data parties B and C. A, B and C then update their cluster center points according to the clustering result, each calculate the distance between the updated center points and the previous center points, and the total distance over A, B and C is computed using the verifiable secret sharing algorithm.
S5, convergence is judged: if the total distance is smaller than 10E-6, or the iteration count exceeds the set maximum number of iterations, S6 is executed; otherwise S2 is executed.
S6, the task initiator A, the data party B and the data party C acquire the latest clustering result according to the final clustering center.
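Steps S2–S5 form the usual k-means iteration with a secret-shared total distance in the stopping test. A single-machine sketch of the control flow, with the secure aggregation abstracted as a plain sum (an assumption; names and the toy shift sequence are illustrative):

```python
def federated_kmeans_loop(update_centers, center_shift_parts,
                          tol=10e-6, max_iter=100):
    """update_centers(): one assignment-and-update round across the parties;
    center_shift_parts(): each party's distance between its new and old
    centers, here summed in the clear in place of VSS joint addition."""
    for it in range(1, max_iter + 1):
        update_centers()
        total_shift = sum(center_shift_parts())
        if total_shift < tol:
            return it  # converged: total center movement below threshold
    return max_iter       # stopped by the iteration cap

# Toy run: the per-round shifts shrink until they cross the threshold.
shifts = iter([1.0, 0.1, 0.01, 1e-3, 1e-4, 1e-6])
rounds = federated_kmeans_loop(lambda: None,
                               lambda: [next(shifts) / 3] * 3)
print(rounds)  # 6
```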
The technical scheme of this embodiment provides an implementation of federated cluster learning in which a task initiator A interacts with data parties B and C. The third party is removed, making end-to-end deployment easy; adopting the kmeans++ method when determining the initial cluster centers avoids falling into a local optimum during training; and computing the SSE of the multiparty fused data and the sample-to-cluster distances with verifiable secret sharing joint addition effectively protects data privacy and intermediate parameters during training.
Example III
FIG. 3 is a block diagram of a decentralized federated cluster learning device according to a third embodiment of the present invention. The device can execute the decentralized federated cluster learning method provided by any embodiment of the invention, has the functional modules and beneficial effects corresponding to the executed method, and can be configured in the task initiator.
As shown in fig. 3, the apparatus includes:
the initial determining module 301 is configured to interact with at least two data parties based on a preset clustering algorithm and a preset encryption algorithm, and determine an optimal initial clustering center and at least two initial clusters; the sample numbers of the local data set of the data party are the same as the sample numbers of the local data set of the task initiator, and the sample characteristics are different;
the generating module 302 is configured to determine a distance from each sample in a joint data set formed by a task initiator and at least two data parties to each cluster in a process of iteratively updating an optimal initial clustering center and an initial cluster, obtain a total distance between each sample in the joint data set and a current clustering center based on a preset encryption algorithm, and generate a current clustering result according to the total distance;
The sending module 303 is configured to send the current clustering result to at least two data parties, and is configured to instruct each data party to update a locally stored clustering center according to the current clustering result, and calculate a distance between the current updated clustering center and a clustering center in a last iteration;
the determining module 304 is configured to obtain, based on a preset encryption algorithm, the distances calculated by each data party between the currently updated cluster center and the cluster center of the last iteration, and determine the total distance between the currently updated cluster centers and the last-iteration cluster centers corresponding to the joint data set;
and the judging module 305 is configured to determine, according to the total distance between the currently updated cluster centers and the last-iteration cluster centers corresponding to the joint data set, whether a preset iteration termination condition is met, and if so, determine the cluster center updated in the last iteration as the final cluster center.
According to the technical scheme of this embodiment, the task initiator interacts with at least two data parties based on a preset clustering algorithm and a preset encryption algorithm to determine an optimal initial cluster center and at least two initial clusters. During iterative updating of the optimal initial cluster center and the initial clusters, the distance from each sample in the joint data set, formed by the task initiator and the at least two data parties, to each cluster is determined; the total distance of each sample in the joint data set relative to the current cluster center is obtained based on the preset encryption algorithm, and a current clustering result is generated from the total distances. The clustering result is sent to the at least two data parties to instruct each data party to update its locally stored cluster center according to the clustering result and to calculate the distance between the updated cluster center and the cluster center of the previous iteration. Based on the preset encryption algorithm, the distances calculated by the data parties between the currently updated cluster centers and the previous-iteration cluster centers are obtained, and the total distance between the currently updated cluster centers and the previous-iteration cluster centers corresponding to the joint data set is determined. Whether a preset iteration termination condition is met is determined according to this total distance; if so, the cluster center updated in the last iteration is taken as the final cluster center.
Cluster learning is realized through direct interaction between the task initiator and the data parties, so participation of a third party is avoided and no data can be leaked through a third party. Combined with the preset encryption algorithm, the privacy of both the task initiator and the data parties is effectively protected. In addition, the optimal initial cluster center is determined through interaction of the parties, which improves the learning efficiency of federated clustering.
Further, the initial determining module 301 may include:
the sending unit is used for randomly acquiring the number of one sample based on a preset clustering algorithm to serve as a target number, sending the target number to at least two data parties, and indicating each data party to serve as a first cluster center for the target sample corresponding to the target number and calculating the distance from each sample to the first cluster center;
the computing unit is used for acquiring the total distance from each sample in the combined data set to the center of the first cluster based on a preset encryption algorithm, selecting the sample with the largest distance as the center of the second cluster, computing the total distance from all samples in the combined data set to the center of the second cluster and determining the center of the third cluster;
the judging unit is used for interacting with at least two data parties based on the third cluster center, carrying out iterative updating, and determining that the iteration is ended if the clusters with the preset number are detected;
the determining unit is used for determining an optimal initial cluster center and at least two initial clusters according to the determined total distance of the clusters with the preset number.
Further, the determining unit is specifically configured to:
drawing a curve which takes the clusters as independent variables and the sum of squares of the distances of the clusters as dependent variables according to the total distances of the clusters corresponding to the preset number;
And determining a cluster corresponding to the inflection point position in the curve as an initial cluster, and determining an optimal initial cluster center according to the cluster center of each initial cluster.
Further, the generating module 302 is specifically configured to:
determining the distance from each sample in a joint data set consisting of a task initiator and at least two data parties to each cluster according to the sum variance matrix of the sample relative to the cluster center;
based on a preset encryption algorithm, interacting with at least two data parties, and determining the total distance of each sample in the combined data set relative to the current clustering center according to the distance matrix of the sample relative to the clustering center.
Further, the device is also used for:
determining an initial total distance matrix of each sample in a joint data set formed by a task initiator and at least two data parties relative to an optimal initial clustering center;
and generating an initial clustering result according to the initial total distance matrix, sending the initial clustering result to each data party, and indicating each data party to calculate the average value of each cluster sample according to the initial clustering result so as to update the locally stored clustering center.
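The mean-based center update each data party performs can be sketched as follows (pure Python, names illustrative; each party runs this only over its own feature columns of the vertically partitioned data):

```python
def update_centers(samples, assignment, k):
    """samples: list of feature vectors held by one party;
    assignment[i]: cluster index of sample i from the clustering result.
    Recompute this party's slice of every center as the mean of the samples
    assigned to that cluster."""
    dim = len(samples[0])
    sums = [[0.0] * dim for _ in range(k)]
    counts = [0] * k
    for vec, c in zip(samples, assignment):
        counts[c] += 1
        for j, x in enumerate(vec):
            sums[c][j] += x
    return [[s / counts[c] for s in sums[c]] for c in range(k)]

samples = [[1.0, 2.0], [3.0, 4.0], [10.0, 0.0]]
print(update_centers(samples, [0, 0, 1], 2))  # [[2.0, 3.0], [10.0, 0.0]]
```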
Further, the judging module 305 is specifically configured to:
according to the total distance between the currently updated cluster centers and the last-iteration cluster centers corresponding to the joint data set, if the total distance is detected to be smaller than a preset distance threshold, or the iteration count is larger than a preset maximum number of iterations, determining that the preset iteration termination condition is met;
And determining the cluster center updated by the last iteration as a final cluster center.
Example IV
Fig. 4 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention. Fig. 4 shows a schematic diagram of the structure of an electronic device 10 that may be used to implement an embodiment of the invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic equipment may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 4, the electronic device 10 includes at least one processor 11, and a memory, such as a Read Only Memory (ROM) 12, a Random Access Memory (RAM) 13, etc., communicatively connected to the at least one processor 11, in which the memory stores a computer program executable by the at least one processor, and the processor 11 may perform various appropriate actions and processes according to the computer program stored in the Read Only Memory (ROM) 12 or the computer program loaded from the storage unit 18 into the Random Access Memory (RAM) 13. In the RAM 13, various programs and data required for the operation of the electronic device 10 may also be stored. The processor 11, the ROM 12 and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to bus 14.
Various components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, etc.; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 11 may be any of various general and/or special purpose processing components having processing and computing capabilities. Some examples of the processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, Digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The processor 11 performs the various methods and processes described above, such as the decentralized federated cluster learning method.
In some embodiments, the decentralised federal cluster learning method can be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into RAM 13 and executed by processor 11, one or more steps of the decentralized federation cluster learning method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the decentralised federal cluster learning method in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. A client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, which is a host product in the cloud computing service system that overcomes the defects of high management difficulty and weak service expansibility found in traditional physical hosts and VPS services.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention are achieved, and the present invention is not limited herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (10)

1. A decentralized federated cluster learning method, characterized in that the method is executed by a task initiator, wherein the task initiator is a banking party and is deployed in a client, the method comprising:
based on a preset clustering algorithm and a preset encryption algorithm, interacting with at least two data parties, and determining an optimal initial clustering center and at least two initial clusters; the sample numbers of the local data set of the data party are the same as the sample numbers of the local data set of the task initiator, and the sample characteristics are different; the data party is deployed in the client;
In the iterative updating process of the optimal initial clustering center and the initial clusters, determining the distance from each sample in a joint data set formed by a task initiator and at least two data sides to each cluster, acquiring the total distance of each sample in the joint data set relative to the current clustering center based on a preset encryption algorithm, and generating a current clustering result according to the total distance;
the current clustering result is sent to at least two data parties, and the data parties are used for indicating the data parties to update the locally stored clustering center according to the current clustering result, and calculating the distance between the current updated clustering center and the clustering center of the last iteration;
based on a preset encryption algorithm, obtaining the distance between the current updated cluster center and the last iterative cluster center calculated by each data party, and determining the total distance between the current updated cluster center and the last iterative cluster center corresponding to the combined data set;
and determining whether a preset iteration termination condition is met or not according to the total distance between the current updated cluster center and the last iteration cluster center corresponding to the combined data set, and if so, determining the cluster center updated by the last iteration as a final cluster center.
2. The method of claim 1, wherein interacting with at least two parties based on a pre-set clustering algorithm and a pre-set encryption algorithm, determining an optimal initial cluster center and at least two initial clusters, comprises:
based on a preset clustering algorithm, randomly acquiring the number of one sample to serve as a target number, sending the target number to at least two data parties, and indicating each data party to serve as a first cluster center for the target sample corresponding to the target number and calculating the distance from each sample to the first cluster center;
based on a preset encryption algorithm, acquiring the total distance from each sample in the combined data set to the center of the first cluster, selecting the sample with the largest distance as the center of the second cluster, calculating the total distance from all samples in the combined data set to the center of the second cluster, and determining the center of the third cluster;
based on the third cluster center, interacting with at least two data parties, performing iterative updating, and if the clusters with the preset number are detected, determining that the iteration is terminated;
and determining an optimal initial cluster center and at least two initial clusters according to the determined total distance of the clusters with the preset number.
3. The method of claim 2, wherein determining the optimal initial cluster center and at least two initial clusters based on the determined total distance of the preset number of clusters comprises:
Drawing a curve which takes the clusters as independent variables and the sum of squares of the distances of the clusters as dependent variables according to the total distances of the clusters corresponding to the preset number;
and determining a cluster corresponding to the inflection point position in the curve as an initial cluster, and determining an optimal initial cluster center according to the cluster center of each initial cluster.
4. The method of claim 1, wherein determining the distance from each sample in the joint data set formed by the task initiator and the at least two data parties to each cluster, and obtaining the total distance of each sample in the joint data set relative to the current cluster center based on a preset encryption algorithm, comprises:
determining the distance from each sample in a joint data set consisting of a task initiator and at least two data parties to each cluster according to the sum variance matrix of the sample relative to the cluster center;
based on a preset encryption algorithm, interacting with at least two data parties, and determining the total distance of each sample in the combined data set relative to the current clustering center according to the distance matrix of the sample relative to the clustering center.
5. The method as recited in claim 1, further comprising:
determining an initial total distance matrix of each sample in a joint data set formed by a task initiator and at least two data parties relative to an optimal initial clustering center;
And generating an initial clustering result according to the initial total distance matrix, sending the initial clustering result to each data party, and indicating each data party to calculate the average value of each cluster sample according to the initial clustering result so as to update the locally stored clustering center.
6. The method of claim 1, wherein determining whether a preset iteration termination condition is satisfied according to a total distance between a current updated cluster center and a cluster center of a last iteration corresponding to the joint dataset, and if so, determining the cluster center updated by the last iteration as a final cluster center comprises:
according to the total distance between the current updated cluster center and the cluster center of the last iteration corresponding to the combined data set, if the total distance is detected to be smaller than a preset distance threshold value or the iteration number is larger than a preset maximum iteration number, determining that a preset iteration termination condition is met;
and determining the cluster center updated by the last iteration as a final cluster center.
7. A decentralized federated cluster learning device, characterized in that the device is configured in a task initiator, wherein the task initiator is a banking party and is deployed in a client, the device comprising:
The initial determining module is used for interacting with at least two data parties based on a preset clustering algorithm and a preset encryption algorithm, and determining an optimal initial clustering center and at least two initial clusters; the sample numbers of the local data set of the data party are the same as the sample numbers of the local data set of the task initiator, and the sample characteristics are different; the data party is deployed in the client;
the generation module is used for determining the distance from each sample in the combined data set formed by the task initiator and at least two data parties to each cluster in the process of iteratively updating the optimal initial clustering center and the initial clusters, acquiring the total distance of each sample in the combined data set relative to the current clustering center based on a preset encryption algorithm, and generating a current clustering result according to the total distance;
the sending module is used for sending the current clustering result to at least two data parties, and is used for indicating each data party to update a locally stored clustering center according to the current clustering result and calculating the distance between the current updated clustering center and the clustering center of the last iteration;
the determining module is used for acquiring the distance between the current updated cluster center and the last iterative cluster center calculated by each data party based on a preset encryption algorithm and determining the total distance between the current updated cluster center and the last iterative cluster center corresponding to the combined data set;
And the judging module is used for determining whether a preset iteration termination condition is met or not according to the total distance between the current updated cluster center and the last iteration cluster center corresponding to the combined data set, and if so, determining the cluster center updated by the last iteration as a final cluster center.
8. The apparatus of claim 7, wherein the initial determination module comprises:
the sending unit is used for randomly acquiring the number of one sample based on a preset clustering algorithm to serve as a target number, sending the target number to at least two data parties, and indicating each data party to serve as a first cluster center for the target sample corresponding to the target number and calculating the distance from each sample to the first cluster center;
the computing unit is used for acquiring the total distance from each sample in the combined data set to the center of the first cluster based on a preset encryption algorithm, selecting the sample with the largest distance as the center of the second cluster, computing the total distance from all samples in the combined data set to the center of the second cluster and determining the center of the third cluster;
the judging unit is used for interacting with at least two data parties based on the third cluster center, carrying out iterative updating, and determining that the iteration is ended if the clusters with the preset number are detected;
The determining unit is used for determining an optimal initial cluster center and at least two initial clusters according to the determined total distance of the clusters with the preset number.
9. An electronic device, the electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the decentralised federal cluster learning method of any one of claims 1-6.
10. A computer readable storage medium storing computer instructions for causing a processor to perform the decentralised federal cluster learning method of any one of claims 1-6.
CN202211274810.7A 2022-10-18 2022-10-18 Decentralizing federation cluster learning method, device, equipment and medium Active CN115545215B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211274810.7A CN115545215B (en) 2022-10-18 2022-10-18 Decentralizing federation cluster learning method, device, equipment and medium
PCT/CN2023/079371 WO2024082515A1 (en) 2022-10-18 2023-03-02 Decentralized federated clustering learning method and apparatus, and device and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211274810.7A CN115545215B (en) 2022-10-18 2022-10-18 Decentralizing federation cluster learning method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN115545215A CN115545215A (en) 2022-12-30
CN115545215B true CN115545215B (en) 2023-10-27

Family

ID=84734602

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211274810.7A Active CN115545215B (en) 2022-10-18 2022-10-18 Decentralizing federation cluster learning method, device, equipment and medium

Country Status (2)

Country Link
CN (1) CN115545215B (en)
WO (1) WO2024082515A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115545215B (en) * 2022-10-18 2023-10-27 上海零数众合信息科技有限公司 Decentralizing federation cluster learning method, device, equipment and medium
CN118200990A (en) * 2024-05-17 2024-06-14 江西师范大学 Multi-target collaborative service caching method based on spectral clustering

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112101579A (en) * 2020-11-18 2020-12-18 杭州趣链科技有限公司 Federal learning-based machine learning method, electronic device, and storage medium
CN113344220A (en) * 2021-06-18 2021-09-03 山东大学 User screening method, system, equipment and storage medium based on local model gradient in federated learning
CN113657525A (en) * 2021-08-23 2021-11-16 同盾科技有限公司 KMeans-based cross-feature federated clustering method and related equipment
CN114386071A (en) * 2022-01-12 2022-04-22 平安科技(深圳)有限公司 Decentered federal clustering method and device, electronic equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210174257A1 (en) * 2019-12-04 2021-06-10 Cerebri AI Inc. Federated machine-Learning platform leveraging engineered features based on statistical tests
CN111444545B (en) * 2020-06-12 2020-09-04 支付宝(杭州)信息技术有限公司 Method and device for clustering private data of multiple parties
CN112231760A (en) * 2020-11-20 2021-01-15 天翼电子商务有限公司 Privacy-protecting distributed longitudinal K-means clustering
CN115545215B (en) * 2022-10-18 2023-10-27 上海零数众合信息科技有限公司 Decentralizing federation cluster learning method, device, equipment and medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112101579A (en) * 2020-11-18 2020-12-18 杭州趣链科技有限公司 Federal learning-based machine learning method, electronic device, and storage medium
WO2022105022A1 (en) * 2020-11-18 2022-05-27 杭州趣链科技有限公司 Federated learning-based machine learning method, electronic device and storage medium
CN113344220A (en) * 2021-06-18 2021-09-03 山东大学 User screening method, system, equipment and storage medium based on local model gradient in federated learning
CN113657525A (en) * 2021-08-23 2021-11-16 同盾科技有限公司 KMeans-based cross-feature federated clustering method and related equipment
CN114386071A (en) * 2022-01-12 2022-04-22 平安科技(深圳)有限公司 Decentered federal clustering method and device, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Federated Learning for Open Banking; Guodong Long et al.; Federated Learning; pp. 240-254 *
A survey of federated learning algorithms in swarm intelligence; Yang Qiang et al.; Chinese Journal of Intelligent Science and Technology; Vol. 4, No. 1, pp. 29-44 *

Also Published As

Publication number Publication date
WO2024082515A1 (en) 2024-04-25
CN115545215A (en) 2022-12-30


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant