CN104994170B - Distributed clustering method based on mixed factor analysis model in sensor network - Google Patents

Distributed clustering method based on mixed factor analysis model in sensor network

Info

Publication number
CN104994170B
CN104994170B CN201510414218.6A
Authority
CN
China
Prior art keywords
node
data
nodes
iteration
analysis model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201510414218.6A
Other languages
Chinese (zh)
Other versions
CN104994170A (en)
Inventor
魏昕
周亮
周全
陈建新
王磊
赵力
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Tian Gu Information Technology Co ltd
Information and Telecommunication Branch of State Grid Jiangsu Electric Power Co Ltd
Original Assignee
Nanjing Post and Telecommunication University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Post and Telecommunication University filed Critical Nanjing Post and Telecommunication University
Priority to CN201510414218.6A priority Critical patent/CN104994170B/en
Publication of CN104994170A publication Critical patent/CN104994170A/en
Application granted granted Critical
Publication of CN104994170B publication Critical patent/CN104994170B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/12 Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 84/00 Network topologies
    • H04W 84/18 Self-organising networks, e.g. ad-hoc networks or sensor networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Complex Calculations (AREA)

Abstract

The invention discloses a distributed clustering method based on a mixed factor analysis model in a sensor network. In this method, the data to be clustered at each node in the sensor network are modeled with a mixed factor analysis model; each node calculates local sufficient statistics based on its own data and then broadcasts these statistics to its neighbor nodes. After a node has received all local sufficient statistics from its neighbor nodes, it obtains the joint sufficient statistics, estimates the parameters of the mixed factor analysis model based on these statistics, and finally completes the clustering based on the estimated model. The mixed factor analysis model established by the invention can complete dimensionality reduction of the data while clustering, and the distributed clustering mode avoids the network breakdown caused by failure of the central node in the traditional centralized processing mode. In the distributed clustering method of the invention, what is transmitted between nodes are sufficient statistics rather than raw data, which not only greatly saves communication overhead but also better protects the private information in the data.

Description

Distributed clustering method based on mixed factor analysis model in sensor network
Technical Field
The invention relates to a distributed clustering method based on a mixed factor analysis model in a sensor network, and belongs to the technical field of parallel and distributed data processing methods and their applications.
Background
The sensor network is composed of a large number of stationary or mobile miniature sensor nodes deployed in a monitoring area, and each sensor node has very limited capacity for collecting, storing, processing and transmitting data. Data processing in sensor networks therefore requires improvements over conventional data processing. Currently, there are two main ways of processing data in sensor networks: centralized processing and distributed processing. In the centralized mode, one node is designated as the central node, the other nodes transmit and gather their collected raw data to the central node, the data processing is completed at the central node, and the processing result is then returned to each node. The disadvantage of this approach is that the entire network is fatally affected once the central node fails. The other approach is distributed processing, in which all nodes have equal status and the data processing task is completed through communication and cooperation among neighbor nodes. Compared with centralized data processing, distributed processing avoids the adverse effects caused by failure of the central node, and the robustness of the whole network is stronger. The present invention addresses these problems.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a distributed clustering method based on a mixed factor analysis model in a sensor network. Clustering refers to the process of dividing data into a number of classes by some method; the class generated by clustering is a collection of data objects that are similar to objects in the same cluster and distinct from objects in other clusters. Since the class labels of the data are unknown during clustering, data clustering is an unsupervised learning process in the field of machine learning. Many data clustering methods exist, but most of them assume that all data are clustered at one processing center, whereas in a sensor network distributed processing is critical; the method is therefore designed on the basis of a mixed factor analysis model to solve this problem. Its advantages mainly include: (1) the mixed factor analysis model can effectively process high-dimensional data; (2) by designing a cooperation mode among nodes, a satisfactory clustering result can be obtained by transmitting only intermediate results, which, compared with transmitting raw data, reduces communication overhead, protects the private information in the data and ensures data security in the network.
The technical scheme adopted by the invention for solving the technical problems is as follows: a distributed clustering method based on a mixed factor analysis model in a sensor network comprises the following steps:
Suppose there are M sensor nodes in the sensor network and the m-th node collects N_m data, denoted Y_m = {y_{m,n}}_{n=1,...,N_m}, where y_{m,n} is the n-th datum at node m, of dimension p. A mixed factor analysis model (MFA) is used to describe the distribution of Y_m (m = 1, ..., M); note that the data of all nodes share the same MFA. The MFA is a mixture model with I components; each datum y_{m,n} can be expressed as
y_{m,n} = μ_i + A_i u_{m,n} + e_{m,n,i}, with probability π_i (i = 1, ..., I),
where μ_i is the p-dimensional mean vector of the i-th mixture component; u_{m,n} is the factor in the low-dimensional space corresponding to the datum y_{m,n}, of dimension q (q < p), obeying the Gaussian distribution N(u_{m,n} | 0, I_q), where the value of q is selected according to the size of p in the specific problem and is generally an arbitrary integer between p/6 and p/2; A_i is a (p × q) factor loading matrix; the error variable e_{m,n,i} obeys the Gaussian distribution N(e_{m,n,i} | 0, D_i), where D_i is a (p × p) diagonal matrix; and the probabilities π_i satisfy Σ_{i=1}^{I} π_i = 1. The parameter set of the MFA is then Θ = {π_i, A_i, μ_i, D_i}_{i=1,...,I}. Note that the values of the parameters in the MFA parameter set to be estimated are the same for all nodes.
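For concreteness, the generative model above can be summarized in a short sketch; this is an illustrative reconstruction, not code from the patent, and the function name sample_mfa and the array layout are assumptions:

```python
import numpy as np

def sample_mfa(N, pi, mu, A, D, rng=None):
    """Draw N observations from a mixture of factor analyzers.

    pi : (I,) mixture weights, mu : (I, p) component means,
    A  : (I, p, q) factor loading matrices, D : (I, p) diagonal noise variances.
    """
    rng = rng or np.random.default_rng()
    I, p, q = A.shape
    z = rng.choice(I, size=N, p=pi)                  # component labels, drawn with prob pi_i
    u = rng.standard_normal((N, q))                  # low-dimensional factors u ~ N(0, I_q)
    e = rng.standard_normal((N, p)) * np.sqrt(D[z])  # diagonal Gaussian noise e ~ N(0, D_i)
    y = mu[z] + np.einsum('npq,nq->np', A[z], u) + e # y = mu_i + A_i u + e
    return y, z
```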
In addition, the data transmission range of each node is W; for the current node m, all nodes at a distance smaller than W are its neighbor nodes, and the neighbor node set of node m is denoted R_m. The relationship between nodes in the sensor network is shown in Fig. 1, where circles represent nodes; if two nodes are connected by an edge, they can communicate with each other to exchange information. The dotted box in Fig. 1 represents R_m for a node m. In the present invention, the network topology is established before distributed clustering is carried out, and communication between any two nodes is guaranteed either directly or over multiple hops.
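Building the neighbor sets R_m from node coordinates and the transmission range W might look like the following sketch; the node positions, the 0-indexed node labels and the function name neighbor_sets are illustrative assumptions:

```python
import numpy as np

def neighbor_sets(positions, W):
    """positions: (M, 2) node coordinates; returns {m: set of nodes within range W of m}."""
    M = len(positions)
    R = {m: set() for m in range(M)}
    for m in range(M):
        for l in range(M):
            if l != m and np.linalg.norm(positions[m] - positions[l]) < W:
                R[m].add(l)          # l is a neighbor of m
    return R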
After the sensor network topology and the MFA describing the data distribution are established, the distributed clustering process is started, as shown in fig. 2, and the specific steps include:
Step 1: initialization; there are M sensor nodes in the sensor network, and the m-th node acquires N_m data, expressed as Y_m = {y_{m,n}}_{n=1,...,N_m}, where y_{m,n} is the n-th datum at node m, of dimension p; the network topology is determined in advance, and the neighbor node set of node m is denoted R_m; a mixed factor analysis model (MFA) is used to describe the distribution of Y_m (m = 1, ..., M), the data of all nodes sharing the same MFA; the parameter set of the MFA is Θ = {π_i, A_i, μ_i, D_i}_{i=1,...,I}, where π_i is the weight of the i-th mixture component, A_i is the (p × q) factor loading matrix of the i-th mixture component, q being the dimension of the low-dimensional factor and an arbitrary integer between p/6 and p/2, μ_i is the p-dimensional mean vector of the i-th mixture component, and D_i is the covariance matrix of the error of the i-th mixture component;
First, the number of mixture components I in the MFA, which is also the number of classes to be clustered, is set; initial values of the MFA parameters are set according to I, p and q, where at each node the initial mean vectors are randomly selected from the data collected by that node and each element of the initial loading and noise matrices is generated from the standard normal distribution N(0, 1); in addition, each node l broadcasts the number N_l of data it has collected to its neighbor nodes; when a node m has received the data counts broadcast by all of its neighbor nodes l (l ∈ R_m), it computes the weights c_lm from these counts;
After initialization is completed, the iteration counter is set to iter = 1 and the iteration process begins;
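A minimal sketch of Step 1 follows. The initialization mirrors the description (means drawn from local data, loading-matrix entries from N(0, 1)); since the weight formula itself is not reproduced on this page, the data-count-proportional weights below are only one plausible assumption:

```python
import numpy as np

def init_mfa(Y_m, I, q, rng=None):
    """Random initialization at one node: means drawn from the node's own data,
    loading-matrix entries from N(0, 1), uniform mixture weights.
    D is set to ones here for positivity (an assumption)."""
    rng = rng or np.random.default_rng()
    N_m, p = Y_m.shape
    pi = np.full(I, 1.0 / I)
    mu = Y_m[rng.choice(N_m, size=I, replace=False)]   # I random data points as means
    A = rng.standard_normal((I, p, q))                 # factor loading matrices
    D = np.ones((I, p))                                # diagonal noise variances
    return pi, mu, A, D

def count_weights(N_counts, R_m, m):
    """One plausible choice of combination weights c_lm (an assumption, the patent's
    formula is not reproduced here): weight each neighbor l, and node m itself,
    by its share of the broadcast data counts."""
    idx = list(R_m) + [m]
    total = sum(N_counts[l] for l in idx)
    return {l: N_counts[l] / total for l in idx}
```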
Step 2: local computation; at each node l, based on the data Y_l it has collected, the intermediate variables g_i, Ω_i and <z_{l,n,i}> (n = 1, ..., N_l; i = 1, ..., I) are first calculated,
where the parameter values used are those obtained after the previous iteration (the initial values of the parameters at the first iteration), and <z_{l,n,i}> is the probability that the n-th datum y_{l,n} at node l belongs to the i-th class (mixture component);
Next, the node calculates the local sufficient statistic LSS_l;
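The local computation and one plausible form of the local sufficient statistic LSS_l are sketched below. The exact expressions for g_i, Ω_i and LSS_l appear as equations in the original filing and are not reproduced on this page, so the per-component responsibility sums, weighted sums and scatter matrices used here are assumptions:

```python
import numpy as np
from scipy.stats import multivariate_normal

def local_step(Y_l, pi, mu, A, D):
    """E-step at one node: responsibilities <z_{l,n,i}> under the current MFA,
    plus a plausible set of local sufficient statistics (an assumption)."""
    N_l, p = Y_l.shape
    I = len(pi)
    log_r = np.empty((N_l, I))
    for i in range(I):
        Sigma_i = A[i] @ A[i].T + np.diag(D[i])      # marginal covariance of component i
        log_r[:, i] = np.log(pi[i]) + multivariate_normal.logpdf(Y_l, mu[i], Sigma_i)
    log_r -= log_r.max(axis=1, keepdims=True)        # stabilize before exponentiating
    r = np.exp(log_r)
    r /= r.sum(axis=1, keepdims=True)                # responsibilities <z_{l,n,i}>
    lss = {
        "n": r.sum(axis=0),                              # (I,)  responsibility totals
        "s": r.T @ Y_l,                                  # (I, p) responsibility-weighted sums
        "S": np.einsum('ni,np,nq->ipq', r, Y_l, Y_l),    # (I, p, p) weighted scatter matrices
    }
    return r, lss
```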
Step 3: broadcast diffusion; each node l in the sensor network broadcasts the computed local sufficient statistic LSS_l to its neighbor nodes;
Step 4: joint computation; when node m (m = 1, ..., M) has received the LSS_l from all of its neighbor nodes l (l ∈ R_m), node m computes the joint sufficient statistic CSS_m;
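Combining the received LSS_l into the joint statistic CSS_m can be sketched as a weighted sum using the weights c_lm from Step 1; the exact combination rule is an assumption:

```python
def combine_lss(lss_by_node, weights):
    """Weighted combination of the local sufficient statistics available at node m
    (its own and those received from neighbors) into the joint statistic CSS_m."""
    css = None
    for l, lss in lss_by_node.items():
        w = weights[l]                                  # weight c_lm for node l
        if css is None:
            css = {k: w * v.copy() for k, v in lss.items()}
        else:
            for k in css:
                css[k] += w * lss[k]
    return css
```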
Step 5: parameter estimation; node m (m = 1, ..., M) estimates Θ = {π_i, A_i, μ_i, D_i}_{i=1,...,I} from the CSS_m calculated in the previous step, estimating {π_i, μ_i}_{i=1,...,I} first and then {A_i, D_i}_{i=1,...,I};
Step 6: convergence judgment; node m (m = 1, ..., M) computes the log-likelihood value log p(Y_m | Θ) at the current iteration;
if log p(Y_m | Θ) - log p(Y_m | Θ_old) < ε, the node has converged and iteration stops; otherwise step 2 is executed and the next iteration begins (iter = iter + 1); here Θ is the parameter value estimated in the current iteration and Θ_old is the parameter value estimated in the previous iteration, i.e. the algorithm converges when the difference between the log-likelihood values of two adjacent iterations is smaller than the threshold ε, where ε takes any value between 10^-6 and 10^-5; because the nodes in the network process their data in parallel, not all nodes converge in the same iteration; when a node l has converged and a node m has not, node l no longer sends LSS_l and no longer receives information from its neighbor nodes, and node m updates its CSS_m using the last LSS_l received from node l; the nodes that have not converged continue to iterate until all nodes in the network have converged;
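The convergence test needs the mixture log-likelihood of the node's own data under the current parameters; a sketch using the marginal covariance A_i A_i^T + D_i of each component:

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.special import logsumexp

def log_likelihood(Y_m, pi, mu, A, D):
    """Mixture log-likelihood log p(Y_m | Theta) of the node's local data."""
    log_comp = np.column_stack([
        np.log(pi[i]) + multivariate_normal.logpdf(Y_m, mu[i], A[i] @ A[i].T + np.diag(D[i]))
        for i in range(len(pi))
    ])
    return float(logsumexp(log_comp, axis=1).sum())

# Stopping rule, with eps anywhere in [1e-6, 1e-5] as in the text:
# if log_likelihood(Y_m, *theta) - log_likelihood(Y_m, *theta_old) < eps: converged
```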
Step 7: clustering output; after steps 1 to 6, node m (m = 1, ..., M) has, for each of its data y_{m,n}, the corresponding <z_{m,n,i}> (n = 1, ..., N_m; i = 1, ..., I); the index corresponding to the maximum of <z_{m,n,i}> over i = 1, ..., I is taken as the class C_{m,n} to which y_{m,n} is finally assigned, namely C_{m,n} = argmax_{i=1,...,I} <z_{m,n,i}>;
in this way the clustering results {C_{m,n}} of all data at all nodes are obtained.
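The final hard assignment is simply an argmax over the responsibilities computed in the local step:

```python
import numpy as np

def assign_clusters(r):
    """r: (N_m, I) responsibilities <z_{m,n,i}>; returns C_{m,n} = argmax_i <z_{m,n,i}>."""
    return np.argmax(r, axis=1)
```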
The method models the data to be clustered at each node in the sensor network with a mixed factor analysis model; each node calculates local sufficient statistics based on its own data and then broadcasts them to its neighbor nodes; after a node has received all local sufficient statistics from its neighbor nodes, it obtains the joint sufficient statistics, estimates each parameter of the mixed factor analysis model based on these statistics, and finally completes the clustering based on the estimated model. The mixed factor analysis model established by the invention can complete dimensionality reduction of the data while clustering, and the distributed clustering mode avoids the network collapse caused by failure of the central node in the traditional centralized processing mode.
Advantageous effects:
1. The mixed factor analyzer adopted in the invention can reduce the dimensionality of high-dimensional data, so that clustering is completed smoothly while the dimensionality is reduced and better clustering performance is obtained.
2. The distributed clustering method based on the mixed factor analysis model adopted by the invention enables each node in the sensor network to make full use of the information contained in the data of other nodes, and the clustering performance is superior to that of a centralized method.
3. In the node cooperation process of the distributed clustering method based on the mixed factor analysis model, local sufficient statistics are exchanged instead of transmitting the raw data directly, and the number and dimensionality of the local sufficient statistics are far smaller than those of the data; the method therefore saves communication overhead on the one hand and helps to fully protect the private information in the data on the other, improving the security of a system adopting the method.
Drawings
FIG. 1 is a schematic diagram of the neighbor node set R_m of a node m in the sensor network of the present invention and of the local sufficient statistics (LSS) exchanged between nodes.
FIG. 2 is a flowchart of a distributed clustering method based on a mixed factor analysis model in a sensor network according to the present invention.
Fig. 3 is a schematic diagram of a data clustering result at each node in the embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings.
To better illustrate the distributed clustering method based on a mixed factor analysis model in a sensor network, the method is applied to the clustering of wine composition data. In some countries, several testing stations are distributed across different areas to measure the content of the ingredients in wine, and the types of wine sent to each testing station vary; it is therefore desirable to cluster wines of similar classes. If a testing station can form a sensor network with the other testing stations, the wine data at the other stations can be fully exploited through mutual cooperation, thereby improving clustering accuracy. Here the wine data to be clustered are taken from the UCI machine learning repository; there are 178 data in 3 classes, and each datum has dimension 13, representing the content of each component in the wine. The sensor network has 8 nodes, the average number of neighbor nodes per node is 2, and the network is connected (a direct or indirect path exists between any two nodes). Thus in this example M = 8, p = 13, I = 3 and q = 3. The amount of data at each node is N_1 = 21, N_2 = 22, N_3 = 21, N_4 = 21, N_5 = 22, N_6 = 22, N_7 = 21, N_8 = 28, and the neighbor nodes of each node are R_1 = {3,5,6}, R_2 = {3,5}, R_3 = {1,4,2}, R_4 = {3}, R_5 = {1,2}, R_6 = {1,7}, R_7 = {6,8}, R_8 = {3,7}.
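The embodiment's setup can be reproduced, for illustration, with the UCI wine data shipped with scikit-learn; the random split across nodes and the seed are assumptions, while the per-node counts and the (0-indexed) neighbor sets follow the text:

```python
import numpy as np
from sklearn.datasets import load_wine

# 178 samples, p = 13 features, 3 classes, distributed over M = 8 nodes.
X, labels = load_wine(return_X_y=True)
counts = [21, 22, 21, 21, 22, 22, 21, 28]
assert sum(counts) == len(X)

rng = np.random.default_rng(0)                      # the split itself is an assumption
perm = rng.permutation(len(X))
splits = np.split(perm, np.cumsum(counts)[:-1])
Y_nodes = [X[idx] for idx in splits]                # Y_m at each node
labels_nodes = [labels[idx] for idx in splits]      # kept only for evaluation

# Neighbor sets R_m from the text, converted to 0-indexed node labels.
neighbors = {0: {2, 4, 5}, 1: {2, 4}, 2: {0, 3, 1}, 3: {2},
             4: {0, 1}, 5: {0, 6}, 6: {5, 7}, 7: {2, 6}}
```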
According to the flow of the inventive content (shown in fig. 2), distributed clustering is started:
(1) Initialization: initial values of the parameters in the MFA are set, where at each node the initial mean vectors are randomly selected from the data of that node and each element of the initial loading and noise matrices is generated from the standard normal distribution N(0, 1). In addition, each node l (l = 1, ..., M) broadcasts the number N_l of data it has collected to its neighbor nodes. When a node m has received the data counts broadcast by all of its neighbor nodes, it computes the weights c_lm.
The weight c_lm measures the importance, at node m, of the message transmitted by each neighbor node l (l ∈ R_m). After initialization is completed, the iteration counter is set to iter = 1 and the iteration process begins.
(2) Local computation: this step does not require information from the neighbor nodes. At each node l, based on the data Y_l it has collected, g_i, Ω_i and <z_{l,n,i}> (n = 1, ..., N_l; i = 1, ..., I) are first calculated,
where the parameter values used are those obtained after the previous iteration (the initial values of the parameters at the first iteration), and <z_{l,n,i}> is the probability that the n-th datum y_{l,n} at node l belongs to the i-th class (mixture component).
Next, the node calculates the local sufficient statistic LSS_l.
(3) Broadcast diffusion: each node l in the sensor network broadcasts the computed local sufficient statistic LSS_l to its neighbor nodes, as shown in Fig. 1.
(4) Joint computation: when node m (m = 1, ..., M) has received the LSS_l from all of its neighbor nodes l (l ∈ R_m), node m computes the joint sufficient statistic CSS_m.
(5) Parameter estimation: node m (m = 1, ..., M) estimates Θ = {π_i, A_i, μ_i, D_i}_{i=1,...,I} from the CSS_m calculated in the previous step, estimating {π_i, μ_i}_{i=1,...,I} first and then {A_i, D_i}_{i=1,...,I}.
(6) Convergence judgment: node m (m = 1, ..., M) computes the log-likelihood value log p(Y_m | Θ) at the current iteration.
If log p(Y_m | Θ) - log p(Y_m | Θ_old) < ε, the node has converged and iteration stops; otherwise step (2) is executed and the next iteration begins (iter = iter + 1). Here Θ is the parameter value estimated in the current iteration and Θ_old is the parameter value estimated in the previous iteration, i.e. the algorithm converges when the difference between the log-likelihood values of two adjacent iterations is smaller than the threshold ε, where ε takes any value between 10^-6 and 10^-5. It is worth noting that, since each node in the network processes its data in parallel, the nodes cannot all converge in the same iteration; for example, when a node l has converged and a node m has not, node l no longer sends LSS_l and no longer receives information from its neighbor nodes, and node m updates its CSS_m using the last LSS_l received from node l. The nodes that have not converged continue to iterate until all nodes in the network have converged.
(7) Clustering output: after steps (1) to (6), node m (m = 1, ..., M) has, for each of its data {y_{m,n}}_{n=1,...,N_m}, the corresponding <z_{m,n,i}> (n = 1, ..., N_m; i = 1, ..., I); the index corresponding to the maximum of <z_{m,n,i}> over i = 1, ..., I is taken as the class C_{m,n} to which y_{m,n} is finally assigned, namely C_{m,n} = argmax_{i=1,...,I} <z_{m,n,i}>.
In this way the clustering results {C_{m,n}} of all data at all nodes are obtained.
Performance evaluation:
The results obtained by the clustering method of the invention are compared with the correct class labels, so that the effectiveness and accuracy of the method can be evaluated and measured. The clustering result at each node is shown in Fig. 3, in which the abscissa indicates the 178 data, a non-empty position indicates the node to which a datum is assigned, and the ordinate indicates the class number (3 classes in total) to which the datum is assigned. In the figure, "o" indicates correctly clustered data and "x" indicates incorrectly clustered data. From Fig. 3, the clustering accuracies at the 8 nodes are: 100%, 100%, 95.2%, 95.5%, 100%, 95.5%, 100%, 92.9%. Only five data in total were clustered incorrectly, and the average accuracy over the entire network is 97.2%. Compared with the result obtained with the centralized method (98%), the accuracy is essentially the same. The disadvantages of centralized transmission are quite obvious: first, once the central node fails, the whole network collapses; second, each node transmits its raw data directly to the central node, which increases the communication burden in the network and easily leaks the private information in the data. The method of the invention overcomes these defects and achieves good distributed clustering performance.
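The per-node accuracies quoted above correspond to matching cluster labels to the true classes; one way to compute such an accuracy (an illustrative helper, not the patent's evaluation procedure) is:

```python
import numpy as np
from itertools import permutations

def clustering_accuracy(pred, true, n_classes=3):
    """Best accuracy over all mappings of cluster labels to true class labels."""
    best = 0.0
    for mapping in permutations(range(n_classes)):
        mapped = np.array([mapping[c] for c in pred])
        best = max(best, float((mapped == np.asarray(true)).mean()))
    return best
```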
The scope of the invention is not limited to the description of the embodiments.

Claims (2)

1. A distributed clustering method based on a mixed factor analysis model in a sensor network is characterized by comprising the following steps:
Step 1: initialization; there are M sensor nodes in the sensor network, and the m-th node acquires N_m data, expressed as Y_m = {y_{m,n}}_{n=1,...,N_m}, where y_{m,n} is the n-th datum at node m, of dimension p; the network topology is determined in advance, and the neighbor node set of node m is denoted R_m; a mixed factor analysis model (MFA) is used to describe the distribution of Y_m (m = 1, ..., M), the data of all nodes sharing the same MFA; the parameter set of the MFA is Θ = {π_i, A_i, μ_i, D_i}_{i=1,...,I}, where π_i is the weight of the i-th mixture component, A_i is the (p × q) factor loading matrix of the i-th mixture component, q being the dimension of the low-dimensional factor and an arbitrary integer between p/6 and p/2, μ_i is the p-dimensional mean vector of the i-th mixture component, and D_i is the covariance matrix of the error of the i-th mixture component;
first, the number of mixture components I in the MFA, which is also the number of classes to be clustered, is set; initial values of the MFA parameters are set according to I, p and q, where at each node the initial mean vectors are randomly selected from the data collected by that node and each element of the initial loading and noise matrices is generated from the standard normal distribution N(0, 1); in addition, each node l broadcasts the number N_l of data it has collected to its neighbor nodes; when a node m has received the data counts broadcast by all of its neighbor nodes l (l ∈ R_m), it computes the weights c_lm from these counts;
after initialization is completed, the iteration counter is set to iter = 1 and the iteration process begins;
Step 2: local computation; at each node l, based on the data Y_l it has collected, the intermediate variables g_i, Ω_i and <z_{l,n,i}> (n = 1, ..., N_l; i = 1, ..., I) are first calculated,
where the parameter values used are those obtained after the previous iteration (the initial values of the parameters at the first iteration), and <z_{l,n,i}> is the probability that the n-th datum y_{l,n} at node l belongs to the i-th class of mixture components;
next, the node calculates the local sufficient statistic LSS_l;
Step 3: broadcast diffusion; each node l in the sensor network broadcasts the computed local sufficient statistic LSS_l to its neighbor nodes;
Step 4: joint computation; when node m (m = 1, ..., M) has received the LSS_l from all of its neighbor nodes l (l ∈ R_m), node m computes the joint sufficient statistic CSS_m;
Step 5: parameter estimation; node m (m = 1, ..., M) estimates Θ = {π_i, A_i, μ_i, D_i}_{i=1,...,I} from the CSS_m calculated in the previous step, estimating {π_i, μ_i}_{i=1,...,I} first and then {A_i, D_i}_{i=1,...,I};
Step 6: convergence judgment; node m (m = 1, ..., M) computes the log-likelihood value log p(Y_m | Θ) at the current iteration;
if log p(Y_m | Θ) - log p(Y_m | Θ_old) < ε, the node has converged and iteration stops; otherwise step 2 is executed and the next iteration begins (iter = iter + 1); here Θ is the parameter value estimated in the current iteration and Θ_old is the parameter value estimated in the previous iteration, i.e. the algorithm converges when the difference between the log-likelihood values of two adjacent iterations is smaller than the threshold ε, where ε takes any value between 10^-6 and 10^-5; because the nodes in the network process their data in parallel, not all nodes converge in the same iteration; when a node l has converged and a node m has not, node l no longer sends LSS_l and no longer receives information from its neighbor nodes, and node m updates its CSS_m using the last LSS_l received from node l; the nodes that have not converged continue to iterate until all nodes in the network have converged;
Step 7: clustering output; after steps 1 to 6, node m (m = 1, ..., M) has, for each of its data y_{m,n}, the corresponding <z_{m,n,i}> (n = 1, ..., N_m; i = 1, ..., I); the index corresponding to the maximum of <z_{m,n,i}> over i = 1, ..., I is taken as the class C_{m,n} to which y_{m,n} is finally assigned, namely C_{m,n} = argmax_{i=1,...,I} <z_{m,n,i}>;
in this way the clustering results {C_{m,n}} of all data at all nodes are obtained.
2. The distributed clustering method based on the mixed factor analysis model in the sensor network according to claim 1, characterized in that: the method is applied to clustering of wine ingredient data.
CN201510414218.6A 2015-07-15 2015-07-15 Distributed clustering method based on mixed factor analysis model in sensor network Expired - Fee Related CN104994170B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510414218.6A CN104994170B (en) 2015-07-15 2015-07-15 Distributed clustering method based on mixed factor analysis model in sensor network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510414218.6A CN104994170B (en) 2015-07-15 2015-07-15 Distributed clustering method based on mixed factor analysis model in sensor network

Publications (2)

Publication Number Publication Date
CN104994170A CN104994170A (en) 2015-10-21
CN104994170B true CN104994170B (en) 2018-06-05

Family

ID=54305921

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510414218.6A Expired - Fee Related CN104994170B (en) 2015-07-15 2015-07-15 Distributed clustering method based on mixed factor analysis model in sensor network

Country Status (1)

Country Link
CN (1) CN104994170B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105550704B (en) * 2015-12-10 2019-01-01 南京邮电大学 Distributed high dimensional data classification method based on mixing common factor analyzer

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101404664A (en) * 2008-11-05 2009-04-08 湖南大学 Network positioning and optimizing algorithm based on node clustering
CN102752784A (en) * 2012-06-19 2012-10-24 电子科技大学 Detection method of distribution type event domain based on graph theory in wireless sensor network
WO2013036892A1 (en) * 2011-09-08 2013-03-14 Attagene, Inc. Systems and methods for assessment of biosimilarity
CN103226595A (en) * 2013-04-17 2013-07-31 南京邮电大学 Clustering method for high dimensional data based on Bayes mixed common factor analyzer

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101404664A (en) * 2008-11-05 2009-04-08 湖南大学 Network positioning and optimizing algorithm based on node clustering
WO2013036892A1 (en) * 2011-09-08 2013-03-14 Attagene, Inc. Systems and methods for assessment of biosimilarity
CN102752784A (en) * 2012-06-19 2012-10-24 电子科技大学 Detection method of distribution type event domain based on graph theory in wireless sensor network
CN103226595A (en) * 2013-04-17 2013-07-31 南京邮电大学 Clustering method for high dimensional data based on Bayes mixed common factor analyzer

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
一种基于混合因子分析的分布估计算法 (A distribution estimation algorithm based on mixed factor analysis); 杨晔宏 et al.; 《信息与控制》 (Information and Control); 2006-08-31; Vol. 35, No. 4; full text *

Also Published As

Publication number Publication date
CN104994170A (en) 2015-10-21

Similar Documents

Publication Publication Date Title
CN111556454B (en) Weighted DV _ Hop node positioning method based on minimum mean square error criterion
CN104105196B (en) The method and system positioned based on radio-frequency fingerprint
Oreshkin et al. Asynchronous distributed particle filter via decentralized evaluation of Gaussian products
CN112243249B (en) LTE new access anchor point cell parameter configuration method and device under 5G NSA networking
CN113411213B (en) Ad hoc network topology control method and cooperative monitoring method based on Internet of things
CN106507275B (en) A kind of robust Distributed filtering method and apparatus of wireless sensor network
CN113411766B (en) Intelligent Internet of things comprehensive sensing system and method
CN115358487A (en) Federal learning aggregation optimization system and method for power data sharing
CN106998299B (en) The recognition methods of the network equipment, apparatus and system in data center network
EP3188373A1 (en) Fast distributed clustering for cooperation in telecommunications networks
CN104994170B (en) 2015-10-21 Distributed clustering method based on mixed factor analysis model in sensor network
CN116862023A (en) Robust federal learning abnormal client detection method based on spectral clustering
CN112235376B (en) Electric vehicle information monitoring method and device and electric vehicle management system
CN112654063A (en) Uplink capacity assessment method and device
CN116633462A (en) Federal learning resource optimization design method based on statistical channel state information
CN115499115A (en) Active user detection method based on orthogonal pilot frequency under CF-mMIMO scene
Huff et al. DHA-FL: Enabling efficient and effective AIoT via decentralized hierarchical asynchronous federated learning
US20100250736A1 (en) Connection State Estimating Device, Connection State Estimating Method and Storage Medium
Fathollahnejad et al. On the probability of unsafe disagreement in group formation algorithms for vehicular ad hoc networks
CN104469811A (en) Clustering cooperative spectrum sensing hard fusion method for cognitive wireless sensor network
CN113194502A (en) Distributed center selection and communication method for unmanned aerial vehicle cluster
CN106487764B (en) Data transmission method for uplink, device and system for network evaluation
CN106685512B (en) Data transmission method and device based on distributed constellation
CN110798350A (en) System reliability model construction and evaluation method based on incomplete state monitoring data learning
CN105550704B (en) Distributed high dimensional data classification method based on mixing common factor analyzer

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20201207

Address after: Room 214, building D5, No. 9, Kechuang Avenue, Zhongshan Science and Technology Park, Jiangbei new district, Nanjing, Jiangsu Province

Patentee after: Nanjing Tian Gu Information Technology Co.,Ltd.

Address before: Yuen Road Qixia District of Nanjing City, Jiangsu Province, No. 9 210023

Patentee before: NANJING University OF POSTS AND TELECOMMUNICATIONS

Effective date of registration: 20201207

Address after: Gulou District of Nanjing City, Jiangsu Province, Beijing Road No. 20 210024

Patentee after: STATE GRID JIANGSU ELECTRIC POWER Co.,Ltd. INFORMATION & TELECOMMUNICATION BRANCH

Address before: Room 214, building D5, No. 9, Kechuang Avenue, Zhongshan Science and Technology Park, Jiangbei new district, Nanjing, Jiangsu Province

Patentee before: Nanjing Tian Gu Information Technology Co.,Ltd.

TR01 Transfer of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20180605

CF01 Termination of patent right due to non-payment of annual fee