CN104992188A - Distributed handwritten digit recognition method based on t mixed factor analysis - Google Patents


Publication number
CN104992188A
CN104992188A (application CN201510415750.XA); granted publication CN104992188B
Authority
CN
China
Prior art date
Legal status: Granted
Application number
CN201510415750.XA
Other languages
Chinese (zh)
Other versions
CN104992188B (en
Inventor
魏昕
周亮
周全
陈建新
王磊
赵力
Current Assignee
Nanjing Tian Gu Information Technology Co ltd
Information and Telecommunication Branch of State Grid Jiangsu Electric Power Co Ltd
Original Assignee
Nanjing Post and Telecommunication University
Priority date
Filing date
Publication date
Application filed by Nanjing Post and Telecommunication University
Priority to CN201510415750.XA
Publication of CN104992188A
Application granted
Publication of CN104992188B
Legal status: Expired - Fee Related; anticipated expiration


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 — Pattern recognition
    • G06F 18/20 — Analysing
    • G06F 18/23 — Clustering techniques
    • G06F 18/21 — Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Abstract

The invention discloses a distributed handwritten digit recognition method based on t mixed factor analysis (tMFA). The method comprises the steps of: extracting features of acquired handwritten digits at each node; initializing the feature data corresponding to each digit used for training; at each node, computing local statistics from its own training data and broadcasting them to its neighbour nodes; at each node, computing joint statistics from the local statistics received from all neighbour nodes and estimating the tMFA parameters from these joint statistics; and thereby completing the distributed training process. In the distributed recognition stage, the data to be tested can be input at any node; the log-likelihood of the trained tMFA of each digit is computed, and the digit with the maximum log-likelihood is taken as the recognition result. By adopting tMFA, the method is highly robust to outliers in the data, and by adopting distributed training and recognition it avoids the single point of failure that a centre node would introduce.

Description

A distributed handwritten digit recognition method based on t mixed factor analysis
Technical field
The present invention relates to a distributed handwritten digit recognition method based on t mixed factor analysis, and belongs to the technical field of parallel and distributed data processing and its applications.
Background technology
At present, handwritten digit recognition is a challenging problem in the field of pattern recognition; its main goal is to enable a computer to autonomously recognize digits written by hand. Handwritten digit recognition has many applications, such as large-scale data statistics, finance, taxation, and mail sorting. Many approaches to handwritten digit recognition already exist. However, when the data volume is large, a single computer cannot process it quickly and effectively; the handwritten digit data must then be divided into multiple parts stored on different computers, and a collaboration scheme between the computers must be designed, thereby achieving distributed processing of the data.
In pattern recognition and machine learning, handwritten digit recognition belongs to the category of supervised learning. In the training stage, a model describing the distribution of handwritten digit features is built and trained from labelled samples; in the subsequent recognition stage, the feature data of the digit to be recognized are compared against the trained model of each class, and the best-matching model gives the final recognition result. In scenarios requiring distributed processing, two questions are crucial: in the training stage, how the computer nodes can cooperate so that each node fully exploits the data held by the other nodes and all nodes estimate a consistent model; and in the recognition stage, whether handwritten digit data to be recognized can be input at any node and still yield the correct recognition result. The method proposed by the present invention solves these problems effectively and achieves a good recognition accuracy.
Summary of the invention
The object of the invention is to remedy the defects of the prior art by proposing a distributed method based on the mixed factor analysis model in a network of computing nodes.
The technical scheme adopted by the present invention to solve this technical problem is a distributed handwritten digit recognition method based on t mixed factor analysis, comprising the following steps:
Step 1: data collection and feature extraction;
Let there be M computers/computing nodes (nodes for short) forming a network. Node m collects, through the handwriting tablet attached to it, raw data for the digits 0–9, i.e. 10 classes in total. The tablet automatically records the two-dimensional coordinates of each point on the writing trajectory of each character; the coordinates of 8 equally spaced points along the trajectory are taken as the feature s corresponding to each raw sample, 16 dimensions in total. For convenience of notation, let the training data set of digit d collected at node m and obtained through feature extraction be S_m^{(d)} = {s_{m,n}^{(d)}}_{n=1,…,N_m^{(d)}}, where s_{m,n}^{(d)} denotes the n-th training feature of handwritten digit d at node m, of dimension p, and N_m^{(d)} is the number of training samples of digit d.
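The feature-extraction step described above can be sketched as follows: resample a recorded pen trajectory to 8 equally spaced points along its arc length, yielding a 16-dimensional feature vector. This is a minimal illustration; the function name and the toy trajectory are assumptions, not part of the patent.

```python
import math

def extract_features(trajectory, n_points=8):
    """trajectory: list of (x, y) tuples recorded by the tablet."""
    # Cumulative arc length at each recorded point.
    dists = [0.0]
    for (x0, y0), (x1, y1) in zip(trajectory, trajectory[1:]):
        dists.append(dists[-1] + math.hypot(x1 - x0, y1 - y0))
    total = dists[-1]
    feats = []
    for k in range(n_points):
        target = total * k / (n_points - 1)   # equally spaced arc lengths
        # Find the segment containing `target` and interpolate linearly.
        j = 1
        while j < len(dists) - 1 and dists[j] < target:
            j += 1
        seg = dists[j] - dists[j - 1]
        t = 0.0 if seg == 0 else (target - dists[j - 1]) / seg
        (x0, y0), (x1, y1) = trajectory[j - 1], trajectory[j]
        feats.extend([x0 + t * (x1 - x0), y0 + t * (y1 - y0)])
    return feats  # length 2 * n_points = 16

traj = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
s = extract_features(traj)
print(len(s))  # 16
```

The first and last resampled points coincide with the trajectory's endpoints, as the equally-spaced-arc-length scheme requires.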
The distribution of these features is described by a t mixed factor analysis model (tMFA); note that all nodes model the training data of digit d with one common tMFA. The tMFA is a mixture model with I components; each data point s_{m,n}^{(d)} can be expressed as:
s_{m,n}^{(d)} = μ_i + A_i u_{m,n}^{(d)} + e_{m,n,i}^{(d)}   with probability π_i (i = 1, …, I),
where μ_i is the mean vector of the i-th mixture component, of dimension p; u_{m,n}^{(d)} is the factor in the low-dimensional space corresponding to s_{m,n}^{(d)}, of dimension q (q ≪ p), and obeys the t distribution t(u_{m,n}^{(d)} | 0, I_q, ν_i); the value of q is chosen according to the size of p in the particular problem, generally any integer between q = p/5 and q = p/3; A_i is the (p × q) factor loading matrix of the i-th component; the error e_{m,n,i}^{(d)} obeys the t distribution t(e_{m,n,i}^{(d)} | 0, D_i, ν_i), where D_i is a (p × p) diagonal matrix and ν_i is the degrees of freedom of the i-th component; the component weights satisfy Σ_{i=1}^{I} π_i = 1. The parameter set of the tMFA is therefore Θ = {π_i, A_i, μ_i, D_i, ν_i}_{i=1,…,I}. Note that, for all nodes, the parameter values to be estimated are identical. It should also be noted that the t distribution can be expanded as an integral of a Gaussian distribution against a Gamma distribution:
t(u_{m,n}^{(d)} | 0, I_q, ν_i) = ∫ N(u_{m,n}^{(d)} | 0, I_q / w_{m,n,i}^{(d)}) · Gamma(w_{m,n,i}^{(d)} | ν_i/2, ν_i/2) dw_{m,n,i}^{(d)},
t(e_{m,n,i}^{(d)} | 0, D_i, ν_i) = ∫ N(e_{m,n,i}^{(d)} | 0, D_i / w_{m,n,i}^{(d)}) · Gamma(w_{m,n,i}^{(d)} | ν_i/2, ν_i/2) dw_{m,n,i}^{(d)},
where w_{m,n,i}^{(d)} is the latent integration variable corresponding to s_{m,n}^{(d)}.
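The Gaussian–Gamma representation above can be checked numerically. The sketch below (standard library only; the choice ν = 10 and sample count are illustrative) draws w ~ Gamma(ν/2, ν/2) and then x ~ N(0, 1/w), and confirms the samples have the variance ν/(ν − 2) of a t distribution with ν degrees of freedom.

```python
import random

random.seed(0)
nu = 10.0
samples = []
for _ in range(20000):
    # w ~ Gamma(shape = nu/2, rate = nu/2); gammavariate takes (shape, scale),
    # so the scale is 2/nu.
    w = random.gammavariate(nu / 2.0, 2.0 / nu)
    samples.append(random.gauss(0.0, (1.0 / w) ** 0.5))

var = sum(x * x for x in samples) / len(samples)
print(1.1 < var < 1.4)  # True: the t_10 variance is nu/(nu-2) = 1.25
```

The same scale-mixture construction is what makes the t distribution heavier-tailed than the Gaussian, and hence robust to outliers.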
In addition, the data transmission range of each node is set to Dis; for the current node m, all nodes at distance less than Dis are its neighbour nodes, and the neighbour set of node m is denoted R_m. Fig. 1 illustrates the relations between nodes in such a network: each computer icon represents a node, and an edge between two nodes indicates that the two nodes can communicate with each other and exchange information. The dashed box in Fig. 1 marks R_m for node m. In the present invention the network topology is determined in advance; it is only required that between any two nodes there exists at least one path, either direct or through multiple hops.
Step 2: distributed training; the feature data are used for distributed training to obtain, for every digit class d, the tMFA parameter set Θ^{(d)} = {π_i^{(d)}, A_i^{(d)}, μ_i^{(d)}, D_i^{(d)}, ν_i^{(d)}}_{i=1,…,I} (d = 0, …, 9);
Once the network topology and the tMFA describing the data distribution are established, distributed training starts. Taking the training for digit d as an example, the training process is shown in Fig. 2 and proceeds as follows:
Step 2-1: initialization. Set the number of mixture components I of the tMFA. Here I determines the complexity of the tMFA model; I can be any integer in 3–8, and I = 5 gives good performance in handwritten digit recognition. The initial parameter values of the tMFA are set according to I and the data dimension p. At each node: (π_1^0, …, π_i^0, …, π_I^0) = (1/I, …, 1/I, …, 1/I); {μ_1^0, …, μ_i^0, …, μ_I^0} are selected at random from the data collected at that node; each element of A_i^0 and D_i^0 is generated from the standard normal distribution N(0, 1); each ν_i^0 is an arbitrary integer between 1 and 5. In addition, each node l (l = 1, …, M) broadcasts the number N_l^{(d)} of data it has collected to its neighbour nodes. After a node m has received the data counts broadcast by all its neighbours, it computes the weights c_lm:
c_lm = N_l^{(d)} / Σ_{l' ∈ R_m} N_{l'}^{(d)},
This weight measures the importance, at node m, of the information transmitted by each neighbour l (l ∈ R_m) of node m. After initialization the iteration counter is set to iter = 1 and the iterative process starts.
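The initialization weights c_lm can be sketched directly; node ids and data counts below are illustrative.

```python
def mixing_weights(neighbour_counts):
    """neighbour_counts: dict {node_id: N_l} for the neighbours l in R_m."""
    total = sum(neighbour_counts.values())
    return {l: n / total for l, n in neighbour_counts.items()}

c = mixing_weights({1: 40, 2: 60, 3: 100})
print(c[3])  # 0.5
```

By construction the weights sum to 1, so the joint statistics formed later are convex combinations of the neighbours' local statistics.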
Step 2-2: compute local statistics and broadcast. This step requires no information from the neighbour nodes. At each node l, based on the data it has collected, first compute g_i, Ω_i, ū_{l,n,i}^{(d)}, W_{l,n,i}^{(d)}, <z_{l,n,i}^{(d)}> and <w_{l,n,i}^{(d)}> (n = 1, …, N_l^{(d)}; i = 1, …, I):
g_i = [A_i^{old} (A_i^{old})^T + D_i^{old}]^{-1} · A_i^{old},
Ω_i = I_q − g_i^T A_i^{old},
ū_{l,n,i}^{(d)} = g_i^T (s_{l,n}^{(d)} − μ_i^{old}),
W_{l,n,i}^{(d)} = (s_{l,n}^{(d)} − μ_i^{old})^T · [A_i^{old} (A_i^{old})^T + D_i^{old}]^{-1} · (s_{l,n}^{(d)} − μ_i^{old}),
<z_{l,n,i}^{(d)}> = π_i^{old} · t(s_{l,n}^{(d)} | μ_i^{old}, A_i^{old}(A_i^{old})^T + D_i^{old}, ν_i^{old}) / Σ_{i'=1}^{I} π_{i'}^{old} · t(s_{l,n}^{(d)} | μ_{i'}^{old}, A_{i'}^{old}(A_{i'}^{old})^T + D_{i'}^{old}, ν_{i'}^{old}),
<w_{l,n,i}^{(d)}> = (ν_i^{old} + p) / (ν_i^{old} + W_{l,n,i}^{(d)}),
where π_i^{old}, A_i^{old}, μ_i^{old}, D_i^{old} and ν_i^{old} are the parameter values obtained at the end of the previous iteration (at the first iteration, the initial values); <z_{l,n,i}^{(d)}> represents the probability that the n-th data point at node l belongs to the i-th class (i.e. mixture component), and <·> denotes the expectation.
With these quantities computed, the node computes the local statistics (LS) LS_l = {LS_l^{(1)}[i], LS_l^{(2)}[i], LS_l^{(3)}[i], LS_l^{(4)}[i], LS_l^{(5)}[i]}_{i=1,…,I} as follows:
LS_l^{(1)}[i] = Σ_{n=1}^{N_l^{(d)}} <z_{l,n,i}^{(d)}>,
LS_l^{(2)}[i] = Σ_{n=1}^{N_l^{(d)}} <z_{l,n,i}^{(d)}> · <w_{l,n,i}^{(d)}>,
LS_l^{(3)}[i] = Σ_{n=1}^{N_l^{(d)}} <z_{l,n,i}^{(d)}> · <w_{l,n,i}^{(d)}> · s_{l,n}^{(d)},
LS_l^{(4)}[i] = Σ_{n=1}^{N_l^{(d)}} <z_{l,n,i}^{(d)}> · <w_{l,n,i}^{(d)}> · s_{l,n}^{(d)} (s_{l,n}^{(d)})^T,
LS_l^{(5)}[i] = Σ_{n=1}^{N_l^{(d)}} <z_{l,n,i}^{(d)}> · log <w_{l,n,i}^{(d)}>.
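The local-statistics step can be sketched in a deliberately simplified univariate form (p = 1, so A_i A_i^T + D_i collapses to a scalar variance σ²_i): each node computes the responsibilities <z>, the latent scales <w>, and the five sums LS^(1..5) from its own data only. All parameter values below are illustrative.

```python
import math

def t_logpdf(s, mu, sigma2, nu):
    # log density of a univariate Student-t with location mu, scale sigma2.
    return (math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2)
            - 0.5 * math.log(nu * math.pi * sigma2)
            - (nu + 1) / 2 * math.log1p((s - mu) ** 2 / (nu * sigma2)))

def local_statistics(data, pis, mus, sigma2s, nus):
    I, p = len(pis), 1
    LS = [[0.0] * I for _ in range(5)]
    for s in data:
        logw = [math.log(pis[i]) + t_logpdf(s, mus[i], sigma2s[i], nus[i])
                for i in range(I)]
        mx = max(logw)
        z = [math.exp(v - mx) for v in logw]
        tot = sum(z)
        z = [v / tot for v in z]                      # <z_{l,n,i}>
        for i in range(I):
            W = (s - mus[i]) ** 2 / sigma2s[i]        # Mahalanobis distance
            w = (nus[i] + p) / (nus[i] + W)           # <w_{l,n,i}>
            LS[0][i] += z[i]
            LS[1][i] += z[i] * w
            LS[2][i] += z[i] * w * s
            LS[3][i] += z[i] * w * s * s
            LS[4][i] += z[i] * math.log(w)
    return LS

LS = local_statistics([0.1, -0.2, 2.9, 3.1], pis=[0.5, 0.5],
                      mus=[0.0, 3.0], sigma2s=[1.0, 1.0], nus=[4.0, 4.0])
print(round(LS[0][0] + LS[0][1], 6))  # responsibilities sum to N = 4.0
```

In the full p-dimensional method the scalar variance is replaced by the matrix A_i A_i^T + D_i and LS^(4) becomes a matrix-valued sum, but the structure of the step is the same.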
Finally, each node l broadcasts its computed local statistics LS_l to its neighbour nodes, as shown in Fig. 1.
Step 2-3: compute joint statistics. When node m (m = 1, …, M) has received the local statistics from all its neighbours l (l ∈ R_m), it computes the joint statistics CS_m = {CS_m^{(1)}[i], CS_m^{(2)}[i], CS_m^{(3)}[i], CS_m^{(4)}[i], CS_m^{(5)}[i]}_{i=1,…,I}:
CS_m^{(1)}[i] = Σ_{l ∈ R_m} c_lm · LS_l^{(1)}[i],
CS_m^{(2)}[i] = Σ_{l ∈ R_m} c_lm · LS_l^{(2)}[i],
CS_m^{(3)}[i] = Σ_{l ∈ R_m} c_lm · LS_l^{(3)}[i],
CS_m^{(4)}[i] = Σ_{l ∈ R_m} c_lm · LS_l^{(4)}[i],
CS_m^{(5)}[i] = Σ_{l ∈ R_m} c_lm · LS_l^{(5)}[i].
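The consensus step is a c_lm-weighted sum of the received local statistics; a minimal sketch (node ids, weights and statistic values are illustrative):

```python
def joint_statistics(ls_by_node, weights):
    """ls_by_node: {l: LS_l}, where LS_l is a list of 5 lists (one entry per
    component i); weights: {l: c_lm} for l in R_m."""
    first = next(iter(ls_by_node.values()))
    CS = [[0.0] * len(row) for row in first]
    for l, LS in ls_by_node.items():
        for k in range(5):
            for i in range(len(LS[k])):
                CS[k][i] += weights[l] * LS[k][i]
    return CS

CS = joint_statistics({1: [[2.0], [1.0], [0.5], [0.25], [0.1]],
                       2: [[4.0], [2.0], [1.0], [0.5], [0.2]]},
                      weights={1: 0.5, 2: 0.5})
print(CS[0][0])  # 3.0
```

Because only these small statistics cross the network, the communication cost per iteration is independent of the number of raw samples each node holds.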
Step 2-4: estimate the model parameters. Node m (m = 1, …, M) estimates the parameters Θ = {π_i, A_i, μ_i, D_i, ν_i}_{i=1,…,I} from the CS_m computed in the previous step. The estimates of {π_i, μ_i}_{i=1,…,I} are:
π_i = CS_m^{(1)}[i] / Σ_{i'=1}^{I} CS_m^{(1)}[i'],
μ_i = CS_m^{(3)}[i] / CS_m^{(2)}[i];
For the estimation of {A_i, D_i}_{i=1,…,I}, the process is as follows:
V_i = (CS_m^{(4)}[i] − 2 CS_m^{(3)}[i] · μ_i^T + CS_m^{(2)}[i] · μ_i μ_i^T) / CS_m^{(1)}[i],
A_i = V_i g_i (g_i^T V_i g_i + Ω_i)^{-1},
D_i = diag{V_i − A_i (g_i^T V_i g_i + Ω_i) A_i^T};
In addition, {ν_i}_{i=1,…,I} are obtained by solving the following equation:
log(ν_i/2) − ψ(ν_i/2) + 1 + (CS_m^{(5)}[i] − CS_m^{(2)}[i]) / CS_m^{(1)}[i] − log((ν_i^{old} + p)/2) + ψ((ν_i^{old} + p)/2) = 0,
where ψ(·) is the standard digamma function; the equation is solved by Newton's method.
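A sketch of the degrees-of-freedom update follows. Two deliberate simplifications relative to the text: ψ is approximated by a central difference of math.lgamma rather than a dedicated digamma routine, and the scalar equation is solved by bisection instead of Newton's method (the residual is strictly decreasing in ν, so bisection is safe). All numeric inputs are illustrative.

```python
import math

def psi(x, h=1e-5):
    # Crude digamma via a central difference of log-gamma (sketch only).
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

def update_nu(cs1, cs2, cs5, nu_old, p, lo=0.1, hi=200.0):
    # Constant part of the equation:
    # 1 + (CS^(5) - CS^(2))/CS^(1) - log((nu_old+p)/2) + psi((nu_old+p)/2)
    const = (1.0 + (cs5 - cs2) / cs1
             - math.log((nu_old + p) / 2.0) + psi((nu_old + p) / 2.0))
    f = lambda nu: math.log(nu / 2.0) - psi(nu / 2.0) + const
    # f decreases from +inf (nu -> 0) towards const (nu -> inf).
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

nu_i = update_nu(cs1=10.0, cs2=9.5, cs5=-0.3, nu_old=5.0, p=16)
print(0.1 < nu_i < 200.0)  # True
```

A production implementation would use Newton's method with an exact ψ, as the text specifies; the bracketed bisection here trades speed for robustness.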
Step 2-5: check convergence. Node m (m = 1, …, M) computes the log-likelihood at the current iteration iter:
log p(S_m^{(d)} | Θ) = Σ_{n=1}^{N_m} log ( Σ_{i=1}^{I} π_i · t(s_{m,n}^{(d)} | μ_i, A_i A_i^T + D_i, ν_i) ),
If |log p(S_m^{(d)} | Θ) − log p(S_m^{(d)} | Θ^{old})| < ε, the algorithm has converged and the iteration stops; otherwise step 2-2 is executed and the next iteration starts (iter = iter + 1). Here Θ denotes the parameter values estimated at the current iteration and Θ^{old} those estimated at the previous iteration; that is, the algorithm converges when the log-likelihoods of two adjacent iterations differ by less than the threshold ε. ε is any value in 10^{-5} – 10^{-6}. It should be noted that, because the nodes in the network process their data in parallel, they cannot all converge in the same iteration. For example, if node l has converged while node m has not, node l no longer sends LS_l and no longer receives the information transmitted by its neighbours; node m then updates its CS_m with the last LS_l received from node l. The unconverged nodes continue to iterate until all nodes in the network have converged.
After steps 2-1 to 2-5, the tMFA model corresponding to the training data of handwritten digit d has been obtained (i.e. represented by the parameters Θ at convergence). Repeating the above steps 10 times yields the tMFA models of the 10 digits; for convenience of notation, and to distinguish them, the tMFA model of digit d is denoted Θ^{(d)} = {π_i^{(d)}, A_i^{(d)}, μ_i^{(d)}, D_i^{(d)}, ν_i^{(d)}}_{i=1,…,I} (d = 0, 1, …, 9). Distributed training is complete.
Step 3: distributed recognition. When any computer in the network acquires a new handwritten digit to be recognized, its corresponding feature is first obtained by step 1 and denoted s'; then the log-likelihood log p(s' | Θ^{(d)}) is computed for each Θ^{(d)} (d = 0, 1, …, 9):
log p(s' | Θ^{(d)}) = log ( Σ_{i=1}^{I} π_i^{(d)} · t(s' | μ_i^{(d)}, A_i^{(d)} (A_i^{(d)})^T + D_i^{(d)}, ν_i^{(d)}) );
The digit index with the maximum log-likelihood is taken as the recognition result d' for s':
d' = arg max_{d=0,…,9} log p(s' | Θ^{(d)}).
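The recognition rule can be sketched in the same univariate simplification (p = 1); the two single-component "digit" models below are toy assumptions, standing in for the ten trained tMFAs.

```python
import math

def t_logpdf(s, mu, sigma2, nu):
    # log density of a univariate Student-t with location mu, scale sigma2.
    return (math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2)
            - 0.5 * math.log(nu * math.pi * sigma2)
            - (nu + 1) / 2 * math.log1p((s - mu) ** 2 / (nu * sigma2)))

def loglik(s, model):
    # log sum_i pi_i * t(s | mu_i, sigma2_i, nu_i), via log-sum-exp.
    terms = [math.log(pi) + t_logpdf(s, mu, s2, nu)
             for pi, mu, s2, nu in model]
    mx = max(terms)
    return mx + math.log(sum(math.exp(v - mx) for v in terms))

# One single-component model per "digit", centred at 0 and at 5.
models = {0: [(1.0, 0.0, 1.0, 4.0)], 1: [(1.0, 5.0, 1.0, 4.0)]}

def recognise(s):
    return max(models, key=lambda d: loglik(s, models[d]))

print(recognise(4.6))  # 1
```

Since every node holds the same converged Θ^{(d)}, this arg-max gives the same answer wherever in the network the new sample is input.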
The detailed flow of the distributed recognition method of the present invention is shown in Fig. 2.
Beneficial effect:
1. The t mixed factor analysis adopted in the present invention is highly robust to outliers in the data and describes high-dimensional data well, so a model better matched to the data is obtained, which in turn yields better training and recognition performance.
2. In the distributed training process based on the t mixed factor analysis model, each computer node in the network can fully exploit the information contained in the data collected by the other computer nodes, so a more accurate model is trained.
3. In the distributed training process based on the t mixed factor analysis model, the cooperating computer nodes exchange local statistics instead of transmitting raw data directly. Because the number and dimension of the local statistics are much smaller than those of the data, this mode saves communication overhead on the one hand, and on the other hand helps to adequately protect the private information in the data, improving the security of a system adopting this method.
4. In the distributed recognition process based on the t mixed factor analysis model, new data can be acquired at any computer node in the network and the same recognition result is obtained.
Brief description of the drawings
Fig. 1 is a schematic diagram of the neighbour set R_m of node m of the present invention and of the transmission and reception of local statistics between nodes.
Fig. 2 is the flow chart of the distributed handwritten digit recognition method based on t mixed factor analysis according to the present invention.
Fig. 3 is a Hinton diagram of the confusion matrices corresponding to the recognition results of the method of the present invention, centralized tMFA, and non-cooperative tMFA.
Fig. 4 is a schematic diagram of the mean and variance of the recognition accuracy of the method of the present invention, centralized tMFA, and non-cooperative tMFA.
Embodiment
The invention is described in further detail below in conjunction with the accompanying drawings.
To better illustrate the distributed handwritten digit recognition method based on t mixed factor analysis of the present invention, we describe a concrete application example.
(1) Data collection and feature extraction: suppose there are 44 people in total, each of whom writes every digit 25 times, for a total of 25·10·44 = 11000 raw samples. There are 20 computers/nodes in the network (M = 20), each node has 3 neighbours, and between any two nodes there is a direct or multi-hop communication path. The raw handwritten-digit data of 30 of the people (25·10·30 = 7500 samples) are used for distributed training, divided into 20 parts and randomly assigned to the 20 nodes. For each raw sample, the coordinates of 8 equally spaced points on its trajectory are taken as its corresponding feature s, 16 dimensions in total. For convenience of notation, let the training data set of digit d obtained at node m through feature extraction be S_m^{(d)} = {s_{m,n}^{(d)}}_{n=1,…,N_m^{(d)}}, where s_{m,n}^{(d)} denotes the n-th training feature of handwritten digit d at node m, of dimension p = 16, and N_m^{(d)} is the number of training samples of digit d.
(2) Distributed training: after step (1) is complete, distributed training starts. Taking the training for digit d as an example, the distribution of the feature data is modelled with the tMFA as shown in Fig. 2; the concrete steps are as follows:
(2-1) Initialization: set the number of mixture components of the tMFA to I = 5. The initial parameter values of the tMFA are set according to I and the data dimension p. At each node: (π_1^0, …, π_i^0, …, π_I^0) = (1/I, …, 1/I, …, 1/I); {μ_1^0, …, μ_i^0, …, μ_I^0} are selected at random from the data collected at that node; each element of A_i^0 and D_i^0 is generated from the standard normal distribution N(0, 1). In addition, each node l (l = 1, …, M) broadcasts its data count N_l^{(d)} to its neighbour nodes. After a node m has received the data counts broadcast by all its neighbours, it computes the weights c_lm:
c_lm = N_l^{(d)} / Σ_{l' ∈ R_m} N_{l'}^{(d)},
This weight measures the importance, at node m, of the information transmitted by each neighbour l (l ∈ R_m) of node m. After initialization the iteration counter is set to iter = 1 and the iterative process starts.
(2-2) Compute local statistics and broadcast: this step requires no information from the neighbour nodes. At each node l, based on the data it has collected, first compute the intermediate variables g_i, Ω_i, ū_{l,n,i}^{(d)}, W_{l,n,i}^{(d)}, <z_{l,n,i}^{(d)}> and <w_{l,n,i}^{(d)}> (n = 1, …, N_l^{(d)}; i = 1, …, I):
g_i = [A_i^{old} (A_i^{old})^T + D_i^{old}]^{-1} · A_i^{old},
Ω_i = I_q − g_i^T A_i^{old},
ū_{l,n,i}^{(d)} = g_i^T (s_{l,n}^{(d)} − μ_i^{old}),
W_{l,n,i}^{(d)} = (s_{l,n}^{(d)} − μ_i^{old})^T · [A_i^{old} (A_i^{old})^T + D_i^{old}]^{-1} · (s_{l,n}^{(d)} − μ_i^{old}),
<z_{l,n,i}^{(d)}> = π_i^{old} · t(s_{l,n}^{(d)} | μ_i^{old}, A_i^{old}(A_i^{old})^T + D_i^{old}, ν_i^{old}) / Σ_{i'=1}^{I} π_{i'}^{old} · t(s_{l,n}^{(d)} | μ_{i'}^{old}, A_{i'}^{old}(A_{i'}^{old})^T + D_{i'}^{old}, ν_{i'}^{old}),
<w_{l,n,i}^{(d)}> = (ν_i^{old} + p) / (ν_i^{old} + W_{l,n,i}^{(d)}),
where π_i^{old}, A_i^{old}, μ_i^{old}, D_i^{old} and ν_i^{old} are the parameter values obtained at the end of the previous iteration (at the first iteration, the initial values); <z_{l,n,i}^{(d)}> represents the probability that the n-th data point at node l belongs to the i-th class (mixture component), and <·> denotes the expectation.
With these intermediate variables computed, the node computes the local statistics (LS) LS_l = {LS_l^{(1)}[i], LS_l^{(2)}[i], LS_l^{(3)}[i], LS_l^{(4)}[i], LS_l^{(5)}[i]}_{i=1,…,I} as follows:
LS_l^{(1)}[i] = Σ_{n=1}^{N_l^{(d)}} <z_{l,n,i}^{(d)}>,
LS_l^{(2)}[i] = Σ_{n=1}^{N_l^{(d)}} <z_{l,n,i}^{(d)}> · <w_{l,n,i}^{(d)}>,
LS_l^{(3)}[i] = Σ_{n=1}^{N_l^{(d)}} <z_{l,n,i}^{(d)}> · <w_{l,n,i}^{(d)}> · s_{l,n}^{(d)},
LS_l^{(4)}[i] = Σ_{n=1}^{N_l^{(d)}} <z_{l,n,i}^{(d)}> · <w_{l,n,i}^{(d)}> · s_{l,n}^{(d)} (s_{l,n}^{(d)})^T,
LS_l^{(5)}[i] = Σ_{n=1}^{N_l^{(d)}} <z_{l,n,i}^{(d)}> · log <w_{l,n,i}^{(d)}>.
Finally, each node l broadcasts its computed local statistics LS_l to its neighbour nodes, as shown in Fig. 1.
(2-3) Compute joint statistics: when node m (m = 1, …, M) has received the local statistics from all its neighbours l (l ∈ R_m), it computes the joint statistics CS_m = {CS_m^{(1)}[i], CS_m^{(2)}[i], CS_m^{(3)}[i], CS_m^{(4)}[i], CS_m^{(5)}[i]}_{i=1,…,I}:
CS_m^{(1)}[i] = Σ_{l ∈ R_m} c_lm · LS_l^{(1)}[i],
CS_m^{(2)}[i] = Σ_{l ∈ R_m} c_lm · LS_l^{(2)}[i],
CS_m^{(3)}[i] = Σ_{l ∈ R_m} c_lm · LS_l^{(3)}[i],
CS_m^{(4)}[i] = Σ_{l ∈ R_m} c_lm · LS_l^{(4)}[i],
CS_m^{(5)}[i] = Σ_{l ∈ R_m} c_lm · LS_l^{(5)}[i].
(2-4) Estimate the model parameters: node m (m = 1, …, M) estimates Θ = {π_i, A_i, μ_i, D_i, ν_i}_{i=1,…,I} from the CS_m computed in the previous step. The estimates of {π_i, μ_i}_{i=1,…,I} are:
π_i = CS_m^{(1)}[i] / Σ_{i'=1}^{I} CS_m^{(1)}[i'],
μ_i = CS_m^{(3)}[i] / CS_m^{(2)}[i];
For the estimation of {A_i, D_i}_{i=1,…,I}, the process is as follows:
V_i = (CS_m^{(4)}[i] − 2 CS_m^{(3)}[i] · μ_i^T + CS_m^{(2)}[i] · μ_i μ_i^T) / CS_m^{(1)}[i],
A_i = V_i g_i (g_i^T V_i g_i + Ω_i)^{-1},
D_i = diag{V_i − A_i (g_i^T V_i g_i + Ω_i) A_i^T};
In addition, {ν_i}_{i=1,…,I} are obtained by solving the following equation:
log(ν_i/2) − ψ(ν_i/2) + 1 + (CS_m^{(5)}[i] − CS_m^{(2)}[i]) / CS_m^{(1)}[i] − log((ν_i^{old} + p)/2) + ψ((ν_i^{old} + p)/2) = 0,
where ψ(·) is the standard digamma function; the above equation is generally solved with Newton's method.
(2-5) Check convergence: node m (m = 1, …, M) computes the log-likelihood at the current iteration iter:
log p(S_m^{(d)} | Θ) = Σ_{n=1}^{N_m} log ( Σ_{i=1}^{I} π_i · t(s_{m,n}^{(d)} | μ_i, A_i A_i^T + D_i, ν_i) ),
If |log p(S_m^{(d)} | Θ) − log p(S_m^{(d)} | Θ^{old})| < ε, the algorithm has converged and the iteration stops; otherwise step (2-2) is executed and the next iteration starts (iter = iter + 1). Here Θ denotes the parameter values estimated at the current iteration and Θ^{old} those estimated at the previous iteration; that is, the algorithm converges when the log-likelihoods of two adjacent iterations differ by less than the threshold ε. ε is any value in 10^{-5} – 10^{-6}. It should be noted that, because the nodes in the network process their data in parallel, they cannot all converge in the same iteration. For example, if node l has converged while node m has not, node l no longer sends LS_l and no longer receives the information transmitted by its neighbours; node m then updates its CS_m with the last LS_l received from node l. The unconverged nodes continue to iterate until all nodes in the network have converged.
After steps (2-1) to (2-5), the tMFA model corresponding to the training data of handwritten digit d has been obtained (represented by the parameters Θ at convergence). Repeating the above steps 10 times yields the tMFA models of the 10 digits; for convenience of notation, and to distinguish them, the tMFA model of digit d is denoted Θ^{(d)} = {π_i^{(d)}, A_i^{(d)}, μ_i^{(d)}, D_i^{(d)}, ν_i^{(d)}}_{i=1,…,I} (d = 0, 1, …, 9). Distributed training is complete.
(3) Distributed recognition: when any computer in the network acquires a new handwritten digit to be recognized, its corresponding feature is first obtained by step (1) and denoted s'; then the log-likelihood log p(s' | Θ^{(d)}) is computed for each Θ^{(d)} (d = 0, 1, …, 9):
log p(s' | Θ^{(d)}) = log ( Σ_{i=1}^{I} π_i^{(d)} · t(s' | μ_i^{(d)}, A_i^{(d)} (A_i^{(d)})^T + D_i^{(d)}, ν_i^{(d)}) );
The digit index with the maximum log-likelihood is taken as the recognition result d' for s':
d' = arg max_{d=0,…,9} log p(s' | Θ^{(d)}),
The distributed recognition flow of the present invention is shown in Fig. 2.
Performance evaluation:
Because the true values of the digits to be recognized are known, the results of the recognition method of the present invention can be compared with the ground truth to obtain the recognition accuracy (recognition accuracy = number of handwritten digits correctly recognized by all nodes / (20·3500)), thereby evaluating and measuring the validity and accuracy of the method. To compare the performance of the proposed tMFA-based distributed handwritten digit recognition method (distributed tMFA for short) with other methods, it is compared here with the tMFA-based centralized handwritten digit recognition method (centralized tMFA for short) and with the tMFA-based method in which the nodes do not cooperate (non-cooperative tMFA for short). It should be noted that in centralized tMFA all nodes must transmit their raw data to some centre node, which carries out training and recognition with conventional MFA and then returns the results to each node. This mode is of little practical use: first, transmitting raw data incurs a very large communication overhead, and packet loss or packet corruption has a large impact on the final recognition performance; second, it is unfavourable to the protection of privacy in the data, making network security a concern. The purpose here is to check whether the proposed distributed tMFA recognition method can reach the same performance as centralized tMFA. The recognition results are presented both qualitatively and quantitatively. For the qualitative presentation, Hinton diagrams of the confusion matrices are used, as shown in Fig. 3. In each diagram, the rows represent the recognition results for digits 0–9 and the columns represent the true values of digits 0–9. Squares on the main diagonal represent correct recognitions, a larger square indicating more correctly recognized samples of that digit, while squares in other positions indicate misrecognitions. As can be seen from the figure, centralized tMFA and the distributed tMFA of the present invention (only node 3 is shown for reasons of space; the other nodes give the same result) perform well, while non-cooperative tMFA performs poorly. For the quantitative presentation, the mean and the variance of the recognition accuracy are used, as shown in Fig. 4. The mean recognition accuracy of the distributed tMFA designed in the present invention is essentially identical to that of centralized tMFA, while that of non-cooperative tMFA is worse; moreover, the variance of the recognition accuracy of distributed tMFA is much smaller than that of non-cooperative tMFA. Therefore, the method of the present invention overcomes the shortcomings of traditional centralized tMFA handwritten digit recognition, achieves distributed recognition, and has good performance.
The scope of protection claimed by the present invention is not limited to the description of this embodiment.

Claims (2)

1. A distributed handwritten digit recognition method based on t mixture factor analysis, characterized in that the method comprises the following steps:
Step 1: data collection and feature extraction. M computers, that is, nodes, form a network whose topology is determined in advance; it need only guarantee that a direct or multi-hop forwarding path exists between any two nodes, and the neighbor set of node m is denoted $R_m$. Each node m (m=1,...,M) collects the raw data of handwritten digits 0~9 (10 classes in total) through an attached handwriting tablet; the tablet automatically records the two-dimensional coordinates of each point on the writing trajectory of each character, and the coordinates of 8 points taken at equal intervals along the trajectory form the feature s corresponding to each raw sample, 16 dimensions in total. The training data set of digit d collected and feature-extracted at node m is denoted $S_m^{(d)} = \{s_{m,n}^{(d)}\}_{n=1,\dots,N_m^{(d)}}$, where $s_{m,n}^{(d)}$ is the n-th training feature of handwritten digit d at node m, of dimension p=16, and $N_m^{(d)}$ is the number of training data of digit d;
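As an illustrative sketch of the feature extraction in step 1 (the function name `extract_feature` and the linear-interpolation resampling are assumptions; the claim only specifies 8 equally spaced points along the trajectory, 16 dimensions in total):

```python
import math

def extract_feature(trajectory):
    """Resample a pen trajectory to 8 points equally spaced along its
    arc length and flatten them into a 16-dimensional feature vector.
    `trajectory` is a list of (x, y) tuples recorded by the tablet."""
    # cumulative arc length at each recorded point
    dists = [0.0]
    for (x0, y0), (x1, y1) in zip(trajectory, trajectory[1:]):
        dists.append(dists[-1] + math.hypot(x1 - x0, y1 - y0))
    total = dists[-1]
    feature = []
    for k in range(8):
        target = total * k / 7          # 8 targets from start to end
        # locate the segment containing `target` and interpolate
        j = 1
        while j < len(dists) - 1 and dists[j] < target:
            j += 1
        seg = dists[j] - dists[j - 1]
        t = 0.0 if seg == 0 else (target - dists[j - 1]) / seg
        (xa, ya), (xb, yb) = trajectory[j - 1], trajectory[j]
        feature.extend([xa + t * (xb - xa), ya + t * (yb - ya)])
    return feature                      # p = 16 dimensions
```

Arc-length resampling makes the feature invariant to writing speed, which is one plausible reading of "equal intervals along the trajectory".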
A common t mixture factor analysis (tMFA) model describes the distribution of the feature data associated with digit d across all nodes. The parameter set of the tMFA is $\{\pi_i, A_i, \mu_i, D_i, \nu_i\}_{i=1,\dots,I}$, where I is the number of mixture components, $\pi_i$ is the weight of the i-th mixture component, $A_i$ is the $(p \times q)$ factor loading matrix of the i-th component, q is the dimension of the low-dimensional factor and takes any integer between p/5 and p/3, $\mu_i$ is the p-dimensional mean vector of the i-th component, $D_i$ is the $(p \times p)$ covariance matrix of the error of the i-th component, and $\nu_i$ is the degrees of freedom of the i-th component;
Step 2: distributed training, which uses $S_m^{(d)}$ to obtain the tMFA parameter set corresponding to each digit class d: $\Theta^{(d)} = \{\pi_i^{(d)}, A_i^{(d)}, \mu_i^{(d)}, D_i^{(d)}, \nu_i^{(d)}\}_{i=1,\dots,I}$ $(d = 0, \dots, 9)$;
Step 3: distributed recognition. When any node in the network collects a new handwritten digit to be recognized, its corresponding feature is first obtained through step 1 above and denoted s'; then the log-likelihood $\log p(s'|\Theta^{(d)})$ of s' under each $\Theta^{(d)}$ $(d = 0, 1, \dots, 9)$ is computed:
$\log p(s'|\Theta^{(d)}) = \log\left(\sum_{i=1}^{I} \pi_i^{(d)} \cdot t\left(s' \mid \mu_i^{(d)}, A_i^{(d)}(A_i^{(d)})^T + D_i^{(d)}, \nu_i^{(d)}\right)\right);$
The digit index with the maximum log-likelihood is taken as the recognition result d' of s':
$d' = \arg\max_{d=0,\dots,9} \log p(s'|\Theta^{(d)}).$
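The recognition side of the claim (the mixture of multivariate Student-t densities and the argmax rule) can be sketched as follows. This is a minimal NumPy sketch, not the patented implementation: the helper names `t_logpdf`, `log_likelihood` and `recognize` are assumptions, and each digit model $\Theta^{(d)}$ is represented as a list of per-component parameter dicts:

```python
import numpy as np
from math import lgamma, log, pi

def t_logpdf(s, mu, sigma, nu):
    # log-density of a p-variate Student-t distribution t(s | mu, sigma, nu)
    p = len(mu)
    diff = np.asarray(s, float) - np.asarray(mu, float)
    delta = diff @ np.linalg.solve(np.asarray(sigma, float), diff)  # Mahalanobis
    _, logdet = np.linalg.slogdet(sigma)
    return (lgamma((nu + p) / 2) - lgamma(nu / 2)
            - 0.5 * (p * log(nu * pi) + logdet)
            - 0.5 * (nu + p) * log(1 + delta / nu))

def log_likelihood(s, theta):
    # log p(s | Theta^(d)) for one digit model; theta is a list of
    # per-component dicts with keys 'pi', 'mu', 'A', 'D', 'nu'
    terms = [log(c['pi']) + t_logpdf(s, c['mu'],
                                     c['A'] @ c['A'].T + c['D'], c['nu'])
             for c in theta]
    m = max(terms)                       # log-sum-exp trick for stability
    return m + log(sum(np.exp(np.array(terms) - m)))

def recognize(s, models):
    # models[d] is Theta^(d); return the digit with maximal log-likelihood
    return max(range(len(models)), key=lambda d: log_likelihood(s, models[d]))
```

The log-sum-exp trick keeps the mixture log-likelihood numerically stable when individual component densities underflow.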
2. The distributed handwritten digit recognition method based on t mixture factor analysis according to claim 1, characterized in that step 2 comprises the following steps:
Step 2-1: initialization. Set the number of mixture components I of the tMFA, and set the initial values $\{\pi_i^0, A_i^0, \mu_i^0, D_i^0, \nu_i^0\}_{i=1,\dots,I}$ of the parameters according to I, p and q. At each node, $(\pi_1^0, \dots, \pi_i^0, \dots, \pi_I^0) = (1/I, \dots, 1/I, \dots, 1/I)$; each $\mu_i^0$ is selected at random from the data collected at that node; each element of $A_i^0$ and $D_i^0$ is generated from the standard normal distribution N(0,1); each $\nu_i^0$ takes any integer between 1 and 5. In addition, each node l (l=1,...,M) broadcasts the number $N_l^{(d)}$ of data it has collected to its neighbor nodes; after a node m has received the data counts broadcast by all of its neighbors, it computes the weights $c_{lm}$:
$c_{lm} = \frac{N_l^{(d)}}{\sum_{l' \in R_m} N_{l'}^{(d)}},$
After initialization, set the iteration counter iter = 1 and start the iterative process;
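The neighbor weighting of step 2-1 amounts to normalizing the broadcast data counts over the neighbor set $R_m$; a minimal sketch (the function name and dict interface are assumptions):

```python
def neighbor_weights(counts, neighbors_m):
    """Weights c_{lm} at node m, from the data counts N_l^{(d)} broadcast
    by its neighbors.  `counts[l]` is N_l^{(d)}; `neighbors_m` lists the
    node indices in R_m.  The weights sum to one over R_m."""
    total = sum(counts[l] for l in neighbors_m)
    return {l: counts[l] / total for l in neighbors_m}
```

Nodes with more local data thus contribute proportionally more to the combined statistics of step 2-3.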
Step 2-2: compute and broadcast the local statistics. At each node l, based on its collected data $S_l^{(d)}$, first compute the intermediate variables $g_i$, $\Omega_i$, $\bar{u}_{l,n,i}^{(d)}$, $W_{l,n,i}^{(d)}$, $\langle z_{l,n,i}^{(d)}\rangle$ and $\langle w_{l,n,i}^{(d)}\rangle$ $(n = 1, \dots, N_l^{(d)}; i = 1, \dots, I)$:
$g_i = \left[A_i^{old}(A_i^{old})^T + D_i^{old}\right]^{-1} A_i^{old},$
$\Omega_i = I_q - g_i^T A_i^{old},$
$\bar{u}_{l,n,i}^{(d)} = g_i^T \left(s_{l,n}^{(d)} - \mu_i^{old}\right),$
$W_{l,n,i}^{(d)} = \left(s_{l,n}^{(d)} - \mu_i^{old}\right)^T \cdot \left[A_i^{old}(A_i^{old})^T + D_i^{old}\right]^{-1} \cdot \left(s_{l,n}^{(d)} - \mu_i^{old}\right),$
$\langle z_{l,n,i}^{(d)}\rangle = \frac{\pi_i^{old} \cdot t\left(s_{l,n}^{(d)} \mid \mu_i^{old}, A_i^{old}(A_i^{old})^T + D_i^{old}, \nu_i^{old}\right)}{\sum_{i'=1}^{I} \pi_{i'}^{old} \cdot t\left(s_{l,n}^{(d)} \mid \mu_{i'}^{old}, A_{i'}^{old}(A_{i'}^{old})^T + D_{i'}^{old}, \nu_{i'}^{old}\right)},$
$\langle w_{l,n,i}^{(d)}\rangle = \frac{\nu_i^{old} + p}{\nu_i^{old} + W_{l,n,i}^{(d)}},$
where $\{\pi_i^{old}, A_i^{old}, \mu_i^{old}, D_i^{old}, \nu_i^{old}\}$ are the parameter values obtained after the previous iteration, that is, the initial parameter values at the first iteration; $\langle z_{l,n,i}^{(d)}\rangle$ is the probability that the n-th datum at node l belongs to the i-th class, that is, the i-th mixture component, and $\langle w_{l,n,i}^{(d)}\rangle$ is the expectation of the hidden scale variable in the tMFA;
With the above intermediate variables, the node computes the local statistics (LS) $LS_l = \{LS_l^{(1)}[i], LS_l^{(2)}[i], LS_l^{(3)}[i], LS_l^{(4)}[i], LS_l^{(5)}[i]\}_{i=1,\dots,I}$, comprising:
$LS_l^{(1)}[i] = \sum_{n=1}^{N_l^{(d)}} \langle z_{l,n,i}^{(d)}\rangle,$
$LS_l^{(2)}[i] = \sum_{n=1}^{N_l^{(d)}} \langle z_{l,n,i}^{(d)}\rangle \cdot \langle w_{l,n,i}^{(d)}\rangle,$
$LS_l^{(3)}[i] = \sum_{n=1}^{N_l^{(d)}} \langle z_{l,n,i}^{(d)}\rangle \cdot \langle w_{l,n,i}^{(d)}\rangle \cdot s_{l,n}^{(d)},$
$LS_l^{(4)}[i] = \sum_{n=1}^{N_l^{(d)}} \langle z_{l,n,i}^{(d)}\rangle \cdot \langle w_{l,n,i}^{(d)}\rangle \cdot s_{l,n}^{(d)} \cdot (s_{l,n}^{(d)})^T,$
$LS_l^{(5)}[i] = \sum_{n=1}^{N_l^{(d)}} \langle z_{l,n,i}^{(d)}\rangle \cdot \log\langle w_{l,n,i}^{(d)}\rangle,$
Finally, each node l broadcasts the computed local statistics $LS_l$ to its neighbor nodes;
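Given the responsibilities $\langle z\rangle$ and scale expectations $\langle w\rangle$ from the formulas above, the five local statistics of one mixture component reduce to weighted sums over the node's data; a NumPy sketch (the interface is an assumption):

```python
import numpy as np

def local_statistics(S, z, w):
    """Five local statistics LS_l for one mixture component i.
    S is the node's data (N x p); z and w are the per-datum
    responsibilities <z_{l,n,i}> and scale expectations <w_{l,n,i}>."""
    zw = z * w
    LS1 = z.sum()                       # sum_n <z>
    LS2 = zw.sum()                      # sum_n <z><w>
    LS3 = zw @ S                        # sum_n <z><w> s        (p-vector)
    LS4 = (S.T * zw) @ S                # sum_n <z><w> s s^T    (p x p)
    LS5 = (z * np.log(w)).sum()         # sum_n <z> log<w>
    return LS1, LS2, LS3, LS4, LS5
```

Only these fixed-size sums, never the raw data, are broadcast, which is the source of the communication and privacy advantage over centralized tMFA.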
Step 2-3: compute the combined statistics. After node m (m=1,...,M) has received $LS_l$ from all of its neighbor nodes l (l ∈ R_m), it computes the combined statistics $CS_m = \{CS_m^{(1)}[i], CS_m^{(2)}[i], CS_m^{(3)}[i], CS_m^{(4)}[i], CS_m^{(5)}[i]\}_{i=1,\dots,I}$:
$CS_m^{(j)}[i] = \sum_{l \in R_m} c_{lm} \cdot LS_l^{(j)}[i], \quad j = 1, \dots, 5;$
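The combination step is a convex combination of the neighbors' statistics with the weights $c_{lm}$; sketched (function name assumed) as:

```python
def combine_statistics(ls_from_neighbors, c):
    """CS_m[j] = sum over l in R_m of c_{lm} * LS_l[j], for j = 1..5.
    `ls_from_neighbors[l]` is node l's tuple of five statistics and
    `c[l]` is the weight c_{lm} computed in step 2-1."""
    nodes = list(ls_from_neighbors)
    return tuple(sum(c[l] * ls_from_neighbors[l][j] for l in nodes)
                 for j in range(5))
```

Because the weights sum to one over $R_m$, each combined statistic is a weighted average of what the neighborhood observed, which is what lets every node run the same M-step locally.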
Step 2-4: estimate the model parameters. Node m (m=1,...,M) estimates $\Theta = \{\pi_i, A_i, \mu_i, D_i, \nu_i\}_{i=1,\dots,I}$ from the $CS_m$ computed in the previous step, where the estimation of $\{\pi_i, \mu_i\}_{i=1,\dots,I}$ comprises:
$\pi_i = \frac{CS_m^{(1)}[i]}{\sum_{i'=1}^{I} CS_m^{(1)}[i']},$
$\mu_i = \frac{CS_m^{(3)}[i]}{CS_m^{(2)}[i]};$
For the estimation of $\{A_i, D_i\}_{i=1,\dots,I}$, the process comprises:
$V_i = \frac{CS_m^{(4)}[i] - 2\,CS_m^{(3)}[i] \cdot \mu_i^T + CS_m^{(2)}[i] \cdot \mu_i \mu_i^T}{CS_m^{(1)}[i]},$
$A_i = V_i g_i \left(g_i^T V_i g_i + \Omega_i\right)^{-1},$
$D_i = \mathrm{diag}\left\{V_i - A_i \left(g_i^T V_i g_i + \Omega_i\right) A_i^T\right\};$
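The updates of step 2-4 for one component can be sketched as follows (NumPy; the function interface is an assumption, and $\pi_i$ is omitted because it only needs the normalization of $CS^{(1)}$ across all components):

```python
import numpy as np

def update_component(CS1, CS2, CS3, CS4, g, Omega):
    """M-step for one mixture component: recover mu_i, V_i, A_i, D_i
    from the combined statistics and the E-step quantities g_i, Omega_i."""
    mu = CS3 / CS2                             # mu_i = CS3[i] / CS2[i]
    V = (CS4 - 2 * np.outer(CS3, mu)
         + CS2 * np.outer(mu, mu)) / CS1       # weighted scatter matrix
    M = g.T @ V @ g + Omega                    # shared (q x q) factor
    A = V @ g @ np.linalg.inv(M)               # factor loading update
    D = np.diag(np.diag(V - A @ M @ A.T))      # keep diagonal residual only
    return mu, V, A, D
```

Since $A_i = V_i g_i M^{-1}$, the residual simplifies to $\mathrm{diag}\{V_i - V_i g_i A_i^T\}$, so the two forms of the $D_i$ update are algebraically equivalent.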
In addition, $\{\nu_i\}_{i=1,\dots,I}$ is obtained by solving the following equation:
$\log\left(\frac{\nu_i}{2}\right) - \psi\left(\frac{\nu_i}{2}\right) + 1 + \frac{CS_m^{(5)}[i] - CS_m^{(2)}[i]}{CS_m^{(1)}[i]} - \log\left(\frac{\nu_i^{old} + p}{2}\right) + \psi\left(\frac{\nu_i^{old} + p}{2}\right) = 0,$
where $\psi(\cdot)$ is the standard digamma function; Newton's method is used when actually solving the equation;
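A dependency-free sketch of the degrees-of-freedom update: the digamma function is approximated by a central difference of the standard-library log-gamma, Newton's method uses a finite-difference derivative, and the sign of the $(CS^{(5)}[i] - CS^{(2)}[i])/CS^{(1)}[i]$ term follows the standard ECM update for t-mixtures (helper names are assumptions):

```python
from math import lgamma, log

def digamma(x, h=1e-6):
    # numerical digamma via central difference of log-gamma (stdlib only)
    return (lgamma(x + h) - lgamma(x - h)) / (2 * h)

def solve_nu(CS1, CS2, CS5, nu_old, p, tol=1e-10):
    """Solve the degrees-of-freedom equation of step 2-4 for nu_i by
    Newton's method with a finite-difference derivative."""
    const = (1 + (CS5 - CS2) / CS1
             - log((nu_old + p) / 2) + digamma((nu_old + p) / 2))
    f = lambda nu: log(nu / 2) - digamma(nu / 2) + const
    nu = nu_old                        # warm start from previous value
    for _ in range(100):
        h = 1e-6 * max(nu, 1.0)
        fp = (f(nu + h) - f(nu - h)) / (2 * h)   # numerical derivative
        nu_new = max(nu - f(nu) / fp, 1e-3)      # keep nu positive
        if abs(nu_new - nu) < tol:
            break
        nu = nu_new
    return nu
```

The left-hand side is monotone in $\nu_i$, so Newton iterations with a positivity clamp converge reliably from the previous iterate.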
Step 2-5: check convergence. Node m (m=1,...,M) computes the log-likelihood under the current iteration:
$\log p(S_m^{(d)}|\Theta) = \sum_{n=1}^{N_m^{(d)}} \log\left(\sum_{i=1}^{I} \pi_i \cdot t\left(s_{m,n}^{(d)} \mid \mu_i, A_i A_i^T + D_i, \nu_i\right)\right),$
If $\left|\log p(S_m^{(d)}|\Theta) - \log p(S_m^{(d)}|\Theta^{old})\right| < \varepsilon$, the algorithm has converged and the iteration stops; otherwise step 2-2 is performed again, starting the next iteration (iter = iter + 1). Here Θ denotes the parameter values estimated in the current iteration and $\Theta^{old}$ those estimated in the previous iteration; that is, the algorithm converges when the change in log-likelihood between two adjacent iterations is less than the threshold ε, which takes any value in $10^{-6} \sim 10^{-5}$. Because the nodes in the network process data in parallel, they cannot all converge in the same iteration. If node l has converged while node m has not, node l no longer sends $LS_l$ and no longer receives information from its neighbors; node m then updates its $CS_m$ with the $LS_l$ last received from node l. Nodes that have not converged continue iterating until all nodes in the network have converged;
After steps 2-1 to 2-5 above, the tMFA model obtained from the training data of handwritten digit d is represented by the parameters Θ at training convergence. Repeating the above steps 10 times yields the tMFA models of the 10 digits; for convenience of notation, and to distinguish them, $\Theta^{(d)} = \{\pi_i^{(d)}, A_i^{(d)}, \mu_i^{(d)}, D_i^{(d)}, \nu_i^{(d)}\}_{i=1,\dots,I}$ $(d = 0, 1, \dots, 9)$ denotes the tMFA model corresponding to digit d. Distributed training is then complete.
CN201510415750.XA 2015-07-15 2015-07-15 A distributed handwritten digit recognition method based on t mixture factor analysis Expired - Fee Related CN104992188B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510415750.XA CN104992188B (en) 2015-07-15 2015-07-15 A distributed handwritten digit recognition method based on t mixture factor analysis


Publications (2)

Publication Number Publication Date
CN104992188A true CN104992188A (en) 2015-10-21
CN104992188B CN104992188B (en) 2018-04-20

Family

ID=54304001

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510415750.XA Expired - Fee Related CN104992188B (en) 2015-07-15 2015-07-15 A distributed handwritten digit recognition method based on t mixture factor analysis

Country Status (1)

Country Link
CN (1) CN104992188B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1604121A (en) * 2003-09-29 2005-04-06 阿尔卡特公司 Method, system, client, server for distributed handwriting recognition
EP2515257A1 (en) * 2009-12-15 2012-10-24 Fujitsu Frontech Limited Character recognition method, character recognition device, and character recognition program


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WANG W L et al.: "An efficient ECM algorithm for maximum likelihood estimation in mixtures of t-factor analyzers", ACM *
SHUANG Xiaochuan et al.: "Research on handwritten digit recognition based on statistical and structural features", Computer Engineering and Design *

Also Published As

Publication number Publication date
CN104992188B (en) 2018-04-20


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20201201

Address after: Room 214, building D5, No. 9, Kechuang Avenue, Zhongshan Science and Technology Park, Jiangbei new district, Nanjing, Jiangsu Province

Patentee after: Nanjing Tian Gu Information Technology Co.,Ltd.

Address before: Yuen Road Qixia District of Nanjing City, Jiangsu Province, No. 9 210023

Patentee before: NANJING University OF POSTS AND TELECOMMUNICATIONS

Effective date of registration: 20201201

Address after: Gulou District of Nanjing City, Jiangsu Province, Beijing Road No. 20 210024

Patentee after: STATE GRID JIANGSU ELECTRIC POWER Co.,Ltd. INFORMATION & TELECOMMUNICATION BRANCH

Address before: Room 214, building D5, No. 9, Kechuang Avenue, Zhongshan Science and Technology Park, Jiangbei new district, Nanjing, Jiangsu Province

Patentee before: Nanjing Tian Gu Information Technology Co.,Ltd.

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20180420