CN105550704B - Distributed high-dimensional data classification method based on a mixture of common factor analyzers - Google Patents

Distributed high-dimensional data classification method based on a mixture of common factor analyzers

Info

Publication number
CN105550704B
CN105550704B (application number CN201510916426.6A)
Authority
CN
China
Prior art keywords
data
node
high dimensional
class
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201510916426.6A
Other languages
Chinese (zh)
Other versions
CN105550704A (en)
Inventor
魏昕
丁平船
张胜男
周亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Tian Gu Information Technology Co ltd
Information and Telecommunication Branch of State Grid Jiangsu Electric Power Co Ltd
Original Assignee
Nanjing Post and Telecommunication University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Post and Telecommunication University
Priority to CN201510916426.6A
Publication of CN105550704A
Application granted
Publication of CN105550704B
Legal status: Expired - Fee Related
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Complex Calculations (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a distributed high-dimensional data classification method based on a mixture of common factor analyzers. The method classifies high-dimensional data in a distributed way through cooperation between different nodes in a network: the distribution of the high-dimensional data at each node is modeled with a mixture of common factor analyzers, and the whole procedure is divided into a training part and a recognition part. During training, each node first initializes the model parameters and performs local computation, then calculates three groups of intermediate variables and broadcasts them; when a node receives the intermediate variables broadcast by its neighbor nodes, it computes joint statistics and completes the parameter estimation, and this process iterates until convergence. In the recognition phase, the data to be classified are input to any node, the log-likelihood under the trained model of each class is computed, and the class corresponding to the maximum log-likelihood is returned as the recognition result. The method achieves distributed dimensionality reduction of high-dimensional data, every node in the network obtains high and consistent classification performance, and, because only intermediate variables are exchanged between nodes, the privacy of the data is effectively protected.

Description

Distributed high-dimensional data classification method based on a mixture of common factor analyzers
Technical field
The present invention relates to a distributed high-dimensional data classification method based on a mixture of common factor analyzers, and belongs to the technical field of data processing and applications.
Background art
With the continuous development of acquisition and storage technology, both the dimensionality and the volume of data keep growing, and high-dimensional big data emerge constantly. Examples include facial images, videos and web page text in large-scale image retrieval and content-based document retrieval, the high-dimensional feature vectors that inevitably arise in speech and audio signal processing, and gene data in the cluster analysis of biological tissue in bioinformatics. Clearly, higher dimensionality and larger data volume allow the described objects to be characterized more completely and distinguished more reliably. However, excessive dimensionality and data volume also bring a heavy processing and transmission burden, especially in sensor networks, where the storage, processing and communication capabilities of an individual node are all extremely limited; this places new and higher requirements on the design of data analysis and processing methods. Specifically, on the one hand, for high-dimensional data or features, traditional models and their estimation algorithms easily run into the "curse of dimensionality", which makes the related problems difficult to understand and represent and makes visualization nearly impossible. How to analyze and process high-dimensional data accurately and efficiently has therefore become an extremely challenging fundamental research problem. On the other hand, when the data volume is very large, a single sensor node often cannot complete the analysis and processing task on its own; the data must then be divided into different parts stored on multiple sensor nodes, which accomplish the specified task jointly through reasonable communication and cooperation. How to design such a cooperative processing strategy for big data is also a problem to be solved.
Classification refers to the process of dividing data into multiple classes by some method; in the field of machine learning, classifying data is a supervised learning process. A large number of classification methods have appeared in the existing literature and patents. However, when the data volume is very large or the processing capability of an individual node is limited, the data need to be distributed over multiple nodes, and how to complete the processing in a distributed fashion then becomes crucial. The method proposed in this patent is designed precisely to solve this problem: (1) the mixture-of-factor-analyzers model can process high-dimensional data effectively; (2) by designing the cooperation mode between nodes, only intermediate results are transmitted while satisfactory results are still obtained, which, compared with transmitting the raw data, both reduces the communication overhead and protects the private information in the data, ensuring data security in the network.
Summary of the invention
The present invention aims to overcome the defects of the above-mentioned prior art and proposes a distributed high-dimensional data classification method based on a mixture of common factor analyzers, which comprises the following steps:
Step 1: the acquisition of data;
Suppose M nodes form a network; the data collected at each node come from V classes, and the data dimension is p. Among all the data collected at node m, the samples coming from the v-th class form one data set, whose n-th element is the n-th training sample of the v-th class at node m and whose size is the number of training samples of that class; in addition, the set of neighbor nodes of node m is denoted R_m.
Step 2, training: for the data from the v-th class gathered over all nodes, a mixture of common factor analyzers (MCFA) is used to describe their distribution, and the model is trained in a distributed way to estimate its parameters. In the same way, the MCFA parameter set Θ^(v) (v = 1, ..., V) corresponding to each class of data is estimated, and the training process is completed;
Step 3, recognition: when any node in the network collects a new datum x' to be recognized, it computes the log-likelihood log p(x'|Θ^(v)) of x' with respect to each Θ^(v) (v = 1, ..., V):
The class index corresponding to the maximum log-likelihood is taken as the recognition result v' of x'.
The training process for the data of the v-th class described in step 2 of the present invention includes the following (for brevity, and without affecting understanding or implementation, the superscript "(v)" is omitted in the symbols below).
Step 2-1, initialization: set the initial parameter values of the MCFA, where, at each node, (w_1, ..., w_g, ..., w_G) = (1/G, ..., 1/G, ..., 1/G); every element of the matrices L and E is generated from the standard normal distribution N(0,1); every element of {ξ_1, ..., ξ_g, ..., ξ_G} is generated from the standard normal distribution N(0,1); and Ω_1 = ... = Ω_g = ... = Ω_G = I_q, where I_q is the (q × q) identity matrix.
Step 2-2, broadcasting the data counts: each node l (l = 1, 2, ..., M) broadcasts the number N_l of data items it has collected to its neighbor nodes. After node m has received the data counts broadcast by all of its neighbors, it computes the weight coefficient c_lm:
In addition, the iteration counter is set to iter = 1 and the iterative process starts;
Step 2-3, local computation: at each node l in the sensor network, using the data X_l at the current node and the parameter values Θ_old estimated after the previous iteration (when iter = 1, Θ_old is the initialized parameter value), compute a_{l,n,g}, b_g, h_g and the related quantities according to the following formulas:
where
Step 2-4, broadcast diffusion: each node l in the sensor network computes three groups of intermediate variables, namely:
These are placed into a single data packet, which is then broadcast and diffused to the other nodes.
Step 2-5, joint computation: after node m (m = 1, ..., M) has received the data packets containing the intermediate variables sent by all of its neighbor nodes l (l ∈ R_m), it computes the joint statistics, namely:
Step 2-6, parameter estimation: node m (m = 1, ..., M) estimates the new parameter values from the joint statistics computed in step 2-5 and the quantities computed in step 2-3, namely:
Step 2-7, convergence test: node m (m = 1, ..., M) computes the log-likelihood under the current iteration, namely:
where Θ_new denotes the parameter values estimated in the current iteration and Θ_old denotes the parameter values estimated in the previous iteration. If log p(X_m|Θ_new) - log p(X_m|Θ_old) < ε, where ε = 10^-5, node m enters the final state; otherwise, the procedure returns to step 2-3 and the next iteration starts.
After the above steps 2-1 to 2-7, Θ^(v) has been estimated.
The method of the present invention is applicable to the parallel and distributed processing of data.
Beneficial effects:
1. The mixture of common factor analyzers used by the present invention can reduce the dimensionality of high-dimensional data, so that the modeling of the data is accomplished smoothly while the dimension is reduced, better classification performance is obtained, and the computational complexity is lowered. In addition, the present invention only transmits intermediate computation results rather than the raw data, which greatly protects the privacy of the transmitted data.
2. With the training and recognition process based on the mixture of common factor analyzers adopted by the present invention, each node in the network can make full use of the information contained in the data of the other nodes, so that the classification performance is substantially better than that of the centralized method.
Brief description of the drawings
Fig. 1 is a flow chart of the distributed high-dimensional data classification method based on a mixture of common factor analyzers according to the present invention.
Fig. 2 is a schematic diagram of the qualitative comparison of the classification performance of the method according to the present invention with that of other methods.
Fig. 3 is a schematic diagram of the quantitative comparison of the classification performance of the method according to the present invention with that of other methods.
Specific embodiment
The invention is described in further detail below with reference to the accompanying drawings.
As shown in Figures 1-3, the present invention provides a distributed high-dimensional data classification method based on a mixture of common factor analyzers, which comprises the following steps:
Step 1: the acquisition of data;
Suppose M computers/computing nodes (hereinafter: nodes) form a network; the data collected at each node come from V classes, and the data dimension is p. Among all the data collected at node m, the samples coming from the v-th class form one data set, whose n-th element is the n-th training sample of the v-th class at node m and whose size is the number of training samples of that class.
In addition, the data transmission range of each node is set to Dis; for the current node m, all nodes whose distance to m is smaller than Dis are its neighbor nodes, and the set of neighbor nodes of node m is denoted R_m. In the present invention, the connection relationship between nodes (the network topology) is determined in advance; it is only required that between any two nodes there is at least one path, either direct or reachable through multiple hops.
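As an illustrative sketch only (not taken from the filing), the following Python fragment shows one way such a network and per-node, per-class data sets could be laid out; the node coordinates, the value of Dis and the per-node sample counts are all hypothetical.

    import numpy as np

    M, V, p, Dis = 15, 10, 16, 0.5                 # example sizes; Dis is the transmission range
    rng = np.random.default_rng(0)
    positions = rng.random((M, 2))                 # hypothetical node coordinates
    # neighbor set R_m: every other node closer than Dis is a neighbor of node m
    R = [[l for l in range(M)
          if l != m and np.linalg.norm(positions[m] - positions[l]) < Dis]
         for m in range(M)]
    # X[m][v]: array of shape (N_mv, p) holding node m's training data for class v
    X = [[rng.random((20, p)) for v in range(V)] for m in range(M)]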
Step 2: training;
For the data set coming from the v-th class over all nodes, a mixture of common factor analyzers (mixture of common factor analyzers, abbreviated MCFA) is used to describe its distribution. The parameter set of the MCFA model associated with the v-th class contains the mixture weights w_g together with, for each component g, ξ_g (a q-dimensional vector) and Ω_g (a (q × q) matrix), which are respectively the mean and covariance matrix of the Gaussian distribution obeyed by the q-dimensional factors corresponding to the p-dimensional data; q is an arbitrary integer between p/8 and p/2. Training is completed in the following distributed way, taking the training of the v-th class of data as an example; for brevity, and without affecting understanding or implementation, the superscript "(v)" is omitted in the steps below. The assumed form of the mixture density is sketched first, followed by training steps 2-1 to 2-7.
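The following fragment is a minimal sketch of a standard mixture-of-common-factor-analyzers density, assuming L is the common p × q loading matrix and D the diagonal noise covariance associated with E; the function name and argument layout are illustrative, not prescribed by the filing.

    import numpy as np
    from scipy.stats import multivariate_normal

    def mcfa_density(x, w, L, D, xi, Omega):
        """p(x) = sum_g w_g * N(x; L xi_g, L Omega_g L^T + D), with D a (p, p) diagonal matrix."""
        return sum(w[g] * multivariate_normal.pdf(x, mean=L @ xi[g],
                                                  cov=L @ Omega[g] @ L.T + D)
                   for g in range(len(w)))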
Step 2-1, initialization: set the initial parameter values of the MCFA, where, at each node, (w_1, ..., w_g, ..., w_G) = (1/G, ..., 1/G, ..., 1/G); every element of the matrices L and E is generated from the standard normal distribution N(0,1); every element of {ξ_1, ..., ξ_g, ..., ξ_G} is generated from the standard normal distribution N(0,1); and Ω_1 = ... = Ω_g = ... = Ω_G = I_q, where I_q is the (q × q) identity matrix.
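A possible realization of this initialization is sketched below; the shape of E is assumed, since its exact definition is given by formulas not reproduced in this text.

    import numpy as np

    def init_mcfa(G, p, q, rng=np.random.default_rng()):
        w = np.full(G, 1.0 / G)              # (w_1, ..., w_G) = (1/G, ..., 1/G)
        L = rng.standard_normal((p, q))      # common loading matrix, elements from N(0, 1)
        E = rng.standard_normal((p, p))      # noise-related matrix, elements from N(0, 1); shape assumed
        xi = rng.standard_normal((G, q))     # factor-space means xi_g
        Omega = np.stack([np.eye(q)] * G)    # Omega_1 = ... = Omega_G = I_q
        return w, L, E, xi, Omega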
Step 2-2, broadcasting the data counts: each node l (l = 1, 2, ..., M) broadcasts the number N_l of data items it has collected to its neighbor nodes. After node m has received the data counts broadcast by all of its neighbors, it computes the weight coefficient c_lm:
The meaning of this weight is to measure, at node m, the importance of the information transmitted each time by each neighbor node l (l ∈ R_m). In addition, the iteration counter is set to iter = 1 and the iterative process starts.
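The exact expression for c_lm is not reproduced above; purely as a hedged illustration of one choice consistent with the description (a neighbor's weight proportional to its broadcast data count N_l), node m could compute:

    def weight_coeffs(N, R, m):
        """Return {l: c_lm} for every neighbor l of node m, assuming c_lm is proportional to N_l."""
        total = sum(N[l] for l in R[m])
        return {l: N[l] / total for l in R[m]}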
Step 2-3, local computation: at each node l in the sensor network, using the data X_l at the current node and the parameter values Θ_old estimated after the previous iteration (when iter = 1, Θ_old is the initialized parameter value), compute a_{l,n,g}, b_g, h_g and the related quantities:
where:
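The defining formulas for a_{l,n,g}, b_g and h_g are not reproduced in this text; as a hedged illustration only, in a standard MCFA E-step the quantity playing the role of a_{l,n,g} would be the posterior responsibility of component g for sample x_{l,n}, which could be computed as follows.

    import numpy as np
    from scipy.stats import multivariate_normal

    def responsibilities(Xl, w, L, D, xi, Omega):
        """a[n, g] = w_g N(x_n; L xi_g, L Omega_g L^T + D) / sum_h w_h N(x_n; L xi_h, L Omega_h L^T + D)."""
        dens = np.column_stack([
            w[g] * multivariate_normal.pdf(Xl, mean=L @ xi[g], cov=L @ Omega[g] @ L.T + D)
            for g in range(len(w))])
        return dens / dens.sum(axis=1, keepdims=True)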
Step 2-4, broadcast diffusion: each node l in the sensor network computes three groups of intermediate variables, namely:
These are placed into a single data packet, which is then broadcast and diffused to the other nodes.
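Schematically (the packet layout below is an assumption, not specified in the filing), the broadcast step amounts to packing the three groups of locally computed statistics into one message per node:

    def make_packet(node_id, group1, group2, group3):
        """Pack node l's three groups of intermediate variables into a single broadcast packet."""
        return {"node": node_id, "stats": (group1, group2, group3)}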
Step 2-5, joint computation: after node m (m = 1, ..., M) has received the data packets containing the intermediate variables sent by all of its neighbor nodes l (l ∈ R_m), it computes the joint statistics, namely:
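A minimal sketch of this combination step, assuming each group of intermediate variables is a NumPy array and that neighbor l's contribution is weighted by the coefficient c_lm from step 2-2 (the exact combination formulas are not reproduced here):

    def combine(packets, c):
        """Weighted sum of the neighbors' intermediate variables: joint_k = sum_l c_lm * stats_l[k]."""
        joint = None
        for pkt in packets:                       # one packet per neighbor l in R_m
            weighted = [c[pkt["node"]] * s for s in pkt["stats"]]
            joint = weighted if joint is None else [j + w for j, w in zip(joint, weighted)]
        return joint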
Step 2-6, parameter estimation: node m (m = 1, ..., M) estimates the new parameter values from the joint statistics computed in step 2-5 and the quantities computed in step 2-3.
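The update equations themselves are not reproduced in this text; as one hedged example of what such an M-step update typically looks like, the mixture weights would be re-estimated from the combined responsibility sums:

    import numpy as np

    def update_weights(resp_sums):
        """resp_sums: length-G vector of combined summed responsibilities; returns mixture weights w_g."""
        resp_sums = np.asarray(resp_sums, dtype=float)
        return resp_sums / resp_sums.sum()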
Step 2-7, convergence test: node m (m = 1, ..., M) computes the log-likelihood under the current iteration, namely:
where Θ_new denotes the parameter values estimated in the current iteration and Θ_old denotes the parameter values estimated in the previous iteration. If log p(X_m|Θ_new) - log p(X_m|Θ_old) < ε, where ε = 10^-5, node m enters the final state; otherwise, the procedure returns to step 2-3 and the next iteration starts.
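The stopping rule can be sketched directly from the description; loglik_fn here stands for whatever routine evaluates log p(X_m|Θ), for example a sum of per-sample MCFA log-densities.

    def converged(Xm, loglik_fn, theta_new, theta_old, eps=1e-5):
        """Stop when the log-likelihood gain between consecutive iterations falls below eps."""
        return loglik_fn(Xm, theta_new) - loglik_fn(Xm, theta_old) < eps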
After steps 2-1 to 2-7, Θ^(v) has been estimated. In the same way, the MCFA parameter set Θ^(v) (v = 1, ..., V) corresponding to each class of data is estimated, and the training process is completed.
Step 3: recognition. When any node in the network (say node m) collects a new datum x' to be recognized, it computes the log-likelihood log p(x'|Θ^(v)) of x' with respect to each Θ^(v) (v = 1, ..., V), namely:
The class index corresponding to the maximum log-likelihood is taken as the recognition result v' of x'.
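A minimal sketch of this decision rule, reusing the illustrative mcfa_density function sketched above; class_params[v] collects the trained parameters of class v, and the container layout is an assumption.

    import numpy as np

    def classify(x_new, class_params):
        """Return v' = argmax_v log p(x' | Theta_v) over the V trained class models."""
        logliks = [np.log(mcfa_density(x_new, *theta)) for theta in class_params]
        return int(np.argmax(logliks))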
The performance evaluation of the present invention is as follows:
Suppose 15 nodes form a network, and it is guaranteed that any two nodes in the network can communicate with each other directly or indirectly. The 3844 handwritten-digit (0-9) training samples collected in the UCI data set are randomly assigned to the nodes; the feature of each training sample is formed by the two-dimensional coordinates of 8 points taken at equal intervals along the digit's handwriting trajectory, 16 dimensions in total. Thus M = 15, p = 16, and q is taken as 6. Θ^(0), Θ^(1), ..., Θ^(9) are obtained with the training method of the present invention. After training is completed, in order to test the overall recognition rate of all nodes, each test sample x' (3500 in total) is recognized separately on every node using the recognition procedure of the present invention. Since the true value of each digit to be recognized is known, the recognition result obtained with the method according to the present invention is compared with the true value to obtain the recognition accuracy (that is, recognition accuracy = number of handwritten digits correctly recognized over all nodes / (15 × 3500)), so that the validity and accuracy of the method according to the present invention can be evaluated and measured.
To compare the performance of the proposed MCFA-based distributed classification method (abbreviated D-MCFA) with that of other methods, it is compared here with the centralized MCFA-based handwritten digit recognition method (C-MCFA), the MCFA-based handwritten digit recognition method without cooperation between nodes (N-MCFA), and the handwritten digit recognition method based on distributed Gaussian mixture models (D-GMM). It should be noted that in C-MCFA all nodes must transmit their raw data to a central node, which performs training and recognition with the traditional MCFA and then returns the results to every node; this approach is rarely used in practice, first because transmitting raw data incurs a very large communication overhead and a single lost or damaged packet has a large impact on the final recognition performance, and second because it is detrimental to the privacy protection of the data and to network security. The recognition results are presented both qualitatively and quantitatively. For the qualitative presentation, the Hinton diagram of the confusion matrix is used, as shown in Fig. 2. In this figure, each column indicates the recognition result for digits 0-9 and each row indicates the true value of digits 0-9; the small squares on the main diagonal indicate correct recognitions, a larger square meaning more correct recognitions, while squares at other positions indicate misrecognitions. It can be seen from this figure that C-MCFA and the D-MCFA of the present invention (only one node is shown here; the results of the other nodes are identical) perform well, whereas D-GMM performs worse and N-MCFA performs worst. For the quantitative presentation, the mean and variance of the recognition rate are used as the two indices, as shown in Fig. 3. In this figure, the recognition accuracy and variance of the D-MCFA designed by the present invention are essentially identical to those of C-MCFA, the variance of the recognition accuracy over the nodes is low, and both the recognition accuracy and the variance of N-MCFA and D-GMM are poor. In addition, the distributed D-MCFA of the present invention is compared with an existing distributed handwritten digit recognition method based on mixtures of t-factor analyzers; the results show that the recognition rate of the method of the present invention is comparable to that of the t-mixture-based distributed method, but D-MCFA needs 180 seconds to complete training whereas the t-mixture-based distributed method needs 420 seconds, so D-MCFA improves the computation speed.

Claims (3)

1. A distributed high-dimensional data classification method based on a mixture of common factor analyzers, characterized in that the method comprises:
Step 1, data acquisition:
Suppose M nodes form a network; the data collected at each node come from V classes, and the data dimension is p; among all the data collected at node m, the samples coming from the v-th class form one data set, whose n-th element is the n-th training sample of the v-th class at node m and whose size is the number of training samples of the v-th class; in addition, the set of neighbor nodes of node m is denoted R_m;
Step 2, training: for the data from the v-th class gathered over all nodes, a mixture of common factor analyzers (MCFA) is used to describe their distribution, and the model is trained in a distributed way to estimate its parameters; in the same way, the MCFA parameter set Θ^(v) (v = 1, ..., V) corresponding to each class of data is estimated;
wherein w_g is the mixture weight, ξ_g (a q-dimensional vector) and Ω_g (a (q × q) matrix) are respectively the mean and covariance matrix of the Gaussian distribution obeyed by the q-dimensional factors corresponding to the p-dimensional data, q is an arbitrary integer between p/8 and p/2, and every element of the matrices L and E is generated from the standard normal distribution N(0,1);
Step 3, recognition: when any node in the network collects a new datum x' to be recognized, it computes the log-likelihood log p(x'|Θ^(v)) of x' with respect to each Θ^(v) (v = 1, ..., V):
The class index corresponding to the maximum log-likelihood is taken as the recognition result v' of x'.
2. The distributed high-dimensional data classification method based on a mixture of common factor analyzers according to claim 1, characterized in that the training process for the data of the v-th class in step 2 comprises:
Step 2-1, initialization: set the initial parameter values of the MCFA, where, at each node, (w_1, ..., w_g, ..., w_G) = (1/G, ..., 1/G, ..., 1/G); every element of the matrices L and E is generated from the standard normal distribution N(0,1); every element of {ξ_1, ..., ξ_g, ..., ξ_G} is generated from the standard normal distribution N(0,1); and Ω_1 = ... = Ω_g = ... = Ω_G = I_q, where I_q is the (q × q) identity matrix;
Step 2-2, broadcasting the data counts: each node l (l = 1, 2, ..., M) broadcasts the number N_l of data items it has collected to its neighbor nodes; after node m has received the data counts broadcast by all of its neighbors, it computes the weight coefficient c_lm:
in addition, the iteration counter is set to iter = 1 and the iterative process starts;
Step 2-3, local computation: at each node l in the sensor network, using the data X_l at the current node and the parameter values Θ_old estimated after the previous iteration (when iter = 1, Θ_old is the initialized parameter value), compute a_{l,n,g}, b_g, h_g and the related quantities according to the following formulas:
where
Step 2-4, broadcast diffusion: each node l in the sensor network computes three groups of intermediate variables, places them into a single data packet, and then broadcasts and diffuses the data packet to the other nodes;
Step 2-5, joint computation: after node m (m = 1, ..., M) has received the data packets containing the intermediate variables sent by all of its neighbor nodes l (l ∈ R_m), it computes the joint statistics;
Step 2-6, parameter estimation: node m (m = 1, ..., M) estimates the new parameter values from the joint statistics computed in step 2-5 and the quantities computed in step 2-3;
Step 2-7, convergence test: node m (m = 1, ..., M) computes the log-likelihood under the current iteration, namely:
where Θ_new denotes the parameter values estimated in the current iteration and Θ_old denotes the parameter values estimated in the previous iteration; if log p(X_m|Θ_new) - log p(X_m|Θ_old) < ε, where ε = 10^-5, node m enters the final state; otherwise, the procedure returns to step 2-3 to start the next iteration;
After steps 2-1 to 2-7, Θ^(v) has been estimated, and the training of the data of the v-th class is completed.
3. The distributed high-dimensional data classification method based on a mixture of common factor analyzers according to claim 1, characterized in that the method is applied to the parallel and distributed processing of data.
CN201510916426.6A 2015-12-10 2015-12-10 Distributed high-dimensional data classification method based on a mixture of common factor analyzers Expired - Fee Related CN105550704B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510916426.6A CN105550704B (en) 2015-12-10 2015-12-10 Distributed high-dimensional data classification method based on a mixture of common factor analyzers


Publications (2)

Publication Number Publication Date
CN105550704A CN105550704A (en) 2016-05-04
CN105550704B true CN105550704B (en) 2019-01-01

Family

ID=55829887

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510916426.6A Expired - Fee Related CN105550704B (en) 2015-12-10 2015-12-10 Distributed high-dimensional data classification method based on a mixture of common factor analyzers

Country Status (1)

Country Link
CN (1) CN105550704B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8873836B1 (en) * 2012-06-29 2014-10-28 Emc Corporation Cluster-based classification of high-resolution data
CN104484409A (en) * 2014-12-16 2015-04-01 芜湖乐锐思信息咨询有限公司 Data mining method for big data processing
CN104994170A (en) * 2015-07-15 2015-10-21 南京邮电大学 Distributed clustering method based on mixed factor analysis model in sensor network


Also Published As

Publication number Publication date
CN105550704A (en) 2016-05-04

Similar Documents

Publication Publication Date Title
CN112085952B (en) Method and device for monitoring vehicle data, computer equipment and storage medium
WO2021017303A1 (en) Person re-identification method and apparatus, computer device and storage medium
CN102810161B (en) Method for detecting pedestrians in crowding scene
CN102571486B (en) Traffic identification method based on bag of word (BOW) model and statistic features
US10037495B2 (en) Clustering coefficient-based adaptive clustering method and system
CN109815788A (en) A kind of picture clustering method, device, storage medium and terminal device
CN105814582B (en) Method and system for recognizing human face
WO2014106363A1 (en) Mobile device positioning system and method
CN110166344B (en) Identity identification method, device and related equipment
CN105005760A (en) Pedestrian re-identification method based on finite mixture model
CN112016605A (en) Target detection method based on corner alignment and boundary matching of bounding box
CN111915015B (en) Abnormal value detection method and device, terminal equipment and storage medium
CN113422695B (en) Optimization method for improving robustness of topological structure of Internet of things
CN105024993A (en) Protocol comparison method based on vector operation
CN103390151A (en) Face detection method and device
CN107392254A (en) A kind of semantic segmentation method by combining the embedded structural map picture from pixel
CN106503631A (en) A kind of population analysis method and computer equipment
CN103310235A (en) Steganalysis method based on parameter identification and estimation
CN114783021A (en) Intelligent detection method, device, equipment and medium for wearing of mask
CN108304852B (en) Method and device for determining road section type, storage medium and electronic device
CN102722732A (en) Image set matching method based on data second order static modeling
CN103942526A (en) Linear feature extraction method for discrete data point set
CN103002472B (en) The method that event boundaries in a kind of heterogeneous body sensor network detects, device and intelligent communication equipment
CN105550704B (en) Distributed high dimensional data classification method based on mixing common factor analyzer
CN104462826B (en) The detection of multisensor evidences conflict and measure based on Singular Value Decomposition Using

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 210003 new model road 66, Gulou District, Nanjing, Jiangsu

Applicant after: NANJING University OF POSTS AND TELECOMMUNICATIONS

Address before: 210023 9 Wen Yuan Road, Qixia District, Nanjing, Jiangsu.

Applicant before: NANJING University OF POSTS AND TELECOMMUNICATIONS

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20201207

Address after: Gulou District of Nanjing City, Jiangsu Province, Beijing Road No. 20 210024

Patentee after: STATE GRID JIANGSU ELECTRIC POWER Co.,Ltd. INFORMATION & TELECOMMUNICATION BRANCH

Address before: Room 214, building D5, No. 9, Kechuang Avenue, Zhongshan Science and Technology Park, Jiangbei new district, Nanjing, Jiangsu Province

Patentee before: Nanjing Tian Gu Information Technology Co.,Ltd.

Effective date of registration: 20201207

Address after: Room 214, building D5, No. 9, Kechuang Avenue, Zhongshan Science and Technology Park, Jiangbei new district, Nanjing, Jiangsu Province

Patentee after: Nanjing Tian Gu Information Technology Co.,Ltd.

Address before: 210003 Gulou District, Jiangsu, Nanjing new model road, No. 66

Patentee before: NANJING University OF POSTS AND TELECOMMUNICATIONS

TR01 Transfer of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190101

CF01 Termination of patent right due to non-payment of annual fee