CN100367300C - Characteristic selecting method based on artificial nerve network - Google Patents

Characteristic selecting method based on artificial nerve network

Info

Publication number
CN100367300C
CN100367300C CNB2006100195700A CN200610019570A
Authority
CN
China
Prior art keywords
layer
node
neural network
input
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CNB2006100195700A
Other languages
Chinese (zh)
Other versions
CN1945602A (en)
Inventor
桑农
曹治国
张天序
谢衍涛
张�荣
贾沛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Streamax Technology Co Ltd
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CNB2006100195700A priority Critical patent/CN100367300C/en
Publication of CN1945602A publication Critical patent/CN1945602A/en
Application granted granted Critical
Publication of CN100367300C publication Critical patent/CN100367300C/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a feature selection method based on an artificial neural network, comprising: (1) the user gives all the candidate features and the samples for training the artificial neural network; (2) the number of fuzzy membership functions is chosen, and the number of nodes in each layer of the artificial neural network, the connection weights between layers and the initial values of the fuzzy membership functions are set; (3) the network is trained with the back-propagation algorithm in batch mode, adjusting the connection weights and the parameters of the fuzzy membership functions; (4) the importance of every feature is calculated and the features are ranked. The invention avoids the problem of data normalization; the computation is simple, the network needs to be trained only once, and the method is easy to combine with various search algorithms into a complete feature selection system. The invention has been successfully applied to pattern recognition and target classification with a variety of multi-dimensional features, and can also be applied to all types of pattern recognition with data-type features.

Description

Feature selection method based on artificial neural network
Technical Field
The invention belongs to the field of pattern recognition, relates to a feature selection method, and particularly relates to a feature selection method based on an artificial neural network.
Background
Feature selection is an important topic in the field of pattern recognition, because the complexity of a pattern recognition algorithm tends to grow exponentially with the dimension of the data: if the dimension is not reduced, the classifier becomes extremely large and the computation required for classification becomes too expensive to bear. Selecting among the data features, keeping the important ones and thereby reducing the dimensionality is therefore an indispensable step. Moreover, most features used by pattern recognition algorithms today are extracted automatically by machines, so redundant and noisy features are inevitable; feature selection can effectively eliminate this problem.
Feature selection is the process of selecting a subset from the set of all features without, or with very little, reduction of the classifier recognition rate. The key question in feature selection is which criterion to use to measure the importance of features. Traditional criteria, such as distance-based measures, information (or uncertainty) based measures and dependency-based measures, focus on analyzing properties of the data, and such methods are not ideal in practice. With continuing progress in artificial intelligence, feature measurement methods based on technologies such as artificial neural networks and fuzzy mathematics have been proposed. This class of methods is based on the classification error rate, i.e. it measures the importance of a feature to the classification error rate, and is therefore more effective than the former class. In practice, most of these methods use artificial neural network techniques for feature selection.
Feature selection based on artificial neural networks can be seen as a special case of the pruning algorithm, one that prunes nodes of the input layer instead of nodes or weights of the hidden layer; see document 1: Reed R. Pruning algorithms - a survey. IEEE Transactions on Neural Networks, 1993, 4(5): 740-746. The common idea is to use the change in the output values of the artificial neural network before and after pruning as a feature sensitivity measure, as in document 2: Verikas A, Bacauskiene M. Feature selection with neural networks. Pattern Recognition Letters, 2002, 23(11): 1323-1335. The basic assumption of this idea is: a well-trained neural network responds to changes in a more important feature with a larger change in its output values, i.e. it is more sensitive to that feature, and vice versa. Feature selection methods based on a sensitivity metric reflect this assumption most directly and accurately, as in document 3: Ruck D W, Rogers S K and Kabrisky M. Feature selection using a multilayer perceptron. Journal of Neural Network Computing, 1990, 9(1): 40-48.
When the importance of a particular feature is considered, the change in the output of the artificial neural network before and after that feature is deleted is computed as the feature metric. Deleting a feature means holding it constantly at zero in the samples, as described in document 4: De R K, Basak J and Pal S K. Neuro-fuzzy feature evaluation with theoretical analysis. Neural Networks, 1999, 12(10): 1429-1455. This approach requires that the data be normalized first, which can corrupt the data. To avoid the normalization problem, a fuzzy mapping layer can be added to the artificial neural network: each feature is mapped in a one-to-many manner, and the domain of each mapped new feature, i.e. each fuzzy feature, is limited to [0,1], so that normalization is no longer needed; see document 5: Jia P and Sang N. Feature selection using a radial basis function neural network and fuzzy set theoretic measures. In: Proceedings of SPIE 5281(1), the Third International Symposium on Multispectral Image Processing and Pattern Recognition, Beijing, China: The International Society for Optical Engineering Press, 2003, 109-114. In that method, however, the fuzzy membership functions are determined before the artificial neural network learns and depend on the first and second moments of the data, which raises essentially the same problem as the normalization of document 4. In fact, the fuzzy mapping layer proposed in document 5 can be separated from the network entirely and used as a normalization method for preprocessing the data.
Disclosure of Invention
The invention aims to provide a feature selection method based on an artificial neural network that avoids the problem of data normalization, is highly robust, and handles noisy and redundant features well.
The invention provides a feature selection method based on an artificial neural network, which comprises the following steps:
(1) The user specifies the features $f_i$, $i = 1, \dots, N$, to be selected, and gives a training sample set for training the artificial neural network:
$$X = \{ x_q \in \mathbb{R}^R \mid q = 1, \dots, Q \}$$
the training samples have the same dimension R, R = N, and fall into K classes $\omega_1, \dots, \omega_K$; the i-th component $x_{qi}$ of the q-th training sample $x_q$ is the q-th observation of the specified i-th feature $f_i$;
(2) An artificial neural network consisting in sequence of an input layer, a fuzzy mapping layer, a hidden layer and an output layer is constructed according to the training samples; data enters the neural network at the input layer, is passed through the connection weights $w^2$ to the fuzzy mapping layer, after the action of the fuzzy mapping layer is passed through the connection weights $w^3$ to the hidden layer, and after the action of the hidden layer is passed through the connection weights $w^4$ to the output layer, giving the output;
(3) The initialized artificial neural network is trained with the training sample set given by the user; the procedure comprises the following steps:
(3.1) The estimator e of the mean square error is selected as the performance index of the learning process:
$$e = \frac{1}{2Q} \sum_{q=1}^{Q} \sum_{i=1}^{G} \left( t_i^m(q) - a_i^m(q) \right)^2$$
where $t_i^m(q)$ is the target value of the output of node i of the m-th layer of the neural network when the q-th sample is input, $a_i^m(q)$ is the actual output of node i of the m-th layer when the q-th sample is input, m = 2, 3, 4, and G is the number of nodes of that layer;
(3.2) The connection weight matrices $w^m$ between the layers of the artificial neural network are trained with the back-propagation algorithm, where m = 3, 4;
(3.3) updating parameters xi, sigma and tau in the function of the fuzzy mapping layer node; where ξ is the expectation of class conditional probability density of the corresponding node, σ is the standard deviation of the class conditional probability density of the corresponding node, and τ is an initial value of the corresponding node;
(3.4) when e meets the convergence condition, entering the step (4), and otherwise, repeating the steps (3.2) - (3.3);
(4) And carrying out fuzzy pruning on the features by using the trained artificial neural network, calculating the importance measurement of each feature, and sequencing the features according to the measurement values of the importance.
The invention requires only that the user give the original feature set and samples for training, and yields a ranking of all features in the original set by their importance to classification. Compared with existing feature selection methods, the feature selection method of the invention has the following advantages: the problem of data normalization is avoided; the computation is simple, and the neural network needs to be trained only once; the method is easily combined with various search algorithms to form a complete feature selection system. The method has been successfully applied to pattern recognition and target classification with a variety of multi-dimensional features, and can also be applied to any field of pattern recognition involving data-type features.
Drawings
FIG. 1 is a flow chart of a feature selection method based on an artificial neural network with an adaptive fuzzy mapping layer;
FIG. 2 is a schematic diagram of an artificial neural network with an adaptive fuzzy mapping layer;
FIG. 3 is a schematic diagram of the artificial neural network with an adaptive fuzzy mapping layer built in the example;
FIG. 4 is a graph of the fuzzy membership functions (initial values) for the feature Sepal length.
Detailed Description
The feature selection method of the present invention starts from the premise that the user has given a data set for training and a set of candidate features; the feature selection process is described in detail below.
Feature selection is performed to obtain a measure of the importance of the features. In the feature selection method provided by the invention, the artificial neural network with the fuzzy mapping layer is trained by using the data set provided by the user, and then the importance metric of each feature is calculated by means of the trained network, so that the purpose of feature selection is achieved. As shown in fig. 1, the method of the present invention comprises the steps of:
(1) The user specifies the features $f_i$ ($i = 1, \dots, N$) to be selected, and gives training samples for training the artificial neural network.
(1.1) specification of features
The specified characteristics must be data-type characteristics that directly reflect the actual physical or geometric meaning of the object, such as weight, speed, length, etc. The number N of features is a natural number, that is, the number of features is one or more.
(1.2) definition of training samples
Training samples for training the artificial neural network are also of data type; all samples have the same dimension R (R = N) and fall into K classes: $\omega_1, \dots, \omega_K$. The dimension R equals the number of features specified in step (1.1). The i-th component $x_{qi}$ of the q-th training sample $x_q$ is the q-th observation of the specified i-th feature $f_i$.
The specific mathematical description of the training sample set is:
$$X = \{ x_q \in \mathbb{R}^R \mid q = 1, \dots, Q \}$$
where Q is the number of training samples, Q ≥ K, and each class $\omega_l$ ($l = 1, \dots, K$) has at least one sample; $\mathbb{R}$ denotes the set of real numbers, and R, the dimension of a sample $x_q$, equals the number of features N of the training sample set X.
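For illustration, the layout of such a training set can be sketched as follows (an assumption, not part of the patent; NumPy is used and the values are synthetic stand-ins):

```python
import numpy as np

# Training set X as a Q x R matrix: row q is the sample x_q, and entry
# x[q, i] is the q-th observation of feature f_i. Labels are omega_1..omega_K.
Q, R, K = 150, 4, 3                        # sizes of the IRIS embodiment below
rng = np.random.default_rng(0)

X = rng.uniform(0.0, 8.0, size=(Q, R))     # synthetic stand-in observations
y = rng.integers(1, K + 1, size=Q)         # class label of each sample
assert Q >= K and len(np.unique(y)) == K   # every class has at least one sample
```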
(2) An artificial neural network consisting of a feature layer A, a fuzzy mapping layer B, a hidden layer C and an output layer D is constructed according to the training samples, and initialized.
As shown in FIG. 2, the artificial neural network comprises an input layer A (i.e. the feature layer), a fuzzy mapping layer B, a hidden layer C and an output layer D; adjacent layers are joined by connection weights $w^m$ ($m = 2, 3, 4$). Data enters the network at the input layer, is passed through connection weights to the fuzzy mapping layer, after the action of the fuzzy mapping layer is passed through connection weights to the hidden layer, and after the action of the hidden layer is passed through connection weights to the output layer, giving the output. Constructing an artificial neural network with a fuzzy mapping layer requires setting the node counts of the input (feature) layer, the hidden layer and the output layer, determining the number $h_i$ of fuzzy membership functions corresponding to each feature $f_i$, and defining those membership functions. Initialization requires determining the initial values of the connection weights between the layers and of the parameters of the fuzzy membership function in each node of the fuzzy mapping layer. The specific process is as follows:
(2.1) input layer A
(2.1.1) selection of number of input layer nodes
The number of input layer nodes $S_1$ equals the dimension R of the training samples.
(2.1.2) input and output of input layer nodes
Each node takes one dimension of the training sample as its input. When the q-th sample is input to the neural network, the input of input layer node $A_i$ is:
$$n_i^1(q) = x_{qi}$$
and the output is:
$$a_i^1(q) = n_i^1(q) = x_{qi}$$
(2.2) fuzzy mapping layer B
(2.2.1) selection of the number of fuzzy membership functions corresponding to each feature
For feature $f_i$, $h_i$ fuzzy membership functions can be defined according to its specific physical meaning, each constituting one fuzzy mapping layer node. The number of nodes of fuzzy mapping layer B is therefore
$$S_2 = \sum_{i=1}^{N} h_i$$
The choice of the value $h_i$ must satisfy the following condition:
[formula image in the original: a constraint on $h_i$ involving $Q_{\min}$]
where $Q_{\min} = \min\{ Q_l \}$ and $Q_l$ denotes the number of samples of class $\omega_l$ in the training samples given by the user.
(2.2.2) connection weights between input layer and fuzzy mapping layer
Input layer node $A_i$ is connected by connection weights only to the fuzzy mapping layer nodes $B_{i1}, \dots, B_{ih_i}$, and the nodes $B_{i1}, \dots, B_{ih_i}$ are connected to no input layer node other than $A_i$; this is called a one-to-many connection. The connection weight between feature layer node $A_i$ and fuzzy mapping layer node $B_{ij}$ is constantly 1, i.e. the connection weight matrix $w^2$ between feature layer A and fuzzy mapping layer B does not take part in the training of the artificial neural network.
(2.2.3) Input of fuzzy mapping layer node $B_{ij}$
When the q-th sample is input to the neural network, the input of fuzzy mapping layer node $B_{ij}$ is:
$$n_{ij}^2(q) = a_i^1(q) = x_{qi}$$
(2.2.4) Action function of fuzzy mapping layer node $B_{ij}$
The action function of fuzzy mapping layer node $B_{ij}$ is a fuzzy membership function $\mu_{ij}$, i.e. the j-th membership function of feature $f_i$. In the present invention, giving the i-th feature $f_i$ a fuzzy membership function means giving a mapping $\mu_i : f_i \to [0, 1]$.
The fuzzy membership function of node $B_{ij}$ has the form:
[formula image in the original: a function of $n_{ij}^2(q)$ with parameters $\xi_{ij}$, $\sigma_{ij}$, $\tau_{ij}$]
Here $n_{ij}^2(q)$ is the input of fuzzy mapping layer node $B_{ij}$ when the q-th sample is input and $a_{ij}^2(q)$ is the corresponding actual output. $\xi_{ij}$ is the expectation of the class conditional probability density of node $B_{ij}$, $\sigma_{ij}$ is the standard deviation of the class conditional probability density of node $B_{ij}$, and $\tau_{ij}$ is an initial value of node $B_{ij}$. The role of τ shows in that even if the ξ and σ of two membership functions are equal, the two membership functions can still be kept from being exactly the same by adjusting τ.
There is no particular restriction on the initial values of $\sigma_{ij}$ and $\tau_{ij}$; $\xi_{ij}$ is generally selected at random within the value range of the corresponding feature $f_i$.
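A minimal initialization sketch follows (assumed code, not the patent's own: ξ is drawn at random from each feature's observed range, and σ, τ take the fixed starting values used later in the example; because the exact membership function is given only as a formula image, a Gaussian-style bell shaped by ξ and σ, with τ as the exponent, is used as a stand-in):

```python
import numpy as np

rng = np.random.default_rng(0)
h = 3                                      # fuzzy membership functions per feature

def init_fuzzy_params(X):
    """xi uniform over each feature's observed range; sigma = 0.45, tau = 2
    (the initial values used in the worked example). Parameters are stored
    feature-major: nodes B_11..B_1h, then B_21..B_2h, and so on."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    xi = rng.uniform(lo, hi, size=(h, X.shape[1])).T.reshape(-1)
    return xi, np.full(xi.size, 0.45), np.full(xi.size, 2.0)

def mu(x, xi, sigma, tau):
    """Stand-in bell membership function with values in (0, 1]; the patent's
    exact functional form of mu_ij is shown only as an image."""
    return np.exp(-np.abs((x - xi) / sigma) ** tau)
```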
(2.3) hidden layer C
(2.3.1) selection of the number of hidden layer nodes
There is no specific requirement on the number of hidden layer nodes $S_3$; it is generally chosen to be no less than the number K of classes of training samples.
(2.3.2) obfuscating the connection weights between the mapping layer and the hidden layer
The fuzzy mapping layer B is fully connected to the hidden layer C: every node of fuzzy mapping layer B is connected to all nodes of hidden layer C, and every node of hidden layer C is connected to all nodes of fuzzy mapping layer B. The connection weights between fuzzy mapping layer B and hidden layer C are initialized at random, each weight taking a value in [0, 1].
(2.3.3) input of hidden layer node
When the q-th sample is input to the neural network, the input of hidden layer node $C_u$ ($u = 1, \dots, S_3$) is:
$$n_u^3(q) = \sum_{p=1}^{S_2} w_{pu}^3 a_p^2(q)$$
where $a_p^2(q)$ is the output of fuzzy mapping layer node $B_p$ ($p = 1, \dots, S_2$) when the q-th sample is input to the neural network, and $w_{pu}^3$ is the connection weight between fuzzy mapping layer node $B_p$ and hidden layer node $C_u$.
(2.3.4) Action function of the hidden layer nodes
The action function of the hidden layer nodes is selected as the Sigmoid function:
$$a_u^3(q) = \frac{1}{1 + e^{-n_u^3(q)}}$$
where $n_u^3(q)$ is the input of hidden layer node $C_u$ when the q-th sample is input to the neural network and $a_u^3(q)$ is the corresponding output.
It can also be selected as the hyperbolic tangent function:
$$a_u^3(q) = \tanh\!\big(n_u^3(q)\big)$$
with the same meaning of $n_u^3(q)$ and $a_u^3(q)$.
(2.4) output layer D
(2.4.1) selection of the number of output layer nodes
The number of output layer nodes $S_4$ equals the number of classes K of the training samples.
(2.4.2) connection rights between hidden layer and output layer
The hidden layer C is fully connected to the output layer D: every node of hidden layer C is connected to all nodes of output layer D, and every node of output layer D is connected to all nodes of hidden layer C. The connection weights $w_{ul}^4$ ($u = 1, \dots, S_3$; $l = 1, \dots, S_4$) between hidden layer C and output layer D are initialized at random, each weight taking a value in [0, 1].
(2.4.3) Input and output of the output layer nodes
The input and output of output layer node $D_l$ ($l = 1, \dots, S_4$) are equal:
$$n_l^4(q) = a_l^4(q) = \sum_{u=1}^{S_3} w_{ul}^4 a_u^3(q)$$
where $w_{ul}^4$ is the connection weight between hidden layer node $C_u$ and output layer node $D_l$. The output value $n_l^4(q)$ is the probability that the q-th sample input to the neural network belongs to class $\omega_l$.
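Putting the four layers together, one forward pass can be sketched as below (assumed code, reusing the mu stand-in from the earlier sketch; w3 of shape (R*h, S3) and w4 of shape (S3, K) hold the trainable weights $w^3$ and $w^4$):

```python
import numpy as np

def mu(x, xi, sigma, tau):                      # stand-in membership function
    return np.exp(-np.abs((x - xi) / sigma) ** tau)

def sigmoid(n):
    return 1.0 / (1.0 + np.exp(-n))

def forward(x, xi, sigma, tau, w3, w4, h=3):
    """One pass through layers A, B, C, D for a single sample x of length R."""
    a1 = x                                      # layer A: output equals input
    a2 = mu(np.repeat(a1, h), xi, sigma, tau)   # layer B: 1-to-h fan-out, weights fixed at 1
    a3 = sigmoid(a2 @ w3)                       # layer C: Sigmoid action function
    a4 = a3 @ w4                                # layer D: input equals output
    return a1, a2, a3, a4
```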
(3) The initialized artificial neural network is trained with the training sample set given by the user.
Training the artificial neural network by using a back propagation algorithm in a batch learning mode according to a training sample set given by a user, and updating the connection weight between each layer of the neural network and the parameters of the fuzzy membership function in each training until the artificial neural network meets the convergence condition set by the user.
The specific training method is as follows.
(3.1) selection of Convergence Condition
First, the estimator e of the mean square error is selected as the performance index of the learning process:
$$e = \frac{1}{2Q} \sum_{q=1}^{Q} \sum_{i=1}^{G} \left( t_i^m(q) - a_i^m(q) \right)^2$$
where $t_i^m(q)$ is the target value of the output of node i of the m-th layer when the q-th sample is input, $a_i^m(q)$ is the actual output of node i of the m-th layer when the q-th sample is input, and G is the number of nodes of that layer.
The user can set a convergence condition that e be smaller than some small positive number, according to the required calculation accuracy. For example, with e < 0.001 set as the convergence condition, the value of e is calculated after the artificial neural network completes steps (3.2) and (3.3) in a training pass; if e < 0.001, training stops; otherwise the next training pass is carried out.
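As a sketch of this stopping test (assumed code; averaging over the Q samples is an assumption, since the exact estimator is shown only as a formula image):

```python
import numpy as np

def mse_estimator(T, A4):
    """Estimator e over the output layer: T and A4 are (Q, K) matrices of
    target and actual outputs. The 1/(2Q) normalization is an assumption."""
    return 0.5 * np.mean(np.sum((T - A4) ** 2, axis=1))

# Inside the batch training loop, after steps (3.2) and (3.3):
#     if mse_estimator(T, A4) < 0.001:
#         stop training and go to step (4)
```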
(3.2) updating connection weights between layers
The connection weights between input layer A and fuzzy mapping layer B are constantly 1 and do not take part in training. The connection weights $w^3$ between fuzzy mapping layer B and hidden layer C and the connection weights $w^4$ between hidden layer C and output layer D both take part in training, and $w^3$ and $w^4$ are updated in the same way.
In the back-propagation algorithm, the sensitivity of the estimator e of the mean square error to the input of the m-th layer is defined as
$$g^m = \frac{\partial e}{\partial n^m}$$
where $S_m$ is the number of nodes of the m-th layer of the artificial neural network and $n^m$ is an $S_m \times Q$ matrix representing the input of the m-th layer; $n_i^m(q)$ denotes the input of node i of the m-th layer when the q-th sample is input to the neural network, so that $g_i^m(q) = \partial e / \partial n_i^m(q)$.
The connection weights are updated by the steepest descent method; a minimum-modulus estimation method such as the conjugate gradient method may also be used here. The connection weight matrix $w^m$ (of dimension $S_{m-1} \times S_m$) between the m-th and (m-1)-th layers of the artificial neural network (m = 3, 4) is updated at the start of the (r+1)-th training pass as
$$w^m(r+1) = w^m(r) - \alpha\, g^m (a^{m-1})^T$$
where α is the weight learning rate, taking a value in 0 < α ≤ 1 and generally chosen as 0.05, and r is the number of training passes. $a^m$ is an $S_m \times Q$ matrix representing the actual output of the m-th layer of the artificial neural network:
$$a^m = \big[ a_i^m(q) \big]_{S_m \times Q}$$
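The steepest-descent step can be written compactly as below (assumed code; the extra transpose reconciles the $S_{m-1} \times S_m$ weight dimension stated above with an $S_m \times Q$ sensitivity matrix):

```python
import numpy as np

def update_weights(w, g, a_prev, alpha=0.05):
    """w^m(r+1) = w^m(r) - alpha * g^m (a^{m-1})^T.
    w: (S_{m-1}, S_m) weights, g: (S_m, Q) sensitivities,
    a_prev: (S_{m-1}, Q) outputs of layer m-1, alpha: learning rate."""
    return w - alpha * (g @ a_prev.T).T
```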
(3.3) Updating the parameters ξ, σ, τ of the action functions of the fuzzy mapping layer nodes
The three parameters $\xi_p$, $\sigma_p$, $\tau_p$ of the action function of fuzzy mapping layer node $B_p$ ($p = 1, \dots, S_2$) are updated as follows, where θ is the learning rate of $\xi_p$, ϑ is the learning rate of $\sigma_p$ and ρ is the learning rate of $\tau_p$; the learning rates are chosen by trial and error or a similar parameter selection method:
$$\xi_p(r+1) = \xi_p(r) - \theta \frac{\partial e}{\partial \xi_p}, \quad \sigma_p(r+1) = \sigma_p(r) - \vartheta \frac{\partial e}{\partial \sigma_p}, \quad \tau_p(r+1) = \tau_p(r) - \rho \frac{\partial e}{\partial \tau_p}$$
where ${}_p a^2$ is the p-th row of the output matrix $a^2$ of fuzzy mapping layer B when the Q samples are input to the artificial neural network. The partial derivatives are given by chain-rule formulas (shown as formula images in the original) involving $a_i^1(q)$, the output of the input layer node $A_i$ connected to node $B_p$ when the q-th sample is input to the neural network, i.e. $x_{qi}$.
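A generic sketch of these parameter steps (assumed code; d_xi, d_sigma and d_tau stand for the partial derivatives of e whose chain-rule forms appear only as formula images in the original):

```python
def update_fuzzy_params(xi, sigma, tau, d_xi, d_sigma, d_tau,
                        theta=0.1, vartheta=0.1, rho=0.1):
    """One gradient step on the fuzzy-node parameters xi_p, sigma_p, tau_p,
    each with its own learning rate (theta, vartheta, rho)."""
    return (xi - theta * d_xi,
            sigma - vartheta * d_sigma,
            tau - rho * d_tau)
```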
(3.4) termination of training
The operations of steps (3.2) and (3.3) are carried out in every training pass of the artificial neural network. After each pass is completed, the value of e is calculated; if the convergence condition set in step (3.1) is met, training stops; otherwise the next training pass is carried out.
(4) Fuzzy pruning is applied to the features with the trained artificial neural network, the importance measure of each feature is calculated, and the features are ranked.
(4.1) Fuzzy pruning of feature $f_i$
So-called fuzzy pruning of feature $f_i$ means setting the output values of all fuzzy membership functions corresponding to feature $f_i$ to 0.5, i.e. forcing the outputs of the fuzzy mapping layer to
$$a_{ij}^2(q) = 0.5, \quad j = 1, \dots, h_i$$
Then the output vector $a^4(x_q, i)$ given by the output layer of the artificial neural network for the input sample $x_q$ under this condition is obtained.
(4.2) Calculating the importance measure FQJ(i) of each feature
The feature metric function FQJ(i) provided by the invention expresses the importance of the i-th feature $f_i$ to classification; the larger the value FQJ(i) of feature $f_i$, the more important the feature is for classification. FQJ(i) is defined as follows:
$$FQJ(i) = \frac{1}{Q} \sum_{q=1}^{Q} \left\| a^4(x_q) - a^4(x_q, i) \right\|$$
where $a^4(x_q)$ is the output vector given by the output layer of the artificial neural network for input sample $x_q$, and $a^4(x_q, i)$ is the output vector given by the artificial neural network for input sample $x_q$ after fuzzy pruning of feature $f_i$. With the artificial neural network trained in step (3), the corresponding FQJ(i) is calculated by this formula for every feature $f_i$ given by the user in step (1.1); the FQJ(i) value of feature $f_i$ is the measure of its importance.
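Steps (4.1) and (4.2) together can be sketched as follows (assumed code, continuing the forward-pass sketch above; the Euclidean norm and the averaging over Q are assumptions about the FQJ formula, which is shown only as an image):

```python
import numpy as np

def fqj(i, X, xi, sigma, tau, w3, w4, h=3):
    """FQJ(i): average change of the output vector when the h fuzzy outputs
    of feature f_i are clamped to 0.5 (fuzzy pruning). Uses forward() and
    sigmoid() from the forward-pass sketch."""
    total = 0.0
    for x in X:
        _, a2, _, a4 = forward(x, xi, sigma, tau, w3, w4, h)
        a2p = a2.copy()
        a2p[i * h:(i + 1) * h] = 0.5       # fuzzy pruning of feature f_i
        a4p = sigmoid(a2p @ w3) @ w4       # recompute layers C and D only
        total += np.linalg.norm(a4 - a4p)
    return total / len(X)

# Step (4.3): rank features by descending FQJ
# order = sorted(range(X.shape[1]),
#                key=lambda i: -fqj(i, X, xi, sigma, tau, w3, w4))
```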
(4.3) Ranking all features $f_i$ by their importance measure FQJ(i)
All features $f_i$ are sorted in descending order of their FQJ(i) values, giving the ranking of all features by their importance to classification. The user can select one or more of the top-ranked features for recognition according to actual needs or the constraints of objective conditions, thus achieving the purpose of feature selection.
Example:
The user wishes to investigate the importance of the following four features to classification: Sepal length, Sepal width, Petal length and Petal width, and gives as training samples the IRIS data set. The IRIS data set has been used by many researchers in pattern recognition research and has become a benchmark. The data set contains 3 classes, each with 50 samples; each sample has 4 features, in order Sepal length, Sepal width, Petal length and Petal width.
The specific steps for feature selection are as follows:
(1) The user specifies the features $f_i$ ($i = 1, \dots, N$) to be selected, and gives training samples for training the artificial neural network.
(1.1) specification of features
The user specifies 4 features: Sepal length, Sepal width, Petal length and Petal width, all of which are data-type features. Thus N = 4.
(1.2) giving training samples
The training samples given by the user fall into 3 classes, Iris Setosa, Iris Versicolor and Iris Virginica, i.e. K = 3. Each class has 50 samples, 150 in total, i.e. Q = 150. Each sample has the 4 features Sepal length, Sepal width, Petal length and Petal width; the sample dimension R = N = 4.
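For reference, the same benchmark can be loaded programmatically (an assumption; scikit-learn's bundled copy of the Fisher IRIS data is used here, with features in the same order as above):

```python
from sklearn.datasets import load_iris

iris = load_iris()
X, y = iris.data, iris.target      # X: (150, 4) in the order Sepal length,
                                   # Sepal width, Petal length, Petal width
Q, R = X.shape                     # Q = 150 samples, R = N = 4 features
K = len(set(y))                    # K = 3 classes
```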
(2) An artificial neural network consisting of a feature layer A, a fuzzy mapping layer B, a hidden layer C and an output layer D is constructed according to the training samples, and initialized.
(2.1) construction of the input layer A
(2.1.1) selection of number of input layer nodes
The number of input layer nodes $S_1$ equals the dimension R of the training samples, i.e. $S_1 = 4$.
(2.2) constructing a fuzzy mapping layer B
(2.2.1) selection of the number of fuzzy membership functions corresponding to each feature
Three fuzzy membership functions are defined for each feature, $h_1 = h_2 = h_3 = h_4 = 3$, so that the number of nodes in the fuzzy mapping layer is
$$S_2 = \sum_{i=1}^{4} h_i = 12$$
which satisfies the constraint of step (2.2.1).
(2.2.2) connection weights between input layer and fuzzy mapping layer
Input layer node $A_1$ is connected by connection weights only to fuzzy mapping layer nodes $B_{11}, B_{12}, B_{13}$; input layer node $A_2$ only to $B_{21}, B_{22}, B_{23}$; input layer node $A_3$ only to $B_{31}, B_{32}, B_{33}$; and input layer node $A_4$ only to $B_{41}, B_{42}, B_{43}$.
(2.2.3) Selecting the action functions of the fuzzy mapping layer nodes
The fuzzy membership function of node $B_{ij}$ is selected in the form given in step (2.2.4). The parameter $\xi_{ij}$ of a membership function is generally selected at random within the value range of feature $f_i$. Taking the feature Sepal length as an example, its value range is [4.3, 7.9], so for the 3 fuzzy membership functions corresponding to $f_1$ the selected initial values of ξ may be $\xi_{11} = 5.2$, $\xi_{12} = 6.1$, $\xi_{13} = 7.0$; σ may be set to $\sigma_{11} = \sigma_{12} = \sigma_{13} = 0.45$ and τ to $\tau_{11} = \tau_{12} = \tau_{13} = 2$. The resulting membership functions are shown in FIG. 4.
(2.3) hidden layer C
(2.3.1) selection of the number of hidden layer nodes
Empirically, $S_3 = 6$ is selected.
(2.3.2) obfuscating the connection weights between the mapping layer and the hidden layer
The connection weights $w_{pu}^3$ ($p = 1, \dots, 12$; $u = 1, \dots, 6$) between fuzzy mapping layer B and hidden layer C are initialized at random, each taking a value in [0, 1]; one may set $w_{pu}^3 = 0.5$.
(2.3.3) Selecting the action functions of the hidden layer nodes
The action function of the hidden layer nodes is selected as the Sigmoid function:
$$a_u^3(q) = \frac{1}{1 + e^{-n_u^3(q)}}$$
where $n_u^3(q)$ is the input of hidden layer node $C_u$ when the q-th sample is input to the neural network and $a_u^3(q)$ is the corresponding output.
(2.4) output layer D
(2.4.1) selection of the number of output layer nodes
The number of output layer nodes $S_4$ equals the number of classes K of the training samples, i.e. $S_4 = K = 3$.
(2.4.2) connection rights between hidden layer and output layer
The connection weights $w_{ul}^4$ ($u = 1, \dots, 6$; $l = 1, 2, 3$) between hidden layer C and output layer D are initialized at random, each taking a value in [0, 1]; one may set $w_{ul}^4 = 0.5$.
So far, the artificial neural network with the fuzzy mapping layer is constructed; its structure is shown in FIG. 3.
(3) The initialized artificial neural network is trained with the training sample set given by the user.
(3.1) selection of Convergence Condition
The convergence condition is set to e < 0.001.
(3.2) updating connection weights between layers
The weight learning rate α =0.05 is empirically selected.
According to the steepest descent method, the connection weight matrix $w^m$ (of dimension $S_{m-1} \times S_m$) between the m-th and (m-1)-th layers (m = 3, 4) of the artificial neural network is updated at the start of the (r+1)-th training pass as
$$w^m(r+1) = w^m(r) - 0.05\, g^m (a^{m-1})^T$$
where
$$g^m = \frac{\partial e}{\partial n^m}$$
(3.3) Updating the parameters ξ, σ, τ of the action functions of the fuzzy mapping layer nodes
The learning rates are selected as θ = 0.1, ϑ = 0.1 and ρ = 0.1. The three parameters $\xi_p$, $\sigma_p$, $\tau_p$ of the action function of fuzzy mapping layer node $B_p$ ($p = 1, \dots, S_2$) are updated by the formulas of step (3.3) above, where ${}_p a^2$ is the p-th row of the output matrix $a^2$ of fuzzy mapping layer B when the Q samples are input to the artificial neural network.
(3.4) termination of training
After the 1037th training pass, e = 0.000999 was calculated; the convergence condition was satisfied and training was terminated.
(4) Fuzzy pruning is applied to the features with the trained artificial neural network, the importance measure of each feature is calculated, and the features are ranked.
(4.1) Fuzzy pruning of feature $f_i$
Taking the feature Sepal length as an example, pruning $f_1$ means setting the output values of fuzzy mapping layer nodes $B_{11}, B_{12}, B_{13}$ to 0.5. For example, for the sample [5.1, 3.5, 1.4, 0.2]: the observed value of Sepal length is 5.1 and the outputs of fuzzy mapping layer nodes $B_{11}, B_{12}, B_{13}$ before pruning are [0.117, 0.005, 0.009]; the observed value of Sepal width is 3.5 and the outputs of $B_{21}, B_{22}, B_{23}$ before pruning are [0.100, 0.500, 0.500]; the observed value of Petal length is 1.4 and the outputs of $B_{31}, B_{32}, B_{33}$ before pruning are [0.141, 0.974, 0.028]; the observed value of Petal width is 0.2 and the outputs of $B_{41}, B_{42}, B_{43}$ before pruning are [0.265, 0.069, 0.030]. Thus for this sample the output of the fuzzy mapping layer before pruning is
[0.117, 0.005, 0.009, 0.100, 0.500, 0.500, 0.141, 0.974, 0.028, 0.265, 0.069, 0.030].
When pruning is performed, the output is modified to
[0.500, 0.500, 0.500, 0.100, 0.500, 0.500, 0.141, 0.974, 0.028, 0.265, 0.069, 0.030].
Then the output vector $a^4(x_q, 1)$ given by the output layer of the artificial neural network so modified for the input sample $x_q$ is calculated. Pruning of the other features proceeds by analogy.
(4.2) Calculating the importance measure FQJ(i) of each feature
Again taking the feature Sepal length as an example, FQJ(1) is calculated for $f_1$ by the formula of step (4.2) (the computed value is shown only as a formula image in the original). Similarly, FQJ(2) = 0.095858, FQJ(3) = 0.491984 and FQJ(4) = 0.511002 are calculated.
All features $f_i$ are sorted in descending order of their FQJ(i) values, giving the following ranking of the features' importance to the classification task: Petal width, Petal length, Sepal width, Sepal length.

Claims (5)

1. A feature selection method based on an artificial neural network comprises the following steps:
(1) The user specifies the features $f_i$, $i = 1, \dots, N$, to be selected, and gives a training sample set for training the artificial neural network:
$$X = \{ x_q \in \mathbb{R}^R \mid q = 1, \dots, Q \}$$
the training samples have the same dimension R, R = N, and fall into K classes $\omega_1, \dots, \omega_K$; the i-th component $x_{qi}$ of the q-th training sample $x_q$ is the q-th observation of the specified i-th feature $f_i$;
(2) An artificial neural network consisting in sequence of an input layer, a fuzzy mapping layer, a hidden layer and an output layer is constructed according to the training samples; data enters the neural network at the input layer, is passed through the connection weights $w^2$ to the fuzzy mapping layer, after the action of the fuzzy mapping layer is passed through the connection weights $w^3$ to the hidden layer, and after the action of the hidden layer is passed through the connection weights $w^4$ to the output layer, giving the output;
(3) Training the initialized artificial neural network by using a training sample set given by a user, wherein the processing procedure is as follows:
(3.1) The estimator e of the mean square error is selected as the performance index of the learning process:
$$e = \frac{1}{2Q} \sum_{q=1}^{Q} \sum_{i=1}^{G} \left( t_i^m(q) - a_i^m(q) \right)^2$$
where $t_i^m(q)$ is the target value of the output of node i of the m-th layer of the neural network when the q-th sample is input, $a_i^m(q)$ is the actual output of node i of the m-th layer when the q-th sample is input, m = 2, 3, 4, and G is the number of nodes of that layer;
(3.2) The connection weight matrices $w^m$ between the layers of the artificial neural network are trained with the back-propagation algorithm, where m = 3, 4;
(3.3) updating parameters xi, sigma and tau in the function of the fuzzy mapping layer node; where ξ is the expectation of class conditional probability density of the corresponding node, σ is the standard deviation of the class conditional probability density of the corresponding node, and τ is an initial value of the corresponding node;
(3.4) when e meets the convergence condition, entering the step (4), and otherwise, repeating the steps (3.2) - (3.3);
(4) And carrying out fuzzy pruning on the features by using the trained artificial neural network, calculating the importance measurement of each feature, and sequencing the features according to the measurement values of the importance.
2. The method of claim 1, wherein: the step (2) comprises the following processing procedures:
(2.1) input layer A
The number of input layer nodes $S_1$ equals the dimension R of the training samples; each node takes one dimension of the training sample as its input. When the q-th sample is input to the neural network, the input of input layer node $A_i$ is:
$$n_i^1(q) = x_{qi}$$
and the output is:
$$a_i^1(q) = n_i^1(q) = x_{qi}$$
(2.2) fuzzy mapping layer B
The number of nodes of fuzzy mapping layer B is
$$S_2 = \sum_{i=1}^{N} h_i$$
where $h_i$ is the number of fuzzy membership functions corresponding to feature $f_i$; the choice of the value $h_i$ must satisfy the following condition:
[formula image in the original: a constraint on $h_i$ involving $Q_{\min}$]
where $Q_{\min} = \min\{ Q_l \}$ and $Q_l$ denotes the number of samples of class $\omega_l$ in the training samples given by the user;
input layer node $A_i$ is connected by connection weights only to the fuzzy mapping layer nodes $B_{i1}, \dots, B_{ih_i}$, and the nodes $B_{i1}, \dots, B_{ih_i}$ are connected to no input layer node other than $A_i$; the connection weight between feature layer node $A_i$ and fuzzy mapping layer node $B_{ij}$ is constantly 1; when the q-th sample is input to the neural network, the input of fuzzy mapping layer node $B_{ij}$ is:
$$n_{ij}^2(q) = a_i^1(q) = x_{qi}$$
the action function of fuzzy mapping layer node $B_{ij}$ is a fuzzy membership function $\mu_{ij}$; giving the i-th feature $f_i$ a fuzzy membership function means giving a mapping $\mu_i : f_i \to [0, 1]$;
the fuzzy membership function of node $B_{ij}$ has the form:
[formula image in the original: a function of $n_{ij}^2(q)$ with parameters $\xi_{ij}$, $\sigma_{ij}$, $\tau_{ij}$]
where $n_{ij}^2(q)$ is the input of fuzzy mapping layer node $B_{ij}$ when the q-th sample is input, $a_{ij}^2(q)$ is the corresponding actual output, $\xi_{ij}$ is the expectation of the class conditional probability density of node $B_{ij}$, $\sigma_{ij}$ is the standard deviation of the class conditional probability density of node $B_{ij}$, and $\tau_{ij}$ is an initial value of node $B_{ij}$;
(2.3) hidden layer C
The number of hidden layer nodes $S_3$ is not less than the number K of classes of the samples; the fuzzy mapping layer B is fully connected to the hidden layer C, and the connection weights $w_{pu}^3$ between them, where $p = 1, \dots, S_2$ and $u = 1, \dots, S_3$, are initialized at random, each taking a value in [0, 1];
when the q-th sample is input to the neural network, the input of hidden layer node $C_u$ is:
$$n_u^3(q) = \sum_{p=1}^{S_2} w_{pu}^3 a_p^2(q)$$
where $u = 1, \dots, S_3$, $a_p^2(q)$ is the output of fuzzy mapping layer node $B_p$ ($p = 1, \dots, S_2$) when the q-th sample is input to the neural network, and $w_{pu}^3$ is the connection weight between fuzzy mapping layer node $B_p$ and hidden layer node $C_u$;
the action function of the hidden layer nodes is selected as a Sigmoid function or a hyperbolic tangent function:
$$a_u^3(q) = \frac{1}{1 + e^{-n_u^3(q)}} \quad \text{or} \quad a_u^3(q) = \tanh\!\big(n_u^3(q)\big)$$
where $u = 1, \dots, S_3$, $n_u^3(q)$ is the input of hidden layer node $C_u$ when the q-th sample is input to the neural network, and $a_u^3(q)$ is the corresponding output;
(2.4) output layer D
The number of output layer nodes $S_4$ equals the number of classes K of the training samples; the hidden layer C is fully connected to the output layer D; the connection weights $w_{ul}^4$ between hidden layer C and output layer D, where $u = 1, \dots, S_3$ and $l = 1, \dots, S_4$, are initialized at random, each taking a value in [0, 1];
the input and output of output layer node $D_l$ ($l = 1, \dots, S_4$) are equal:
$$n_l^4(q) = a_l^4(q) = \sum_{u=1}^{S_3} w_{ul}^4 a_u^3(q)$$
where $w_{ul}^4$ is the connection weight between hidden layer node $C_u$ and output layer node $D_l$; the output value $n_l^4(q)$ is the probability that the q-th sample input to the neural network belongs to class $\omega_l$.
3. The method of claim 1, wherein: the processing procedures of the steps (3.2) and (3.3) are as follows:
(3.2) The connection weights $w^m$ between fuzzy mapping layer B and hidden layer C and between hidden layer C and output layer D are trained:
The sensitivity of the estimator of the mean square error e to the input of the m-th layer is defined as
$$g^m = \frac{\partial e}{\partial n^m}$$
where $S_m$ is the number of nodes of the m-th layer of the artificial neural network, $n^m$ is an $S_m \times Q$ matrix representing the input of the m-th layer, and $n_i^m(q)$ represents the input of node i of the m-th layer when the q-th sample is input to the neural network, so that
$$g_i^m(q) = \frac{\partial e}{\partial n_i^m(q)}$$
the connection weight matrix $w^m$ between the m-th and (m-1)-th layers of the artificial neural network, of dimension $S_{m-1} \times S_m$, m = 3, 4, is updated at the start of the (r+1)-th training pass as
$$w^m(r+1) = w^m(r) - \alpha\, g^m (a^{m-1})^T$$
where α is the weight learning rate, taking a value in 0 < α ≤ 1, r is the number of training passes, and $a^m$ is an $S_m \times Q$ matrix representing the actual output of the m-th layer of the artificial neural network:
$$a^m = \big[ a_i^m(q) \big]_{S_m \times Q}$$
(3.3) The three parameters $\xi_p$, $\sigma_p$, $\tau_p$ of the action function of fuzzy mapping layer node $B_p$ are updated according to the following formulas, where $p = 1, \dots, S_2$, θ is the learning rate of $\xi_p$, ϑ is the learning rate of $\sigma_p$ and ρ is the learning rate of $\tau_p$:
$$\xi_p(r+1) = \xi_p(r) - \theta \frac{\partial e}{\partial \xi_p}, \quad \sigma_p(r+1) = \sigma_p(r) - \vartheta \frac{\partial e}{\partial \sigma_p}, \quad \tau_p(r+1) = \tau_p(r) - \rho \frac{\partial e}{\partial \tau_p}$$
where ${}_p a^2$ is the p-th row of the output matrix $a^2$ of fuzzy mapping layer B when the Q samples are input to the artificial neural network, and the partial derivatives are given by chain-rule formulas (shown as formula images in the original) involving $a_i^1(q)$, the output of the input layer node $A_i$ connected to node $B_p$ when the q-th sample is input to the neural network, i.e. $x_{qi}$.
4. The method according to claim 1 or 2, characterized in that: the processing procedure of the step (4) is as follows:
(4.1) Fuzzy pruning is applied to feature $f_i$, forcing the outputs of the fuzzy mapping layer nodes of that feature to
$$a_{ij}^2(q) = 0.5, \quad j = 1, \dots, h_i$$
and the output vector $a^4(x_q, i)$ given by the output layer of the artificial neural network for input sample $x_q$ under this condition is obtained;
(4.2) the importance measure FQJ(i) of each feature is calculated
The feature metric function FQJ(i) expresses the importance of the i-th feature $f_i$ to classification and is defined as follows:
$$FQJ(i) = \frac{1}{Q} \sum_{q=1}^{Q} \left\| a^4(x_q) - a^4(x_q, i) \right\|$$
where $a^4(x_q)$ is the output vector given by the output layer of the artificial neural network for input sample $x_q$ and $a^4(x_q, i)$ is the output vector given by the artificial neural network for input sample $x_q$ after fuzzy pruning of feature $f_i$; with the artificial neural network trained in step (3), the corresponding FQJ(i) is calculated by this formula for every feature $f_i$ given by the user in step (1.1); the FQJ(i) value of feature $f_i$ is the measure of its importance;
(4.3) all features $f_i$ are sorted by their importance measure FQJ(i).
5. The method of claim 3, wherein: the processing procedure of the step (4) is as follows:
(4.1) Fuzzy pruning is applied to feature $f_i$, forcing the outputs of the fuzzy mapping layer nodes of that feature to
$$a_{ij}^2(q) = 0.5, \quad j = 1, \dots, h_i$$
and the output vector $a^4(x_q, i)$ given by the output layer of the artificial neural network for input sample $x_q$ under this condition is obtained;
(4.2) calculating the importance measure of the feature FQJ (i)
The feature metric function FQJ (i) represents the ith dimension feature f i For the importance of classification, FQJ (i) is defined as follows:
Figure C2006100195700009C2
wherein, a 4 (x q ) Representing an artificial neural network for an input sample x q Output vector given by time input layer, a 4 (x q I) represents a pair of features f i Input sample x for artificial neural network after fuzzy pruning q The given output vector uses the artificial neural network trained in step (3) to apply to all the features f given by the user in step (1.1) i Calculating the corresponding FQJ (i), characteristic f according to the formula i The value of FQJ (i) is a measure of its importance;
(4.3) all features $f_i$ are sorted by their importance measure FQJ(i).
CNB2006100195700A 2006-07-07 2006-07-07 Characteristic selecting method based on artificial nerve network Active CN100367300C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2006100195700A CN100367300C (en) 2006-07-07 2006-07-07 Characteristic selecting method based on artificial nerve network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2006100195700A CN100367300C (en) 2006-07-07 2006-07-07 Characteristic selecting method based on artificial nerve network

Publications (2)

Publication Number Publication Date
CN1945602A CN1945602A (en) 2007-04-11
CN100367300C true CN100367300C (en) 2008-02-06

Family

ID=38045000

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2006100195700A Active CN100367300C (en) 2006-07-07 2006-07-07 Characteristic selecting method based on artificial nerve network

Country Status (1)

Country Link
CN (1) CN100367300C (en)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7899253B2 (en) * 2006-09-08 2011-03-01 Mitsubishi Electric Research Laboratories, Inc. Detecting moving objects in video by classifying on riemannian manifolds
CN101079109B (en) * 2007-06-26 2011-11-30 北京中星微电子有限公司 Identity identification method and system based on uniform characteristic
CN101441728B (en) * 2007-11-21 2010-09-08 新乡市起重机厂有限公司 Neural network method of crane optimum design
CN101187649B (en) * 2007-12-12 2010-04-07 哈尔滨工业大学 Heterogeneous material diffusion welding interface defect automatic identification method
CN101510262B (en) * 2009-03-17 2012-05-23 江苏大学 Automatic measurement method for separated-out particles in steel and morphology classification method thereof
CN101882238B (en) * 2010-07-15 2012-02-22 长安大学 Wavelet neural network processor based on SOPC (System On a Programmable Chip)
CN102609612B (en) * 2011-12-31 2015-05-27 电子科技大学 Data fusion method for calibration of multi-parameter instruments
CN103425994B (en) * 2013-07-19 2016-09-21 淮阴工学院 A kind of feature selection approach for pattern classification
CN103606007B (en) * 2013-11-20 2016-11-16 广东省电信规划设计院有限公司 Target identification method based on Internet of Things and device
CN103759290A (en) * 2014-01-16 2014-04-30 广东电网公司电力科学研究院 Large coal-fired unit online monitoring and optimal control system and implementation method thereof
CN104504443A (en) * 2014-12-09 2015-04-08 河海大学 Feature selection method and device based on RBF (Radial Basis Function) neural network sensitivity
ES2714152T3 (en) 2015-01-28 2019-05-27 Google Llc Batch Normalization Layers
CN107480686B (en) * 2016-06-08 2021-03-30 阿里巴巴集团控股有限公司 Method and device for screening machine learning characteristics
CN106885228A (en) * 2017-02-10 2017-06-23 青岛高校信息产业股份有限公司 A kind of boiler coal-air ratio optimization method and system
CN107292387A (en) * 2017-05-31 2017-10-24 汪薇 A kind of method that honey quality is recognized based on BP
CN107707657B (en) * 2017-09-30 2021-08-06 苏州涟漪信息科技有限公司 Safety monitoring system based on multiple sensors
US11232344B2 (en) * 2017-10-31 2022-01-25 General Electric Company Multi-task feature selection neural networks
CN109754077B (en) * 2017-11-08 2022-05-06 杭州海康威视数字技术股份有限公司 Network model compression method and device of deep neural network and computer equipment
CN108229533A (en) * 2017-11-22 2018-06-29 深圳市商汤科技有限公司 Image processing method, model pruning method, device and equipment
CN109587248B (en) * 2018-12-06 2023-08-29 腾讯科技(深圳)有限公司 User identification method, device, server and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002061678A2 (en) * 2001-01-31 2002-08-08 Prediction Dynamics Limited Feature selection for neural networks
CN1383522A (en) * 2000-04-24 2002-12-04 国际遥距成象系统公司 Multi-neural net imaging appts. and method
CN1653486A (en) * 2002-02-27 2005-08-10 日本电气株式会社 Pattern feature selection method, classification method, judgment method, program, and device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1383522A (en) * 2000-04-24 2002-12-04 国际遥距成象系统公司 Multi-neural net imaging appts. and method
WO2002061678A2 (en) * 2001-01-31 2002-08-08 Prediction Dynamics Limited Feature selection for neural networks
CN1653486A (en) * 2002-02-27 2005-08-10 日本电气株式会社 Pattern feature selection method, classification method, judgment method, program, and device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Research on and improvement of the ART-2 neural network. Tang Hongwei, Sang Nong, Cao Zhiguo, Zhang Tianxu. Infrared and Laser Engineering, Vol. 33 No. 1, 2004 *
Feature selection with neural networks. Verikas A, Bacauskiene M. Pattern Recognition Letters, Vol. 23 No. 11, 2002 *
A structure-adaptive neural network feature selection method. Li Renpu, Wang Zhengou. Journal of Computer Research and Development, Vol. 39 No. 12, 2002 *

Also Published As

Publication number Publication date
CN1945602A (en) 2007-04-11

Similar Documents

Publication Publication Date Title
CN100367300C (en) Characteristic selecting method based on artificial nerve network
CN108171209B (en) Face age estimation method for metric learning based on convolutional neural network
CN108427921A (en) A kind of face identification method based on convolutional neural networks
CN107239514A (en) A kind of plants identification method and system based on convolutional neural networks
CN111414461A (en) Intelligent question-answering method and system fusing knowledge base and user modeling
Wu et al. Enhancing TripleGAN for semi-supervised conditional instance synthesis and classification
CN104634265B (en) A kind of mineral floating froth bed soft measurement method of thickness based on multiplex images Fusion Features
CN113011487B (en) Open set image classification method based on joint learning and knowledge migration
CN108446676A (en) Facial image age method of discrimination based on orderly coding and multilayer accidental projection
CN108228684A (en) Training method, device, electronic equipment and the computer storage media of Clustering Model
CN107609589A (en) A kind of feature learning method of complex behavior sequence data
CN111582450A (en) Neural network model training method based on parameter evaluation and related device
Tembusai et al. K-nearest neighbor with k-fold cross validation and analytic hierarchy process on data classification
CN111340187B (en) Network characterization method based on attention countermeasure mechanism
CN107944049A (en) A kind of film based on deep learning recommends method
CN115051929B (en) Network fault prediction method and device based on self-supervision target perception neural network
CN113674862A (en) Acute renal function injury onset prediction method based on machine learning
CN110289987B (en) Multi-agent system network anti-attack capability assessment method based on characterization learning
CN113361928B (en) Crowd-sourced task recommendation method based on heterogram attention network
Alkhairi et al. Effect of Gradient Descent With Momentum Backpropagation Training Function in Detecting Alphabet Letters
Yang et al. A structure optimization algorithm of neural networks for large-scale data sets
Cai et al. EST-NAS: An evolutionary strategy with gradient descent for neural architecture search
CN117079017A (en) Credible small sample image identification and classification method
Wang et al. Temporal dual-attributed network generation oriented community detection model
CN114818945A (en) Small sample image classification method and device integrating category adaptive metric learning

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: SHENZHEN RUIMING TECHNOLOGY CO., LTD.

Free format text: FORMER OWNER: HUAZHONG SCINECE AND TECHNOLOGY UNIV

Effective date: 20100804

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 430074 NO.1037, LUOYU ROAD, HONGSHAN DISTRICT, WUHAN CITY, HUBEI PROVINCE TO: 518057 5/F, BUILDING 3, SHENZHEN SOFTWARE PARK, KEJIZHONGER ROAD, NEW + HIGH-TECHNOLOGY ZONE, NANSHAN DISTRICT, SHENZHEN CITY, GUANGDONG PROVINCE

TR01 Transfer of patent right

Effective date of registration: 20100804

Address after: 518057, five floor, building three, Shenzhen Software Park, two road, Nanshan District hi tech Zone, Shenzhen, Guangdong

Patentee after: Shenzhen Streaming Video Technology Co., Ltd.

Address before: 430074 Hubei Province, Wuhan city Hongshan District Luoyu Road No. 1037

Patentee before: Huazhong University of Science and Technology

PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Characteristic selecting method based on artificial nerve network

Effective date of registration: 20130110

Granted publication date: 20080206

Pledgee: Shenzhen SME credit financing guarantee Group Co., Ltd.

Pledgor: Shenzhen Streaming Video Technology Co., Ltd.

Registration number: 2013990000024

PLDC Enforcement, change and cancellation of contracts on pledge of patent right or utility model
PC01 Cancellation of the registration of the contract for pledge of patent right

Date of cancellation: 20140318

Granted publication date: 20080206

Pledgee: Shenzhen SME credit financing guarantee Group Co., Ltd.

Pledgor: Shenzhen Streaming Video Technology Co., Ltd.

Registration number: 2013990000024

PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Characteristic selecting method based on artificial nerve network

Effective date of registration: 20140318

Granted publication date: 20080206

Pledgee: Shenzhen SME credit financing guarantee Group Co., Ltd.

Pledgor: Shenzhen Streaming Video Technology Co., Ltd.

Registration number: 2014990000174

PLDC Enforcement, change and cancellation of contracts on pledge of patent right or utility model
PC01 Cancellation of the registration of the contract for pledge of patent right

Date of cancellation: 20150528

Granted publication date: 20080206

Pledgee: Shenzhen SME credit financing guarantee Group Co., Ltd.

Pledgor: Shenzhen Streaming Video Technology Co., Ltd.

Registration number: 2014990000174

PLDC Enforcement, change and cancellation of contracts on pledge of patent right or utility model
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Characteristic selecting method based on artificial nerve network

Effective date of registration: 20150603

Granted publication date: 20080206

Pledgee: Shenzhen SME financing Company limited by guarantee

Pledgor: Shenzhen Streaming Video Technology Co., Ltd.

Registration number: 2015990000430

PLDC Enforcement, change and cancellation of contracts on pledge of patent right or utility model
C56 Change in the name or address of the patentee

Owner name: SHENZHEN STREAMAX TECHNOLOGY CO., LTD.

Free format text: FORMER NAME: SHENZHEN RUIMING TECHNOLOGY CO., LTD.

CP03 Change of name, title or address

Address after: Nanshan District Xueyuan Road in Shenzhen city of Guangdong province 518000 No. 1001 Nanshan Chi Park building B1 building 21-23

Patentee after: STREAMAX TECHNOLOGY CO., LTD.

Address before: 518057, five floor, building three, Shenzhen Software Park, two road, Nanshan District hi tech Zone, Shenzhen, Guangdong

Patentee before: Shenzhen Streaming Video Technology Co., Ltd.

DD01 Delivery of document by public notice

Addressee: Chen Dan

Document name: Notification of Passing Examination on Formalities

PC01 Cancellation of the registration of the contract for pledge of patent right

Date of cancellation: 20160718

Granted publication date: 20080206

Pledgee: Shenzhen SME financing Company limited by guarantee

Pledgor: STREAMAX TECHNOLOGY CO., LTD.

Registration number: 2015990000430

PLDC Enforcement, change and cancellation of contracts on pledge of patent right or utility model
PM01 Change of the registration of the contract for pledge of patent right

Change date: 20160718

Registration number: 2015990000430

Pledgor after: STREAMAX TECHNOLOGY CO., LTD.

Pledgor before: Shenzhen Streaming Video Technology Co., Ltd.