CN1945602A - Feature selection method based on an artificial neural network - Google Patents
- Publication number: CN1945602A
- Application number: CN200610019570A (CNA2006100195700A)
- Authority
- CN
- China
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The invention discloses a feature selection method based on an artificial neural network, comprising: (1) the user specifies all features among which selection is to be performed and provides samples for training the artificial neural network; (2) the number of fuzzy membership functions is chosen, and the number of nodes in each layer of the network, the connection weights between layers, and the initial values of the fuzzy membership function parameters are set; (3) the network is trained with the backpropagation algorithm in batch mode, adjusting the connection weights and the parameters of the fuzzy membership functions; (4) an importance measure is computed for every feature and the features are ranked. The method largely avoids the difficult problem of data normalization; the computation is simple and the network needs to be trained only once; and it is easy to combine with various search algorithms to form a complete feature selection system. The invention has been successfully applied to pattern recognition and object classification with multi-dimensional features, and can also be applied to pattern recognition fields involving numeric features.
Description
Technical Field
The invention belongs to the field of pattern recognition and relates to a feature selection method, in particular to a feature selection method based on an artificial neural network.
Background Art
Feature selection is an important topic in pattern recognition, because the complexity of pattern recognition algorithms tends to grow exponentially with the dimensionality of the data. If the dimensionality is not reduced, the classifier becomes extremely large and the computational cost of classification becomes prohibitive. Selecting the important features and thereby reducing the dimensionality of the data is therefore an indispensable step. Moreover, the features used by most current pattern recognition algorithms are extracted automatically by machine, so redundant and noisy features are unavoidable; feature selection can effectively eliminate this problem.
Feature selection is the process of choosing a subset from the set of all features without reducing, or only slightly reducing, the recognition rate of the classifier. The key issue in feature selection is the criterion used to measure the importance of a feature. Traditional criteria, such as distance-based, information-based (or uncertainty-based) and dependency-based measures, focus on analyzing properties of the data, and in practice such methods are not very satisfactory. With progress in artificial intelligence, feature measurement methods based on techniques such as artificial neural networks and fuzzy mathematics have been proposed. Methods of this kind are based on the classification error rate, i.e. the importance of a feature is measured by its contribution to the classification error rate, which makes them more effective than the earlier methods. In practice, most of them use artificial neural networks to perform the feature selection.
Feature selection based on an artificial neural network can be regarded as a special case of a pruning algorithm, in which nodes of the input layer, rather than nodes or weights of the hidden layer, are pruned; see reference 1: Reed R. Pruning Algorithms - A Survey. IEEE Transactions on Neural Networks, 1993, 4(5): 740-746. A common idea is to use the change of the network's output values before and after pruning as a sensitivity measure of a feature; see reference 2: Verikas A, Bacauskiene M. Feature selection with neural networks. Pattern Recognition Letters, 2002, 23(11): 1323-1335. The underlying assumption is that, for a well-trained neural network, the more important a feature is, the larger (i.e. the more sensitive) the change of the corresponding output value when that feature changes, and vice versa. Feature selection methods based on a sensitivity measure reflect this assumption most directly and accurately; see reference 3: Ruck D W, Rogers S K and Kabrisky M. Feature selection using a multilayer perceptron. Journal of Neural Network Computing, 1990, 9(1): 40-48.
When the importance of a particular feature is examined, the change of the network output before and after deleting that feature is used as the feature measure. Deleting a feature means fixing its observed value in the samples to zero; see reference 4: De R K, Basak J and Pal S K. Neuro-Fuzzy Feature Evaluation With Theoretical Analysis. Neural Networks, 1999, 12(10): 1429-1455. This approach requires the data to be normalized first, which may corrupt the data. To avoid the normalization problem, a fuzzy mapping layer can be added to the artificial neural network. This layer maps every feature in a one-to-many fashion; the new features after the mapping, i.e. the fuzzy features, have [0, 1] as their domain of definition, so the normalization problem is avoided; see reference 5: Jia P and Sang N. Feature selection using a radial basis function network and fuzzy set theoretic measures. In: Proceedings of SPIE 5281(1) - the Third International Symposium on Multispectral Image Processing and Pattern Recognition, Beijing, China: The International Society of Optical Engineering Press, 2003. 109-114. In that method, however, the fuzzy membership functions are obtained before the neural network is trained, and they still depend on the first- and second-order moments of the data, which in fact suffers from the same problem as the normalization of reference 4. Indeed, the fuzzy mapping layer proposed in reference 5 could be separated from the network entirely and used as a normalization method for preprocessing the data.
Summary of the Invention
The object of the present invention is to provide a feature selection method based on an artificial neural network that avoids the difficult problem of data normalization, is highly robust, and handles noisy and redundant features well.
The feature selection method based on an artificial neural network provided by the invention comprises the following steps:
(1) The user specifies the features f_i, i = 1, ..., N, among which selection is to be performed, and provides the training sample set for training the artificial neural network.
The training samples all have the same dimension R, with R = N, and fall into K classes ω_1, ..., ω_K; the i-th component x_qi of the q-th training sample x_q is the q-th observed value of the specified i-th feature f_i.
(2) According to the training samples, construct an artificial neural network consisting, in order, of an input layer, a fuzzy mapping layer, a hidden layer and an output layer. Data enter the network through the input layer and are passed to the fuzzy mapping layer through the connection weights w^2; after the fuzzy mapping layer acts on them they are passed to the hidden layer through the connection weights w^3, and after the hidden layer acts on them they are passed through the connection weights w^4 to the output layer, which produces the output (m = 2, 3, 4 indexes the weight matrices w^m).
(3) Train the initialized artificial neural network with the training sample set given by the user, as follows:
(3.1) Choose the estimator e of the mean squared error as the performance index of the learning process.
Here t_i^m(q) is the target value of the output of node i of layer m when the q-th sample is input, a_i^m(q) is the corresponding actual output, and G is the number of nodes in that layer.
(3.2) Train the connection weight matrices w^m between the layers of the artificial neural network with the backpropagation algorithm, where m = 3, 4.
(3.3) Update the parameters ξ, σ and τ of the action functions of the fuzzy mapping layer nodes.
(3.4) When e satisfies the convergence condition, go to step (4); otherwise repeat steps (3.2)-(3.3).
(4) Use the trained artificial neural network to apply fuzzy pruning to the features, compute the importance measure of every feature, and sort the features by the value of that measure.
The invention only requires the user to supply the original feature set and the training samples, and from these it obtains a ranking of the importance of all features in the original feature set for classification. Compared with existing feature selection methods, the method of the invention has the following advantages: it largely avoids the difficult problem of data normalization; the computation is simple and the neural network needs to be trained only once; and it is easy to combine with various search algorithms to form a complete feature selection system. The invention has been successfully applied to pattern recognition and object classification with multi-dimensional features, and can also be applied to pattern recognition fields involving numeric features.
Brief Description of the Drawings
Fig. 1 is a flow chart of the feature selection method based on an artificial neural network with an adaptive fuzzy mapping layer;
Fig. 2 is a schematic diagram of the structure of an artificial neural network with an adaptive fuzzy mapping layer;
Fig. 3 is a schematic diagram of the structure of the artificial neural network with an adaptive fuzzy mapping layer built in the example;
Fig. 4 shows the fuzzy membership functions (initial values) of the feature Sepal length.
Detailed Description of the Embodiments
The feature selection method of the invention starts the feature selection process once the user has provided the training data set and the set of features among which selection is to be performed. The feature selection procedure is described in detail below.
Performing feature selection means obtaining a measure of the importance of the features. In the feature selection method proposed here, the data set provided by the user is used to train an artificial neural network with a fuzzy mapping layer, and the trained network is then used to compute an importance measure for each feature, thereby achieving feature selection. As shown in Fig. 1, the method comprises the following steps:
(1) The user specifies the features f_i (i = 1, ..., N) among which selection is to be performed and provides the training samples for the artificial neural network.
(1.1) Specifying the features
The specified features must be numeric features that directly reflect an actual physical or geometric property of the object, such as weight, speed or length. The number of features N is a natural number; that is, there may be one or more features.
(1.2) Requirements on the training samples
The training samples used to train the artificial neural network are also numeric. All samples have the same dimension R (R = N) and fall into K classes ω_1, ..., ω_K. The dimension R equals the number of features specified in step (1.1). The i-th component x_qi of the q-th training sample x_q is the q-th observed value of the specified i-th feature f_i. Mathematically, the training sample set X consists of the samples x_q (q = 1, ..., Q), each a real vector of dimension R, where Q is the number of training samples, Q ≥ K, and every class ω_l (l = 1, ..., K) contains at least one sample; R equals the number of features N of the training sample set X.
(2) According to the training samples, construct and initialize an artificial neural network consisting of a feature layer A, a fuzzy mapping layer B, a hidden layer C and an output layer D.
As shown in Fig. 2, the network consists of the input layer A (the feature layer), the fuzzy mapping layer B, the hidden layer C and the output layer D, connected by the weight matrices w^m (m = 2, 3, 4). Data enter the network through the input layer and are passed through connection weights to the fuzzy mapping layer; after the fuzzy mapping layer acts on them they are passed through connection weights to the hidden layer, and after the hidden layer acts on them they are passed through connection weights to the output layer, which produces the output. Constructing a network with a fuzzy mapping layer requires setting the numbers of nodes of the input (feature) layer, the hidden layer and the output layer, determining the number m_i of fuzzy membership functions associated with each feature f_i, and defining those membership functions. Initialization consists of choosing initial values for the connection weights between the layers and for the parameters of the fuzzy membership functions inside each node of the fuzzy mapping layer.
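The layer-sizing and initialization rules just described can be sketched as follows. The per-feature membership counts m_i, the hidden-layer size and the random initialization scale are illustrative assumptions; only S_1 = R and S_4 = K are fixed by the text.

```python
import numpy as np

# Example dimensions (IRIS-like): R = 4 features, K = 3 classes.
R, K = 4, 3
m_i = [3, 3, 3, 3]   # membership functions per feature -- an assumed choice

S1 = R               # input (feature) layer A: one node per feature
S2 = sum(m_i)        # fuzzy mapping layer B: one node per membership function
S3 = K + 2           # hidden layer C: any size >= K (step 2.3.1); 5 is arbitrary
S4 = K               # output layer D: one node per class

rng = np.random.default_rng(0)
w3 = 0.1 * rng.standard_normal((S2, S3))  # B -> C weights, trained
w4 = 0.1 * rng.standard_normal((S3, S4))  # C -> D weights, trained
# w2 (A -> B) is fixed at 1 and never trained (step 2.2.2).
```

With R = 4 and three membership functions per feature, the fuzzy mapping layer has S_2 = 12 nodes, one per membership function.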
The details are as follows:
(2.1) Input layer A
(2.1.1) Number of input layer nodes
The number of nodes S_1 of the input layer A equals the dimension R of the training samples.
(2.1.2) Input and output of the input layer nodes
Each node receives one component of the training sample. When the network is given the q-th sample, the input of node A_i of the input layer is x_qi, and the node's output equals its input.
(2.2) Fuzzy mapping layer B
(2.2.1) Number of fuzzy membership functions per feature
For a feature f_i, the m_i fuzzy membership functions associated with f_i can be defined according to its concrete physical meaning; each fuzzy membership function constitutes one node of the fuzzy mapping layer. In other words, the number of nodes S_2 of the fuzzy mapping layer B equals the total number of membership functions, the choice of the m_i being subject to a condition involving Q_min = min{Q_l}, where Q_l denotes the number of training samples belonging to class ω_l among the samples given by the user.
(2.2.2) Connection weights between the input layer and the fuzzy mapping layer
Node A_i of the input layer is connected to nodes B_i1, ..., B_im_i of the fuzzy mapping layer, and the nodes B_i1, ..., B_im_i are not connected to any input layer node other than A_i; this is the so-called one-to-many connection scheme. The connection weight between node A_i of the feature layer A and node B_ij of the fuzzy mapping layer is fixed at 1; that is, the weight matrix w^2 between the feature layer A and the fuzzy mapping layer B does not take part in the training of the artificial neural network.
(2.2.3) Input of node B_ij of the fuzzy mapping layer
When the network is given the q-th sample, the input n_ij^2(q) of node B_ij of the fuzzy mapping layer is the output of the corresponding input layer node A_i, i.e. x_qi.
(2.2.4) Action function of node B_ij of the fuzzy mapping layer
The action function of node B_ij of the fuzzy mapping layer is the fuzzy membership function μ_ij, i.e. the j-th membership function of feature f_i. In the present invention, giving a fuzzy membership function of the i-th feature f_i means giving a mapping μ_i: f_i → [0, 1].
The fuzzy membership function of node B_ij is parameterized by three quantities. Here n_ij^2(q) denotes the input of node B_ij when the q-th sample is presented and a_ij^2(q) the corresponding actual output. ξ_ij is the expectation of the class-conditional probability density of node B_ij, σ_ij is the standard deviation of that class-conditional probability density, and τ_ij is an additional parameter of node B_ij. The role of τ is that even when two membership functions have equal ξ and σ, adjusting τ still prevents the two functions from being identical.
There is no particular restriction on the initialization of σ_ij and τ_ij; ξ_ij is generally initialized by random selection over the value range of the corresponding feature f_i.
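The exact parametric form of μ_ij is not reproduced here, so the following Gaussian-style function is only a hypothetical stand-in chosen to exhibit the three parameters ξ (center), σ (width) and τ (shape) while mapping any input into [0, 1]:

```python
import numpy as np

def mu(x, xi, sigma, tau):
    """Illustrative membership function for a node B_ij.

    The precise formula differs in the source; this Gaussian-style form is
    an assumption that shows the roles of the parameters: xi is the center
    (class-conditional mean), sigma the width (standard deviation), and tau
    a shape exponent that keeps two functions distinct even when their xi
    and sigma coincide, as the text explains."""
    return np.exp(-tau * (x - xi) ** 2 / (2.0 * sigma ** 2))
```

Two functions with identical ξ and σ but different τ give different values away from the center, which is exactly the separating role the text assigns to τ.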
(2.3) Hidden layer C
(2.3.1) Number of hidden layer nodes
There is no particular requirement on the number of nodes S_3 of the hidden layer C; in general it only needs to be no smaller than the number of classes K of the training samples.
(2.3.2) Connection weights between the fuzzy mapping layer and the hidden layer
The fuzzy mapping layer B and the hidden layer C are fully connected; that is, every node of the fuzzy mapping layer B is connected to all nodes of the hidden layer C, and every node of the hidden layer C is connected to all nodes of the fuzzy mapping layer B. The connection weight matrix between layers B and C is w^3.
(2.3.3) Input of the hidden layer nodes
When the network is given the q-th sample, the input of node C_u (u = 1, ..., S_3) of the hidden layer is the weighted sum of the fuzzy mapping layer outputs, where a_p^2(q) is the output of node B_p (p = 1, ..., S_2) of the fuzzy mapping layer for the q-th input sample and w_pu^3 is the connection weight between node B_p of the fuzzy mapping layer and node C_u of the hidden layer.
(2.3.4) Action function of the hidden layer nodes
The action function of the hidden layer nodes is chosen to be the sigmoid function, where n_u^3(q) is the input of node C_u of the hidden layer when the network is given the q-th sample and a_u^3(q) is the corresponding output.
Alternatively, the hyperbolic tangent function may be chosen, with the same meaning of n_u^3(q) and a_u^3(q).
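In plain Python, the two candidate action functions named above read as follows (the standard forms of the sigmoid and hyperbolic tangent are assumed):

```python
import math

def sigmoid(n):
    # Sigmoid action function of the hidden-layer nodes (step 2.3.4)
    return 1.0 / (1.0 + math.exp(-n))

def tanh_act(n):
    # Alternative hyperbolic-tangent action function mentioned in the text
    return math.tanh(n)
```

The sigmoid maps any input into (0, 1), the hyperbolic tangent into (-1, 1); both are centered at input 0.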
(2.4) Output layer D
(2.4.1) Number of output layer nodes
The number of nodes S_4 of the output layer D equals the number of classes K of the training samples.
(2.4.2) Connection weights between the hidden layer and the output layer
The hidden layer C and the output layer D are fully connected; that is, every node of the hidden layer C is connected to all nodes of the output layer D, and every node of the output layer D is connected to all nodes of the hidden layer C. The connection weight matrix between layers C and D is w^4.
(2.4.3) Input and output of the output layer nodes
The input and the output of an output layer node D_l (l = 1, ..., S_4) are equal; the input is the weighted sum of the hidden layer outputs, where w_ul^4 is the connection weight between node C_u of the hidden layer and node D_l of the output layer. The output value n_l^4(q) of D_l is the probability that the q-th input sample belongs to class ω_l.
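Putting steps (2.3.3) to (2.4.3) together, a forward pass from the fuzzy-mapping outputs to the class outputs can be sketched as below; the toy shapes and weight values are arbitrary.

```python
import numpy as np

def forward_from_fuzzy(a2, w3, w4):
    """Sketch of the forward pass of steps (2.3.3)-(2.4.3), starting from
    the fuzzy-mapping-layer outputs a2 (length S2): hidden nodes apply a
    sigmoid to their weighted-sum input; output nodes simply pass their
    weighted-sum input through, giving the class scores n^4."""
    n3 = a2 @ w3                       # hidden-layer inputs (weighted sums)
    a3 = 1.0 / (1.0 + np.exp(-n3))     # sigmoid action function
    return a3 @ w4                     # output layer: output equals input

a2 = np.full(6, 0.5)                   # example fuzzy outputs (S2 = 6)
w3 = np.zeros((6, 4))                  # S3 = 4 hidden nodes
w4 = np.ones((4, 3)) / 4.0             # S4 = 3 classes
out = forward_from_fuzzy(a2, w3, w4)   # every hidden output is 0.5 here
```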
(3) Train the initialized artificial neural network with the training sample set given by the user.
The network is trained with the backpropagation algorithm in batch mode on the training sample set given by the user; in every training pass the connection weights between the layers and the parameters of the fuzzy membership functions are updated, until the network satisfies the convergence condition set by the user.
The training method is as follows.
(3.1) Choice of the convergence condition
First, the estimator e of the mean squared error is chosen as the performance index of the learning process, where t_i^m(q) is the target value of the output of node i of layer m when the q-th sample is input, a_i^m(q) is the corresponding actual output, and G is the number of nodes in that layer.
Depending on the required computational accuracy, the user sets as convergence condition that e be smaller than some small positive number. For example, with e < 0.001 as the convergence condition, the network computes the value of e after completing steps (3.2) and (3.3) of a training pass; if e is smaller than 0.001, training stops, otherwise the next pass is performed.
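A minimal sketch of the performance index and the convergence test; the averaging used in mse_estimate is an assumed normalization, since only the role of e is fixed by the text:

```python
import numpy as np

def mse_estimate(t, a):
    """Performance index e of step (3.1): an estimate of the mean squared
    error between target outputs t and actual outputs a over all samples
    and nodes (averaging over both is an assumption)."""
    t = np.asarray(t, dtype=float)
    a = np.asarray(a, dtype=float)
    return float(np.mean((t - a) ** 2))

def converged(e, tol=1e-3):
    # The example convergence condition from the text: stop once e < 0.001.
    return e < tol
```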
(3.2) Updating the connection weights between layers
The connection weights between the input layer A and the fuzzy mapping layer B are fixed at 1 and do not take part in training. The connection weights w^3 between the fuzzy mapping layer B and the hidden layer C and the connection weights w^4 between the hidden layer C and the output layer D both take part in training, and w^3 and w^4 are updated in the same way.
In the backpropagation algorithm, the sensitivity g^m of the mean squared error estimator e with respect to the input of layer m is defined as the partial derivative of e with respect to n^m. Here S_m is the number of nodes of layer m and n^m is an S_m × Q matrix holding the inputs of layer m; n_i^m(q) denotes the input of node i of layer m when the network is given the q-th sample.
The connection weights are updated by the steepest descent method; other minimization algorithms, such as the conjugate gradient method, may also be used here. The connection weight matrix w^m between layer m and layer m-1 (m = 3, 4), of dimension S_{m-1} × S_m, is updated at the beginning of the (r+1)-th training pass as
w^m(r+1) = w^m(r) - α g^m (a^{m-1})^T,
where α is the weight learning rate, with 0 < α ≤ 1 and typically α = 0.05; r is the index of the training pass; and a^m is an S_m × Q matrix holding the actual outputs of layer m.
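The steepest-descent update above can be sketched as follows. Since the text stores w^m as an S_{m-1} × S_m matrix while the product g^m (a^{m-1})^T has the transposed shape, the sketch transposes the correction; this is a bookkeeping assumption, not a change of the rule.

```python
import numpy as np

def update_weights(w, g, a_prev, alpha=0.05):
    """Steepest-descent update of step (3.2),
        w(r+1) = w(r) - alpha * g^m (a^{m-1})^T,
    with g the sensitivity matrix of shape (S_m, Q) and a_prev the previous
    layer's output matrix of shape (S_{m-1}, Q). The correction is
    transposed so its shape matches w, stored as (S_{m-1}, S_m)."""
    return w - alpha * (g @ a_prev.T).T
```

With α = 0.05 (the suggested value), all-ones sensitivities and previous-layer outputs over four samples move every weight down by 0.2 per pass.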
(3.3) Updating the parameters ξ, σ, τ of the action functions of the fuzzy mapping layer nodes
The three parameters ξ_p, σ_p, τ_p of the action function of node B_p (p = 1, ..., S_2) of the fuzzy mapping layer B are updated by analogous gradient rules, where θ is the learning rate of ξ_p, ρ is the learning rate of τ_p, and a separate learning rate governs σ_p; these learning rates are chosen by parameter selection methods such as trial and error.
In the update rules, the p-th row of the output matrix a^2 of the fuzzy mapping layer B (for all Q input samples) appears, and a_i^1(q) denotes the output of the input layer node A_i connected to node B_p when the network is given the q-th sample, i.e. x_qi.
(3.4) Termination of training
The artificial neural network performs steps (3.2) and (3.3) in every training pass. After each pass the value of e is computed; if it satisfies the convergence condition set in step (3.1), training stops, otherwise the next pass is performed.
(4) Use the trained artificial neural network to apply fuzzy pruning to the features, compute the importance measure of every feature, and sort the features.
(4.1) Fuzzy pruning of feature f_i
Fuzzy pruning of a feature f_i means setting the output values of all fuzzy membership functions associated with f_i to 0.5, i.e. forcing the corresponding outputs of the fuzzy mapping layer to the constant 0.5.
The output vector a^4(x_q, i) produced by the output layer for an input sample x_q under this condition is then recorded.
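Fuzzy pruning itself touches only the fuzzy-mapping-layer outputs. A sketch follows; the node_index bookkeeping that maps each feature to its membership-function nodes is an assumed convention, since the text only says "all membership functions of f_i".

```python
import numpy as np

def fuzzy_prune(a2, node_index, i):
    """Fuzzy pruning of feature f_i (step 4.1): clamp the outputs of all
    fuzzy-mapping-layer nodes belonging to f_i to 0.5 and leave every
    other output unchanged. node_index[i] holds the indices of the nodes
    associated with feature i (an assumed bookkeeping convention)."""
    pruned = np.array(a2, dtype=float)
    pruned[node_index[i]] = 0.5
    return pruned
```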
(4.2) Computing the importance measure FQJ(i) of the features
The feature measure FQJ(i) proposed by the invention expresses the importance of the i-th feature f_i for classification: the larger the value FQJ(i) of a feature f_i, the more important that feature is for classification. In its definition, a^4(x_q) denotes the output vector produced by the output layer of the network for the input sample x_q, and a^4(x_q, i) denotes the output vector produced for x_q by the network after fuzzy pruning of feature f_i. Using the network trained in step (3), FQJ(i) is computed for every feature f_i given by the user in step (1.1); the value FQJ(i) of feature f_i is its importance measure.
(4.3) Sorting all features f_i by their importance measure FQJ(i)
Arranging all features f_i in descending order of their FQJ(i) values yields the ranking of the importance of all features for classification. According to practical needs or objective constraints, the user can pick one or more of the top-ranked features for recognition, which accomplishes the feature selection.
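The defining formula of FQJ(i) is not reproduced above, so the squared-distance form below, summing ||a^4(x_q) - a^4(x_q, i)||^2 over all samples, is an assumption consistent with the stated behaviour: the more the output changes under fuzzy pruning of f_i, the larger FQJ(i).

```python
import numpy as np

def fqj_scores(full_out, pruned_out):
    """Assumed importance measure FQJ(i) of step (4.2): for each feature i,
    sum the squared distance between the normal output a^4(x_q) and the
    fuzzy-pruned output a^4(x_q, i) over all Q samples.

    full_out:   (Q, S4) outputs of the trained network
    pruned_out: (N, Q, S4) outputs with feature i fuzzy-pruned, per i"""
    full = np.asarray(full_out, dtype=float)
    d = np.asarray(pruned_out, dtype=float) - full[None, :, :]
    return np.sum(d ** 2, axis=(1, 2))

def rank_features(scores):
    # Step (4.3): feature indices in descending order of FQJ
    return list(np.argsort(np.asarray(scores))[::-1])
```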
Example:
The user wishes to examine the importance of the following four features for the classification task: Sepal length, Sepal width, Petal length, and Petal width, and provides the IRIS data set as training samples. The IRIS data set has been used by many researchers in pattern recognition and has become a benchmark. It contains 3 classes with 50 samples per class; each sample has the 4 features Sepal length, Sepal width, Petal length, and Petal width.
The specific steps of feature selection are as follows:
(1) The user specifies the features fi (i=1, ..., N) for feature selection and provides the training samples used to train the artificial neural network.
(1.1) Specification of features
The four features specified by the user, Sepal length, Sepal width, Petal length, and Petal width, are all numerical features, so N=4.
(1.2) Provision of training samples
The training samples given by the user fall into 3 classes: Iris Setosa, Iris Versicolor, and Iris Virginica, i.e., K=3. Each class has 50 samples, 150 samples in total, i.e., Q=150. Each sample has the 4-dimensional features Sepal length, Sepal width, Petal length, and Petal width, so the sample dimension is R=N=4.
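The sample layout above (Q=150 samples of dimension R=4 in K=3 classes) can be set up with one-hot targets for the K output-layer nodes; this is a minimal sketch, and the feature values themselves are omitted.

```python
import numpy as np

# Mirrors the IRIS example: K=3 classes, 50 samples each, R=4 features.
K, per_class, R = 3, 50, 4
labels = np.repeat(np.arange(K), per_class)   # class index of each of the Q samples
targets = np.eye(K)[labels]                   # Q x K one-hot target matrix
print(targets.shape)                          # (150, 3)
```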
(2) According to the training samples, construct and initialize an artificial neural network consisting of input layer A, fuzzy mapping layer B, hidden layer C, and output layer D.
(2.1) Construct input layer A
(2.1.1) Selection of the number of input-layer nodes
The number of nodes S1 of input layer A equals the dimension R of the training samples, i.e., S1=4.
(2.2) Construct fuzzy mapping layer B
(2.2.1) Selection of the number of fuzzy membership functions for each feature
Three fuzzy membership functions are defined for each feature, i.e., m1=m2=m3=m4=3, so the number of nodes in the fuzzy mapping layer is S2=m1+m2+m3+m4=12.
(2.2.2) Connection weights between the input layer and the fuzzy mapping layer
Node A1 of the input layer is connected by weights only to nodes B11, B12, B13 of the fuzzy mapping layer; node A2 only to nodes B21, B22, B23; node A3 only to nodes B31, B32, B33; and node A4 only to nodes B41, B42, B43.
(2.2.3) Select the action function of the fuzzy-mapping-layer nodes
Choose the fuzzy membership function of node Bij:
The initial value of the parameter ξij of the membership function is generally chosen at random within the value range of feature fi. Taking the feature Sepal length as an example, its value range is [4.3, 7.9], so for the 3 fuzzy membership functions corresponding to f1 the initial values of ξ may be chosen as ξ11=5.2, ξ12=6.1, ξ13=7.0. σ can be set to σ11=σ12=σ13=0.45 and τ to τ11=τ12=τ13=2; the resulting membership functions are shown in Figure 4.
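The membership-function formula for node Bij is an equation in the patent that is not reproduced here; a generalized bell function with center ξ, width σ, and shape parameter τ matches the three parameters and is used below as an assumed form.

```python
# Generalized bell membership function -- one common three-parameter choice;
# the exact form used by the patent is given by its formula for B_ij.
def bell_membership(x, xi, sigma, tau):
    return 1.0 / (1.0 + abs((x - xi) / sigma) ** (2 * tau))

# Parameters for f1 (Sepal length) from the example: xi=5.2, sigma=0.45, tau=2
print(bell_membership(5.2, 5.2, 0.45, 2))   # 1.0 at the center
```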
(2.3) Hidden layer C
(2.3.1) Selection of the number of hidden-layer nodes
Based on experience, choose S3=6.
(2.3.2) Connection weights between the fuzzy mapping layer and the hidden layer
The connection weights between fuzzy mapping layer B and hidden layer C:
(2.3.3) Select the action function of the hidden-layer nodes
The action function of the hidden-layer nodes is chosen as the Sigmoid function au3(q) = 1/(1 + exp(−nu3(q))),
where nu3(q) is the input of node Cu of the hidden layer when the q-th sample is fed to the network, and au3(q) is the corresponding output.
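A minimal sketch of the Sigmoid action function:

```python
import math

def sigmoid(n):
    """Sigmoid action function of the hidden-layer nodes: a = 1/(1 + exp(-n))."""
    return 1.0 / (1.0 + math.exp(-n))

print(sigmoid(0.0))   # 0.5
```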
(2.4) Output layer D
(2.4.1) Selection of the number of output-layer nodes
The number of nodes S4 of output layer D equals the number of classes K of the training samples, i.e., S4=K=3.
(2.4.2) Connection weights between the hidden layer and the output layer
The connection weights between hidden layer C and output layer D:
At this point, the artificial neural network with a fuzzy mapping layer has been constructed; its structure is shown in Figure 3.
(3) Train the initialized artificial neural network using the training sample set given by the user.
(3.1) Selection of the convergence condition
Set e < 0.001 as the convergence condition.
(3.2) Update of the connection weights between layers
Based on experience, choose the weight learning rate α=0.05.
Following the steepest descent method, the connection weight matrix wm between the m-th and (m−1)-th layers of the artificial neural network (m=3, 4; dimension Sm × Sm−1) is updated at the start of the (r+1)-th training iteration as
wm(r+1) = wm(r) − 0.05 gm (am−1)T,
where
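The update rule above can be sketched with NumPy. The shapes of the sensitivity vector gm and of the previous layer's output am−1 are illustrative, taken from this example (S3=6 hidden nodes, S2=12 fuzzy-mapping nodes for m=3); the values of g and a2 here are placeholders, not computed gradients.

```python
import numpy as np

def update_weights(w, g, a_prev, alpha=0.05):
    """Steepest-descent update w(r+1) = w(r) - alpha * g * a_prev^T,
    where g is the layer's sensitivity vector and a_prev is the
    previous layer's output vector."""
    return w - alpha * np.outer(g, a_prev)

w3 = np.zeros((6, 12))        # weights between fuzzy mapping layer (12) and hidden layer (6)
g = np.ones(6)                # placeholder sensitivities
a2 = np.full(12, 0.5)         # placeholder fuzzy-mapping-layer outputs
w3_new = update_weights(w3, g, a2)
print(w3_new.shape)           # (6, 12)
```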
(3.3) Update of the parameters ξ, σ, τ of the action functions of the fuzzy-mapping-layer nodes
The learning rates of the parameters are chosen as θ=0.1 and ρ=0.1.
The three parameters ξp, σp, τp of the action function of node Bp (p=1, ..., S2) of fuzzy mapping layer B are updated with the following formulas:
where pa2 is the p-th row of the output matrix a2 of fuzzy mapping layer B when the Q samples are input to the artificial neural network.
(3.4) Termination of training
After the 1037th training iteration, the computed error is e=0.000999, which satisfies the convergence condition e < 0.001, so training is terminated.
(4) Use the trained artificial neural network to perform fuzzy pruning on the features, compute the importance measure of each feature, and sort the features.
(4.1) Perform fuzzy pruning on feature fi
Taking the feature Sepal length as an example, pruning f1 means setting the output values of nodes B11, B12, B13 of the fuzzy mapping layer to 0.5. For example, suppose the observed value of Sepal length is 5.1, so that before pruning the outputs of nodes B11, B12, B13 are [0.117, 0.005, 0.009]; the observed value of Sepal width is 3.5, with outputs of nodes B21, B22, B23 of [0.100, 0.500, 0.500]; the observed value of Petal length is 1.4, with outputs of nodes B31, B32, B33 of [0.141, 0.974, 0.028]; and the observed value of Petal width is 0.2, with outputs of nodes B41, B42, B43 of [0.265, 0.069, 0.030]. The output of the fuzzy mapping layer for the sample [5.1, 3.5, 1.4, 0.2] before pruning is therefore
[0.117, 0.005, 0.009, 0.100, 0.500, 0.500, 0.141, 0.974, 0.028, 0.265, 0.069, 0.030].
To perform pruning, this output is modified to
[0.500, 0.500, 0.500, 0.100, 0.500, 0.500, 0.141, 0.974, 0.028, 0.265, 0.069, 0.030].
Then, compute the output vector a4(xq, 1) given by the output layer of the artificial neural network for input sample xq after this modification. Pruning of the other features proceeds analogously.
(4.2) Calculate the importance measure FQJ(i) of each feature
Again taking the feature Sepal length as an example, compute FQJ(1) for f1:
Similarly, FQJ(2)=0.095858, FQJ(3)=0.491984, and FQJ(4)=0.511002 are obtained.
Sorting all features fi in descending order of their FQJ(i) values gives the following importance ranking of the features for the classification task: Petal width, Petal length, Sepal width, Sepal length.
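The final ranking can be reproduced directly from the FQJ values. FQJ(1) for Sepal length is not legible in the source, so the value used below is a hypothetical placeholder chosen only to be consistent with the reported ordering.

```python
# FQJ values from the example; 0.05 for Sepal length is a hypothetical
# placeholder (the true FQJ(1) is not given in the extracted text).
fqj_values = {"Sepal length": 0.05, "Sepal width": 0.095858,
              "Petal length": 0.491984, "Petal width": 0.511002}
ranking = sorted(fqj_values, key=fqj_values.get, reverse=True)
print(ranking)   # ['Petal width', 'Petal length', 'Sepal width', 'Sepal length']
```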
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNB2006100195700A CN100367300C (en) | 2006-07-07 | 2006-07-07 | A Feature Selection Method Based on Artificial Neural Network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN1945602A true CN1945602A (en) | 2007-04-11 |
CN100367300C CN100367300C (en) | 2008-02-06 |