CN105389599A - Feature selection approach based on neural-fuzzy network - Google Patents

Feature selection approach based on neural-fuzzy network

Info

Publication number
CN105389599A
Authority
CN
China
Prior art keywords
fuzzy
feature
network
neuro
fuzzy network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510658528.2A
Other languages
Chinese (zh)
Inventor
胡静 (Hu Jing)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Dianji University
Original Assignee
Shanghai Dianji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Dianji University filed Critical Shanghai Dianji University
Priority to CN201510658528.2A priority Critical patent/CN105389599A/en
Publication of CN105389599A publication Critical patent/CN105389599A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/19Recognition using electronic means
    • G06V30/192Recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references
    • G06V30/194References adjustable by an adaptive method, e.g. learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a feature selection approach based on a neuro-fuzzy network. The feature selection approach comprises the following steps: a first step of training the neuro-fuzzy network on the training set χ and adjusting the parameters of the fuzzy membership functions to obtain the membership function set μ = {μ_11, …, μ_1m_1, …, μ_i1, …, μ_im_i, …, μ_R1, …, μ_Rm_R}, where μ_im_j denotes the m_j-th membership function of feature f_i; a second step of computing the output o_q of the neuro-fuzzy network when the input is x_q; a third step of modifying the neuro-fuzzy network so that the fuzzy mapping layer maps x_q in such a way that all membership function values of f_i are identically equal to 0.5, the output of the neuro-fuzzy network then being o_q^(i); a fourth step of computing the measurement value FQJ_i = (1/Q) Σ_{q=1}^{Q} ||o_q − o_q^(i)||² of feature f_i; and a fifth step of sorting the feature measurement values FQJ_i in descending order.

Description

Feature selection approach based on a neuro-fuzzy network
Technical field
The present invention relates to pattern recognition technology, and more particularly to a feature selection approach based on a neuro-fuzzy network.
Background technology
Feature selection is a key issue in technologies such as pattern recognition. In recent years, using artificial neural networks for feature selection has become a hot topic. With an artificial neural network that has good learning performance, the significance of each feature or feature subset can be inferred, but some problems remain.
In the field of feature selection, most feature selection methods based on artificial neural networks can be regarded as special cases of network pruning algorithms, the difference being that what is pruned is an input node rather than a hidden node or a weight. The currently widespread method uses the change in network output before and after pruning as a measure of the importance of the pruned feature; this approach can also be called feature significance measurement. It is generally assumed that, for a well-trained network, the lower the significance of a feature, the smaller its influence on the network output.
The neural network types currently used for feature significance measurement fall into two broad classes. The first class is multilayer feed-forward neural networks based on the multilayer perceptron. For example, the feature selection algorithm based on the multilayer perceptron proposed by Ruck defines a feature significance measure in which the significance of a feature is weighed by computing the rate of change of the network output with respect to the network input. On this basis, R.K. De proposed another method, which first normalizes the data by its minimum and maximum values and then weighs the significance of a feature by computing the rate of change of the network output with respect to the network input.
The second broad class is fuzzy neural networks. For example, M.K. Jia proposed using membership functions to map the data from the original feature space to the membership space and, in the sense of membership degree, performing network pruning and computing the network's sensitivity to a given feature. D. Chakraborty used the former approach to design a fuzzy-rule-based neuro-fuzzy network that can perform feature selection and classification simultaneously. This network has four layers and is trained in three stages, where feature selection is completed in the second stage while membership function adjustment is completed only in the last stage; that is, feature selection is finished before the membership functions are learned.
The above methods all perform feature selection based on artificial neural networks. The first class is mainly based on the multilayer perceptron. Although it achieves good classification results for feature selection, this simple approach cannot reflect the true situation for data with certain special distributions. If the data are normalized in advance, some invariances can be guaranteed, such as invariance to translation and scaling, but other invariances, such as rotation invariance, may be lost, and information that plays a vital role for classification may be lost as well. The second class of methods, based on fuzzy logic and artificial neural networks, avoids the loss of important features caused by data normalization and can to some extent reflect the truth for specially distributed data. However, because the membership functions are in fact defined before the network is trained, the membership mapping may still distort the mapped data. In the four-layer network proposed by D. Chakraborty, feature selection is completed in the second stage while membership function adjustment is completed only in the last stage; that is, feature selection is finished before the membership functions are learned, so the result of feature selection depends heavily on the initial values of the membership functions.
Summary of the invention
The technical problem to be solved by the present invention is to overcome the above defects of the prior art by providing a feature selection approach based on a neuro-fuzzy network. In the neuro-fuzzy network of the present invention, feature selection is performed after the whole network has been learned, that is, after membership self-learning is completed, which avoids the problem that the feature selection result depends on the initial parameter values of the membership functions. In addition, the fuzzy membership functions are obtained by self-learning, which avoids the loss of important features caused by data normalization. Furthermore, a new feature measurement method is defined so that the pruning process is carried out at the mapping layer of the fuzzy neural network rather than at the input layer. The method can easily be combined with other search algorithms to form a complete pattern classification system.
In order to achieve the above technical purpose, the present invention provides a feature selection approach based on a neuro-fuzzy network, comprising:
First step: train the neuro-fuzzy network on the training set χ, adjusting the parameters of the fuzzy membership functions to obtain the membership function set:
$\mu = \{\mu_{11}, \ldots, \mu_{1 m_1}, \ldots, \mu_{i1}, \ldots, \mu_{i m_i}, \ldots, \mu_{R1}, \ldots, \mu_{R m_R}\}$,
where $\mu_{i m_j}$ denotes the $m_j$-th membership function of feature f_i;
Second step: compute the output o_q of the neuro-fuzzy network when the input is x_q;
Third step: modify the neuro-fuzzy network so that the fuzzy mapping layer maps x_q in such a way that all membership function values of f_i are held constant at 0.5; the output of the neuro-fuzzy network is then denoted o_q^(i);
Fourth step: compute the metric FQJ_i of feature f_i:
$FQJ_i = \frac{1}{Q} \sum_{q=1}^{Q} \left\| o_q - o_q^{(i)} \right\|^2$;
Fifth step: sort the feature metric values FQJ_i in descending order.
Preferably, a feature f_i with a larger metric FQJ_i is regarded as more important.
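For illustration only (this sketch is not part of the claimed method), the metric of the fourth step can be written directly in code. The function name fqj_metric and the array names o and o_masked are hypothetical; they stand for the network outputs o_q and o_q^(i) collected over the Q training samples.

    import numpy as np

    def fqj_metric(o: np.ndarray, o_masked: np.ndarray) -> float:
        """FQJ_i = (1/Q) * sum_q ||o_q - o_q^(i)||^2.

        o        -- network outputs o_q over all samples, shape (Q, C)
        o_masked -- outputs o_q^(i) with all memberships of feature f_i fixed at 0.5, shape (Q, C)
        """
        diff = o - o_masked
        return float(np.mean(np.sum(diff ** 2, axis=1)))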
Brief description of the drawings
A more complete understanding of the present invention, and of its attendant advantages and features, will be more readily obtained by reference to the following detailed description taken in conjunction with the accompanying drawings, in which:
Fig. 1 schematically shows the structure of the neuro-fuzzy network.
Fig. 2 schematically shows a flowchart of the feature selection approach based on the neuro-fuzzy network according to a preferred embodiment of the present invention.
It should be noted that the accompanying drawings are intended to illustrate the present invention, not to limit it. Note that the drawings representing structures may not be drawn to scale. Further, in the drawings, identical or similar elements are indicated by identical or similar reference numerals.
Embodiment
In order to make the content of the present invention clearer and easier to understand, the content of the present invention is described in detail below with reference to specific embodiments and the accompanying drawings.
The present invention proposes a novel neuro-fuzzy network. The network is a combination of fuzzy set theory and artificial neural networks and is mainly used for a new kind of feature selection; the method can be applied to fields such as pattern recognition, data mining and image processing.
The principle and embodiments of the present invention are explained below, starting from the structure of the neuro-fuzzy network.
(1) Structure of the neuro-fuzzy network:
For a C-class (ω_1, ω_2, …, ω_l, …, ω_C) recognition problem, let the training sample set be $\chi = \{ x_q \mid x_q = (x_{q1}, x_{q2}, \ldots, x_{qi}, \ldots, x_{qR})^T \in \mathbb{R}^R,\ q = 1, 2, \ldots, Q \}$, where Q is the number of training samples, the feature set is Φ = {f_1, …, f_i, …, f_R}, and x_{qi} is an observed value of feature f_i. The fuzzy inference system discriminates x_q according to a rule set, in which the fuzzy rules have the following form:
if x_1 is A_{1k} and x_2 is A_{2k} … and x_R is A_{Rk} then ω_l   (1)
where A_{ik} is the k-th fuzzy set defined on feature f_i and "and" is a fuzzy logic operator; when k ≠ h, A_{ik} ≠ A_{ih} does not necessarily hold. The "if" part is called the antecedent of the fuzzy inference system, and the "then" part is called the consequent. A traditional fuzzy inference system needs experts to establish these rules, whereas a neuro-fuzzy network can obtain them automatically by learning from a representative training sample set. The present invention adopts a pruning algorithm to compute the feature metric in the third layer and uses a supervised learning method to update the parameter values of the neuro-fuzzy network. The structure of the neuro-fuzzy network is shown in Fig. 1.
(2) Working principle of the neuro-fuzzy network:
As shown in Fig. 1, the input layer L1 is the input unit of the network: it passes the input quantities to the next layer, and its number of nodes is the dimension of the input feature vector, i.e. d_1 = R.
The fuzzy mapping layer L2 is the membership function mapping layer; the output of the input layer L1 is projected onto the membership function space at this layer. The output of node j of the fuzzy mapping layer L2 is:
$a_j^2 = \mu_A(a_i^1) \qquad (2)$
where the membership function μ_A adopts the bell-shaped function:
$\mu_A(x) = \dfrac{1}{1 + \left[ \left( \frac{x - \xi}{\sigma} \right)^2 \right]^{\tau}}, \quad \sigma \neq 0,\ \tau \geq 0 \qquad (3)$
Here ξ is the center of the function, σ controls the scaling of the function, and τ controls the width of the flat top of the bell: the larger τ is, the wider the flat top. The number of nodes of the fuzzy mapping layer L2 equals the total number of membership functions defined over all features, where m_i is the number of fuzzy membership functions defined on feature f_i. All weights from the input layer L1 to the fuzzy mapping layer L2 are set to 1, and the output of node j of the fuzzy mapping layer L2 becomes:
$a_j^2 = \bar{z}_n^{\,1 - e^{-\beta_i^2}} \qquad (4)$
where $\bar{z}_n$ is the membership function value given by equation (3), and β_i is an adjustable parameter associated with the input feature f_i, called the feature adjuster, which is used to measure the importance of the feature. When β_i² takes a large value, $a_j^2$ tends toward the membership value $\bar{z}_n$; when β_i² is small, $a_j^2$ tends toward 1, i.e. the feature is suppressed. On this basis, if β_i² takes large values for good features and small values for bad features, features can be selected by means of the function $\gamma_i = 1 - e^{-\beta_i^2}$ of β_i².
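As an illustrative sketch only, assuming the reading of equations (3) and (4) given above and, for simplicity, the same number m of membership functions for every feature, the fuzzy mapping layer with its feature adjusters can be written as follows; the function names bell and fuzzy_mapping and the parameter arrays are hypothetical.

    import numpy as np

    def bell(x, xi, sigma, tau):
        """Bell membership function of eq. (3): 1 / (1 + (((x - xi) / sigma) ** 2) ** tau)."""
        return 1.0 / (1.0 + (((x - xi) / sigma) ** 2) ** tau)

    def fuzzy_mapping(x, xi, sigma, tau, beta):
        """Fuzzy mapping layer L2 with one feature adjuster beta_i per feature, eq. (4).

        x               -- one input sample, shape (R,)
        xi, sigma, tau  -- bell parameters, shape (R, m) for m membership functions per feature
        beta            -- feature adjusters beta_i, shape (R,)
        Returns the modulated membership values, shape (R, m).
        """
        mu = bell(x[:, None], xi, sigma, tau)   # memberships of every feature
        gamma = 1.0 - np.exp(-beta ** 2)        # gamma_i = 1 - exp(-beta_i^2)
        return mu ** gamma[:, None]             # small beta_i pushes the row toward 1 (feature suppressed)

With γ_i near 0, every membership of feature f_i is pushed toward 1, so the feature contributes only a constant, uninformative value to the antecedent aggregation; with γ_i near 1, the memberships pass through unchanged.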
Each node of the antecedent layer L3 corresponds to the "if" part of a fuzzy rule, and the output of its node j is:
$a_j^3 = M(a_j^2, \Theta) \qquad (5)$
Wherein M is defined as:
$M(x_1, x_2, \ldots, x_R, r) = \dfrac{\left( x_1^r + x_2^r + \cdots + x_R^r \right)^{1/r}}{R} \qquad (6)$
where r ≥ 1. The number of nodes of the antecedent layer L3 is d_3, and all weights from the fuzzy mapping layer L2 to the antecedent layer L3 are set to 1. During network training, the values of the three parameters ξ, σ and τ of the bell-shaped function are adjusted to suitable values according to the information learned from the samples of the training set.
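A minimal sketch of one antecedent node follows, using the reading of equation (6) given above; the placement of the 1/r exponent and of the divisor R is an assumption recovered from the garbled formula, and the function name is hypothetical.

    import numpy as np

    def antecedent_node(memberships: np.ndarray, r: float = 1.0) -> float:
        """Aggregation M of eq. (6): (x_1^r + ... + x_R^r)^(1/r) / R, with r >= 1.

        memberships -- the modulated membership values feeding this rule node, shape (R,)
        """
        R = memberships.shape[0]
        return float(np.sum(memberships ** r) ** (1.0 / r) / R)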
The network structure of the antecedent layer L3 is the same as that of D. Chakraborty; however, in the present invention the parameters of the antecedent layer L3 still participate in learning.
Each node of the consequent layer L4 represents, from the viewpoint of fuzzy reasoning, the "then" part of a fuzzy rule and, from the viewpoint of pattern classification, a class. Therefore, the number of nodes of the consequent layer L4 is d_4 = C. Each node of the consequent layer L4 combines the weighted rule outputs from the antecedent layer L3 and computes, with an S-type function, a value that represents the degree of certainty that the input sample x belongs to class l. The transfer function of the consequent layer L4 is the S-type function.
The antecedent layer L3 is fully connected to the consequent layer L4; the weights are initialized by the Nguyen-Widrow method and are adjusted to their best values during subsequent network learning. The output of layer L4 is:
$a_j^4 = S\left( W_j^4 a^3 \right) \qquad (7)$
where W^4 is the weight matrix from the antecedent layer L3 to the consequent layer L4, with dimension d_4 × d_3, W_j^4 is the j-th row vector of this matrix, and a^3 is the output vector of the antecedent layer L3.
The transfer function of L4 is an S-type function rather than the max function used by D. Chakraborty.
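A sketch of the consequent layer of equation (7) follows, with a standard logistic function standing in for the S-type transfer function; the Nguyen-Widrow initialization named above is not reproduced here, and the function names are hypothetical.

    import numpy as np

    def sigmoid(z):
        """Logistic S-type transfer function."""
        return 1.0 / (1.0 + np.exp(-z))

    def consequent_layer(a3: np.ndarray, W4: np.ndarray) -> np.ndarray:
        """Consequent layer L4, eq. (7): a_j^4 = S(W_j^4 a^3).

        a3 -- output vector of the antecedent layer L3, shape (d3,)
        W4 -- weight matrix from L3 to L4, shape (d4, d3) with d4 = C classes
        Returns one certainty value per class, shape (C,).
        """
        return sigmoid(W4 @ a3)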
(3) Learning process of the neuro-fuzzy network:
The network learning process is divided into three stages. In the first stage, the membership function parameters ξ, σ, τ are fixed, and the weights between the antecedent layer L3 and the consequent layer L4 and the feature adjuster parameters β_i are adjusted. In the second stage, for every feature f_i with γ_i < th (th is a threshold), all of its nodes in the fuzzy mapping layer L2 are pruned, and the network is then retrained. In the third stage, the nodes of the antecedent layer L3 are pruned according to a certain criterion and the network is trained again, while the parameters ξ, σ, τ of the bell membership functions of the fuzzy mapping layer L2 are updated at the same time.
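As a small illustration of the pruning decision in the second stage (not part of the patent text; the function name is hypothetical, and γ_i comes from the feature adjusters of equation (4)):

    import numpy as np

    def features_to_prune(beta: np.ndarray, th: float) -> np.ndarray:
        """Stage 2: indices i with gamma_i = 1 - exp(-beta_i^2) < th.

        All fuzzy mapping layer L2 nodes of these features are pruned before the network is retrained.
        """
        gamma = 1.0 - np.exp(-beta ** 2)
        return np.where(gamma < th)[0]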
Fig. 2 schematically shows a flowchart of the feature selection approach based on the neuro-fuzzy network according to the preferred embodiment of the present invention.
As shown in Fig. 2, the feature selection approach based on the neuro-fuzzy network according to the preferred embodiment of the present invention comprises:
First step S1: train the neuro-fuzzy network on χ, adjusting the parameters of the fuzzy membership functions to suitable values to obtain the membership function set:
$\mu = \{\mu_{11}, \ldots, \mu_{1 m_1}, \ldots, \mu_{i1}, \ldots, \mu_{i m_i}, \ldots, \mu_{R1}, \ldots, \mu_{R m_R}\}$,
where $\mu_{i m_j}$ denotes the $m_j$-th membership function of feature f_i;
Second step S2: compute the output o_q of the neuro-fuzzy network when the input is x_q;
Third step S3: modify the neuro-fuzzy network so that the fuzzy mapping layer maps x_q in such a way that all membership function values of f_i are held constant at 0.5; the output of the neuro-fuzzy network is then denoted o_q^(i);
Fourth step S4: compute the metric FQJ_i of feature f_i:
$FQJ_i = \frac{1}{Q} \sum_{q=1}^{Q} \left\| o_q - o_q^{(i)} \right\|^2$
Fifth step S5: sort the feature metric values in descending order; a feature with a larger metric value is more important.
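Putting the sketches above together, steps S2 to S5 can be outlined as the loop below. This is a schematic under stated assumptions only: forward composes the hypothetical fuzzy_mapping, antecedent_node and consequent_layer helpers sketched earlier (pairing the k-th membership of every feature per rule node, which simplifies the rule structure of Fig. 1), clamp=i replaces all membership values of feature f_i with 0.5 as required by step S3, fqj_metric is the function sketched after the summary, and the trained parameters in params are assumed to come from the three-stage learning described above.

    import numpy as np

    def forward(x, params, clamp=None):
        """One pass through the neuro-fuzzy network for sample x.

        clamp -- index i of a feature whose membership values are all held at 0.5
                 (step S3), or None for the unmodified network (step S2).
        """
        mu = fuzzy_mapping(x, params["xi"], params["sigma"], params["tau"], params["beta"])
        if clamp is not None:
            mu[clamp, :] = 0.5                  # all memberships of f_i held constant at 0.5
        a3 = np.array([antecedent_node(mu[:, k], params["r"]) for k in range(mu.shape[1])])
        return consequent_layer(a3, params["W4"])

    def rank_features(X, params):
        """Steps S2-S5: compute FQJ_i for every feature and sort in descending order.

        X -- training samples, shape (Q, R); returns (feature indices by importance, FQJ values).
        """
        Q, R = X.shape
        o = np.array([forward(x, params) for x in X])                    # step S2
        fqj = np.empty(R)
        for i in range(R):                                               # steps S3-S4, one feature at a time
            o_masked = np.array([forward(x, params, clamp=i) for x in X])
            fqj[i] = fqj_metric(o, o_masked)
        order = np.argsort(-fqj)                                         # step S5: descending order
        return order, fqj

Because the ranking only clamps the mapping layer of the already-trained network, no retraining is needed once training has finished, which corresponds to advantage 2) listed below.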
The neuro-fuzzy network of the present invention combines the advantages of artificial neural networks and fuzzy logic inference systems and has broad application prospects in the fields of pattern recognition and feature selection. The four-layer neuro-fuzzy network proposed by the present invention has an adaptive membership function learning process, which overcomes the problem in D. Chakraborty's method that feature selection is completed before membership function learning, thereby avoiding the problem that the result of feature selection depends heavily on the initial values of the membership functions; at the same time, it avoids the data degradation caused by data normalization. By combining the neuro-fuzzy network with a pruning algorithm, it also avoids the difficulty of ordinary neural networks that, for data with certain special distributions, a simple algorithm cannot reflect the true situation.
The present invention has the following advantages: 1) the fuzzy membership functions are obtained by self-learning, which avoids the problems of data normalization; 2) the computation is simple, and after the network has been trained it can be reused for feature selection without retraining; 3) the feature metric is simple to compute and unambiguous; 4) it can be used for pattern classification, and because it is based on fuzzy rules the network behavior is easy to understand and analyze; 5) it can easily be combined with various search algorithms to form a complete feature selection system. The present invention has a wide range of applications and can be applied in areas such as image recognition, speech recognition, data mining and machine vision.
In addition, it should be noted that, unless otherwise stated or indicated, terms such as "first", "second" and "third" in the specification are used only to distinguish components, elements, steps, etc. from one another, and not to indicate logical or sequential relationships between them.
It should be understood that, although the present invention has been disclosed above by way of preferred embodiments, the above embodiments are not intended to limit the present invention. Any person of ordinary skill in the art may, without departing from the scope of the technical solution of the present invention, use the technical content disclosed above to make many possible variations and modifications to the technical solution of the present invention, or revise it into equivalent embodiments of equivalent variation. Therefore, any simple modification, equivalent variation and modification made to the above embodiments according to the technical essence of the present invention, without departing from the content of the technical solution of the present invention, still falls within the protection scope of the technical solution of the present invention.

Claims (2)

1. A feature selection approach based on a neuro-fuzzy network, characterized by comprising:
a first step of training the neuro-fuzzy network on the training set χ, adjusting the parameters of the fuzzy membership functions to obtain the membership function set:
$\mu = \{\mu_{11}, \ldots, \mu_{1 m_1}, \ldots, \mu_{i1}, \ldots, \mu_{i m_i}, \ldots, \mu_{R1}, \ldots, \mu_{R m_R}\}$,
where $\mu_{i m_j}$ denotes the $m_j$-th membership function of feature f_i;
a second step of computing the output o_q of the neuro-fuzzy network when the input is x_q;
a third step of modifying the neuro-fuzzy network so that the fuzzy mapping layer maps x_q in such a way that all membership function values of f_i are held constant at 0.5, the output of the neuro-fuzzy network then being o_q^(i);
a fourth step of computing the metric FQJ_i of feature f_i:
$FQJ_i = \frac{1}{Q} \sum_{q=1}^{Q} \left\| o_q - o_q^{(i)} \right\|^2$;
and a fifth step of sorting the feature metric values FQJ_i in descending order.
2. The feature selection approach based on a neuro-fuzzy network according to claim 1, characterized in that a feature f_i with a larger metric FQJ_i is defined as more important.
CN201510658528.2A 2015-10-12 2015-10-12 Feature selection approach based on neural-fuzzy network Pending CN105389599A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510658528.2A CN105389599A (en) 2015-10-12 2015-10-12 Feature selection approach based on neural-fuzzy network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510658528.2A CN105389599A (en) 2015-10-12 2015-10-12 Feature selection approach based on neural-fuzzy network

Publications (1)

Publication Number Publication Date
CN105389599A true CN105389599A (en) 2016-03-09

Family

ID=55421869

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510658528.2A Pending CN105389599A (en) 2015-10-12 2015-10-12 Feature selection approach based on neural-fuzzy network

Country Status (1)

Country Link
CN (1) CN105389599A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111860826A (en) * 2016-11-17 2020-10-30 北京图森智途科技有限公司 Image data processing method and device of low-computing-capacity processing equipment
CN108229533A (en) * 2017-11-22 2018-06-29 深圳市商汤科技有限公司 Image processing method, model pruning method, device and equipment
CN109993230A (en) * 2019-04-04 2019-07-09 江南大学 A kind of TSK Fuzzy System Modeling method towards brain function MRI classification
CN109993230B (en) * 2019-04-04 2023-04-18 江南大学 TSK fuzzy system modeling method for brain function magnetic resonance image classification
CN110021431A (en) * 2019-04-11 2019-07-16 上海交通大学 Artificial intelligence assistant diagnosis system, diagnostic method

Similar Documents

Publication Publication Date Title
CN107247989A (en) A kind of neural network training method and device
CN107886073A (en) A kind of more attribute recognition approaches of fine granularity vehicle based on convolutional neural networks
CN109765462A (en) Fault detection method, device and the terminal device of transmission line of electricity
CN104850890B (en) Instance-based learning and the convolutional neural networks parameter regulation means of Sadowsky distributions
CN107463966A (en) Radar range profile&#39;s target identification method based on dual-depth neutral net
CN102156871B (en) Image classification method based on category correlated codebook and classifier voting strategy
CN106326984A (en) User intention identification method and device and automatic answering system
CN112784964A (en) Image classification method based on bridging knowledge distillation convolution neural network
CN105005774A (en) Face relative relation recognition method based on convolutional neural network and device thereof
CN105389599A (en) Feature selection approach based on neural-fuzzy network
CN107622272A (en) A kind of image classification method and device
CN106067042A (en) Polarization SAR sorting technique based on semi-supervised degree of depth sparseness filtering network
CN104299047B (en) A kind of method for building up of the Waypoint assessment indicator system based on Field Using Fuzzy Comprehensive Assessment
CN107358142A (en) Polarimetric SAR Image semisupervised classification method based on random forest composition
CN106777127A (en) The automatic generation method and system of the individualized learning process of knowledge based collection of illustrative plates
CN106778796A (en) Human motion recognition method and system based on hybrid cooperative model training
CN103838836A (en) Multi-modal data fusion method and system based on discriminant multi-modal deep confidence network
CN103927550B (en) A kind of Handwritten Numeral Recognition Method and system
CN105205453A (en) Depth-auto-encoder-based human eye detection and positioning method
CN111881802B (en) Traffic police gesture recognition method based on double-branch space-time graph convolutional network
CN111680451B (en) Online simulation system and method for microscopic urban traffic
CN106778882A (en) A kind of intelligent contract automatic classification method based on feedforward neural network
CN107203752A (en) A kind of combined depth study and the face identification method of the norm constraint of feature two
CN107662617A (en) Vehicle-mounted interactive controlling algorithm based on deep learning
CN113052254B (en) Multi-attention ghost residual fusion classification model and classification method thereof

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160309

RJ01 Rejection of invention patent application after publication