CN108960424A - Method, apparatus, device and storage medium for determining a winning neuron - Google Patents
Method, apparatus, device and storage medium for determining a winning neuron
- Publication number
- CN108960424A CN108960424A CN201810697923.5A CN201810697923A CN108960424A CN 108960424 A CN108960424 A CN 108960424A CN 201810697923 A CN201810697923 A CN 201810697923A CN 108960424 A CN108960424 A CN 108960424A
- Authority
- CN
- China
- Prior art keywords
- neuron
- neural network
- value
- training data
- current
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/088—Non-supervised learning, e.g. competitive learning
Abstract
The invention discloses a method, apparatus, device and storage medium for determining a winning neuron. The method comprises: obtaining the current weight value of each neuron in the current winning neighborhood of a self-organizing neural network, together with the current training data of the self-organizing neural network; calculating a similarity parameter between each current weight value and the current training data; and determining the winning neuron of the self-organizing neural network based on the similarity parameters. By adopting this technical solution, embodiments of the present invention can provide more diverse ways of determining the winning neuron for a self-organizing neural network, thereby meeting the data-processing needs of different users.
Description
Technical field
The present invention relates to the field of neural network technology, and in particular to a method, apparatus, device and storage medium for determining a winning neuron.
Background
In recent years, with the development of big-data technology and growing data-processing needs, artificial neural networks (Neural Networks, NN) have also developed considerably.
By type, artificial neural networks can generally be divided into BP (Back Propagation) neural networks, radial basis function neural networks, perceptron neural networks, linear neural networks, self-organizing neural networks, feedback neural networks, and so on. Among these, a self-organizing neural network introduces a topological structure when the network is formed, and uses competitive learning to simulate the excitation, coordination and inhibition among biological neurons; the dynamics of this competition guide the learning and operation of the network in information processing, which makes it particularly suitable for solving problems of pattern classification and recognition. Here, the competitive learning of a self-organizing neural network refers to the process in which neurons in the same layer compete with one another, and the neuron that wins the competition (i.e., the winning neuron) modifies itself and the corresponding connection weights. Competitive learning is an unsupervised learning method: during learning, only some training samples need to be provided to the network, and through self-organizing mapping according to the characteristics of the input samples the network can automatically order and classify the samples, without an ideal target output being supplied. In the self-organizing mapping process, the competition layer of the self-organizing neural network is responsible for analysing and comparing the training samples, finding regularities and classifying them.
However, the prior art generally analyses each neuron only on the basis of the Euclidean distance between the training sample and the neuron's weights in order to determine the winning neuron. This single way of determination cannot satisfy the usage needs of different users.
Summary of the invention
In view of this, embodiments of the present invention provide a method, apparatus, device and storage medium for determining a winning neuron, so as to solve the technical problem in the prior art that the way of determining the winning neuron of a self-organizing neural network is single.
In a first aspect, an embodiment of the invention provides a method for determining a winning neuron, comprising:
obtaining the current weight value of each neuron in the current winning neighborhood of a self-organizing neural network and the current training data of the self-organizing neural network;
calculating a similarity parameter between each current weight value and the current training data, and determining the winning neuron of the self-organizing neural network based on the similarity parameters.
In a second aspect, an embodiment of the invention provides an apparatus for determining a winning neuron, comprising:
a weight-data obtaining module, configured to obtain the current weight value of each neuron in the current winning neighborhood of a self-organizing neural network and the current training data of the self-organizing neural network;
a neuron determining module, configured to calculate a similarity parameter between each current weight value and the current training data, and to determine the winning neuron of the self-organizing neural network based on the similarity parameters.
In a third aspect, an embodiment of the invention provides a device, comprising:
one or more processors; and
a memory for storing one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method for determining a winning neuron as described in the embodiments of the present invention.
In a fourth aspect, an embodiment of the invention further provides a computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method for determining a winning neuron as described in the embodiments of the present application.
In the above technical solution for determining a winning neuron, the current weight value of each neuron in the current winning neighborhood of the self-organizing neural network and the current training data of the self-organizing neural network are obtained; the similarity parameter between each current weight value and the current training data is calculated separately; and the winning neuron of the self-organizing neural network is determined based on the calculated similarity parameters. This solution can provide more diverse ways of determining the winning neuron for a self-organizing neural network, broadens the scope of application of self-organizing neural networks, improves their data-processing accuracy in different application scenarios, and meets the data-processing needs of different users.
Brief description of the drawings
Other features, objects and advantages of the invention will become more apparent upon reading the following detailed description of non-restrictive embodiments with reference to the accompanying drawings:
Fig. 1 is a schematic flowchart of a method for determining a winning neuron provided by Embodiment 1 of the present invention;
Fig. 2 is a schematic flowchart of a method for determining a winning neuron provided by Embodiment 2 of the present invention;
Fig. 3 is a schematic flowchart of a method for determining a winning neuron provided by Embodiment 3 of the present invention;
Fig. 4 is a structural block diagram of an apparatus for determining a winning neuron provided by Embodiment 4 of the present invention;
Fig. 5 is a structural schematic diagram of a device provided by Embodiment 5 of the present invention.
Detailed description
The present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are used only to explain the present invention and not to limit it. It should also be noted that, for ease of description, only the parts related to the present invention, rather than the entire structure, are shown in the drawings.
Embodiment 1
Embodiment 1 of the present invention provides a method for determining a winning neuron. The method can be executed by an apparatus for determining a winning neuron, where the apparatus can be implemented by software and/or hardware and can generally be integrated in a device that performs data processing based on a self-organizing neural network model. Fig. 1 is a schematic flowchart of the method for determining a winning neuron provided by Embodiment 1 of the present invention. As shown in Fig. 1, the method comprises:
S110: obtain the current weight value of each neuron in the current winning neighborhood of a self-organizing neural network and the current training data of the self-organizing neural network.
Here, the current winning neighborhood of the self-organizing neural network can be understood as the winning neighborhood of the self-organizing neural network at the moment this calculation begins; it may be the initial winning neighborhood of the self-organizing neural network, or the winning neighborhood corresponding to the previous winning neuron of the self-organizing neural network determined using the previous input data. In view of the accuracy of the determined winning neuron, preferably: if the current training data has a previous input data, i.e., the current training data is not the first data input to the self-organizing neural network in this clustering operation, then the current winning neighborhood may be the weight-adjustment domain corresponding to the previous winning neuron determined using the previous input data; if the current training data has no previous input data, then the current winning neighborhood may be the original winning neighborhood of the self-organizing neural network. Correspondingly, if the current training data has a previous input data, the current weight value of each neuron in the current winning neighborhood may preferably be the weight value of each neuron obtained after adjusting the weight values of the neurons in the weight-adjustment domain corresponding to the previous winning neuron; if the current training data has no previous input data, the current weight value of each neuron in the current winning neighborhood may preferably be the original weight value of each neuron in that neighborhood. The original winning neighborhood of the self-organizing neural network can be set as needed by a user or developer; preferably, it may include all neurons in the competition layer of the self-organizing neural network, so as to further improve the accuracy of the subsequently determined winning neuron. The original weight value of each neuron in the current winning neighborhood can be set by a user or assigned a random number.
In this embodiment, the current training data of the self-organizing neural network can be fetched locally or obtained based on user input. For example, when using the self-organizing neural network for data clustering or sorting, a user may input all data into the data-processing device at once, or may input only one datum (or one group of data) at a time into the data-processing device configured with the self-organizing neural network, inputting the next datum (or group) after the self-organizing neural network has determined the winning neuron corresponding to the current one. For the case where the user inputs all data at once, the rule for determining the current training data can be set as needed: for example, the user may determine the current training data based on a descending or ascending ordering of the data, or the current training data may be selected at random from all the input data or from the data that have not yet been selected; this is not restricted here. The current training data of the self-organizing neural network can be understood as the input data used in this round of determining the winning neuron of the self-organizing neural network.
S120: calculate the similarity parameter between each current weight value and the current training data, and determine the winning neuron of the self-organizing neural network based on the similarity parameters.
In this embodiment, the similarity parameter can be understood as a parameter capable of characterizing the degree of similarity between a current weight value and the current training data, such as a coherence function value, a correlation coefficient, a cosine similarity or a Mahalanobis distance. The coherence function value, correlation coefficient and cosine similarity are positively correlated with the degree of similarity, while the Mahalanobis distance is negatively correlated with it. That is, if the coherence function value, correlation coefficient and/or cosine similarity between a certain current weight value and the current training data is larger, or the Mahalanobis distance between them is smaller, the degree of similarity between that current weight value and the current training data is higher; conversely, if the coherence function value, correlation coefficient and/or cosine similarity is smaller, or the Mahalanobis distance is larger, the degree of similarity is lower. Correspondingly, when determining the winning neuron, the neuron whose current weight value has the largest coherence function value, correlation coefficient and/or cosine similarity with the current training data within the current winning neighborhood, or the neuron whose current weight value has the smallest Mahalanobis distance from the current training data, can be determined as the winning neuron of the self-organizing neural network. The similarity parameter between a current weight value and the current training data can be determined based on one or more of the coherence function value, correlation coefficient, cosine similarity and Mahalanobis distance, and the methods for calculating them can be selected as needed; this is not restricted here.
In view of the correlation between different weight values and different input data, the similarity parameter may preferably be the coherence function value, so as to eliminate the influence of amplitude variation and to prevent multiple components that embody a single feature (different current weight values and/or different input data embodying a single feature) from affecting the result of determining the winning neuron. In this case, correspondingly, calculating the similarity parameter between each current weight value and the current training data and determining the winning neuron of the self-organizing neural network based on the similarity parameter may comprise: calculating the coherence function value between each current weight value and the current training data, the current weight value and the current training data being row vectors or column vectors; and determining the neuron corresponding to the current weight value with the largest coherence function value as the winning neuron of the self-organizing neural network.
In the method for determining a winning neuron provided by Embodiment 1 of the present invention, the current weight value of each neuron in the current winning neighborhood of the self-organizing neural network and the current training data of the self-organizing neural network are obtained, the similarity parameter between each current weight value and the current training data is calculated separately, and the winning neuron of the self-organizing neural network is determined based on the calculated similarity parameters. By adopting the above technical solution, this embodiment can provide more diverse ways of determining the winning neuron for a self-organizing neural network, broadens its scope of application, improves its data-processing accuracy in different application scenarios, and meets the data-processing needs of different users.
Embodiment 2
Fig. 2 is a schematic flowchart of the method for determining a winning neuron provided by Embodiment 2 of the present invention. This embodiment is optimized on the basis of the above embodiment. In this embodiment, "calculating the coherence function value between each current weight value and the current training data" is refined as: calculating the first auto-spectral density of each current weight value, the second auto-spectral density of the current training data, and the cross-spectral density of each current weight value with the current training data; and determining the coherence function value between each current weight value and the current training data according to the first auto-spectral density, the second auto-spectral density and the cross-spectral density.
Correspondingly, as shown in Fig. 2, the method for determining a winning neuron provided by this embodiment comprises:
S210: obtain the current weight value of each neuron in the current winning neighborhood of a self-organizing neural network and the current training data of the self-organizing neural network.
S220: calculate the first auto-spectral density of each current weight value, the second auto-spectral density of the current training data, and the cross-spectral density of each current weight value with the current training data, the current weight value and the current training data being row vectors or column vectors.
In this step, the current weight value of a certain neuron and the current training data of the self-organizing neural network may both be row vectors or both be column vectors, and the relationship between the neuron and each component of the training data can be characterized by the components of the neuron's current weight value.
In this embodiment, the auto-spectral density of a certain current weight value (e.g., x = (x1, x2, …, xn)^T) and/or of the current training data (e.g., y = (y1, y2, …, yn)^T) can be obtained by calculating the autocorrelation function of the current weight value and/or the current training data and performing a discrete Fourier transform on the calculated autocorrelation function; the cross-spectral density of a certain current weight value and the current training data can be obtained by calculating the cross-correlation function of the current weight value and the current training data and performing a discrete-time Fourier transform on the calculated cross-correlation function. The dimensions of the current weight value and the current training data may be the same or different. For ease of calculation, the current weight value and the current training data are preferably vectors of the same dimension; if their dimensions differ, the vector of smaller dimension (the current weight value or the current training data) can be converted into a vector of the same dimension as the larger one by appending 0-valued components at its end.
Illustratively, let x denote the current weight value and y the current training data; let Rx(t) denote the autocorrelation function of x, Ry(t) the autocorrelation function of y, and Rxy(t) the cross-correlation function of x and y; and let Sx(ω) denote the auto-spectral density of the variable x, Sy(ω) the auto-spectral density of the variable y, and Sxy(ω) the cross-spectral density of x and y. Then, based on the Wiener-Khinchin formula, the following are available (taking a continuous signal as an example):
Sx(ω) = ∫ Rx(t) e^(−jωt) dt,  Sy(ω) = ∫ Ry(t) e^(−jωt) dt,  Sxy(ω) = ∫ Rxy(t) e^(−jωt) dt
It follows that Sxy(ω) and Rxy(t) are a Fourier-transform pair, Sx(ω) and Rx(t) are a Fourier-transform pair, and Sy(ω) and Ry(t) are a Fourier-transform pair. The cross-spectral density Sxy(ω) describes, in the frequency domain, the correlation between the current weight value and the current training data, while the auto-spectral densities Sx(ω) and Sy(ω) describe, in the frequency domain, the correlation of a signal (the current weight value or the current training data) with itself.
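In discrete form, the transform pairs above can be sketched as follows (a sketch assuming NumPy; by the discrete Wiener-Khinchin relation, the DFT of the circular correlation equals the product of the DFTs, so the spectral densities can be computed directly from the transformed vectors):

```python
import numpy as np

def spectral_densities(x, y):
    """Auto- and cross-spectral densities of two real vectors.

    Discrete counterpart of the Wiener-Khinchin relations: the DFT of the
    circular autocorrelation of x is |X|^2, and the DFT of the circular
    cross-correlation of x with y is conj(X) * Y.
    """
    if len(x) != len(y):                      # pad the shorter vector with zeros
        n = max(len(x), len(y))
        x = np.pad(x, (0, n - len(x)))
        y = np.pad(y, (0, n - len(y)))
    X, Y = np.fft.fft(x), np.fft.fft(y)
    Sx = (X * np.conj(X)).real                # first auto-spectral density (real)
    Sy = (Y * np.conj(Y)).real                # second auto-spectral density (real)
    Sxy = np.conj(X) * Y                      # cross-spectral density (complex)
    return Sx, Sy, Sxy

x = np.array([1.0, 2.0, 0.5, -1.0])
y = np.array([0.5, 1.0, -0.5, 2.0])
Sx, Sy, Sxy = spectral_densities(x, y)
```

The zero-padding branch mirrors the preferred handling of unequal dimensions described above; the auto-spectral densities come out real-valued while the cross-spectral density is complex, as the following section relies on.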
S230: determine the coherence function value between each current weight value and the current training data according to the first auto-spectral density, the second auto-spectral density and the cross-spectral density.
In this embodiment, the coherence function value can be understood as a parameter reflecting the degree of similarity between a current weight value and the current training data. The method for calculating it can be set as needed; in view of simplicity of calculation, the coherence function value of each current weight value with the current training data can preferably be calculated by the following formula:
γxy(ω) = |Sxy(ω)| / √(Sx(ω)·Sy(ω))
where Sx(ω) is the first auto-spectral density of the current weight value, Sy(ω) is the second auto-spectral density of the current training data, and Sxy(ω) is the cross-spectral density of the current weight value and the current training data.
From the above analysis it can be seen that the auto-spectral densities Sx(ω) and Sy(ω) are real numbers while the cross-spectral density Sxy(ω) is a complex number; the mathematical meaning of the above formula is the normalized modulus of the cross-spectral density Sxy(ω), and its value range is [0, 1]. When γxy(ω) = 1, the current weight value x and the current training data y have a perfect linear relationship and their similarity is very high; when γxy(ω) = 0, no similarity exists between the current weight value x and the current training data y. In practical applications, the coherence function value between the current weight value x and the current training data typically lies between 0 and 1, not equal to 0 or 1, and the larger the value of the coherence function, the higher the responsiveness of the neuron corresponding to the current weight value x to the current training data y. The coherence function value reflects the degree of similarity between the current weight value x and the current training data y while eliminating the influence of amplitude variation: it purely reflects, in the frequency domain, the similarity between the per-unit variations of two row vectors or column vectors. Determining the winning neuron of the self-organizing neural network according to this assessment of the similarity between the current weight value x and the current training data y is intuitive and concise.
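A scalar version of this coherence-style similarity can be sketched as follows (a sketch assuming NumPy; collapsing the frequency-wise quantities to a single value by summing over frequencies is an illustrative choice, not fixed by the text above):

```python
import numpy as np

def coherence_value(x, y, eps=1e-12):
    """Coherence-style similarity in [0, 1]: |sum Sxy| / sqrt(sum Sx * sum Sy).

    By the Cauchy-Schwarz inequality the value cannot exceed 1, and any
    amplitude scaling of x or y cancels out, as described above.
    """
    X, Y = np.fft.fft(x), np.fft.fft(y)
    Sxy = np.conj(X) * Y          # cross-spectral density (complex)
    Sx = np.abs(X) ** 2           # auto-spectral density of x (real)
    Sy = np.abs(Y) ** 2           # auto-spectral density of y (real)
    return float(np.abs(Sxy.sum()) / (np.sqrt(Sx.sum() * Sy.sum()) + eps))
```

For identical vectors, or one a positive multiple of the other, the value approaches 1; the winning neuron would then be the one whose current weight value attains the largest coherence value against the current training data.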
S240: determine the neuron corresponding to the current weight value with the largest coherence function value as the winning neuron of the self-organizing neural network.
The method for determining a winning neuron provided by Embodiment 2 of the present invention determines the winning neuron of the self-organizing neural network according to the coherence function value between the current weight value of each neuron in the current winning neighborhood and the current training data. It can not only provide more diverse ways of determining the winning neuron for a self-organizing neural network, broaden its scope of application and improve its data-processing accuracy in different application scenarios, but can also improve the accuracy of the determined winning neuron and enhance the user experience.
Embodiment 3
Fig. 3 is a schematic flowchart of a method for determining a winning neuron provided by Embodiment 3 of the present invention. This embodiment is optimized on the basis of the above embodiments. Further, after the winning neuron of the self-organizing neural network is determined based on the similarity parameter, the method further comprises: determining the weight-adjustment domain corresponding to the winning neuron and the current learning-rate function of the self-organizing neural network; calculating the adjusted weight value of each neuron in the weight-adjustment domain according to the current learning-rate function; determining the weight-adjustment domain as the current winning neighborhood of the self-organizing neural network, and determining each adjusted weight value as the current weight value of the corresponding neuron in the current winning neighborhood; obtaining the next input data of the self-organizing neural network, determining the next input data as the current training data of the self-organizing neural network, and returning to the operation of calculating the similarity parameter between each weight value and the current training data, until the maximum value of the current learning-rate function falls within a set numerical range, so as to obtain the trained weight value of each neuron in the self-organizing neural network.
Correspondingly, as shown in Fig. 3, the method for determining a winning neuron provided by this embodiment comprises:
S310: obtain the current weight value of each neuron in the current winning neighborhood of a self-organizing neural network and the current training data of the self-organizing neural network.
S320: calculate the similarity parameter between each current weight value and the current training data, and determine the winning neuron of the self-organizing neural network based on the similarity parameters.
S330: determine the weight-adjustment domain corresponding to the winning neuron and the current learning-rate function of the self-organizing neural network.
Here, the current learning-rate function can be understood as the learning-rate function corresponding to the moment at which the current winning neighborhood was determined, i.e., the learning-rate function of the previous moment. In this embodiment, the methods for determining the weight-adjustment domain corresponding to the winning neuron and the current learning-rate function of the self-organizing neural network can be set as needed.
Specifically, for the determination of the weight-adjustment domain, the shape of the weight-adjustment domain calculated each time (such as a regular figure like a square, rectangle or hexagon, or another irregular figure) and the maximum topological distance of each neuron in the weight-adjustment domain from the winning neuron can be preset; after each winning neuron is determined, the weight-adjustment domain corresponding to that winning neuron is determined, centered on the winning neuron, from the preset shape and maximum topological distance. Alternatively, a shrinking rule for the weight-adjustment domain can be preset (such as the topological distance by which the weight-adjustment domain shrinks each time), and after the winning neuron is determined, the current winning neighborhood is shrunk according to the set shrinking rule, centered on the winning neuron, to obtain the weight-adjustment domain corresponding to the winning neuron.
For the determination of the current learning-rate function, a learning-rate function varying with the training time and with the topological distance between each neuron in the weight-adjustment domain and the winning neuron can be preset, and after the winning neuron is determined, the training time corresponding to the previous moment is substituted into the learning-rate function to determine the current learning-rate function. Alternatively, an adjustment rule for the learning-rate function can be preset, and after the winning neuron is determined, the current learning-rate function of the self-organizing neural network is determined from the learning-rate function used in the previous weight adjustment and the adjustment rule.
S340: calculate the adjusted weight value of each neuron in the weight-adjustment domain according to the current learning-rate function.
In this embodiment, the weight value of each neuron in the weight-adjustment domain can be adjusted according to the topological distance between that neuron and the winning neuron. Illustratively, the weight values of the neurons in the weight-adjustment domain can be adjusted based on the following formula:
xij(t) = xij(t−1) + η(t−1, N)·[yi − xij(t−1)], i = 1, 2, …, n, j ∈ Nj*(t)
where xij(t) denotes the weight value of neuron j in the weight-adjustment domain Nj*(t) at time t (i.e., the current moment); xij(t−1) denotes the weight value of neuron j at time t−1 (i.e., the previous moment); η(t−1, N) is the learning-rate function at time t−1, i.e., the current learning-rate function; yi is the current training data; and N is the topological distance between neuron j in the weight-adjustment domain Nj*(t) and the winning neuron j*. Here, the initial value of the learning-rate function can be set by a user or developer as needed.
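The update formula above can be sketched as follows for a one-dimensional lattice of neurons (a sketch assuming NumPy; the lattice shape and the particular form of the learning-rate function η are illustrative assumptions):

```python
import numpy as np

def update_weights(W, y, winner, radius, eta):
    """One step of x_ij(t) = x_ij(t-1) + eta(N) * [y_i - x_ij(t-1)],
    applied to every neuron j whose topological distance N to the winner
    is at most `radius`; eta may depend on that distance."""
    W = W.copy()
    for j in range(len(W)):
        N = abs(j - winner)            # topological distance on a 1-D lattice
        if N <= radius:
            W[j] += eta(N) * (y - W[j])
    return W

W = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
y = np.array([1.0, 0.0])
eta = lambda N: 0.5 / (1 + N)          # learning rate shrinking with distance
W2 = update_weights(W, y, winner=1, radius=1, eta=eta)
```

Each updated weight vector moves a fraction η of the way toward the current training data, with neurons topologically closer to the winner moving further, which is exactly the neighborhood cooperation described above.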
S350: judge whether the maximum value of the current learning-rate function is within the set numerical range; if so, execute S380; if not, execute S360.
In this embodiment, at a certain fixed topological distance, the learning-rate function can be a function that decreases with the training time, and its minimum value can be 0 or a number approximately equal to 0. From the above weight-adjustment formula it can be seen that the magnitude of the learning-rate function value mainly influences the magnitude of the adjustment of each neuron's weight value. Therefore, when the learning-rate function value has decreased to a small number, the weight values of the neurons of the self-organizing neural network can be considered to have approached stability. A set numerical range for the maximum of the learning-rate function can therefore be preset for ending the process of adjusting the neuron weight values: when the maximum value of the learning-rate function is within the set numerical range, the current weight value of each neuron in the self-organizing neural network is determined as the optimal weight value of the corresponding neuron in the self-organizing feature map network, and the adjustment process of the neuron weight values ends. Illustratively, when the minimum value of the learning-rate function is zero, the set numerical range can be set to [0, 0], or the maximum of the set numerical range can be set to 0; when the minimum of the learning-rate function is approximately equal to 0, the maximum of the set numerical range can be set to a number greater than 0, such as 0.005 or 0.001.
It should be noted here that presetting a numerical range for the maximum value of the learning-rate function is only one specific implementation of this application. In this embodiment, numerical ranges for the median and/or average of the learning-rate function can also be preset, and the adjustment of the weight values of the neurons of the self-organizing neural network stops when the median and/or average of the current learning-rate function is within the set numerical range.
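Putting S320 through S370 together, the stopping rule can be sketched as a loop that ends once the learning rate enters the set range (a sketch assuming NumPy; the Euclidean winner rule and the geometric decay of η are illustrative stand-ins for any of the variants described above):

```python
import numpy as np

def train_som(W, data, eta0=0.5, decay=0.9, radius=1, eps=1e-3):
    """Repeat winner selection and weight adjustment until the maximum of
    the learning-rate function falls inside the stopping range [0, eps]."""
    eta = eta0
    t = 0
    while eta > eps:                          # S350: stopping judgement
        y = data[t % len(data)]               # S370: next input data, in order
        # S320: any of the similarity rules can be used here; Euclidean shown
        winner = int(np.argmin(np.linalg.norm(W - y, axis=1)))
        for j in range(len(W)):               # S340: adjust the neighborhood
            if abs(j - winner) <= radius:
                W[j] += eta * (y - W[j])
        eta *= decay                          # learning rate decays with time
        t += 1
    return W, t

W0 = np.array([[0.0, 0.0], [1.0, 1.0]])
data = [np.array([0.2, 0.1]), np.array([0.9, 0.8])]
W_final, steps = train_som(W0.copy(), data)
```

Because η shrinks geometrically, each pass adjusts the weights less than the previous one, so the loop terminates after a finite number of steps with the weights near their stabilized values.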
S360, the current winning domain that the weighted value regulatory domain is determined as to the self organizing neural network, will be each described
Weighted value adjusted is determined as the present weight value of each neuron in the current winning domain.
In this embodiment, each adjusted weight in the current winning domain can be used directly as the current weight of the corresponding neuron, or each adjusted weight can first be normalized and the normalized weight used as the neuron's current weight; no restriction is imposed here. For simplicity of subsequent calculation, it is preferable to normalize the adjusted weights. In that case, determining each adjusted weight as the current weight of each neuron in the current winning domain preferably comprises: normalizing each adjusted weight, and determining each normalized weight as the current weight of the corresponding neuron in the current winning domain.
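The optional normalization step can be sketched as follows. This is a minimal illustration; the patent does not prescribe a particular norm, so unit L2 normalization is assumed here, and the neuron ids are invented for the example:

```python
import math

def normalize(vec):
    """Scale a weight (or input) vector to unit L2 norm; zero vectors pass through."""
    norm = math.sqrt(sum(v * v for v in vec))
    return list(vec) if norm == 0 else [v / norm for v in vec]

# hypothetical adjusted weights in the current winning domain: neuron id -> weight
adjusted_weights = {3: [2.0, 0.0, 0.0], 4: [1.0, 2.0, 2.0]}
current_weights = {nid: normalize(w) for nid, w in adjusted_weights.items()}
```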
S370: obtain the next input data of the self-organizing neural network, determine the next input data as the current training data of the self-organizing neural network, and return to S320.
Specifically, the next input data can be determined according to the input order of the data or some other preset order; it can also be selected at random from all the input data, or at random from the input data that have not yet been selected. For the case where the next input data is determined according to the input order or another preset order, note that if the current training data is the last item of the sequence, the first item of the sequence can be taken as the next input data; if the current training data is not the last item, the item immediately following the current training data in the sequence can be taken as the next input data. For the case where the next input data is selected at random from the not-yet-selected input data, if no unselected input data remains, all input data can be marked as unselected again and a new item selected at random from them as the next input data.
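The two selection strategies described above, fixed order with wrap-around and random selection without replacement, can be sketched as follows (the helper names are hypothetical):

```python
import random

def next_in_order(data, current_index):
    """Fixed-order selection: after the last item, wrap around to the first."""
    return (current_index + 1) % len(data)

def next_without_replacement(data, unselected):
    """Random selection among not-yet-selected items; when every item has
    been used, mark everything unselected again and draw afresh."""
    if not unselected:
        unselected.extend(range(len(data)))
    i = random.choice(unselected)
    unselected.remove(i)
    return i
```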
In this embodiment, the next input data can be used directly as the current training data of the self-organizing network, or it can first be normalized, with the normalized data then used as the current training data of the self-organizing neural network; no restriction is imposed here. For simplicity of subsequent calculation, it is preferable to normalize the current training data. In that case, determining the next input data as the current training data of the self-organizing neural network preferably comprises: normalizing the next input data, and determining the normalized next input data as the current training data of the self-organizing neural network.
S380: record the weight of each neuron in the self-organizing neural network.
In this embodiment, after the weights of the neurons have stabilized, the weight of each neuron in the self-organizing neural network can be recorded, so that when the user later inputs data of the same type, the recorded weights can be used directly to cluster or sort the newly input data.
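Once the weights have been recorded, later data of the same type can be clustered without retraining by assigning each new input to the neuron whose recorded weight is most similar. A minimal sketch, using Euclidean distance purely for illustration (the patent's own similarity measure is the coherence function value):

```python
def assign_cluster(x, recorded_weights):
    """Map an input vector to the index of the closest recorded weight vector."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(range(len(recorded_weights)), key=lambda i: dist2(x, recorded_weights[i]))

recorded = [[0.0, 0.0], [1.0, 1.0]]          # weights saved after training
label = assign_cluster([0.9, 1.1], recorded)  # nearest recorded weight wins
```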
In the method for determining the winning neuron provided by embodiment three of the present invention, the winning neuron is determined from the coherence function value between each weight and the input data, and the weights of the neurons in the weight adjustment domain corresponding to that winning neuron are adjusted. This not only provides the self-organizing neural network with a more diverse way of determining the winning neuron, widening the network's scope of application and improving its data processing accuracy in different application scenarios; it can also improve the accuracy of the winning neuron thus determined, speed up convergence during data processing in the self-organizing neural network, and reduce the time the data processing takes.
Embodiment four
Embodiment four of the present invention provides an apparatus for determining a winning neuron. The apparatus can be implemented in software and/or hardware, and can generally be integrated into equipment that performs data processing based on a self-organizing neural network model; it determines the winning neuron of the self-organizing neural network by executing the winning-neuron determination method. Fig. 4 is a structural block diagram of the apparatus for determining a winning neuron provided by embodiment four of the present invention. As shown in Fig. 4, the apparatus includes:
a weight data acquisition module 401, configured to obtain the current weight of each neuron in the current winning domain of a self-organizing neural network and the current training data of the self-organizing neural network; and
a neuron determination module 402, configured to calculate the similarity parameter between each current weight and the current training data, and to determine the winning neuron of the self-organizing neural network based on the similarity parameters.
In the apparatus for determining the winning neuron provided by embodiment four, the weight data acquisition module obtains the current weight of each neuron in the current winning domain of the self-organizing neural network and the current training data of the network, and the neuron determination module calculates the similarity parameter between each current weight and the current training data and determines the winning neuron of the self-organizing neural network from the resulting similarity parameters. By adopting the above technical solution, this embodiment can provide a more diverse way of determining the winning neuron for the self-organizing neural network, widen the network's scope of application, improve its data processing accuracy in different application scenarios, and meet the data processing needs of different users.
In the above scheme, the similarity parameter may be a coherence function value, and the neuron determination module 402 may include: a function value calculation unit, configured to calculate the coherence function value between each current weight and the current training data, the current weight and the current training data each being a row vector or a column vector; and a neuron determination unit, configured to determine the neuron corresponding to the current weight with the largest coherence function value as the winning neuron of the self-organizing neural network.
In the above scheme, the function value calculation unit may include: a spectral density calculation subunit, configured to calculate the first auto-spectral density of each current weight, the second auto-spectral density of the current training data, and the cross-spectral density of each current weight and the current training data; and a function value determination subunit, configured to determine the coherence function value between each current weight and the current training data according to the first auto-spectral density, the second auto-spectral density, and the cross-spectral density.
In the above scheme, the function value determination subunit may be configured to calculate the coherence function value between each current weight and the current training data by the following formula:

Cxy(ω) = |Sxy(ω)|² / (Sx(ω) · Sy(ω))

where Sx(ω) is the first auto-spectral density of the current weight, Sy(ω) is the second auto-spectral density of the current training data, and Sxy(ω) is the cross-spectral density of the current weight and the current training data.
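A sketch of coherence-based winner selection follows. It uses a pure-Python DFT for self-containedness and Welch-style segment averaging of the spectral densities (with a single segment, the magnitude-squared coherence is identically 1); averaging the per-frequency coherence values down to one scalar score is an illustrative choice the patent leaves open:

```python
import cmath

def dft(x):
    """Discrete Fourier transform of a real sequence (O(N^2), for illustration)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def coherence_score(x, y, seg_len=8):
    """Magnitude-squared coherence |Sxy|^2 / (Sx * Sy), with the spectral
    densities averaged over non-overlapping segments, then averaged over
    frequency into one scalar similarity score in [0, 1]."""
    n_seg = min(len(x), len(y)) // seg_len
    Sx, Sy, Sxy = [0.0] * seg_len, [0.0] * seg_len, [0j] * seg_len
    for s in range(n_seg):
        X = dft(x[s * seg_len:(s + 1) * seg_len])
        Y = dft(y[s * seg_len:(s + 1) * seg_len])
        for k in range(seg_len):
            Sx[k] += abs(X[k]) ** 2          # first auto-spectral density
            Sy[k] += abs(Y[k]) ** 2          # second auto-spectral density
            Sxy[k] += X[k] * Y[k].conjugate()  # cross-spectral density
    vals = [abs(Sxy[k]) ** 2 / (Sx[k] * Sy[k])
            for k in range(seg_len) if Sx[k] > 0 and Sy[k] > 0]
    return sum(vals) / len(vals)

def winning_neuron(weights, training_data):
    """Index of the weight vector with the largest coherence function value."""
    return max(range(len(weights)),
               key=lambda i: coherence_score(weights[i], training_data))
```

A weight vector identical to the training data scores exactly 1, the maximum possible value, which is what makes the largest coherence a natural winner criterion.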
Further, the apparatus for determining the winning neuron provided by this embodiment may also include: an adjustment domain determination module, configured to determine, after the winning neuron of the self-organizing neural network has been determined based on the similarity parameters, the weight adjustment domain corresponding to the winning neuron and the current learning rate function of the self-organizing neural network; a weight adjustment module, configured to calculate the adjusted weight of each neuron in the weight adjustment domain according to the current learning rate function; a winning domain determination module, configured to determine the weight adjustment domain as the current winning domain of the self-organizing neural network and to determine each adjusted weight as the current weight of the corresponding neuron in the current winning domain; and an input data acquisition module, configured to obtain the next input data of the self-organizing neural network, determine the next input data as the current training data of the self-organizing neural network, and return to the operation of calculating the similarity parameter between each weight and the current training data, until the maximum value of the current learning rate function is within the set numerical range, so as to obtain the weight of each neuron in the self-organizing neural network and complete the training.
In the above scheme, the adjustment domain determination module may include: an adjustment domain determination unit, configured to shrink the current winning domain according to a set shrinking rule, centered on the winning neuron, to obtain the weight adjustment domain corresponding to the winning neuron; and a learning rate determination unit, configured to determine the current learning rate function of the self-organizing neural network according to the learning rate function used at the previous weight adjustment and the adjustment rule of the network's learning rate function.
In the above scheme, the winning domain determination module may be configured to: determine the weight adjustment domain as the current winning domain of the self-organizing neural network, normalize each adjusted weight, and determine each normalized weight as the current weight of the corresponding neuron in the current winning domain.
In the above scheme, the input data acquisition module may be configured to: obtain the next input data of the self-organizing neural network, normalize the next input data, determine the normalized next input data as the current training data of the self-organizing neural network, and return to the operation of calculating the similarity parameter between each weight and the current training data, until the maximum value of the current learning rate function is within the set numerical range, so as to obtain the weight of each neuron in the self-organizing neural network and complete the training.
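Putting the modules above together, one possible shape of the training loop is sketched below, under several assumptions the patent leaves open: a 1-D neuron grid, exponential learning-rate decay, a neighborhood that shrinks by one neuron per side each round, and Euclidean similarity standing in for the coherence function to keep the sketch short:

```python
import math
import random

def train_som(data, n_neurons=5, dim=4, eta0=0.5, tau=30.0, stop_hi=0.01, rounds=500):
    """Sketch of the iterate-until-the-learning-rate-converges loop."""
    random.seed(1)
    weights = [[random.random() for _ in range(dim)] for _ in range(n_neurons)]
    radius = n_neurons  # current winning domain: the whole 1-D grid at first
    for t in range(rounds):
        eta = eta0 * math.exp(-t / tau)  # current learning rate function value
        if eta <= stop_hi:               # maximum of learning rate within set range
            break
        x = data[t % len(data)]          # next input data, fixed order with wrap-around
        # winner: most similar weight (Euclidean distance as the stand-in measure)
        win = min(range(n_neurons),
                  key=lambda i: sum((w - xi) ** 2 for w, xi in zip(weights[i], x)))
        radius = max(1, radius - 1)      # shrink around the winner -> adjustment domain
        for i in range(max(0, win - radius), min(n_neurons, win + radius + 1)):
            weights[i] = [w + eta * (xi - w) for w, xi in zip(weights[i], x)]
        # the adjustment domain becomes the current winning domain for the next round
    return weights
```

Since each update is a convex combination of the old weight and the input, weights stay inside the input range, which is also why normalizing inputs and weights (as the embodiment prefers) keeps subsequent similarity calculations simple.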
The apparatus for determining the winning neuron provided by embodiment four of the present invention can execute the method for determining the winning neuron provided by any embodiment of the present invention, and has the functional modules and beneficial effects corresponding to that method. For technical details not described in this embodiment, reference may be made to the method for determining the winning neuron provided by any embodiment of the present invention.
Embodiment five
Fig. 5 is a structural schematic diagram of a device/terminal/server provided by embodiment five of the present invention. As shown in Fig. 5, the device/terminal/server includes a processor 50 and a memory 51, and may also include an input apparatus 52 and an output apparatus 53. There may be one or more processors 50 in the device/terminal/server; one processor 50 is taken as an example in Fig. 5. The processor 50, memory 51, input apparatus 52, and output apparatus 53 in the device/terminal/server may be connected by a bus or in another manner; connection by a bus is taken as an example in Fig. 5.
As a computer-readable storage medium, the memory 51 can be used to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the method for determining the winning neuron in the embodiments of the present invention (for example, the weight data acquisition module 401 and the neuron determination module 402 in the apparatus for determining the winning neuron). By running the software programs, instructions, and modules stored in the memory 51, the processor 50 executes the various functional applications and data processing of the device/terminal/server, that is, implements the above method for determining the winning neuron.
The memory 51 may mainly include a program storage area and a data storage area, where the program storage area can store the operating system and the application programs required for at least one function, and the data storage area can store data created according to the use of the terminal, etc. In addition, the memory 51 may include high-speed random access memory and may also include non-volatile memory, for example at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, the memory 51 may further include memory located remotely from the processor 50; such remote memories may be connected to the device/terminal/server through a network. Examples of such networks include, but are not limited to, the internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
The input apparatus 52 can be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the device/terminal/server. The output apparatus 53 may include display devices such as a display screen.
Embodiment five of the present invention also provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform a method for determining a winning neuron, the method comprising:
obtaining the current weight of each neuron in the current winning domain of a self-organizing neural network and the current training data of the self-organizing neural network; and
calculating the similarity parameter between each current weight and the current training data, and determining the winning neuron of the self-organizing neural network based on the similarity parameters.
Of course, in the storage medium containing computer-executable instructions provided by the embodiments of the present invention, the method operations performed by the computer-executable instructions are not limited to those described above; the instructions may also perform the relevant operations of the method for determining the winning neuron provided by any embodiment of the present invention.
From the above description of the embodiments, those skilled in the art can clearly understand that the present invention can be implemented by software plus the necessary general-purpose hardware, and of course also by hardware alone, though in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, or the part of it that contributes over the prior art, can be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as a computer floppy disk, read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), flash memory (FLASH), hard disk, or optical disc, and which includes instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods described in the embodiments of the present invention.
It is worth noting that in the above embodiment of the apparatus for determining the winning neuron, the included units and modules are divided only according to functional logic, and the division is not limited to the above as long as the corresponding functions can be realized; in addition, the specific names of the functional units are only for the convenience of distinguishing them from each other and are not intended to limit the protection scope of the present invention.
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will understand that the present invention is not limited to the specific embodiments described herein, and that various obvious changes, readjustments, and substitutions can be made without departing from the protection scope of the present invention. Therefore, although the present invention has been described in further detail through the above embodiments, it is not limited to them; without departing from the inventive concept, it may also include more other equivalent embodiments, and the scope of the present invention is determined by the scope of the appended claims.
Claims (10)
1. A method for determining a winning neuron, comprising:
obtaining the current weight of each neuron in the current winning domain of a self-organizing neural network and the current training data of the self-organizing neural network; and
calculating the similarity parameter between each current weight and the current training data, and determining the winning neuron of the self-organizing neural network based on the similarity parameter.
2. The method according to claim 1, wherein the similarity parameter is a coherence function value, and calculating the similarity parameter between each current weight and the current training data and determining the winning neuron of the self-organizing neural network based on the similarity parameter comprises:
calculating the coherence function value between each current weight and the current training data, the current weight and the current training data each being a row vector or a column vector; and
determining the neuron corresponding to the current weight with the largest coherence function value as the winning neuron of the self-organizing neural network.
3. The method according to claim 2, wherein calculating the coherence function value between each current weight and the current training data comprises:
calculating the first auto-spectral density of each current weight, the second auto-spectral density of the current training data, and the cross-spectral density of each current weight and the current training data; and
determining the coherence function value between each current weight and the current training data according to the first auto-spectral density, the second auto-spectral density, and the cross-spectral density.
4. The method according to claim 3, wherein determining the coherence function value between each current weight and the current training data according to the first auto-spectral density, the second auto-spectral density, and the cross-spectral density comprises:
calculating the coherence function value between each current weight and the current training data by the following formula:

Cxy(ω) = |Sxy(ω)|² / (Sx(ω) · Sy(ω))

where Sx(ω) is the first auto-spectral density of the current weight, Sy(ω) is the second auto-spectral density of the current training data, and Sxy(ω) is the cross-spectral density of the current weight and the current training data.
5. The method according to claim 1, further comprising, after the winning neuron of the self-organizing neural network has been determined based on the similarity parameter:
determining the weight adjustment domain corresponding to the winning neuron and the current learning rate function of the self-organizing neural network;
calculating the adjusted weight of each neuron in the weight adjustment domain according to the current learning rate function;
determining the weight adjustment domain as the current winning domain of the self-organizing neural network, and determining each adjusted weight as the current weight of the corresponding neuron in the current winning domain; and
obtaining the next input data of the self-organizing neural network, determining the next input data as the current training data of the self-organizing neural network, and returning to the operation of calculating the similarity parameter between each weight and the current training data, until the maximum value of the current learning rate function is within a set numerical range, so as to obtain the weight of each neuron in the self-organizing neural network and complete the training.
6. The method according to claim 5, wherein determining the weight adjustment domain corresponding to the winning neuron and the current learning rate function of the self-organizing neural network comprises:
shrinking the current winning domain according to a set shrinking rule, centered on the winning neuron, to obtain the weight adjustment domain corresponding to the winning neuron; and
determining the current learning rate function of the self-organizing neural network according to the learning rate function used at the previous weight adjustment and the adjustment rule of the network's learning rate function.
7. The method according to claim 5, wherein determining each adjusted weight as the current weight of each neuron in the current winning domain comprises:
normalizing each adjusted weight, and determining each normalized weight as the current weight of the corresponding neuron in the current winning domain;
and correspondingly, determining the next input data as the current training data of the self-organizing neural network comprises:
normalizing the next input data, and determining the normalized next input data as the current training data of the self-organizing neural network.
8. An apparatus for determining a winning neuron, comprising:
a weight data acquisition module, configured to obtain the current weight of each neuron in the current winning domain of a self-organizing neural network and the current training data of the self-organizing neural network; and
a neuron determination module, configured to calculate the similarity parameter between each current weight and the current training data, and to determine the winning neuron of the self-organizing neural network based on the similarity parameter.
9. A device, comprising:
one or more processors; and
a memory for storing one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method for determining a winning neuron according to any one of claims 1-7.
10. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method for determining a winning neuron according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810697923.5A CN108960424A (en) | 2018-06-29 | 2018-06-29 | Determination method, apparatus, equipment and the storage medium of triumph neuron |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108960424A true CN108960424A (en) | 2018-12-07 |
Family
ID=64484466
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810697923.5A Pending CN108960424A (en) | 2018-06-29 | 2018-06-29 | Determination method, apparatus, equipment and the storage medium of triumph neuron |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108960424A (en) |
2018-06-29: application CN201810697923.5A filed in China (CN); status: Pending
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112816884A (en) * | 2021-03-01 | 2021-05-18 | 中国人民解放军国防科技大学 | Method, device and equipment for monitoring health state of satellite lithium ion battery |
CN113705858A (en) * | 2021-08-02 | 2021-11-26 | 西安交通大学 | Shortest path planning method, system, equipment and storage medium for multi-target area |
CN113705858B (en) * | 2021-08-02 | 2023-07-11 | 西安交通大学 | Shortest path planning method, system, equipment and storage medium for multiple target areas |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103559504B (en) | Image target category identification method and device | |
EP4080889A1 (en) | Anchor information pushing method and apparatus, computer device, and storage medium | |
Cao et al. | Automatic selection of t-SNE perplexity | |
CN109902222A (en) | Recommendation method and device | |
US20210049424A1 (en) | Scheduling method of request task and scheduling center server | |
JP2023523029A (en) | Image recognition model generation method, apparatus, computer equipment and storage medium | |
CN110210558B (en) | Method and device for evaluating performance of neural network | |
CN110046706A (en) | Model generating method, device and server | |
CN108960424A (en) | Determination method, apparatus, equipment and the storage medium of triumph neuron | |
CN109389140A (en) | The method and system of quick searching cluster centre based on Spark | |
CN111191722B (en) | Method and device for training prediction model through computer | |
CN110263136B (en) | Method and device for pushing object to user based on reinforcement learning model | |
DE112021004843T5 (en) | CREATING A SMART HOME BUBBLE | |
CN109255377A (en) | Instrument recognition methods, device, electronic equipment and storage medium | |
CN116522565B (en) | BIM-based power engineering design power distribution network planning method and computer equipment | |
CN113204642A (en) | Text clustering method and device, storage medium and electronic equipment | |
CN111260056B (en) | Network model distillation method and device | |
CN110555099B (en) | Computer-implemented method and apparatus for language processing using neural networks | |
CN111931916A (en) | Exploration method and device of deep learning model | |
KR102154425B1 (en) | Method And Apparatus For Generating Similar Data For Artificial Intelligence Learning | |
CN110782020A (en) | Network structure determination method and device and electronic system | |
Marcoulides et al. | Exploratory data mining algorithms for conducting searches in structural equation modeling: A comparison of some fit criteria | |
CN107203916B (en) | User credit model establishing method and device | |
CN112100482A (en) | Search result ordering method and device, electronic equipment and storage medium | |
Konen et al. | Parameter-tuned data mining: A general framework |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20181207 |
|
RJ01 | Rejection of invention patent application after publication |