CN110458288A - Data forecasting system, method and device based on wavelet neural network - Google Patents
- Publication number
- CN110458288A CN110458288A CN201910730762.XA CN201910730762A CN110458288A CN 110458288 A CN110458288 A CN 110458288A CN 201910730762 A CN201910730762 A CN 201910730762A CN 110458288 A CN110458288 A CN 110458288A
- Authority
- CN
- China
- Prior art keywords
- data
- subelement
- neural network
- group
- signal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/061—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using biological neurons, e.g. biological neurons connected to an integrated circuit
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
Abstract
The invention belongs to the field of neural network technology, and in particular relates to a data forecasting system, method and device based on a wavelet neural network. The system comprises: a data pre-processing unit for pre-processing data; a neural network construction unit for constructing the neural network; a swarm-algorithm unit for training the neural network; and a reverse-training unit for carrying out back-propagation training on the result obtained by the swarm-algorithm unit. The data pre-processing unit is signal-connected to the neural network construction unit; the neural network construction unit is signal-connected to the swarm-algorithm unit; the swarm-algorithm unit is signal-connected to the reverse-training unit; and the reverse-training unit is signal-connected to the swarm-algorithm unit. The invention has the advantages of accurate analysis results, a high degree of intelligence, and high analysis efficiency.
Description
Technical field
The invention belongs to the field of neural network technology, and in particular relates to a data forecasting system, method and device based on a wavelet neural network.
Background technique
In the field of statistics, data analysis is sometimes divided into descriptive statistical analysis, exploratory data analysis, and confirmatory data analysis. Exploratory data analysis focuses on discovering new features in the data, while confirmatory data analysis focuses on confirming or falsifying existing hypotheses.
Exploratory data analysis refers to a method of analyzing data in order to form hypotheses worth testing, and is a complement to traditional statistical hypothesis testing. The method is named after the American statistician John Tukey.
Qualitative data analysis, also known as "qualitative research" or "qualitative study analysis", refers to the analysis of non-numerical data such as words, photographs and observation results; in other words, the analysis of such material has a very wide range of application. A typical data analysis may include the following three steps:
1. Exploratory data analysis: when data are first obtained they may be disordered and show no obvious regularity. By plotting, tabulating, fitting equations of various forms, and computing certain characteristic quantities, one explores in which direction and in which way the regularities hidden in the data should be sought and revealed.
2. Model selection: on the basis of the exploratory analysis, one or several classes of candidate models are proposed, and further analysis is then used to select one of them.
3. Inferential analysis: mathematical statistics methods are usually used to draw inferences about the reliability and precision of the selected model or estimate.
Summary of the invention
In view of this, the main object of the present invention is to provide a data forecasting system, method and device based on a wavelet neural network, which have the advantages of accurate analysis results, a high degree of intelligence, and high analysis efficiency.
In order to achieve the above objects, the technical scheme of the present invention is realized as follows:
A data forecasting system based on a wavelet neural network, the system comprising: a data pre-processing unit for pre-processing data; a neural network construction unit for constructing the neural network; a swarm-algorithm unit for training the neural network; and a reverse-training unit for carrying out back-propagation training on the result obtained by the swarm-algorithm unit. The data pre-processing unit is signal-connected to the neural network construction unit; the neural network construction unit is signal-connected to the swarm-algorithm unit; the swarm-algorithm unit is signal-connected to the reverse-training unit; and the reverse-training unit is signal-connected to the swarm-algorithm unit.
Further, the data pre-processing unit comprises: a data analysis subunit, a missing-value handling subunit, an outlier handling subunit, a de-duplication subunit, and a noise-data handling subunit. The data analysis subunit is signal-connected to the missing-value handling subunit; the missing-value handling subunit is signal-connected to the outlier handling subunit; the outlier handling subunit is signal-connected to the de-duplication subunit; the de-duplication subunit is signal-connected to the noise-data handling subunit; and the noise-data handling subunit outputs its result to the neural network construction unit.
Further, the neural network construction unit comprises: an input-layer construction subunit, a hidden-layer construction subunit, and an output-layer construction subunit. The input-layer construction subunit determines the number of neurons of the input layer and is signal-connected to the hidden-layer construction subunit; the hidden-layer construction subunit determines the number of neurons of the hidden layer and is signal-connected to the output-layer construction subunit.
Further, the swarm-algorithm unit comprises: an initialization subunit, a decoding subunit, a fitness computation subunit, and an update subunit. The initialization subunit is signal-connected to the decoding subunit; the decoding subunit is signal-connected to the fitness computation subunit; the fitness computation subunit is signal-connected to the update subunit.
Further, the reverse-training unit comprises: an error computation subunit, an optimal-parameter acquisition subunit, a network training subunit, a test-data entry subunit, and a prediction subunit. The error computation subunit is signal-connected to the neural network construction unit and to the fitness computation subunit; the optimal-parameter acquisition subunit is signal-connected to the update subunit and to the network training subunit; the network training subunit is signal-connected to the prediction subunit; and the test-data entry subunit is signal-connected to the prediction subunit.
A data prediction method based on a wavelet neural network, the method executing the following steps:
Step 1: pre-process the data, generating the pre-processed data;
Step 2: construct the neural network according to the pre-processed data;
Step 3: train the neural network using a swarm algorithm;
Step 4: take the optimal solution obtained by the swarm-algorithm training as the initial values of the network parameters, and train the network using the back-propagation algorithm.
Further, in Step 1, the method of pre-processing the data and generating the pre-processed data executes the following steps:
Step 1.1: analyze whether the data contain missing values, outliers, duplicates, or noise;
Step 1.2: handle missing data, comprising: divide the data into several groups according to the attribute whose correlation coefficient with the missing-value attribute is largest, compute the mean of each group separately, and substitute these means for the missing values;
Step 1.3: handle outliers, comprising: judge whether the data follow a normal distribution. If the data follow a normal distribution, an outlier is determined as a value in a group of measurements whose deviation from the mean exceeds 3 standard deviations; for normally distributed data, the probability of a value falling outside the interval μ ± 3σ is P(|x − μ| > 3σ) ≈ 0.003. If the data do not follow a normal distribution, they are described in terms of how many standard deviations they lie from the mean;
Step 1.4: de-duplicate the data;
Step 1.5: handle noisy data, comprising: smooth the data by fitting a function to them; linear regression involves finding the straight line fitting two attributes (or variables), so that one attribute can be used to predict the other.
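Steps 1.2 and 1.3 can be sketched as follows. This is a minimal illustration assuming tabular data held in pandas; the helper names and the choice of grouping column are the sketch's own, and selecting the most-correlated attribute to group by is left to the caller.

```python
import numpy as np
import pandas as pd

def impute_by_group_mean(df, target, group_col):
    # Step 1.2 sketch: fill missing values in `target` with the mean of
    # the group the row belongs to. `group_col` stands in for the attribute
    # most correlated with `target` (chosen beforehand by the caller).
    filled = df[target].fillna(df.groupby(group_col)[target].transform("mean"))
    return df.assign(**{target: filled})

def three_sigma_outliers(x):
    # Step 1.3 sketch: flag values deviating from the mean by more than
    # 3 standard deviations -- valid when the data are roughly normal,
    # where such values occur with probability about 0.003.
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std()
    return np.abs(x - mu) > 3 * sigma
```

For non-normal data, `(x - mu) / sigma` can still be reported as a descriptive distance from the mean, as Step 1.3 suggests, without applying the 3σ cutoff.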
Further, in Step 2, the method of constructing the neural network according to the pre-processed data executes the following steps: the number of input-layer neurons is determined by the dimension of the input feature vector, and the number of output-layer neurons is determined by the number of values the network is to predict. For the number of hidden-layer nodes, with L nodes in the input layer and N nodes in the output layer, the following formula is used: M = √(L + N) + a, where M is the number of hidden nodes and a is a constant between 0 and 10.
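The hidden-layer sizing rule described above can be sketched in a few lines. Note that the formula itself is not legible in the source text; M = √(L + N) + a is the empirical rule this wording ("a is a constant between 0 and 10") conventionally refers to, so treat the sketch as a reconstruction under that assumption.

```python
import math

def hidden_nodes(L, N, a=4):
    # Empirical sizing rule M = sqrt(L + N) + a, with `a` a constant
    # in [0, 10]. The default a=4 is illustrative, not from the patent.
    return round(math.sqrt(L + N) + a)
```

In practice, a is typically tuned by trying several values in the allowed range and keeping the network with the lowest validation error.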
Further, in Step 3, the method of training the neural network using the swarm algorithm executes the following steps:
Step 3.1: initialize the particles and velocities, comprising: initialize N particles to form the swarm, X = (X1, X2, ..., XN), with the i-th particle Xi = (xi1, xi2, ..., xiD);
Step 3.2: decode each particle into a network, comprising: set the topology of the network to L input-layer nodes, M hidden-layer nodes, and N output-layer nodes. The network then has L × M weights from the input layer to the hidden layer and M × N weights from the hidden layer to the output layer; each hidden-layer node further has one shift parameter and one frequency parameter, i.e. 2 × M such parameters, giving L × M + 2 × M + M × N parameters in total. The coding order of the particle vector is: the first L × M parameters are the input-to-hidden weights, the next M parameters are the frequency parameters, the following M parameters are the shift parameters, and the remaining parameters are the M × N hidden-to-output weights.
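The particle coding order described in Step 3.2 can be sketched as a decoding routine. One caveat: the source text says each hidden node carries one shift and one frequency parameter but then counts them as "2 × N"; since these parameters belong to the M hidden nodes, the sketch assumes the consistent count 2 × M. Function and variable names are illustrative.

```python
import numpy as np

def decode_particle(p, L, M, N):
    # Split a flat particle vector into the four parameter groups of the
    # wavelet network, in the stated coding order:
    # input->hidden weights, frequency params, shift params, hidden->output weights.
    p = np.asarray(p, dtype=float)
    assert p.size == L * M + 2 * M + M * N, "particle length must match topology"
    i = 0
    W_in = p[i:i + L * M].reshape(M, L); i += L * M   # first L*M entries
    freq = p[i:i + M]; i += M                         # next M entries
    shift = p[i:i + M]; i += M                        # next M entries
    W_out = p[i:i + M * N].reshape(N, M)              # remaining M*N entries
    return W_in, freq, shift, W_out
```

Each decoded parameter set defines one candidate network whose fitness (e.g. training error) is then evaluated in Step 3.3.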
Step 3.3: compute the particle fitness, comprising: compute each particle's fitness according to the optimization objective. The velocity of the i-th particle is Vi = (vi1, vi2, ..., viD); at each iteration the individual extremum Pi = (pi1, pi2, ..., piD) (the fitness-optimal position found by the individual particle from the first iteration to the current one) and the swarm extremum Pg = (pg1, pg2, ..., pgD) are recorded;
Step 3.4: update the particle velocities and positions using the following formulae: vid = w·vid + c1·r1·(pid − xid) + c2·r2·(pgd − xid) and xid = xid + α·vid, where w is the inertia factor, c1 and c2 are acceleration factors, r1 and r2 are random numbers in [0, 1], and α is a constant.
A data prediction device based on a wavelet neural network, the device being a computer program stored on a computer medium, comprising: a code segment for pre-processing data and generating the pre-processed data; a code segment for constructing the neural network according to the pre-processed data; a code segment for training the neural network using a swarm algorithm; and a code segment for taking the optimal solution obtained by the swarm-algorithm training as the initial values of the network parameters and training the network using the back-propagation algorithm.
The data forecasting system, method and device based on a wavelet neural network of the present invention have the following beneficial effects: data can be analyzed and predicted with high accuracy, and comparison with the traditional ARIMA algorithm shows a clear advantage. This provides a more efficient and accurate processing mode for the analysis and prediction of data centers.
Brief description of the drawings
Fig. 1 is a schematic structural diagram of the data forecasting system based on a wavelet neural network provided by an embodiment of the present invention;
Fig. 2 is a schematic flow chart of the data prediction method based on a wavelet neural network provided by an embodiment of the present invention;
Fig. 3 is a topological diagram of the network structure of the wavelet neural network used by the data forecasting system, method and device based on a wavelet neural network provided by an embodiment of the present invention.
Detailed description of the embodiments
The method of the present invention is described in further detail below with reference to the accompanying drawings and the embodiments of the present invention.
Embodiment 1
A data forecasting system based on a wavelet neural network, the system comprising: a data pre-processing unit for pre-processing data; a neural network construction unit for constructing the neural network; a swarm-algorithm unit for training the neural network; and a reverse-training unit for carrying out back-propagation training on the result obtained by the swarm-algorithm unit. The data pre-processing unit is signal-connected to the neural network construction unit; the neural network construction unit is signal-connected to the swarm-algorithm unit; the swarm-algorithm unit is signal-connected to the reverse-training unit; and the reverse-training unit is signal-connected to the swarm-algorithm unit.
Embodiment 2
On the basis of the previous embodiment, the data pre-processing unit comprises: a data analysis subunit, a missing-value handling subunit, an outlier handling subunit, a de-duplication subunit, and a noise-data handling subunit. The data analysis subunit is signal-connected to the missing-value handling subunit; the missing-value handling subunit is signal-connected to the outlier handling subunit; the outlier handling subunit is signal-connected to the de-duplication subunit; the de-duplication subunit is signal-connected to the noise-data handling subunit; and the noise-data handling subunit outputs its result to the neural network construction unit.
Specifically, in machine learning and related fields, the computational models of artificial neural networks are inspired by the central nervous system of animals (in particular the brain), and are used to estimate or approximate functions that may depend on a large number of inputs and are generally unknown. Artificial neural networks typically appear as systems of interconnected "neurons" that compute values from inputs and, owing to their adaptive nature, are capable of machine learning and pattern recognition. There are generally three concerns:
Choice of model: this depends on the representation of the data and the application. An overly complex model often leads to problems in learning.
Learning algorithm: there are numerous trade-offs between learning algorithms. Almost any algorithm will work well on a specific training data set given the correct hyper-parameters; however, selecting and tuning an algorithm for training on unseen data requires a significant amount of experimentation.
Robustness: if the model, the cost function, and the learning algorithm are selected appropriately, the resulting neural network can be very robust. With a correct implementation, artificial neural networks can be applied naturally to online learning and large-data-set applications. Their simple implementation and the predominantly local dependencies in their structure allow fast, parallel implementation in hardware.
Embodiment 3
On the basis of the previous embodiment, the neural network construction unit comprises: an input-layer construction subunit, a hidden-layer construction subunit, and an output-layer construction subunit. The input-layer construction subunit determines the number of neurons of the input layer and is signal-connected to the hidden-layer construction subunit; the hidden-layer construction subunit determines the number of neurons of the hidden layer and is signal-connected to the output-layer construction subunit.
Embodiment 4
On the basis of the previous embodiment, the swarm-algorithm unit comprises: an initialization subunit, a decoding subunit, a fitness computation subunit, and an update subunit. The initialization subunit is signal-connected to the decoding subunit; the decoding subunit is signal-connected to the fitness computation subunit; the fitness computation subunit is signal-connected to the update subunit.
Embodiment 5
On the basis of the previous embodiment, the reverse-training unit comprises: an error computation subunit, an optimal-parameter acquisition subunit, a network training subunit, a test-data entry subunit, and a prediction subunit. The error computation subunit is signal-connected to the neural network construction unit and to the fitness computation subunit; the optimal-parameter acquisition subunit is signal-connected to the update subunit and to the network training subunit; the network training subunit is signal-connected to the prediction subunit; and the test-data entry subunit is signal-connected to the prediction subunit.
Embodiment 6
A data prediction method based on a wavelet neural network, the method executing the following steps:
Step 1: pre-process the data, generating the pre-processed data;
Step 2: construct the neural network according to the pre-processed data;
Step 3: train the neural network using a swarm algorithm;
Step 4: take the optimal solution obtained by the swarm-algorithm training as the initial values of the network parameters, and train the network using the back-propagation algorithm.
Embodiment 7
On the basis of the previous embodiment, in Step 1, the method of pre-processing the data and generating the pre-processed data executes the following steps:
Step 1.1: analyze whether the data contain missing values, outliers, duplicates, or noise;
Step 1.2: handle missing data, comprising: divide the data into several groups according to the attribute whose correlation coefficient with the missing-value attribute is largest, compute the mean of each group separately, and substitute these means for the missing values;
Step 1.3: handle outliers, comprising: judge whether the data follow a normal distribution. If the data follow a normal distribution, an outlier is determined as a value in a group of measurements whose deviation from the mean exceeds 3 standard deviations; for normally distributed data, the probability of a value falling outside the interval μ ± 3σ is P(|x − μ| > 3σ) ≈ 0.003. If the data do not follow a normal distribution, they are described in terms of how many standard deviations they lie from the mean;
Step 1.4: de-duplicate the data;
Step 1.5: handle noisy data, comprising: smooth the data by fitting a function to them; linear regression involves finding the straight line fitting two attributes (or variables), so that one attribute can be used to predict the other.
Specifically, data cleansing, as the name suggests, means "washing off" the "dirt": it is the final stage of discovering and correcting recognizable errors in data files, including checking data consistency and handling invalid and missing values. Because the data in a data warehouse are a collection of data oriented towards a certain subject, extracted from multiple business systems and containing historical data, it is unavoidable that some data are erroneous and some conflict with one another. These erroneous or conflicting data are clearly unwanted and are called "dirty data". We "wash off" the dirty data according to certain rules; this is data cleansing. The task of data cleansing is to filter out the data that do not meet requirements and pass the filtering result to the competent business department, which confirms whether the data should be discarded or corrected by the business unit and re-extracted. The data that do not meet requirements fall mainly into three categories: incomplete data, erroneous data, and duplicated data. Data cleansing differs from questionnaire auditing in that cleansing of the entered data is usually performed by computer rather than by hand. Owing to errors in surveying, coding, and data entry, the data may contain some invalid and missing values, which need to be handled appropriately. Common handling methods are: estimation, casewise deletion, variable deletion, and pairwise deletion.
Estimation. The simplest method is to replace invalid and missing values with the sample mean, median, or mode of the variable. This method is simple, but it does not make full use of the information present in the data, and the error may be large. Another method is to estimate from the respondent's answers to other questions, via correlation analysis between variables or logical deduction. For example, ownership of a certain product may be related to household income, so the probability of owning the product can be calculated from the respondent's household income.
Casewise deletion rejects any sample containing missing values. Since many questionnaires may contain missing values, this practice may greatly reduce the effective sample size and fail to make full use of the collected data. It is therefore suitable only when a key variable is missing, or when the proportion of samples containing invalid or missing values is very small.
Variable deletion. If a certain variable has many invalid and missing values, and the variable is not particularly important for the problem under study, deletion of the variable can be considered. This practice reduces the number of variables available for analysis without changing the sample size.
Pairwise deletion represents invalid and missing values with a special code (usually 9, 99, 999, etc.) while retaining all the variables and samples in the data set. Each specific calculation, however, uses only the samples with complete answers, so the effective sample size varies with the variables involved in each analysis. This is a conservative handling method that retains the available information in the data set to the greatest extent.
Different handling methods may affect the analysis results, especially when the occurrence of missing values is non-random and the variables are clearly correlated. Therefore, invalid and missing values should be avoided as far as possible during the survey, to ensure the completeness of the data.
Embodiment 8
On the basis of the previous embodiment, in Step 2, the method of constructing the neural network according to the pre-processed data executes the following steps: the number of input-layer neurons is determined by the dimension of the input feature vector, and the number of output-layer neurons is determined by the number of values the network is to predict. For the number of hidden-layer nodes, with L nodes in the input layer and N nodes in the output layer, the following formula is used: M = √(L + N) + a, where M is the number of hidden nodes and a is a constant between 0 and 10.
Embodiment 9
On the basis of the previous embodiment, in Step 3, the method of training the neural network using the swarm algorithm executes the following steps:
Step 3.1: initialize the particles and velocities, comprising: initialize N particles to form the swarm, X = (X1, X2, ..., XN), with the i-th particle Xi = (xi1, xi2, ..., xiD);
Step 3.2: decode each particle into a network, comprising: set the topology of the network to L input-layer nodes, M hidden-layer nodes, and N output-layer nodes. The network then has L × M weights from the input layer to the hidden layer and M × N weights from the hidden layer to the output layer; each hidden-layer node further has one shift parameter and one frequency parameter, i.e. 2 × M such parameters, giving L × M + 2 × M + M × N parameters in total. The coding order of the particle vector is: the first L × M parameters are the input-to-hidden weights, the next M parameters are the frequency parameters, the following M parameters are the shift parameters, and the remaining parameters are the M × N hidden-to-output weights;
Step 3.3: compute the particle fitness, comprising: compute each particle's fitness according to the optimization objective. The velocity of the i-th particle is Vi = (vi1, vi2, ..., viD); at each iteration the individual extremum Pi = (pi1, pi2, ..., piD) (the fitness-optimal position found by the individual particle from the first iteration to the current one) and the swarm extremum Pg = (pg1, pg2, ..., pgD) are recorded;
Step 3.4: update the particle velocities and positions using the following formulae: vid = w·vid + c1·r1·(pid − xid) + c2·r2·(pgd − xid) and xid = xid + α·vid, where w is the inertia factor, c1 and c2 are acceleration factors, r1 and r2 are random numbers in [0, 1], and α is a constant.
Embodiment 10
A data prediction device based on a wavelet neural network, the device being a computer program stored on a computer medium, comprising: a code segment for pre-processing data and generating the pre-processed data; a code segment for constructing the neural network according to the pre-processed data; a code segment for training the neural network using a swarm algorithm; and a code segment for taking the optimal solution obtained by the swarm-algorithm training as the initial values of the network parameters and training the network using the back-propagation algorithm.
The foregoing is merely an embodiment of the present invention, but the scope of the present invention is not limited thereby; any structural variation made according to the present invention that does not depart from the gist of the invention shall be regarded as falling within the scope of protection of the present invention.
Those of ordinary skill in the art can clearly understand that, for convenience and brevity of description, the specific working processes and related explanations of the system described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
It should be noted that the system provided by the above embodiment is illustrated only in terms of the division of the above functional modules. In practical applications, the above functions may be allocated to different functional modules as required; that is, the modules or steps in the embodiments of the present invention may be decomposed or recombined. For example, the modules of the above embodiment may be merged into one module, or further split into multiple sub-modules, to accomplish all or part of the functions described above. The names of the modules and steps involved in the embodiments of the present invention are intended only to distinguish them, and are not to be regarded as improper limitations of the present invention.
Those of ordinary skill in the art can clearly understand that, for convenience and brevity of description, the specific working processes and related explanations of the storage device and processing device described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
Those skilled in the art should recognize that the modules and method steps described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. Programs corresponding to software modules and method steps may reside in random-access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. To clearly illustrate the interchangeability of electronic hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are executed in electronic hardware or software depends on the specific application and the design constraints of the technical scheme. Those skilled in the art may use different methods to realize the described functions for each specific application, but such realizations should not be considered beyond the scope of the present invention.
The terms "first", "second", etc. are used to distinguish similar objects, not to describe or indicate a specific order or sequence.
The term "comprising" or any other similar term is intended to cover a non-exclusive inclusion, so that a process, method, article, or device comprising a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device.
The technical solution of the present invention has thus been described with reference to the preferred embodiments shown in the drawings; however, those skilled in the art will readily understand that the protection scope of the present invention is obviously not limited to these specific embodiments. Without departing from the principle of the present invention, those skilled in the art may make equivalent changes or substitutions to the relevant technical features, and the technical solutions after such changes or substitutions will fall within the protection scope of the present invention.
The foregoing is only a preferred embodiment of the present invention and is not intended to limit the protection scope of the present invention.
Claims (10)
1. A data forecasting system based on a wavelet neural network, characterized in that the system comprises: a data pre-processing unit for pre-processing data; a neural network construction unit for constructing the neural network; a swarm-algorithm unit for training the neural network; and a reverse-training unit for carrying out back-propagation training on the result obtained by the swarm-algorithm unit; the data pre-processing unit is signal-connected to the neural network construction unit; the neural network construction unit is signal-connected to the swarm-algorithm unit; the swarm-algorithm unit is signal-connected to the reverse-training unit; the reverse-training unit is signal-connected to the swarm-algorithm unit.
2. The system as claimed in claim 1, characterized in that the data pre-processing unit comprises: a data analysis subunit, a missing-value handling subunit, an outlier handling subunit, a de-duplication subunit, and a noise-data handling subunit; the data analysis subunit is signal-connected to the missing-value handling subunit; the missing-value handling subunit is signal-connected to the outlier handling subunit; the outlier handling subunit is signal-connected to the de-duplication subunit; the de-duplication subunit is signal-connected to the noise-data handling subunit; the noise-data handling subunit outputs its result to the neural network construction unit.
3. The system of claim 1, characterized in that the neural network construction unit comprises: an input-layer construction subunit, a hidden-layer construction subunit, and an output-layer construction subunit. The input-layer construction subunit determines the number of neurons in the input layer and is signal-connected to the hidden layer; the hidden-layer construction subunit determines the number of neurons in the hidden layer and is signal-connected to the output layer.
4. The system of claim 3, characterized in that the swarm algorithm unit comprises: an initialization subunit, a decoding subunit, a fitness computation subunit, and an update subunit. The initialization subunit is signal-connected to the decoding subunit; the decoding subunit is signal-connected to the fitness computation subunit; and the fitness computation subunit is signal-connected to the update subunit.
5. The system of claim 3, characterized in that the reverse training unit comprises: an error computation subunit, an optimal-parameter acquisition subunit, a network training subunit, a test-data entry subunit, and a prediction subunit. The error computation subunit is signal-connected to the neural network construction unit and to the fitness computation subunit, respectively; the optimal-parameter acquisition subunit is signal-connected to the update subunit and to the network training subunit, respectively; the network training subunit is signal-connected to the prediction subunit; and the test-data entry subunit is signal-connected to the prediction subunit.
6. A data prediction method based on a wavelet neural network, using the system of any one of claims 1 to 5, characterized in that the method performs the following steps:
Step 1: perform data preprocessing to generate preprocessed data;
Step 2: construct a neural network according to the preprocessed data;
Step 3: train the neural network using a swarm algorithm;
Step 4: use the optimal solution obtained by the swarm algorithm training as the initial values of the network parameters, and train the network using the back-propagation algorithm.
7. The method of claim 6, characterized in that in Step 1, the method of performing data preprocessing to generate preprocessed data performs the following steps:
Step 1.1: analyze whether the data contain missing values, outliers, duplicates, and noise;
Step 1.2: handle missing values, comprising: dividing the data into several groups according to the attribute having the largest correlation coefficient with the attribute containing the missing values, computing the mean of each group separately, and filling the missing values with these means;
Step 1.3: handle outliers, comprising: determining whether the data follow a normal distribution; if the data follow a normal distribution, a value in a set of measurements whose deviation from the mean exceeds 3 standard deviations is judged to be an outlier, since the probability of a value occurring more than 3σ from the mean is P(|x − μ| > 3σ) ≤ 0.003; if the data do not follow a normal distribution, values are characterized by the number of standard deviations by which they deviate from the mean;
Step 1.4: perform data deduplication;
Step 1.5: handle noise data, comprising: smoothing the data by fitting them with a function; linear regression involves finding the straight line that best fits two attributes (or variables), so that one attribute can be used to predict the other.
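As an illustration only, Steps 1.2 and 1.3 can be sketched as follows; the group labels, sample values, and the placement of the outlier are hypothetical.

```python
import numpy as np

# Step 1.2 (sketch): fill each missing value with the mean of its group,
# where the grouping attribute is the one most correlated with the
# attribute containing the missing values.
groups = np.array([0, 0, 1, 1, 1])
values = np.array([1.0, np.nan, 10.0, 12.0, np.nan])
for g in np.unique(groups):
    mask = groups == g
    values[mask & np.isnan(values)] = np.nanmean(values[mask])

# Step 1.3 (sketch): for normally distributed data, the 3-sigma rule --
# P(|x - mu| > 3*sigma) <= 0.003 -- marks values more than 3 standard
# deviations from the mean as outliers.
x = np.concatenate([np.full(20, 10.0), [100.0]])
outliers = x[np.abs(x - x.mean()) > 3 * x.std()]   # flags the value 100.0
```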
8. the method for claim 7, which is characterized in that in the step 2, according to the data after data prediction, structure
The method for building neural network executes following steps: input layer number is determined by the dimension of input data feature vector, defeated
Layer neuron number is determined by neural network forecast value number out;Hidden layer number of nodes has L node in input layer, and output layer has N number of
When node, calculation formula uses following formula:Wherein, M is concealed nodes number, and a is 0 to 10
Between constant.
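A minimal sketch of the hidden-node rule above; the claim fixes only the form M = √(L + N) + a, so the rounding and the choice a = 4 used here are assumptions.

```python
import math

def hidden_nodes(L, N, a=4):
    """Empirical rule from the claim: M = sqrt(L + N) + a,
    with a a constant between 0 and 10 (a=4 chosen arbitrarily here)."""
    return round(math.sqrt(L + N)) + a

# Example: 8 input features, 1 prediction value -> sqrt(9) + 4 = 7 hidden nodes.
print(hidden_nodes(8, 1))  # → 7
```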
9. The method of claim 8, characterized in that in Step 3, the method of training the neural network with the swarm algorithm performs the following steps:
Step 3.1: initialize the particles and their velocities, comprising: initializing N particles to form a swarm X = (X1, X2, ..., XN), where the i-th particle is Xi = (xi1, xi2, ..., xiD);
Step 3.2: decode each particle into a network, comprising: setting the network topology to L input-layer nodes, M hidden-layer nodes, and N output-layer nodes; the network then has L × M weight parameters from the input layer to the hidden layer and M × N weights from the hidden layer to the output layer, and each hidden-layer node has one shift parameter and one scale (frequency) parameter, giving 2 × M parameters, for a total of L × M + 2 × M + M × N parameters; the particle vector is encoded so that the first L × M parameters are the input-to-hidden weights, the next M parameters are the scale parameters, the following M parameters are the shift parameters, and the remaining parameters are the M × N hidden-to-output weights;
Step 3.3: compute particle fitness, comprising: computing each particle's fitness according to the optimization objective; the velocity of the i-th particle is Vi = (vi1, vi2, ..., viD); at each iteration, record the individual extremum Pi = (pi1, pi2, ..., piD) (the best position of the individual particle by fitness from the first iteration to the current one) and the swarm extremum Pg = (pg1, pg2, ..., pgD);
Step 3.4: update the particle velocities and positions using the following formulas:
v_id = w·v_id + c1·r1·(p_id − x_id) + c2·r2·(p_gd − x_id)
x_id = x_id + α·v_id
where w is the inertia factor, c1 and c2 are acceleration factors, r1 and r2 are random numbers in [0, 1], and α is a constant.
10. A data prediction device based on a wavelet neural network, implementing the method of any one of claims 6 to 9, characterized in that the device is a computer program stored on a computer medium, comprising: a code segment for performing data preprocessing and generating preprocessed data; a code segment for constructing a neural network according to the preprocessed data; a code segment for training the neural network using a swarm algorithm; and a code segment for using the optimal solution obtained by the swarm algorithm training as the initial values of the network parameters and training the network using the back-propagation algorithm.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910730762.XA CN110458288A (en) | 2019-08-08 | 2019-08-08 | Data forecasting system, method and device based on wavelet neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110458288A true CN110458288A (en) | 2019-11-15 |
Family
ID=68485629
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910730762.XA Pending CN110458288A (en) | 2019-08-08 | 2019-08-08 | Data forecasting system, method and device based on wavelet neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110458288A (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5845271A (en) * | 1996-01-26 | 1998-12-01 | Thaler; Stephen L. | Non-algorithmically implemented artificial neural networks and components thereof |
CN110020712A (en) * | 2019-03-26 | 2019-07-16 | 浙江大学 | A kind of optimization population BP neural network forecast method and system based on cluster |
Non-Patent Citations (3)
Title |
---|
姚荣欢: "Data Center KPI Prediction Based on Wavelet Neural Network" (in Chinese), 《电子技术应用》 (Application of Electronic Technique) * |
宋万清 et al.: 《数据挖掘》 (Data Mining), 31 January 2019, China Railway Publishing House (中国铁道出版社) * |
贺筱媛 et al.: "Preprocessing Methods for Intelligent Analysis of Battlefield Situation Data" (in Chinese), 《电子技术与软件工程》 (Electronic Technology & Software Engineering) * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20191115 |