CN110222826A - Ship traffic flow prediction method based on an improved EEMD-IndRNN - Google Patents

Ship traffic flow prediction method based on an improved EEMD-IndRNN

Info

Publication number
CN110222826A
CN110222826A CN201910502153.9A
Authority
CN
China
Prior art keywords
ship
data
component
neural network
flows
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201910502153.9A
Other languages
Chinese (zh)
Inventor
Han Zenglong (韩增龙)
Huang Hongqiong (黄洪琼)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Maritime University
Original Assignee
Shanghai Maritime University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Maritime University
Priority to CN201910502153.9A
Publication of CN110222826A
Legal status: Withdrawn


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"


Abstract

The present invention discloses a ship traffic flow prediction method based on an improved EEMD-IndRNN. The nonlinear, non-stationary ship flow data are decomposed by the ensemble empirical mode decomposition algorithm into a series of stationary high- and low-frequency intrinsic mode function sequences and one monotonic residue sequence, which retains the information of the original series to the greatest extent while fully exploiting its internal regularity, improving prediction accuracy. The Pearson correlation coefficient between each component and the original ship flow data is then computed, and the components are recombined by correlation strength into three new components of strong, medium and weak correlation. Finally, each of the above components is handled separately by an independent recurrent neural network; a deep learning neural network is built by stacking multiple hidden layers and, combined with a large amount of ship flow data, fully extracts the hidden temporal characteristics of the ship flow during training, completing the prediction. While refining the treatment of the data components, the present invention improves prediction accuracy and also has better adaptivity.

Description

Ship traffic flow prediction method based on an improved EEMD-IndRNN
Technical field
The present invention relates to the technical field of time-series forecasting, and in particular to a ship traffic flow prediction method based on an improved EEMD-IndRNN (ensemble empirical mode decomposition and independent recurrent neural network).
Background technique
With the development of China's maritime economy and trade, the number of ships is increasing steadily. To address the vessel traffic accident analysis and waterway planning problems created by growing waterway and route density, and to improve ship traffic efficiency, ship traffic flow must be predicted scientifically and accurately. Given today's rapid scientific and technological progress and ever-higher accuracy requirements, single prediction methods for ship flow no longer satisfy demand; effective combinations of several prediction algorithms are usually employed.
Research on ship traffic flow prediction falls broadly into two lines. The first seeks a unique internal relation based on the factors that influence ship flow data, forming a mapping with multiple inputs and outputs so as to achieve the purpose of prediction. The second applies and innovates on existing prediction models for ship traffic flow prediction. At present, research on vessel traffic flow prediction at home and abroad is very rich, mainly covering neural networks, wavelet transforms, support vector machines, long short-term memory networks and combined prediction.
In view of a series of problems in the prior art, namely insufficient prediction accuracy, limited adaptability and gradient explosion in recurrent neural networks, a ship traffic flow prediction method based on an improved ensemble empirical mode decomposition and independent recurrent neural network is developed. It can solve the adaptability problem of time-series forecasting at different time scales and further improve the prediction accuracy of ship flow.
Summary of the invention
The purpose of the present invention is to provide a ship traffic flow prediction method based on an improved EEMD-IndRNN (ensemble empirical mode decomposition and independent recurrent neural network). It belongs to the framework of deep neural network methods, can solve the adaptability problem of time-series forecasting at different time scales, and improves prediction accuracy at the same time.
To achieve the above purpose, the invention is realized by the following technical scheme:
A ship traffic flow prediction method based on an improved ensemble empirical mode decomposition and independent recurrent neural network, comprising the following steps: S1, preprocessing the original ship flow data; S2, performing a stationarity test on the original ship flow data; S3, performing ensemble empirical mode decomposition on the original ship flow data verified as non-stationary in step S2 to obtain several intrinsic mode functions and one residue; S4, computing the correlation coefficient between each of the above components and the original ship flow data, and stacking the components into new components according to correlation strength; S5, predicting each new component with an independent recurrent neural network model. Since the new combined components represent the long-, medium- and short-term trends of the ship flow, IndRNN prediction models with different parameters are required; the component predictions are finally superposed to obtain the ship flow prediction result.
Preferably, step S2 further includes:
performing the stationarity test on the time series of the original ship flow data with the ADF test; if the time series of the original ship flow data is stationary, no unit root exists, and conversely a unit root exists.
Preferably, step S3 further includes:
S31, adding white noise n(t) to the original ship flow data X(t) to obtain the noisy ship flow data X_n1(t) = X(t) + n(t);
S32, finding all the maxima and minima of X_n1(t), fitting the upper envelope q1 and the lower envelope q2 by cubic spline interpolation, taking their mean m(t) = (q1 + q2)/2, and obtaining the new sequence h1 = X_n1(t) - m(t); if the new sequence h1 still has a positive minimum or a negative maximum, repeating step S32 until the first intrinsic mode function IMF1 is found, which further yields the new data X_n1(t) - IMF1;
S33, taking the new data X_n1(t) - IMF1 of step S32 as the X_n1(t) of the next cycle of step S32, and executing step S32 cyclically until X_n1(t) is decomposed into m intrinsic mode functions and one monotonic residue r(t), so that:
X_n1(t) = IMF1 + IMF2 + ... + IMFm + r(t)
Step S34, adding K different white noises to the original data X(t) and repeating steps S31 to S33, correspondingly obtaining m intrinsic mode functions and one monotonic residue from each noise-added decomposition;
Step S35, performing an ensemble average over each decomposed component, as follows:
IMF'_m = (1/K) Σ_{i=1}^{K} IMF_im,   r'(t) = (1/K) Σ_{i=1}^{K} r_i(t)
where IMF_im is the m-th intrinsic mode function of the i-th noise-added decomposition; IMF'_m is the average of the K m-th intrinsic mode functions obtained after adding white noise K times and decomposing; i indexes the i-th noise addition; r_i(t) is the monotonic residue of the i-th noise-added decomposition; and r'(t) is the average of the K monotonic residues r(t) obtained after adding white noise and decomposing;
Step S36, the decomposition result of the original ship flow data X(t) is:
X(t) = IMF'_1 + IMF'_2 + ... + IMF'_m + r'(t)
Preferably, step S4 further includes:
Step S41, computing the Pearson correlation coefficient of the above components IMF'_1, IMF'_2, ..., IMF'_m, r'(t) with X(t):
ρ(X, Y) = cov(X, Y)/(σ_X σ_Y) = E[(X - μ_X)(Y - μ_Y)]/(σ_X σ_Y)
where X is each component, Y is the original data, E is the mathematical expectation, and cov denotes covariance. When the correlation coefficient is 0, the two variables are uncorrelated; when one variable increases (decreases) as the other increases (decreases), the two variables are positively correlated, and the Pearson correlation coefficient lies between 0 and 1.
Preferably, in step S42, the multiple set intervals comprise a weak-correlation interval, a medium-correlation interval and a strong-correlation interval, which correspondingly yield three new components M1, M2 and M3 representing the short-, medium- and long-term trends of the ship flow respectively;
wherein component M1 equals the sum of the averaged post-decomposition intrinsic mode functions located in the weak-correlation interval, component M2 the sum of those in the medium-correlation interval, and component M3 the sum of those in the strong-correlation interval.
Preferably, in step S42, the multiple set intervals comprise the intervals (0, 0.3], (0.3, 0.6] and (0.6, 1.0].
Preferably, step S5 further includes:
Step S51, taking the new components M1, M2, M3 recombined in step S4 as the input of the independent recurrent neural network;
Step S52, the hidden-layer state expression of the independent recurrent neural network is:
h_t = σ(W X_t + u ⊙ h_{t-1} + b)
where t is the time step; X_t is the input at time t, i.e. the recombined new components M1, M2, M3; W is the weight between the input layer and the hidden layer; σ is the neuron activation function; u is the recurrent weight of the hidden layer; b is the bias; ⊙ denotes the element-wise product; h_{t-1} is the hidden-layer output at time t-1 (the previous time step), i.e. each hidden neuron at time t receives only the current input and its own state at time t-1 as input;
Step S53, when a multilayer recurrent neural network needs to be constructed, the new hidden-layer output is:
h'_t = σ(W' h_t + u' ⊙ h'_{t-1} + b')
where t is the time step; h_t is the output of the preceding hidden layer at time t; h'_{t-1} is the output of the new hidden layer at time t-1; h'_t is the new hidden-layer output; W' is the weight between the preceding hidden layer and the new hidden layer; σ is the neuron activation function; u' is the recurrent weight of the new hidden layer; b' is the bias of the current layer; ⊙ denotes the element-wise product;
Step S54, the output of the independent recurrent neural network is:
Y(t) = V h'_t + c
where V is the weight coefficient between the last hidden layer and the output layer, h'_t is the output of the last hidden layer, and c is the threshold;
Step S55, outputting a predicted component from each independent recurrent neural network, and superposing the obtained predicted components to obtain the prediction result:
Y = Y1 + Y2 + ... + Ym + Yr
where Y1, ..., Ym and Yr are the predictions of the different ship flow components by the independent recurrent neural networks.
Compared with the prior art, the invention has the following beneficial effects:
(1) In real life, vessel traffic flow is affected by factors such as season, weather and human activity and fluctuates irregularly; it is non-stationary and nonlinear, which makes prediction very difficult. The present invention exploits the advantages of the independent recurrent neural network to make full use of the temporal information of the time series, and uses the ensemble empirical mode decomposition algorithm to decompose the nonlinear, non-stationary ship flow data into a series of stationary intrinsic mode function sequences and one monotonic residue sequence, which retains the information of the original series to the greatest extent while fully exploiting its internal regularity, improving prediction accuracy;
(2) A series of stationary high- and low-frequency components is obtained: the high-frequency components represent the short-term variations of the ship flow, and the low-frequency components its long-term trend. The Pearson correlation coefficient between the high- and low-frequency components and the original ship flow data is then computed, and the components are combined by correlation strength into new combined components;
(3) Since the new combined components represent the long-, medium- and short-term trends of the ship flow, IndRNN prediction models with different parameters handle the above components separately. A deep learning neural network is built by stacking multiple hidden layers and, combined with a large amount of ship flow data, fully extracts the hidden temporal characteristics of the ship flow during training, completing the prediction;
(4) Since recurrent neural networks suffer from gradient explosion and gradient vanishing when handling time-series problems, the present invention applies independent recurrent neural networks with suitably optimized parameters to the decomposed vessel traffic flow series, achieving a good prediction effect. Compared with traditional judgements of the factors influencing ship flow based on subjective experience, the present invention has better adaptivity.
Detailed description of the invention
Fig. 1 is that the ship flow of the invention based on improved set empirical mode decomposition and independent loops neural network is pre- Survey method flow diagram;
Fig. 2 is set empirical mode decomposition method schematic diagram of the invention;
Fig. 3 is set empirical mode decomposition figure shown in Fig. 2 of the invention;
Fig. 4 is independent loops neural network expanded schematic diagram of the invention.
Detailed description of the embodiments
The features, objects and advantages of the present invention will become more apparent from the following detailed description of non-limiting embodiments made with reference to Figs. 1 to 4, which show embodiments of the present invention. The invention can, however, be realized in many different forms and should not be construed as limited to the embodiments set forth herein.
The present invention is mainly applied to predicting the number of ships passing a certain port or water area. As shown in Figs. 1 to 4, the ship traffic flow prediction method based on ensemble empirical mode decomposition and independent recurrent neural network of the invention comprises the following steps:
Step S1, preprocessing the ship flow data:
Since ship flow data are affected by subjective and objective factors, some records are abnormal; although few in number, they strongly affect the overall prediction model. The ship flow data X(t) (the original data) therefore need preprocessing, including deleting zero values and abnormally large values; the ship flow data {x1, x2, ..., xn} denote the number of ships entering the port at time t.
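The cleaning of step S1 can be sketched as follows. The patent only states that zero values and abnormally large values are deleted; the median/MAD outlier rule used here is an illustrative assumption, not the inventors' specified criterion.

```python
import numpy as np

def preprocess(flow, k=3.0):
    """Step S1 sketch: drop zero (missing) records, then drop values more
    than k scaled median absolute deviations from the median.
    The MAD rule is an assumed stand-in for 'abnormally large data'."""
    x = np.asarray(flow, dtype=float)
    x = x[x != 0.0]                              # delete zero data
    med = np.median(x)
    mad = 1.4826 * np.median(np.abs(x - med))    # robust scale estimate
    return x[np.abs(x - med) <= k * mad]         # delete abnormal values

clean = preprocess([0, 12, 14, 13, 500, 15, 11, 13])
# the zero record and the abnormal count 500 are removed
```

A robust (median-based) rule is used rather than a mean/standard-deviation rule because a single extreme count would otherwise inflate the threshold enough to survive its own filter.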
Step S2, performing a stationarity test on the ship flow data:
The ADF (Augmented Dickey-Fuller) test is applied to the ship flow data X(t) (the original data). If the time series of the data is stationary, no unit root exists; otherwise a unit root exists.
In step S2, the ADF null hypothesis is that the time series has a unit root, i.e. is non-stationary; for a stationary time series, the null hypothesis must be rejected at a given confidence level. If the obtained statistic is significantly smaller than the critical values at the three confidence levels (1%, 5%, 10%), the null hypothesis is rejected.
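In practice step S2 would use a full ADF implementation (for example statsmodels' `adfuller`). The minimal Dickey-Fuller regression below, with an intercept but without the lagged-difference terms of the full ADF test, is a simplified sketch of the unit-root logic: a statistic well below the roughly -2.86 critical value at the 5% level rejects the unit-root null, i.e. the series is stationary.

```python
import numpy as np

def df_stat(x):
    """Simplified Dickey-Fuller t-statistic for the regression
    dx_t = a + gamma * x_{t-1} + e_t. The full ADF test adds lagged
    difference terms, omitted here for brevity."""
    x = np.asarray(x, dtype=float)
    dx, lag = np.diff(x), x[:-1]
    dxc, lagc = dx - dx.mean(), lag - lag.mean()   # absorb the intercept a
    gamma = (lagc @ dxc) / (lagc @ lagc)           # OLS slope
    resid = dxc - gamma * lagc
    s2 = (resid @ resid) / (len(dx) - 2)           # residual variance
    return gamma / np.sqrt(s2 / (lagc @ lagc))     # t-statistic of gamma

rng = np.random.default_rng(0)
walk = np.cumsum(rng.normal(size=500))    # random walk: unit root
noise = rng.normal(size=500)              # white noise: stationary
# df_stat(noise) is strongly negative; df_stat(walk) stays near zero
```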
Step S3, if the verification of step S2 finds that the ship flow data are non-stationary, then to improve the subsequent prediction accuracy, the non-stationary ship flow data X(t) are subjected to ensemble empirical mode decomposition, obtaining a series of stationary high- and low-frequency components.
The high-frequency components represent the short-term variations of the ship flow, and the low-frequency components its long-term trend. Fig. 2 is a schematic diagram of performing ensemble empirical mode decomposition on a data signal.
Step S3 further includes the following procedure:
Step S31, adding white noise to the ship flow data X(t) to obtain the noisy ship flow data X_n1(t) = X(t) + n(t), where n(t) is white noise. The noise automatically maps data of different scales onto appropriate reference scales, overcoming the mode-mixing defect of EMD (Empirical Mode Decomposition) and yielding a better decomposition result;
Step S32, finding all the maxima and minima of X_n1(t), fitting the upper envelope q1 and the lower envelope q2 by cubic spline interpolation, taking their mean m(t) = (q1 + q2)/2, and obtaining the new sequence h1 = X_n1(t) - m(t). If the new sequence h1 still has a positive minimum or a negative maximum, this step S32 is repeated until the first intrinsic mode function IMF1 is found, yielding the new data X_n1(t) - IMF1;
Step S33, taking the new data X_n1(t) - IMF1 of step S32 as the X_n1(t) of the next cycle of step S32, and executing step S32 cyclically until X_n1(t) is decomposed into m intrinsic mode functions and one monotonic residue r(t), finally obtaining:
X_n1(t) = IMF1 + IMF2 + ... + IMFm + r(t) (1)
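A single sifting pass of step S32 can be sketched as below, using cubic-spline envelopes through the local extrema. A full EMD repeats this pass until the IMF stopping criterion is met and then subtracts the IMF as in step S33; only the one-pass envelope computation is shown.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift_once(x):
    """One pass of step S32: upper/lower cubic-spline envelopes through
    the maxima/minima, envelope mean m(t), candidate h1 = x - m(t)."""
    t = np.arange(len(x))
    dx = np.diff(x)
    maxi = np.where((dx[:-1] > 0) & (dx[1:] <= 0))[0] + 1   # local maxima
    mini = np.where((dx[:-1] < 0) & (dx[1:] >= 0))[0] + 1   # local minima
    q1 = CubicSpline(maxi, x[maxi])(t)      # upper envelope q1
    q2 = CubicSpline(mini, x[mini])(t)      # lower envelope q2
    m = (q1 + q2) / 2.0                     # m(t) = (q1 + q2) / 2
    return x - m                            # h1 = X_n1(t) - m(t)

t = np.linspace(0.0, 1.0, 400)
x = np.sin(2 * np.pi * 12 * t) + 0.5 * t    # fast oscillation on a trend
h1 = sift_once(x)                           # slow trend largely removed
```

The envelope mean tracks the slow trend 0.5t, so the returned h1 is close to the pure oscillation; boundary handling (here plain spline extrapolation) is a known practical refinement point of EMD implementations.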
Step S34, on the above basis, adding K different white noises to the original data X(t) and repeating steps S31 to S33, correspondingly obtaining m intrinsic mode functions and one monotonic residue from each noise-added decomposition.
Step S35, to eliminate the added noise, performing an ensemble average over each decomposed component, as follows:
IMF'_m = (1/K) Σ_{i=1}^{K} IMF_im (2)
r'(t) = (1/K) Σ_{i=1}^{K} r_i(t) (3)
where IMF_im is the m-th intrinsic mode function of the i-th noise-added decomposition; IMF'_m is the average of the K m-th intrinsic mode functions obtained after adding white noise K times and decomposing; i indexes the i-th noise addition; r_i(t) is the monotonic residue of the i-th noise-added decomposition; and r'(t) is the average of the K monotonic residues r(t) obtained after adding white noise and decomposing.
Therefore, the decomposition result of the original ship flow data X(t) is:
X(t) = IMF'_1 + IMF'_2 + ... + IMF'_m + r'(t) (4)
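Steps S31 to S35 together form the EEMD loop, which can be sketched generically: `decompose` below stands for any full EMD routine returning m IMFs plus the residue (all arrays of len(x)); only the noise-adding and ensemble-averaging logic of formulas (2) and (3) is shown, with a toy decomposer for demonstration.

```python
import numpy as np

def eemd(x, decompose, K=50, noise_std=0.2, seed=0):
    """Add K independent white noises (S31/S34), decompose each noisy
    copy (S32-S33, via the supplied `decompose`), and average the runs
    component-wise (S35) so the added noise cancels out."""
    rng = np.random.default_rng(seed)
    runs = [decompose(x + rng.normal(0.0, noise_std, len(x)))
            for _ in range(K)]
    n_comp = len(runs[0])            # m IMFs + 1 residue per run
    return [np.mean([run[j] for run in runs], axis=0)
            for j in range(n_comp)]

# toy decomposer that splits a series into two equal halves
x = np.sin(np.linspace(0.0, 6.0, 100))
parts = eemd(x, lambda s: [s / 2.0, s / 2.0], K=200)
# the averaged parts sum back to approximately the noise-free x
```

The averaging step is why the noise is tolerable: each run sees a different noise realisation, and with K runs the residual noise in each averaged component shrinks by roughly 1/sqrt(K).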
Step S4, computing the correlation coefficient between each of the above components and the original ship flow data, and stacking the components into new components according to correlation strength. If a neural network model were built separately for every component obtained in step S3, the computational load would be very large, and the high-frequency components contain a certain amount of noise. Recombining the components by correlation strength greatly reduces the computation without changing the prediction accuracy. The Pearson correlation coefficient is therefore introduced in this step, and the components are classified by correlation into strong, medium and weak.
Step S4 further includes the following procedure:
Step S41, computing the Pearson correlation coefficient of the above components IMF'_1, IMF'_2, ..., IMF'_m, r'(t) with the original ship flow data X(t), so that the contribution of each component to the original data can be distinguished by correlation strength, as follows:
ρ(X, Y) = cov(X, Y)/(σ_X σ_Y) = E[(X - μ_X)(Y - μ_Y)]/(σ_X σ_Y) (5)
In formula (5), the parameter X is each component, Y is the original data, E is the mathematical expectation, and cov denotes covariance. When the correlation coefficient is 0, the two variables X and Y of formula (5) are uncorrelated; when one variable increases (decreases) as the other increases (decreases), the two variables are positively correlated, and the above Pearson correlation coefficient lies between 0 and 1.
Step S42, according to the correlation results computed by formula (5), recombining the components by Pearson correlation strength into three new components M1, M2 and M3 (weak, medium and strong correlation) according to the intervals (0, 0.3], (0.3, 0.6] and (0.6, 1.0] respectively. Each new component is the sum of one or more IMFs, and the three components M1, M2 and M3 represent the short-, medium- and long-term trends of the ship flow respectively. For example, if IMF'_1 and IMF'_2 fall in the weak-correlation interval, then M1 = IMF'_1 + IMF'_2, i.e. M1 equals the sum of all averaged post-decomposition intrinsic mode functions in the weak-correlation interval; similarly, M2 equals the sum of those in the medium-correlation interval, and M3 the sum of those in the strong-correlation interval.
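Steps S41 and S42 can be sketched as follows; `np.corrcoef` computes the Pearson coefficient of formula (5), and the interval boundaries (0, 0.3], (0.3, 0.6], (0.6, 1.0] are those of the patent. The demonstration components are synthetic.

```python
import numpy as np

def regroup(components, x):
    """Sum each component into M1 (weak), M2 (medium) or M3 (strong)
    according to its Pearson correlation with the original series x."""
    M = {name: np.zeros(len(x)) for name in ("M1", "M2", "M3")}
    for c in components:
        rho = np.corrcoef(c, x)[0, 1]
        name = "M1" if rho <= 0.3 else ("M2" if rho <= 0.6 else "M3")
        M[name] += c
    return M

rng = np.random.default_rng(1)
x = np.sin(np.linspace(0.0, 10.0, 200))
comps = [x.copy(), 0.05 * rng.normal(size=200)]  # strong and weak components
M = regroup(comps, x)   # M3 collects x itself, M1 the uncorrelated noise
```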
Step S5, of the three components M1, M2 and M3 obtained in step S4, the high-frequency component M1 represents the short-term variations of the ship flow and the low-frequency component M3 its long-term trend. Components M1, M2 and M3 therefore need to be predicted with independent recurrent neural network models of different optimized parameters; the flow is shown in Fig. 1.
Fig. 4 shows the IndRNN (Independent Recurrent Neural Network) model structure. The time index t denotes ship flow data of different dates. Each time step takes a window of time_step ship counts as input (assume the range 1-10); the output is likewise a window of time_step length (assume the range 2-11). Training on a large amount of data yields a model with optimized weights, achieving the purpose of prediction.
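The windowed input/output pairs described for Fig. 4 (inputs 1-10, targets 2-11) can be built as follows; time_step = 10 matches the example ranges in the text.

```python
import numpy as np

def make_windows(series, time_step=10):
    """Each sample: time_step consecutive ship counts as input, and the
    same-length window shifted one step ahead as the training target."""
    s = np.asarray(series, dtype=float)
    X = np.stack([s[i:i + time_step]
                  for i in range(len(s) - time_step)])
    Y = np.stack([s[i + 1:i + 1 + time_step]
                  for i in range(len(s) - time_step)])
    return X, Y

X, Y = make_windows(np.arange(1, 13))   # daily counts 1..12
# X[0] is the window [1..10]; Y[0] is the shifted window [2..11]
```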
The independent recurrent neural network (IndRNN) prediction of step S5 of the invention further includes the following procedure:
Step S51, taking the combined components M1, M2 and M3 obtained in step S4 as the input of the independent recurrent neural network. Different components require IndRNN models with different parameters; in the following steps the input is denoted uniformly as X'_t, standing for any of the combined components M1, M2, M3 of step S4.
Step S52, the hidden-layer state expression of the independent recurrent neural network is:
h_t = σ(W X'_t + u ⊙ h_{t-1} + b) (6)
where t is the time step; X'_t is the input at time t (one of the above components M1, M2, M3); W is the weight between the input layer and the hidden layer; σ is the neuron activation function; u is the recurrent weight of the hidden layer; b is the bias; ⊙ denotes the element-wise product; h_{t-1} is the hidden-layer output at time t-1 (the previous time step), i.e. each hidden neuron at time t receives only the current input and its own state at time t-1 as input.
In ship flow prediction, the weights W and u extract the characteristic information of the data through the training of the IndRNN model. A traditional RNN, by contrast, feeds the states of all neurons at time t-1 into each neuron at time t. The hidden-layer state expression of the traditional RNN is:
h_t = σ(W X'_t + U h_{t-1} + b) (7)
As can be seen from formulas (6) and (7), the independent recurrent neural network simplifies the recurrent hidden-layer connection, which effectively solves gradient vanishing and gradient explosion and allows the network to be stacked over 90 layers, building a deep learning network that extracts more characteristic information from the ship flow data and thus improves ship flow prediction accuracy more effectively.
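The difference between formulas (6) and (7) can be made concrete in a few lines: the IndRNN update uses an element-wise recurrent weight u (one scalar per hidden neuron), while the traditional RNN uses a full matrix U. The dimensions and the relu activation below are illustrative assumptions.

```python
import numpy as np

def indrnn_step(x_t, h_prev, W, u, b):
    """Formula (6): h_t = sigma(W x_t + u ⊙ h_{t-1} + b).
    u is a vector, so each hidden neuron recurs only on its own state."""
    return np.maximum(0.0, W @ x_t + u * h_prev + b)    # relu activation

def rnn_step(x_t, h_prev, W, U, b):
    """Formula (7): traditional RNN with a full recurrent matrix U."""
    return np.maximum(0.0, W @ x_t + U @ h_prev + b)

n_in, n_hid = 3, 4
W = np.full((n_hid, n_in), 0.1)       # input-to-hidden weight
u = np.full(n_hid, 0.5)               # element-wise recurrent weights
b = np.zeros(n_hid)
h = indrnn_step(np.ones(n_in), np.ones(n_hid), W, u, b)
# each unit computes relu(0.1*3 + 0.5*1 + 0) = 0.8
```

An IndRNN step is exactly an ordinary RNN step whose recurrent matrix is diagonal; bounding |u| per neuron is what keeps the per-neuron gradients from exploding or vanishing over long sequences.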
Step S53, when a multilayer recurrent neural network needs to be constructed, the new hidden-layer output is:
h'_t = σ(W' h_t + u' ⊙ h'_{t-1} + b') (8)
where t is the time step; h'_{t-1} is the output of the new hidden layer at time t-1; h'_t is the new hidden-layer output; h_t is the output of the preceding hidden layer at time t; W' is the weight between the preceding hidden layer and the new hidden layer; σ is the neuron activation function; u' is the recurrent weight of the new hidden layer; b' is the bias of the current layer; ⊙ denotes the element-wise product.
Step S54, the output of the independent recurrent neural network is:
Y(t) = V h'_t + c (9)
where V is the weight coefficient between the last hidden layer and the output layer, h'_t is the output of the last hidden layer, and c is the threshold.
Step S55, obtaining a predicted component from the independent recurrent neural network of each parameter set, and superposing the predicted components of the independent recurrent neural networks to obtain the prediction result:
Y = Y1 + Y2 + ... + Ym + Yr (10)
where Y1, Y2, ..., Ym and Yr are the predictions of the different ship flow components by the independent recurrent neural networks.
In conclusion the present invention first pre-processes ship flow data, then carry out stationarity verifying.By gathering experience Mode decomposition is a series of high-low frequency weights, is converted to stable data sequence.Its high frequency components represents the short-term of ship flow Variation, low frequency component indicate the Secular Variation Tendency of ship flow.This processing to ship flow data can greatly improve prediction Precision, can overcome is influenced and non-stationary and non-linear due to vessel traffic flow by multiple complicated factors such as artificial and natural The problem of data bring extreme difficulties to prediction.Meanwhile Recognition with Recurrent Neural Network processing time series problem when have gradient explosion and Gradient disappearance problem can be with using the independent loops neural network of suitable parameters to the vessel traffic flow series processing of decomposition While avoiding problems, moreover it is possible to construct multilayer neural network, reach good prediction effect.
Although the contents of the present invention have been described in detail through the above preferred embodiments, it should be appreciated that the above description is not to be considered a limitation of the present invention. Various modifications and substitutions will be apparent to those skilled in the art after reading the above contents. Therefore, the protection scope of the present invention should be limited by the appended claims.

Claims (7)

1. A ship traffic flow prediction method based on an improved ensemble empirical mode decomposition and independent recurrent neural network, characterized by comprising the following steps:
S1, preprocessing the original ship flow data: deleting the zero values in the original ship flow data, then deleting one maximum value and one minimum value;
S2, performing a stationarity test on the original ship flow data;
S3, performing ensemble empirical mode decomposition on the original ship flow data verified as non-stationary in step S2, obtaining several intrinsic mode functions and one residue;
S4, computing the Pearson correlation coefficient of the several intrinsic mode functions and the residue of step S3 with the original ship flow data, and stacking them into multiple new components according to correlation strength;
S5, predicting the multiple new components with independent recurrent neural network models of different parameters respectively, obtaining and superposing the predicted components of each independent recurrent neural network to obtain the ship flow prediction result.
2. The ship flow prediction method based on improved ensemble empirical mode decomposition and an independent recurrent neural network according to claim 1, characterized in that step S2 further comprises:
performing stationarity verification on the time series of the original ship flow data with the ADF (augmented Dickey-Fuller) test: when the time series of the original ship flow data is stationary, no unit root exists; otherwise, a unit root exists.
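The idea behind the ADF check of step S2 can be illustrated with the simpler no-lag Dickey-Fuller statistic (a sketch only: the claim uses the full augmented test, available e.g. as `statsmodels.tsa.stattools.adfuller`; the helper below and its thresholds are illustrative):

```python
import numpy as np

def dickey_fuller_stat(x):
    """t-statistic of delta in the regression dx_t = delta * x_{t-1} + e_t.
    A unit root corresponds to delta = 0; a strongly negative statistic
    rejects the unit root, i.e. the series is stationary."""
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)
    x_lag = x[:-1]
    delta = (x_lag @ dx) / (x_lag @ x_lag)   # OLS slope, no intercept
    resid = dx - delta * x_lag
    s2 = (resid @ resid) / (len(dx) - 1)     # residual variance
    se = np.sqrt(s2 / (x_lag @ x_lag))       # standard error of delta
    return delta / se

rng = np.random.default_rng(0)
white = rng.standard_normal(500)   # stationary series: statistic strongly negative
walk = np.cumsum(white)            # random walk (unit root): statistic near zero
```

A non-stationary ship-flow series would behave like `walk` here, which is exactly the case step S3 addresses by decomposition.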
3. The ship flow prediction method based on improved ensemble empirical mode decomposition and an independent recurrent neural network according to claim 1 or 2, characterized in that step S3 further comprises:
S31. adding white noise n(t) to the original ship flow data X(t), obtaining the noise-added ship flow data Xn1(t) = X(t) + n(t);
S32. finding all the maxima and minima of Xn1(t), fitting the upper envelope q1 and the lower envelope q2 with cubic spline interpolation, taking their average m(t) = (q1 + q2)/2, and obtaining the new sequence h1 = Xn1(t) − m(t); when the new sequence h1 still contains a positive minimum or a negative maximum, repeating step S32 until the first intrinsic mode function IMF1 is found, and further obtaining the new data Xn1(t) − IMF1;
S33. taking the new data Xn1(t) − IMF1 of step S32 as the Xn1(t) of the next cycle, and executing step S32 cyclically until Xn1(t) is decomposed into m intrinsic mode functions and one monotonic remainder r(t), so that:
Xn1(t) = IMF1 + IMF2 + ... + IMFm + r(t)
S34. adding K different white noises to the original data X(t) and repeating the above steps S31-S33, correspondingly obtaining, for each added white noise, the m intrinsic mode functions and the monotonic remainder of its decomposition;
S35. performing the ensemble average over each decomposed component, as follows:
IMF′_m = (1/K) · Σ_{i=1..K} IMF_im,    r′(t) = (1/K) · Σ_{i=1..K} r_i(t)
where IMF_im is the m-th intrinsic mode function obtained from the decomposition with the i-th added white noise; IMF′_m is the average over the K noise-added decompositions of the m-th intrinsic mode function; r_i(t) is the monotonic remainder r(t) obtained from the decomposition with the i-th added white noise; and r′(t) is the average over the K noise-added decompositions of the monotonic remainder;
S36. the decomposition result of the original ship flow data X(t) is:
X(t) = IMF′1 + IMF′2 + ... + IMF′m + r′(t).
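The ensemble stage of steps S34-S36 is independent of how each noisy copy is sifted; assuming an EMD routine `emd` that returns `[IMF1, ..., IMFm, r(t)]` (here stubbed with a moving-average split purely for illustration), the K-trial averaging can be sketched as:

```python
import numpy as np

def eemd(x, emd, K=20, noise_std=0.2, seed=0):
    """Steps S34-S35: add K independent white noises, decompose each noisy
    copy with `emd`, and average the components term by term:
    IMF'_m = (1/K) * sum_i IMF_im,  r'(t) = (1/K) * sum_i r_i(t)."""
    rng = np.random.default_rng(seed)
    trials = [emd(x + noise_std * rng.standard_normal(len(x))) for _ in range(K)]
    return [np.mean([t[m] for t in trials], axis=0) for m in range(len(trials[0]))]

def toy_emd(y):
    """Stand-in for true EMD sifting: split into a fast part and a slow trend."""
    trend = np.convolve(y, np.ones(11) / 11, mode="same")
    return [y - trend, trend]   # one "IMF" plus the remainder

t = np.linspace(0, 1, 256)
x = np.sin(2 * np.pi * 20 * t) + t
imf1, r = eemd(x, toy_emd)
```

Per step S36 the averaged components reconstruct the signal, X(t) ≈ IMF′1 + ... + r′(t); the residual error shrinks as the added white noise averages out over the K trials.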
4. The ship flow prediction method based on ensemble empirical mode decomposition and an independent recurrent neural network according to claim 3, characterized in that step S4 further comprises:
S41. calculating the Pearson correlation coefficients between the decomposed components IMF′1, IMF′2, ..., IMF′m, r′(t) and the original ship flow data X(t), as follows:
ρ(X, Y) = cov(X, Y)/(σ_X σ_Y) = E[(X − μ_X)(Y − μ_Y)]/(σ_X σ_Y)   (5)
where in formula (5) X denotes each component, Y denotes the original ship flow data, E is the mathematical expectation, and cov denotes the covariance; when the correlation coefficient is 0, the two variables X and Y are uncorrelated; when one variable increases or decreases as the other increases or decreases, the two variables are positively correlated and the Pearson correlation coefficient lies between 0 and 1;
S42. according to the correlation results calculated in step S41, recombining the components into multiple new components according to multiple set intervals, each new component being the sum of one or more intrinsic mode functions.
5. The ship flow prediction method based on ensemble empirical mode decomposition and an independent recurrent neural network according to claim 4, characterized in that in step S42 the multiple set intervals comprise a weak-correlation interval, a medium-correlation interval and a strong-correlation interval, correspondingly yielding three new components M1, M2 and M3, which respectively represent the short-, medium- and long-term variation trends of the ship flow;
wherein component M1 equals the sum of all averaged decomposed intrinsic mode functions located in the weak-correlation interval, component M2 equals the sum of all those located in the medium-correlation interval, and component M3 equals the sum of all those located in the strong-correlation interval.
6. The ship flow prediction method based on ensemble empirical mode decomposition and an independent recurrent neural network according to claim 5, characterized in that in step S42 the multiple set intervals comprise the intervals (0, 0.3], (0.3, 0.6] and (0.6, 1.0].
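With the three intervals of claim 6, the regrouping of steps S41-S42 reduces to correlation-keyed bucketing; a sketch with illustrative names (`np.corrcoef` computes the Pearson coefficient of formula (5)):

```python
import numpy as np

def regroup_components(components, x):
    """Steps S41-S42 (claims 4-6): bucket each averaged IMF by its Pearson
    correlation with the original series into the weak (0, 0.3], medium
    (0.3, 0.6] and strong (0.6, 1.0] intervals, summing each bucket into
    the new components M1, M2 and M3."""
    M1, M2, M3 = (np.zeros_like(x, dtype=float) for _ in range(3))
    for comp in components:
        rho = np.corrcoef(comp, x)[0, 1]   # Pearson correlation coefficient
        if rho <= 0.3:
            M1 = M1 + comp                 # weak correlation: short-term part
        elif rho <= 0.6:
            M2 = M2 + comp                 # medium correlation: mid-term part
        else:
            M3 = M3 + comp                 # strong correlation: long-term part
    return M1, M2, M3

x = np.arange(10.0)
c_strong = 2.0 * x                   # perfectly correlated with x
c_weak = np.array([1.0, -1.0] * 5)   # essentially uncorrelated with x
M1, M2, M3 = regroup_components([c_strong, c_weak], x)
```

Each of M1, M2 and M3 then becomes the input of one independent recurrent neural network in step S5.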
7. The ship flow prediction method based on ensemble empirical mode decomposition and an independent recurrent neural network according to claim 5 or 6, characterized in that step S5 further comprises:
S51. taking the three new components M1, M2 and M3 recombined in step S4 as the inputs X′_t of the independent recurrent neural networks;
S52. the hidden-layer state expression of the independent recurrent neural network is:
h_t = σ(U X′_t + W ⊙ h_{t−1} + b)
where t is the time step; X′_t is the input at time t, i.e. the components M1, M2 and M3 respectively; W is the recurrent weight within the hidden layer; σ is the activation function of the neurons; U is the weight between the input layer and the hidden layer; b is the bias; ⊙ denotes the element-wise (Hadamard) product; and h_{t−1} denotes the hidden-layer output at time t−1 (the previous moment), so that at time t each hidden neuron only receives the current input and its own state at time t−1 as input;
S53. when constructing a multi-layer recurrent neural network, the output of the new hidden layer is:
h′_t = σ(u′ h_t + W′ ⊙ h′_{t−1} + b′)
where t is the time step; h_t is the output of the previous hidden layer at time t; h′_{t−1} denotes the output of the new hidden layer at time t−1; h′_t denotes the output of the new hidden layer; W′ is the recurrent weight within the new hidden layer; σ is the activation function of the neurons; u′ is the weight between the previous hidden layer and the current hidden layer; b′ is the bias of the current layer; and ⊙ denotes the element-wise product;
S54. the output of the independent recurrent neural network is:
Y(t) = V h′_t + c
where V is the weight coefficient between the last hidden layer and the output layer, h′_t is the output of the last hidden layer, and c is the threshold;
S55. outputting a predicted-value component from each independent recurrent neural network, and superimposing the predicted-value components of all the independent recurrent neural networks to obtain the prediction result:
Y = Y1 + Y2 + ... + Ym + Yr
where Y1, Y2, ..., Ym and Yr are the predicted values of the different ship flow components obtained through the independent recurrent neural networks.
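The recurrence of steps S52 and S54 can be sketched for a single IndRNN layer (a minimal illustration; the names, shapes and toy parameters are hypothetical). The defining point of the cell is that the recurrent weight acts element-wise, so each hidden neuron only receives its own previous state:

```python
import numpy as np

def indrnn_predict(X, U, w, b, V, c, act=np.tanh):
    """One IndRNN layer plus linear read-out:
    h_t = act(U @ x_t + w * h_{t-1} + b)   (step S52; w * h is Hadamard)
    Y_t = V @ h_t + c                      (step S54)
    Stacking this cell, with the previous layer's h_t as input, gives the
    multi-layer network of step S53."""
    h = np.zeros(U.shape[0])
    outputs = []
    for x_t in X:
        h = act(U @ x_t + w * h + b)
        outputs.append(V @ h + c)
    return np.array(outputs)

# Tiny deterministic example with an identity activation:
U = np.array([[1.0]]); w = np.array([0.5]); b = np.zeros(1)
V = np.array([[1.0]]); c = np.zeros(1)
X = np.array([[1.0], [1.0]])
y = indrnn_predict(X, U, w, b, V, c, act=lambda z: z)
# h_1 = 1.0, then h_2 = 1.0 + 0.5 * 1.0 = 1.5
```

In step S55 one such network per component M1, M2 and M3 produces a predicted component, and the components are summed into the final forecast.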
CN201910502153.9A 2019-06-11 2019-06-11 Ship flow prediction method based on improved EEMD-IndRNN Withdrawn CN110222826A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910502153.9A CN110222826A (en) 2019-06-11 2019-06-11 Ship flow prediction method based on improved EEMD-IndRNN


Publications (1)

Publication Number Publication Date
CN110222826A true CN110222826A (en) 2019-09-10

Family

ID=67816559

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910502153.9A Withdrawn CN110222826A (en) 2019-06-11 2019-06-11 One kind being based on improved EEMD-IndRNN ship method for predicting

Country Status (1)

Country Link
CN (1) CN110222826A (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111008726A (en) * 2019-10-28 2020-04-14 武汉理工大学 Class image conversion method in power load prediction
CN111008726B (en) * 2019-10-28 2023-08-29 武汉理工大学 Class picture conversion method in power load prediction
CN111241466A (en) * 2020-01-15 2020-06-05 上海海事大学 Ship flow prediction method based on deep learning
CN111241466B (en) * 2020-01-15 2023-10-03 上海海事大学 Ship flow prediction method based on deep learning
CN111415008B (en) * 2020-03-17 2023-03-24 上海海事大学 Ship flow prediction method based on VMD-FOA-GRNN
CN111415008A (en) * 2020-03-17 2020-07-14 上海海事大学 Ship flow prediction method based on VMD-FOA-GRNN
CN111897851A (en) * 2020-07-01 2020-11-06 中国建设银行股份有限公司 Abnormal data determination method and device, electronic equipment and readable storage medium
CN112684284A (en) * 2020-11-30 2021-04-20 西安理工大学 Voltage sag disturbance source positioning method integrating attention mechanism and deep learning
CN112666483A (en) * 2020-12-29 2021-04-16 长沙理工大学 Improved ARMA lithium battery residual life prediction method
CN112666483B (en) * 2020-12-29 2022-06-21 长沙理工大学 Lithium battery residual life prediction method for improving ARMA (autoregressive moving average)
CN113487855A (en) * 2021-05-25 2021-10-08 浙江工业大学 Traffic flow prediction method based on EMD-GAN neural network structure
CN113487855B (en) * 2021-05-25 2022-12-20 浙江工业大学 Traffic flow prediction method based on EMD-GAN neural network structure
CN115828736A (en) * 2022-11-10 2023-03-21 大连海事大学 EEMD-PE-LSTM-based short-term ship traffic flow prediction method
CN116155623A (en) * 2023-04-17 2023-05-23 湖南大学 Digital audio encryption method and system based on power grid frequency characteristic embedding
CN116155623B (en) * 2023-04-17 2023-08-15 湖南大学 Digital audio encryption method and system based on power grid frequency characteristic embedding

Similar Documents

Publication Publication Date Title
CN110222826A (en) One kind being based on improved EEMD-IndRNN ship method for predicting
CN110163433A (en) A kind of ship method for predicting
Hilborn The state of the art in stock assessment: where we are and where we are going
CN110472627A (en) One kind SAR image recognition methods end to end, device and storage medium
CN111324990A (en) Porosity prediction method based on multilayer long-short term memory neural network model
CN109034034A (en) A kind of vein identification method based on nitrification enhancement optimization convolutional neural networks
CN106656357B (en) Power frequency communication channel state evaluation system and method
CN113392961A (en) Method for extracting mesoscale eddy track stable sequence and predicting cyclic neural network
CN110853656B (en) Audio tampering identification method based on improved neural network
CN108665109A (en) A kind of reservoir parameter log interpretation method based on recurrence committee machine
CN109063759A (en) A kind of neural network structure searching method applied to the more attribute forecasts of picture
CN108229750A (en) A kind of stock yield Forecasting Methodology
CN108376297A (en) A kind of aquaculture water quality method for early warning, equipment and storage medium
CN113298186A (en) Network abnormal flow detection method for confluent flow model confrontation generation network and clustering algorithm
CN109460874A (en) A kind of ariyoshi wave height prediction technique based on deep learning
Ultsch et al. Self-organizing feature maps predicting sea levels
CN113222234A (en) Gas demand prediction method and system based on integrated modal decomposition
Braik et al. Particle swarm optimisation enhancement approach for improving image quality
CN111832787B (en) Teacher style prediction model training method and computer storage medium
CN108921701A (en) A kind of stock index futures price expectation method based on EDM algorithm and BP neural network
CN115345207A (en) Self-adaptive multi-meteorological-element prediction method
Li et al. A short-term wind power forecasting method based on NWP wind speed fluctuation division and clustering
Oliveira et al. Forecast opportunities for European summer climate ensemble predictions using Self-Organising Maps
CN114722893A (en) Model generation method, image annotation method and device and electronic equipment
CN114037866A (en) Generalized zero sample image classification method based on synthesis of distinguishable pseudo features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20190910)