CN110222826A - Ship traffic flow prediction method based on improved EEMD-IndRNN - Google Patents

Ship traffic flow prediction method based on improved EEMD-IndRNN

Info

Publication number
CN110222826A
CN110222826A (application CN201910502153.9A)
Authority
CN
China
Prior art keywords
ship
data
neural network
component
hidden layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201910502153.9A
Other languages
Chinese (zh)
Inventor
韩增龙
黄洪琼
Current Assignee
Shanghai Maritime University
Original Assignee
Shanghai Maritime University
Priority date
Filing date
Publication date
Application filed by Shanghai Maritime University filed Critical Shanghai Maritime University

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"


Abstract

The invention discloses a ship traffic flow prediction method based on an improved EEMD-IndRNN. An ensemble empirical mode decomposition (EEMD) algorithm decomposes the nonlinear, non-stationary ship flow data into a series of stationary high- and low-frequency intrinsic mode function sequences and one monotonic residue sequence, which preserves the information of the original series to the greatest extent while fully exploiting its internal regularities to improve prediction accuracy. The Pearson correlation coefficient between each component and the original ship flow data is then computed, and the components are regrouped by correlation strength into three new components (high, medium, and low). Finally, each regrouped component is processed separately by an independent recurrent neural network; a deep learning network is built by stacking multiple hidden layers and, combined with a large amount of ship traffic data, fully extracts the temporal hidden features of the ship flow during training to complete the prediction. The invention refines the processing of the data components while improving prediction accuracy, and also has better adaptability.

Description

A Ship Traffic Flow Prediction Method Based on Improved EEMD-IndRNN

Technical Field

The invention relates to the technical field of time series prediction, and in particular to a ship traffic flow prediction method based on an improved EEMD-IndRNN (ensemble empirical mode decomposition and independent recurrent neural network).

Background Art

With the development of China's maritime economy and trade, the number of ships is steadily increasing. To address the ship traffic accidents caused by growing waterway route density, to support channel planning, and to improve the efficiency of ship traffic, scientific and accurate prediction of ship traffic flow is required. Given the rapid development of science and technology and ever higher accuracy requirements, a single prediction method for ship traffic flow no longer meets practical needs; what is usually required is an effective combination of multiple prediction algorithms.

Research on ship traffic flow prediction falls mainly into two categories. The first seeks a distinctive internal relationship among the factors influencing ship flow data, forming a function mapping with multiple inputs and outputs to achieve prediction. The second adapts existing prediction models, through application-level innovation, to ship traffic flow prediction. At present there is abundant research on ship traffic flow prediction at home and abroad, mainly involving neural networks, wavelet transforms, support vector machines, long short-term memory networks, and combined prediction.

In view of a series of problems in the prior art, such as low prediction accuracy, poor adaptability, and exploding gradients in recurrent neural networks, a ship traffic flow prediction method based on improved ensemble empirical mode decomposition and an independent recurrent neural network is developed. It can solve the adaptability problem of time series prediction across different time scales, and can further improve the prediction accuracy of ship traffic flow.

Summary of the Invention

The object of the present invention is to provide a ship traffic flow prediction method based on an improved EEMD-IndRNN (ensemble empirical mode decomposition and independent recurrent neural network). It belongs to the family of deep neural network methods, solves the adaptability problem of time series prediction across different time scales, and at the same time improves prediction accuracy.

In order to achieve the above object, the present invention is realized through the following technical solution:

A ship traffic flow prediction method based on improved ensemble empirical mode decomposition and an independent recurrent neural network, comprising the following steps: S1, preprocessing the original ship flow data; S2, verifying the stationarity of the original ship flow data; S3, performing ensemble empirical mode decomposition on the non-stationary original ship flow data identified by the check in step S2, obtaining several intrinsic mode functions and one residue; S4, calculating the correlation coefficient between each of the above components and the original ship flow data, and superimposing and combining them into new components according to the degree of correlation; S5, predicting each new component with an independent recurrent neural network model. Since the new combined components represent the long-, medium-, and short-term trends of the ship flow, independent recurrent neural network prediction models with different parameters are needed; finally, the component predictions are superimposed to obtain the ship traffic flow prediction result.

Preferably, step S2 further comprises:

The ADF test is applied to the time series of the original ship flow data to check its stationarity: if the time series is stationary, no unit root exists; otherwise, a unit root exists.

Preferably, step S3 further comprises:

S31. Add white noise to the original ship flow data X(t) to obtain the noise-added ship flow data Xn1(t) = X(t) + n(t), where n(t) is white noise.

S32. Find all local maxima and minima of the noise-added data Xn1(t), fit the upper envelope q1 and the lower envelope q2 with cubic spline interpolation, take their mean m(t) = (q1 + q2)/2, and obtain the new sequence h1 = Xn1(t) - m(t). If h1 still has a positive local minimum or a negative local maximum, repeat step S32 until the first intrinsic mode function IMF1 is found, yielding the new data Xn1(t) - IMF1.

S33. Take the new data Xn1(t) - IMF1 from step S32 as the Xn1(t) of the next iteration of step S32, and execute step S32 in a loop until the data Xn1(t) are decomposed into m intrinsic mode functions and one monotonic residue r(t), so that:

Xn1(t) = IMF1 + IMF2 + ... + IMFm + r(t)

Step S34. Add K different realizations of white noise to the original data X(t), repeating steps S31-S33 each time, and correspondingly obtain, for each noise addition, the m decomposed intrinsic mode functions and one monotonic residue.

Step S35. Compute the ensemble average of each decomposed component, as follows:

IMF′m = (1/K) Σ_{i=1..K} IMFim

r′(t) = (1/K) Σ_{i=1..K} ri(t)

where IMF′m is the average, over the K noise-added decompositions, of the m-th intrinsic mode function, i denotes the i-th addition of white noise, and r′(t) is the average, over the K noise-added decompositions, of the monotonic residue r(t).

Step S36. The decomposition result of the original ship flow data X(t) is:

X(t) = IMF′1 + IMF′2 + ... + IMF′m + r′(t)

Preferably, step S4 further comprises:

Step S41. Compute the Pearson correlation coefficients between the above components IMF′1, IMF′2, ..., IMF′m, r′(t) and X(t):

ρ(X, Y) = cov(X, Y) / (σX σY) = E[(X - E[X])(Y - E[Y])] / (σX σY)

where X is each component, Y is the original data, E is the mathematical expectation, and cov denotes the covariance. A correlation coefficient of 0 indicates that the two variables are unrelated; when one variable increases (decreases) as the other increases (decreases), the two variables are said to be positively correlated. The Pearson correlation coefficient here takes values between 0 and 1.

Preferably, in step S42, the plurality of set intervals comprises a weak-correlation interval, a medium-correlation interval, and a strong-correlation interval, correspondingly yielding three new components M1, M2, and M3, which represent the short-, medium-, and long-term trends of the ship traffic flow, respectively.

Component M1 equals the sum of the averaged decomposed intrinsic mode functions falling in the weak-correlation interval, component M2 equals the sum of those falling in the medium-correlation interval, and component M3 equals the sum of those falling in the strong-correlation interval.

Preferably, in step S42, the plurality of set intervals comprises the intervals (0, 0.3], (0.3, 0.6], and (0.6, 1.0].

Preferably, step S5 further comprises:

Step S51. Use the new components M1, M2, and M3 recombined in step S4 as the inputs of the independent recurrent neural networks.

Step S52. The hidden-layer state expression of the independent recurrent neural network becomes:

ht = σ(WXt + u ⊙ ht-1 + b)

where t is the time step; Xt is the input at time t, i.e. the recombined new components M1, M2, M3 above; W is the weight between the input layer and the hidden layer; σ is the neuron activation function; u is the recurrent weight of the hidden layer; b is the bias value; ⊙ denotes the elementwise (Hadamard) product; ht-1 is the hidden-layer output at time t-1 (the previous step). Thus at time t each hidden neuron receives only the current input and its own state at time t-1 as inputs.

Step S53. When a multi-layer recurrent neural network is to be constructed, the output of the new hidden layer is:

h′t = σ(W′ht + u′ ⊙ h′t-1 + b′)

where t is the time step; ht is the output of the previous hidden layer at time t; h′t-1 is the output of the new hidden layer at time t-1; h′t is the output of the new hidden layer; W′ is the weight between the previous hidden layer and the current hidden layer; σ is the neuron activation function; u′ is the recurrent weight of the current hidden layer; b′ is the bias value of the current layer; ⊙ denotes the elementwise product.

Step S54. The output of the independent recurrent neural network is:

Y(t) = V h′t + c

where V is the weight matrix between the last hidden layer and the output layer, h′t is the output of the last hidden layer, and c is the threshold (bias) value.

Step S55. Each independent recurrent neural network outputs its predicted-value component, and the predicted components of all the networks are superimposed to obtain the prediction result:

Y = Y1 + Y2 + ... + Ym + Yr

where Y1, ..., Ym and Yr are the predicted values of the different ship-flow components produced by the independent recurrent neural networks.

Compared with the prior art, the beneficial effects of the present invention are:

(1) In real life, ship traffic flow is affected by seasons, climate, human activities, and other factors, forming irregular fluctuations with non-stationary and nonlinear characteristics that make prediction very difficult. The invention exploits the advantages of the independent recurrent neural network to make full use of the temporal information of the time series, and uses the ensemble empirical mode decomposition algorithm to decompose the nonlinear, non-stationary ship flow data into a series of stationary intrinsic mode function sequences and one monotonic residue sequence, which preserves the information of the original series to the greatest extent while fully exploiting its internal regularities, improving prediction accuracy.

(2) The invention obtains a series of stationary high- and low-frequency components, where the high-frequency components represent short-term variations of the ship flow and the low-frequency components represent its long-term trend; the Pearson correlation coefficients between these components and the original ship flow data are then computed, and the components are combined by correlation strength into new combined components.

(3) Since the new combined components represent the long-, medium-, and short-term trends of the ship flow, the invention uses independent recurrent neural network prediction models with different parameters to process each component separately; a deep learning network is built by stacking multiple hidden layers and, combined with a large amount of ship traffic data, fully extracts the temporal hidden features of the ship flow during training to complete the prediction.

(4) Since recurrent neural networks suffer from exploding and vanishing gradients when processing time series, the invention processes the decomposed ship traffic flow sequences with independent recurrent neural networks using suitably optimized parameters, achieving good prediction performance; compared with traditional judgments of the factors affecting ship flow based on subjective factors, the invention has better adaptability.

Brief Description of the Drawings

Fig. 1 is a flowchart of the ship traffic flow prediction method based on improved ensemble empirical mode decomposition and an independent recurrent neural network according to the present invention;

Fig. 2 is a schematic diagram of the ensemble empirical mode decomposition method of the present invention;

Fig. 3 is the ensemble empirical mode decomposition diagram corresponding to Fig. 2;

Fig. 4 is a schematic diagram of the unrolled independent recurrent neural network of the present invention.

Detailed Description of Embodiments

The features, objects, and advantages of the present invention will become more apparent from the following detailed description of a non-limiting embodiment made with reference to Figs. 1-4. Referring to Figs. 1-4, which illustrate embodiments of the invention, the invention is described in more detail below. The invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein.

The invention is mainly applied to predicting the number of ships passing through a given port or water area. As shown in Figs. 1-4, the ship traffic flow prediction method based on ensemble empirical mode decomposition and an independent recurrent neural network of the present invention comprises the following steps:

Step S1. Ship flow data preprocessing:

Because ship flow data are affected by both subjective and objective factors, some records are anomalous; although few in number, they strongly affect the overall prediction model. The ship flow data X(t) (the raw data) therefore need preprocessing, including removal of zero records and abnormally large records, where the ship flow data {x1, x2, ..., xn} represent the number of ships entering and leaving the port at time t.

Step S2. Verify the stationarity of the ship flow data:

The ADF (Augmented Dickey-Fuller) test is applied to the ship flow data X(t) (the raw data) to check stationarity. If the time series of the data is stationary, no unit root exists; otherwise, a unit root exists.

In step S2, the ADF test examines the unit-root hypothesis: if the time series has no unit root, the data are stationary; if a unit root exists, the series is non-stationary. For the data to be judged stationary, the null hypothesis must be rejected as significant at a given confidence level; if the obtained test statistic is significantly smaller than the critical values at the three confidence levels (1%, 5%, 10%), the null hypothesis is rejected.
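The unit-root logic above can be illustrated with a small numerical sketch. The code below implements a simplified (non-augmented) Dickey-Fuller regression with NumPy; it is an illustration only, not the full ADF procedure of the text (in practice a library routine such as statsmodels' adfuller would be used), and all names are illustrative.

```python
import numpy as np

def dickey_fuller_stat(x):
    """Simplified Dickey-Fuller statistic (no lag augmentation).

    Regresses diff(x)[t] = alpha + beta * x[t-1] + eps and returns the
    t-statistic of beta. Strongly negative values reject the unit-root
    null, i.e. suggest the series is stationary.
    """
    dx = np.diff(x)
    lag = x[:-1]
    A = np.column_stack([np.ones_like(lag), lag])  # design matrix with intercept
    coef, _, _, _ = np.linalg.lstsq(A, dx, rcond=None)
    resid = dx - A @ coef
    n, k = A.shape
    sigma2 = resid @ resid / (n - k)
    cov = sigma2 * np.linalg.inv(A.T @ A)
    return coef[1] / np.sqrt(cov[1, 1])

rng = np.random.default_rng(0)
e = rng.normal(size=500)
random_walk = np.cumsum(e)          # has a unit root: non-stationary
ar1 = np.zeros(500)                 # AR(1) with |phi| < 1: stationary
for step in range(1, 500):
    ar1[step] = 0.5 * ar1[step - 1] + e[step]

print(dickey_fuller_stat(random_walk))  # unit-root series: statistic tends to stay above the rejection region
print(dickey_fuller_stat(ar1))          # stationary series: statistic tends to be strongly negative
```

The stationary AR(1) series produces a far more negative statistic than the random walk, mirroring the reject/fail-to-reject decision described above.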

Step S3. If the check in step S2 finds that the ship flow data are non-stationary, then, to improve the subsequent prediction accuracy, the non-stationary ship flow data X(t) are subjected to ensemble empirical mode decomposition, yielding a series of stationary high- and low-frequency components.

The high-frequency components represent short-term variations of the ship flow, while the low-frequency components represent its long-term trend. Fig. 2 shows the principle of applying ensemble empirical mode decomposition to the data signal.

Step S3 further comprises the following process:

Step S31. Add white noise to the ship flow data X(t) to obtain the noise-added ship flow data Xn1(t) = X(t) + n(t), where n(t) is white noise. This automatically maps data of different scales onto a suitable reference scale, overcoming the mode-mixing defect of EMD (Empirical Mode Decomposition) and yielding better decomposition results.

Step S32. Find all local maxima and minima of the ship flow data Xn1(t), fit the upper envelope q1 and the lower envelope q2 with cubic spline interpolation, take their mean m(t) = (q1 + q2)/2, and obtain the new sequence h1 = Xn1(t) - m(t). If h1 still has a positive local minimum or a negative local maximum, repeat this step S32 until the first intrinsic mode function IMF1 is found, yielding the new data Xn1(t) - IMF1.

Step S33. Take the new data Xn1(t) - IMF1 from step S32 as the Xn1(t) of the next iteration of step S32, and execute step S32 in a loop until the data are decomposed into m intrinsic mode functions and one monotonic residue r(t), finally obtaining:

Xn1(t) = IMF1 + IMF2 + ... + IMFm + r(t)    (1)

Step S34. Based on the above principle, the invention adds K different realizations of white noise to the original data X(t), repeating steps S31-S33 each time, and correspondingly obtains, for each noise addition, the m decomposed intrinsic mode functions and one monotonic residue.

Step S35. To cancel the added noise, compute the ensemble average of each decomposed component, as follows:

IMF′m = (1/K) Σ_{i=1..K} IMFim    (2)

r′(t) = (1/K) Σ_{i=1..K} ri(t)    (3)

where IMFim is the m-th intrinsic mode function obtained from the i-th white-noise addition and decomposition; IMF′m is the average, over the K noise-added decompositions, of the m-th intrinsic mode function; i denotes the i-th addition of white noise; ri(t) is the monotonic residue r(t) obtained from the i-th white-noise addition and decomposition; and r′(t) is the average of the monotonic residues r(t) over the K noise-added decompositions.

Therefore, the decomposition result of the original ship flow data X(t) is:

X(t) = IMF′1 + IMF′2 + ... + IMF′m + r′(t)    (4)
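Steps S31-S35 can be sketched in Python with NumPy and SciPy. The sketch below is deliberately simplified: the sifting stop rule of step S32 is replaced by a fixed number of sifting iterations, and a fixed number of IMFs is extracted per noise realization so that the ensemble averages of formulas (2)-(3) stay aligned; the function names are illustrative, not from any particular EMD library.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def _envelope_mean(x):
    """Mean of the upper/lower cubic-spline envelopes; None if too few extrema."""
    t = np.arange(len(x))
    maxima = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    minima = np.where((x[1:-1] < x[:-2]) & (x[1:-1] < x[2:]))[0] + 1
    if len(maxima) < 2 or len(minima) < 2:
        return None
    # include the endpoints so the splines cover the whole series
    up = CubicSpline(np.r_[0, maxima, len(x) - 1], np.r_[x[0], x[maxima], x[-1]])
    lo = CubicSpline(np.r_[0, minima, len(x) - 1], np.r_[x[0], x[minima], x[-1]])
    return (up(t) + lo(t)) / 2.0

def emd(x, num_imfs=4, sift_iters=8):
    """Plain EMD (step S32/S33): extract a fixed number of IMFs by sifting."""
    residual = x.astype(float).copy()
    imfs = []
    for _ in range(num_imfs):
        h = residual.copy()
        for _ in range(sift_iters):          # simplified stop rule
            m = _envelope_mean(h)
            if m is None:
                break
            h = h - m
        imfs.append(h)
        residual = residual - h
    return np.array(imfs), residual

def eemd(x, K=30, noise_std=0.1, num_imfs=4, seed=0):
    """EEMD (steps S31, S34, S35): average EMD over K noise-perturbed copies."""
    rng = np.random.default_rng(seed)
    imf_sum = np.zeros((num_imfs, len(x)))
    res_sum = np.zeros(len(x))
    for _ in range(K):
        imfs, res = emd(x + rng.normal(0.0, noise_std, len(x)), num_imfs)
        imf_sum += imfs
        res_sum += res
    return imf_sum / K, res_sum / K          # formulas (2) and (3)

# toy stand-in for ship flow: fast oscillation + slow oscillation + trend
t = np.linspace(0.0, 1.0, 512)
sig = np.sin(2 * np.pi * 25 * t) + np.sin(2 * np.pi * 3 * t) + 0.5 * t
imfs, res = emd(sig)
aimfs, ares = eemd(sig, K=30, noise_std=0.1)
```

By construction, the IMFs plus the residue of a single EMD pass reconstruct the input exactly (formula (1)); the EEMD averages reconstruct it up to the mean of the added noise, which shrinks as K grows (formula (4)).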

Step S4. Compute the correlation coefficient between each of the above components and the original ship flow data, and superimpose and combine them into new components according to the degree of correlation. If, in step S3, a separate neural network model were built for every component of the ship flow data, the computational cost would be large, and the high-frequency components contain a certain amount of noise; superimposing and recombining the components by correlation strength greatly reduces the computation without degrading prediction accuracy. Therefore the Pearson correlation coefficient is introduced in this step, and the components are classified by correlation strength into weak, medium, and strong correlation.

Step S4 further comprises the following process:

Step S41. Compute the Pearson correlation coefficients between the above components IMF′1, IMF′2, ..., IMF′m, r′(t) and the original ship flow data X(t), so that the contribution of each component to the original data can be distinguished by correlation strength, as follows:

ρ(X, Y) = cov(X, Y) / (σX σY) = E[(X - E[X])(Y - E[Y])] / (σX σY)    (5)

In formula (5), X is each component, Y is the original data, E is the mathematical expectation, and cov denotes the covariance. When the correlation coefficient is 0, the two variables X and Y in formula (5) are unrelated; when one variable increases (decreases) as the other increases (decreases), the two variables are positively correlated. The above Pearson correlation coefficient takes values between 0 and 1.

Step S42. According to the correlation results computed with formula (5), regroup the components by the magnitude of their Pearson correlation coefficients into three new components M1, M2, and M3 (weak, medium, and strong correlation) using the intervals (0, 0.3], (0.3, 0.6], and (0.6, 1.0], respectively. Each new component is the sum of one or more IMFs; the three components M1, M2, and M3 represent the short-, medium-, and long-term trends of the ship traffic flow, respectively. For example, if IMF′1 and IMF′2 fall in the weak-correlation interval, then M1 = IMF′1 + IMF′2; that is, M1 equals the sum of all averaged decomposed intrinsic mode function components in the weak-correlation interval. Likewise, M2 equals the sum of all averaged decomposed intrinsic mode function components in the medium-correlation interval, and M3 equals the sum of all those in the strong-correlation interval.
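The regrouping of step S42 can be sketched as follows, assuming (as the 0-to-1 intervals suggest) that the absolute value of the Pearson coefficient from np.corrcoef is used; the function and variable names are illustrative.

```python
import numpy as np

def regroup_components(components, original, edges=(0.3, 0.6)):
    """Regroup decomposition components by their Pearson correlation with the
    original series into weak / medium / strong groups (step S42).

    The absolute value of np.corrcoef is taken so the grouping value lies in
    [0, 1], matching the (0,0.3], (0.3,0.6], (0.6,1.0] intervals.
    """
    groups = [np.zeros(len(original)) for _ in range(3)]
    for c in components:
        rho = abs(np.corrcoef(c, original)[0, 1])
        if rho <= edges[0]:
            groups[0] += c      # M1: weak correlation (short-term variation)
        elif rho <= edges[1]:
            groups[1] += c      # M2: medium correlation
        else:
            groups[2] += c      # M3: strong correlation (long-term trend)
    return groups               # M1, M2, M3

# toy demonstration: a small fast oscillation, a mid-frequency wave, a trend
t = np.linspace(0.0, 1.0, 256)
comps = [0.1 * np.sin(2 * np.pi * 40 * t), np.sin(2 * np.pi * 5 * t), 2.0 * t]
x = sum(comps)
M1, M2, M3 = regroup_components(comps, x)
```

Because every component is assigned to exactly one group, the regrouping is a pure re-partition: M1 + M2 + M3 still reproduces the sum of the input components.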

Step S5. Among the three components M1, M2, and M3 obtained in step S4, the high-frequency component M1 represents short-term variations of the ship flow and the low-frequency component M3 represents its long-term trend. Components M1, M2, and M3 are therefore each predicted with an independent recurrent neural network model optimized with different parameters; the flow is shown in Fig. 1.

Fig. 4 shows the structure of the independent recurrent neural network (IndRNN, Independent Recurrent Neural Network) model. The time index t corresponds to ship flow data on different dates; each step takes as input a window of ship data of length time_step (say, points 1-10) and outputs a window of the same length time_step (say, points 2-11). Training on a large amount of data yields a model with optimized weights, achieving the prediction goal.
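The time_step windowing described for Fig. 4 can be sketched as follows (the helper name and the toy series are illustrative): each training input is a window of time_step consecutive values, and the target is the same-length window shifted one step ahead.

```python
import numpy as np

def make_windows(series, time_step):
    """Build (input, target) pairs as in the description of Fig. 4: each input
    is a window of `time_step` consecutive daily ship counts; the target is
    the same-length window shifted one day ahead."""
    X, Y = [], []
    for i in range(len(series) - time_step):
        X.append(series[i:i + time_step])
        Y.append(series[i + 1:i + 1 + time_step])
    return np.array(X), np.array(Y)

daily_counts = np.arange(20.0)   # stand-in for one regrouped ship-flow component
X, Y = make_windows(daily_counts, time_step=10)
```

For this 20-point toy series with time_step = 10, ten window pairs are produced, and each target window is the corresponding input window advanced by one day.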

The independently recurrent neural network (IndRNN) prediction in step S5 of the present invention further comprises the following process:

Step S51: The new combined components M1, M2 and M3 obtained in step S4 are used as inputs to the independently recurrent neural network; each component requires an IndRNN model with different parameters. In the following steps the input is uniformly denoted X′_t in place of the combined components M1, M2, M3; that is, each combined component (M1, M2, M3) plays the role of X′_t in the method below.

Step S52: The hidden-layer state expression of the independently recurrent neural network is:

h_t = σ(W X′_t + u ⊙ h_{t-1} + b)  (6)

where t is the time step; X′_t is the input at time t (i.e., one of the components M1, M2, M3 above); W is the weight between the input layer and the hidden layer; σ is the neuron activation function; u is the recurrent weight within the hidden layer; b is the bias; ⊙ denotes the element-wise (Hadamard) product; and h_{t-1} is the hidden-layer output at time t-1 (the previous step). That is, at time t each hidden-layer neuron receives only the current input and its own state at time t-1 as input.
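Formula (6) can be written out directly. This sketch assumes a ReLU activation for σ (a common choice for IndRNN) and illustrative shapes:

```python
import numpy as np

def indrnn_step(x_t, h_prev, W, u, b):
    """One IndRNN recurrence, formula (6): h_t = σ(W x_t + u ⊙ h_{t-1} + b).
    W maps the input to the hidden units, u is a per-neuron recurrent
    weight vector (element-wise product, not a full matrix), and σ is
    taken to be ReLU here. Shapes are assumptions for illustration."""
    pre = W @ x_t + u * h_prev + b   # u * h_prev implements the ⊙ product
    return np.maximum(pre, 0.0)      # ReLU activation
```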

In ship traffic flow prediction, the weights W and u can be trained within the independently recurrent neural network model to extract the characteristic information of the data. In a conventional RNN, by contrast, each neuron at time t receives the states of all neurons at time t-1 as input; that is, the hidden-layer state expression of the conventional RNN is:

h_t = σ(W X′_t + U h_{t-1} + b)  (7)

Comparing formulas (6) and (7), the independently recurrent neural network simplifies the hidden-layer recurrent connections, which effectively mitigates the vanishing-gradient and exploding-gradient problems and allows more than 90 layers to be stacked. Such a deep learning network can extract more feature information from the ship traffic data and thereby improve the prediction accuracy more effectively.
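The simplification is visible in the recurrent parameter count: the conventional RNN of formula (7) stores a full n×n matrix U, while the IndRNN of formula (6) stores only an n-element vector u (n = 128 below is arbitrary):

```python
n = 128                        # hidden units (arbitrary illustration)
rnn_recurrent_params = n * n   # full recurrent matrix U in formula (7)
indrnn_recurrent_params = n    # per-neuron recurrent vector u in formula (6)
```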

Step S53: When a multi-layer recurrent neural network needs to be constructed, the output of the new hidden layer is:

h′_t = σ(W′ h_t + u′ ⊙ h′_{t-1} + b′)  (8)

where t is the time step; h′_{t-1} is the output of the new hidden layer at time t-1; h′_t is the output of the new hidden layer; h_t is the output of the previous hidden layer at time t; W′ is the weight between the previous hidden layer and the new hidden layer; σ is the neuron activation function; u′ is the recurrent weight within the new hidden layer; b′ is the bias of the current layer; and ⊙ denotes the element-wise product.

Step S54: The output of the independently recurrent neural network is:

Y(t) = V h′_t + c  (9)

where V is the weight coefficient between the last hidden layer and the output layer, h′_t is the output of the last hidden layer, and c is the threshold.
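Formulas (6), (8) and (9) together give the forward pass of a two-layer IndRNN with a linear readout. The sketch below assumes ReLU activations and zero initial states; all names and shapes are illustrative:

```python
import numpy as np

def two_layer_indrnn(xs, params):
    """Forward pass of a two-layer IndRNN with linear output, following
    formulas (6), (8) and (9). `params` holds (W, u, b) for layer 1,
    (W2, u2, b2) for layer 2, and (V, c) for the readout; the initial
    hidden states are zero. A sketch, not the patent's implementation."""
    (W, u, b), (W2, u2, b2), (V, c) = params
    h = np.zeros(W.shape[0])
    h2 = np.zeros(W2.shape[0])
    outs = []
    for x_t in xs:
        h = np.maximum(W @ x_t + u * h + b, 0.0)      # formula (6)
        h2 = np.maximum(W2 @ h + u2 * h2 + b2, 0.0)   # formula (8)
        outs.append(V @ h2 + c)                       # formula (9)
    return np.asarray(outs)
```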

Step S55: The predicted-value components are obtained from the independently recurrent neural networks with their respective parameters, and the prediction result is obtained by superimposing these predicted-value components:

Y = Y_1 + Y_2 + ... + Y_m + Y_r  (10)

where Y_1, Y_2, ..., Y_m and Y_r are the predicted values of the different ship-flow components obtained through the independently recurrent neural networks.
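The superposition of formula (10) is a plain element-wise sum of the per-component predictions; a minimal sketch (the component arrays are placeholders):

```python
import numpy as np

def combine_predictions(component_preds):
    """Superimpose the predictions of the per-component IndRNN models,
    formula (10): Y = Y_1 + Y_2 + ... + Y_m + Y_r."""
    return np.sum(np.stack(component_preds, axis=0), axis=0)
```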

In summary, the present invention first preprocesses the ship-flow data and then verifies its stationarity. Ensemble empirical mode decomposition converts the data into a series of high- and low-frequency components, yielding stationary data sequences, where the high-frequency components represent the short-term variation of the ship traffic flow and the low-frequency components represent its long-term trend. This treatment of the ship-flow data greatly improves prediction accuracy and overcomes the difficulty that ship traffic flow, influenced by many complex human and natural factors, is non-stationary and nonlinear and is therefore hard to predict. Meanwhile, recurrent neural networks suffer from exploding and vanishing gradients when handling time-series problems; processing the decomposed ship-traffic-flow sequences with appropriately parameterized independently recurrent neural networks avoids these problems while still allowing multi-layer networks to be built, achieving good prediction performance.

Although the content of the present invention has been described in detail through the above preferred embodiments, it should be understood that the above description should not be considered as limiting the present invention. Various modifications and alternatives to the present invention will become apparent to those skilled in the art upon reading the above disclosure. Therefore, the protection scope of the present invention should be defined by the appended claims.

Claims (7)

1. A ship traffic flow prediction method based on improved ensemble empirical mode decomposition and an independently recurrent neural network, characterized by comprising the following steps:
S1: preprocessing original ship traffic flow data by deleting the zero values in the original ship traffic flow data, and then deleting one maximum value and one minimum value;
S2: performing stationarity verification on the original ship traffic flow data;
S3: performing ensemble empirical mode decomposition on the original ship traffic flow data verified as non-stationary in step S2, to obtain several intrinsic mode functions and a residue;
S4: calculating the Pearson correlation coefficients between the several intrinsic mode functions and the residue from step S3 and the original ship traffic flow data, and combining them into multiple new components according to the degree of correlation;
S5: predicting each of the multiple new components with an independently recurrent neural network model having its own parameters, obtaining the predicted-value component of each independent neural network, and superimposing them to obtain the ship traffic flow prediction result.
2. The ship traffic flow prediction method based on improved ensemble empirical mode decomposition and an independently recurrent neural network according to claim 1, characterized in that
the step S2 further comprises:
performing stationarity verification on the time series of the original ship traffic flow data using the ADF (augmented Dickey-Fuller) test; if the time series of the original ship traffic flow data is stationary, no unit root exists; otherwise, a unit root exists.
3. The ship traffic flow prediction method based on improved ensemble empirical mode decomposition and an independently recurrent neural network according to claim 1 or 2, characterized in that
the step S3 further comprises:
S31: adding white noise n(t) to the original ship traffic flow data X(t) to obtain the noise-added ship traffic flow data X_n1(t) = X(t) + n(t);
S32: finding all local maxima and minima of X_n1(t), fitting the upper envelope q1 and the lower envelope q2 by cubic spline interpolation, taking their mean m(t) = (q1 + q2)/2, and obtaining the new sequence h1 = X_n1(t) − m(t); if the new sequence h1 still has a positive local minimum or a negative local maximum, repeating step S32 until the first intrinsic mode function IMF1 is found, and further obtaining the new data X_n1(t) − IMF1;
S33: taking the new data X_n1(t) − IMF1 of step S32 as the X_n1(t) of the next execution of step S32, and executing step S32 cyclically until the data X_n1(t) is decomposed into m intrinsic mode functions and a monotonic residue r(t), so that:
X_n1(t) = IMF1 + IMF2 + ... + IMFm + r(t);
S34: adding K different white noises to the original data X(t) and repeating the above steps S31-S33, correspondingly obtaining, for each added white noise, the m intrinsic mode functions and one monotonic residue of the decomposition;
S35: performing ensemble averaging on each decomposed component, as follows:
IMF′_m = (1/K) Σ_{i=1}^{K} IMF_im,  r′(t) = (1/K) Σ_{i=1}^{K} r_i(t)
where IMF_im is the m-th intrinsic mode function obtained after the i-th addition of white noise and decomposition; IMF′_m is the average over the K additions of white noise of the m-th intrinsic mode functions so obtained; r_i(t) is the monotonic residue obtained after the i-th addition of white noise and decomposition; and r′(t) is the average over the K additions of white noise of the monotonic residues so obtained;
S36: the decomposition result of the original ship traffic flow data X(t) is:
X(t) = IMF′_1 + IMF′_2 + ... + IMF′_m + r′(t).
4. The ship traffic flow prediction method based on ensemble empirical mode decomposition and an independently recurrent neural network according to claim 3, characterized in that
the step S4 further comprises:
S41: calculating the Pearson correlation coefficient between each decomposed component IMF′_1, IMF′_2, ..., IMF′_m, r′(t) and the original ship traffic flow data X(t), as follows:
ρ_{X,Y} = cov(X, Y)/(σ_X σ_Y) = (E[XY] − E[X]E[Y])/(σ_X σ_Y)  (5)
where in formula (5) X denotes each component, Y denotes the original ship traffic flow data, E is the mathematical expectation, and cov denotes the covariance; when the correlation coefficient is 0, the two variables X and Y are uncorrelated; when one variable increases or decreases as the other increases or decreases, the two variables are positively correlated and the Pearson correlation coefficient lies between 0 and 1;
S42: according to the correlation results calculated in step S41, recombining the components into multiple new components according to multiple set intervals, each new component being the sum of one or more intrinsic mode functions.
5. The ship traffic flow prediction method based on ensemble empirical mode decomposition and an independently recurrent neural network according to claim 4, characterized in that
in the step S42, the multiple set intervals comprise a weak-correlation interval, a medium-correlation interval and a strong-correlation interval, correspondingly yielding three new components M1, M2 and M3 that respectively represent the short-, medium- and long-term trends of the ship traffic flow;
wherein the component M1 equals the sum of all ensemble-averaged intrinsic mode functions located in the weak-correlation interval, the component M2 equals the sum of all ensemble-averaged intrinsic mode functions located in the medium-correlation interval, and the component M3 equals the sum of all ensemble-averaged intrinsic mode functions located in the strong-correlation interval.
6. The ship traffic flow prediction method based on ensemble empirical mode decomposition and an independently recurrent neural network according to claim 5, characterized in that
in the step S42, the multiple set intervals comprise the intervals (0, 0.3], (0.3, 0.6] and (0.6, 1.0].
7. The ship traffic flow prediction method based on ensemble empirical mode decomposition and an independently recurrent neural network according to claim 5 or 6, characterized in that
the step S5 further comprises:
S51: taking the three new components M1, M2 and M3 recombined in the step S4 as the input X′_t of the independently recurrent neural network;
S52: the hidden-layer state expression of the independently recurrent neural network is:
h_t = σ(W X′_t + u ⊙ h_{t-1} + b)
where t is the time step; X′_t is the input at time t, i.e., one of the components M1, M2, M3 above; W is the weight between the input layer and the hidden layer; σ is the activation function of the neurons; u is the recurrent weight within the hidden layer; b is the bias; ⊙ denotes the element-wise product; and h_{t-1} is the hidden-layer output at time t-1 (the previous step), so that at time t each hidden-layer neuron receives only the current input and its own state at time t-1 as input;
S53: when constructing a multi-layer recurrent neural network, the output of the new hidden layer is:
h′_t = σ(W′ h_t + u′ ⊙ h′_{t-1} + b′)
where t is the time step; h_t is the output of the previous hidden layer at time t; h′_{t-1} is the output of the new hidden layer at time t-1; h′_t is the output of the new hidden layer; W′ is the weight between the previous hidden layer and the new hidden layer; σ is the activation function of the neurons; u′ is the recurrent weight within the new hidden layer; b′ is the bias of the current layer; and ⊙ denotes the element-wise product;
S54: the output of the independently recurrent neural network is:
Y(t) = V h′_t + c
where V is the weight coefficient between the last hidden layer and the output layer, h′_t is the output of the last hidden layer, and c is the threshold;
S55: outputting each predicted-value component through each independently recurrent neural network, and superimposing the predicted-value components of the independently recurrent neural networks to obtain the prediction result:
Y = Y_1 + Y_2 + ... + Y_m + Y_r
where Y_1, Y_2, ..., Y_m and Y_r are the predicted values of the different ship-flow components obtained through the independently recurrent neural networks.
CN201910502153.9A 2019-06-11 2019-06-11 Ship traffic flow prediction method based on improved EEMD-IndRNN Withdrawn CN110222826A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910502153.9A CN110222826A (en) Ship traffic flow prediction method based on improved EEMD-IndRNN


Publications (1)

Publication Number Publication Date
CN110222826A true CN110222826A (en) 2019-09-10

Family

ID=67816559

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910502153.9A Withdrawn CN110222826A (en) Ship traffic flow prediction method based on improved EEMD-IndRNN

Country Status (1)

Country Link
CN (1) CN110222826A (en)


Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111008726A (en) * 2019-10-28 2020-04-14 武汉理工大学 Class image conversion method in power load prediction
CN111008726B (en) * 2019-10-28 2023-08-29 武汉理工大学 A method for image-like conversion in power load forecasting
CN111241466A (en) * 2020-01-15 2020-06-05 上海海事大学 A deep learning-based approach to ship traffic flow prediction
CN111241466B (en) * 2020-01-15 2023-10-03 上海海事大学 Ship flow prediction method based on deep learning
CN111415008B (en) * 2020-03-17 2023-03-24 上海海事大学 Ship flow prediction method based on VMD-FOA-GRNN
CN111415008A (en) * 2020-03-17 2020-07-14 上海海事大学 A Vessel Flow Prediction Method Based on VMD-FOA-GRNN
CN111897851A (en) * 2020-07-01 2020-11-06 中国建设银行股份有限公司 Abnormal data determination method and device, electronic equipment and readable storage medium
CN112684284A (en) * 2020-11-30 2021-04-20 西安理工大学 Voltage sag disturbance source positioning method integrating attention mechanism and deep learning
CN112666483A (en) * 2020-12-29 2021-04-16 长沙理工大学 Improved ARMA lithium battery residual life prediction method
CN112666483B (en) * 2020-12-29 2022-06-21 长沙理工大学 An improved ARMA method for predicting the remaining life of lithium batteries
CN113487855B (en) * 2021-05-25 2022-12-20 浙江工业大学 A Traffic Flow Prediction Method Based on EMD-GAN Neural Network Structure
CN113487855A (en) * 2021-05-25 2021-10-08 浙江工业大学 Traffic flow prediction method based on EMD-GAN neural network structure
CN115828736A (en) * 2022-11-10 2023-03-21 大连海事大学 EEMD-PE-LSTM-based short-term ship traffic flow prediction method
CN116155623A (en) * 2023-04-17 2023-05-23 湖南大学 A digital audio encryption method and system based on grid frequency feature embedding
CN116155623B (en) * 2023-04-17 2023-08-15 湖南大学 A digital audio encryption method and system based on grid frequency feature embedding
CN118820687A (en) * 2024-09-14 2024-10-22 浙江省水利河口研究院(浙江省海洋规划设计研究院) A method for calculating dynamic draft of a survey ship based on shipborne GNSS geodetic height

Similar Documents

Publication Publication Date Title
CN110222826A Ship traffic flow prediction method based on improved EEMD-IndRNN
CN110163433A Ship traffic flow prediction method
CN110491416B (en) Telephone voice emotion analysis and identification method based on LSTM and SAE
CN109524020B (en) A kind of speech enhancement processing method
CN111860982A (en) A short-term wind power prediction method for wind farms based on VMD-FCM-GRU
CN111241466B (en) Ship flow prediction method based on deep learning
CN105488466B (en) A kind of deep-neural-network and Acoustic Object vocal print feature extracting method
CN109256118B (en) End-to-end Chinese dialect recognition system and method based on generative auditory model
CN109785249A (en) A kind of Efficient image denoising method based on duration memory intensive network
CN106779064A (en) Deep neural network self-training method based on data characteristics
KR102741778B1 (en) Method and apparatus for implementing cnn-based water level prediction model
CN108133702A (en) A kind of deep neural network speech enhan-cement model based on MEE Optimality Criterias
CN107622305A (en) Processor and processing method for neural network
Zhang et al. A pairwise algorithm using the deep stacking network for speech separation and pitch estimation
CN110853656A (en) Audio Tampering Recognition Algorithm Based on Improved Neural Network
CN107844849A (en) A kind of new energy output short term prediction method returned based on experience wavelet transformation with improving Gaussian process
CN110263860A (en) A kind of freeway traffic flow prediction technique and device
CN113222234A (en) Gas demand prediction method and system based on integrated modal decomposition
CN117669655A (en) Network intrusion detection deep learning model compression method
CN114999525A (en) Light-weight environment voice recognition method based on neural network
Jadda et al. Speech enhancement via adaptive Wiener filtering and optimized deep learning framework
CN116663619B (en) Data enhancement method, device and medium based on GAN network
CN107797149A (en) A kind of ship classification method and device
Połap et al. Image approach to voice recognition
Xie et al. Data augmentation and deep neural network classification based on ship radiated noise

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20190910