CN111324990A - Porosity prediction method based on multilayer long-short term memory neural network model - Google Patents


Publication number
CN111324990A
Authority: CN (China)
Prior art keywords: layer, neural network, input, term memory, short term
Legal status: Pending
Application number: CN202010197833.7A
Other languages: Chinese (zh)
Inventors: 陈伟, 杨柳青, 查蓓
Current Assignee: Yangtze University
Original Assignee: Yangtze University
Application filed by Yangtze University
Priority to CN202010197833.7A
Publication of CN111324990A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods

Abstract

The invention belongs to the technical field of reservoir parameter prediction, and in particular relates to a porosity prediction method based on a multilayer long-short term memory (LSTM) neural network model. The multilayer LSTM-based neural network model comprises: an input layer configured to receive raw logging parameters that characterize porosity; a hidden layer formed by stacking multiple LSTM models; and an output layer that produces the porosity prediction from the last hidden layer through a fully connected layer. By stacking LSTMs so that each later layer takes the previous layer's output as its input, the deep LSTM model not only continuously memorizes long-term information but also screens temporal information more strictly. It predicts better at the inflection points of porosity change and, in particular, obtains high-precision target-parameter predictions when few well locations and low-dimensional logging parameters are available.

Description

Porosity prediction method based on multilayer long-short term memory neural network model
Technical Field
The invention belongs to the technical field of reservoir parameter prediction, and particularly relates to a porosity prediction method based on a multilayer long-short term memory neural network model.
Background
Deep learning is a machine learning method for learning representations of data, and is one of the most recently developed and practical machine learning methods. Higher-level abstract attribute types or features are formed by recombining lower-level features, and breakthrough results have been achieved in many scientific fields. The prediction accuracy and recognition capability of deep learning continue to improve, and it is increasingly applied to practical development problems in many fields. Deep learning has historically drawn on the working principles of the brain, combined with statistical, mathematical and theoretical knowledge. A large number of experimental studies have shown that different representations of data greatly affect the accuracy of task learning: a good data representation can effectively eliminate factors irrelevant to the learning target while retaining the information of the data's intrinsic relations.
With the development of deep learning and machine learning in recent years, many scholars have applied them to seismic data processing: for example, physical-property prediction, first-arrival picking and lithology discrimination on seismic reservoirs using artificial neural networks (ANN), convolutional neural networks (CNN) and support vector machines (SVM). The artificial neural network has been a hot topic in recent years, and many scholars predict logging data with artificial neural networks and BP neural networks. These networks are fully connected, and neurons in the same layer are independent and unconnected, so reservoir-parameter prediction depends only on the logging data at the corresponding depth; the influence of logging data before and after that depth is ignored, and the authenticity of the predicted result is hard to guarantee. When processing very large data sets, artificial neural networks and BP neural networks easily fall into local minima, their accuracy is not high, and they struggle to predict sequence data effectively. Many researchers improve prediction by modifying the artificial neural network and the BP neural network, but the implementation is very complicated.
To predict reservoir parameters reasonably and accurately from logging data, the internal relations and rules at depth in the data must be mined. Introducing the time-sequence concept of the recurrent neural network (RNN) into the network framework allows the data to be characterized more accurately.
Disclosure of Invention
Aiming at the problems, the invention provides a porosity prediction model and a porosity prediction method based on a multilayer long-term and short-term memory neural network, which are used for mining the deep internal relation and rule of data and reasonably and accurately predicting reservoir parameters by using logging data.
The first purpose of the invention is to provide a multilayer long-short term memory neural network model for predicting porosity, which comprises:
an input layer configured to characterize raw logging parameters of porosity;
the hidden layer is formed by superposing a plurality of long and short term memory models;
and the output layer outputs the predicted value of the porosity through the full-connection layer from the last hidden layer.
Further, the structure of the long-short term memory model is as follows:
The first layer is a forget gate structure. As shown in equation (1), the forget gate selects which memory information of the previous step C_{t-1} to retain into the current step C_t:

f_t = σ(W_f[s_{t-1}, x_t] + b_f)    (1)
f_t = σ(W_fs s_{t-1} + W_fx x_t + b_f)    (2)

In the formula: f_t represents the output of the forget gate; s_{t-1} the output of the previous hidden layer; W_f the weight coefficient matrix of the forget gate layer; b_f the bias vector of the forget gate layer; σ the activation function, here the sigmoid function; W_fs and W_fx are the weight coefficient matrices corresponding to the input information s_{t-1} and x_t;
The second layer is an input gate structure. The input gate controls the transmission of information into the cell state C and processes the input of the current sequence data. This layer consists of two parts: the first uses a sigmoid function to determine the updated information state; the second uses a tanh layer to compute a new candidate vector C̃_t:

i_t = σ(W_i[s_{t-1}, x_t] + b_i)    (3)
C̃_t = tanh(W_c[s_{t-1}, x_t] + b_c)    (4)

In the formula: i_t represents the input gate; σ represents the sigmoid activation function; W_i and b_i are the weight coefficient matrix and bias vector of the input gate; C̃_t represents the current candidate cell state of the input layer; W_c and b_c are the weight coefficient matrix and bias vector of the current input layer.
The third layer is the update layer of the cell unit C (Cell State), in which the addition and deletion of information are determined through the forget gate and the input gate. Equation (5) multiplies the previous cell state by the forget gate f_t to decide which information of C_{t-1} to discard, then adds the input gate's information to finally update the cell:

C_t = f_t * C_{t-1} + i_t * C̃_t    (5)

wherein: C_t and C_{t-1} respectively represent the outputs of the cell-unit update layer at the current and previous steps.
The last layer is an output gate structure (Output Gate). A sigmoid function first determines the output of the cell unit, equation (6); the cell state is then processed with a tanh activation function and combined with the sigmoid gate to compute the output, equation (7):

O_t = σ(W_o[s_{t-1}, x_t] + b_o)    (6)
S_t = O_t * tanh(C_t)    (7)

wherein: O_t represents the output of the current cell unit; S_t is the hidden-state output; σ represents the sigmoid activation function; W_o and b_o are the weight coefficient matrix and bias vector of the output gate.
The second purpose of the invention is to provide a porosity prediction method based on a multilayer long-short term memory neural network model, which comprises the following steps:
s1: constructing the multilayer long-short term memory neural network model;
s2: selecting several groups of logging parameters that can characterize the porosity of a known well as input data, randomly dividing them proportionally into a training set and a verification set, inputting these into the multilayer long-short term memory neural network model, and training the model;
s3: inputting the depth data of the non-logging section in the known well for testing, and predicting the porosity by using the multi-layer long-short term memory neural network model trained in the step S2.
Further, the input data in step S2 is divided into the training set and the test set in a 7:3 ratio.
Further, four groups of logging parameters are preferably used in step S2.
Further, the data of the training set and the test set are normalized:

o_i = (x_i − min x_i) / (max x_i − min x_i)

wherein: x_i and o_i are the original parameter value and the normalized parameter value, respectively; max x_i is the maximum of the parameter class; min x_i is the minimum of the parameter class.
The invention has the beneficial effects that:
Firstly, the LSTM uses the gate structures to control the state of the unit layer at each moment in the recurrent neural network, retaining and processing data information with long-term dependencies. This not only effectively alleviates gradient vanishing and gradient explosion, but also memorizes long-term effective related information that participates in the output of subsequent data.
Secondly, LSTMs are stacked and multiple LSTM models frame the prediction. The multilayer LSTM recurrent network is essentially the same as the LSTM, except that each later layer takes the previous layer's output as its input; the deep LSTM model can continuously memorize long-term information and screen temporal information more strictly. In reservoir porosity prediction, logging data are sequence data, so the well-log curve shows intrinsic links with its trend; moreover, the depth sampling of the log parameters is dense, with small intervals. Using the long-term memory characteristic of the deep LSTM for reservoir-parameter prediction therefore better matches the characteristics of the well-log curve.
Thirdly, the invention tunes the learning rate, the number of network layers, the number of cell units and the sequence length during network training, selects the optimal parameters for the porosity prediction experiments, and compares the model with several currently popular models; the results show that the model has good prediction performance. It predicts the inflection points of porosity change well and, in particular, obtains high-precision target-parameter predictions when few well locations and low-dimensional logging parameters are available, benefiting from efficient extraction of feature information from a small amount of data.
Drawings
FIG. 1 is a schematic diagram of the structure of the LSTM cells of the present invention.
Fig. 2 is a schematic diagram of a porosity prediction flow of the MLSTM network of the present invention.
FIG. 3 is a comparison of porosity prediction for M1 wells for different models in accordance with an embodiment of the present invention.
FIG. 4 is a comparison of porosity prediction for M2 wells for different models in accordance with an embodiment of the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. The present embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation manner and a specific operation process are given, but the scope of the present invention is not limited to the following embodiments.
As shown in FIG. 1, the LSTM provided by the invention has four interaction layers with a special connection pattern, and introduces the concept of a "gate" to control the mutual transmission and calculation in the hidden unit. The cell state C is the key to the hidden unit's processing: the previously memorized information in C is used for calculation before and after passing through the LSTM. The input at the current moment and the state of the previous hidden unit are processed by the three gate structures of the interaction layers and the tanh layer of the cell unit, so that information within the cell is screened and new information is added into the calculation at the current moment.
The first layer is a forget gate structure (Forget Gate). As shown in equation (1), the forget gate selects which memory information of the previous step C_{t-1} to retain into the current step C_t:

f_t = σ(W_f[s_{t-1}, x_t] + b_f)    (1)
f_t = σ(W_fs s_{t-1} + W_fx x_t + b_f)    (2)

In the formula: f_t represents the output of the forget gate; s_{t-1} the output of the previous hidden layer; W_f the weight coefficient matrix of the forget gate layer; b_f the bias vector of the forget gate layer; σ the activation function, here the sigmoid function; W_fs and W_fx are the weight coefficient matrices corresponding to the input information s_{t-1} and x_t.
The second layer is an input gate structure (Input Gate). The input gate controls the transmission of information into the cell state C and processes the input of the current sequence data. This layer consists of two parts: the first uses a sigmoid function to determine the updated information state; the second uses a tanh layer to compute a new candidate vector C̃_t:

i_t = σ(W_i[s_{t-1}, x_t] + b_i)    (3)
C̃_t = tanh(W_c[s_{t-1}, x_t] + b_c)    (4)

In the formula: i_t represents the input gate; σ represents the sigmoid activation function; W_i and b_i are the weight coefficient matrix and bias vector of the input gate; C̃_t represents the current candidate cell state of the input layer; W_c and b_c are the weight coefficient matrix and bias vector of the current input layer.
The third layer is the update layer of the cell unit C (Cell State), in which the addition and deletion of information are determined through the forget gate and the input gate. Equation (5) multiplies the previous cell state by the forget gate f_t to decide which information of C_{t-1} to discard, then adds the input gate's information to finally update the cell:

C_t = f_t * C_{t-1} + i_t * C̃_t    (5)

Wherein: C_t and C_{t-1} respectively represent the outputs of the cell-unit update layer at the current and previous steps.
The last layer is an output gate structure (Output Gate). A sigmoid function first determines the output of the cell unit, equation (6); the cell state is then processed with a tanh activation function and combined with the sigmoid gate to compute the output, equation (7):

O_t = σ(W_o[s_{t-1}, x_t] + b_o)    (6)
S_t = O_t * tanh(C_t)    (7)

Wherein: O_t represents the output of the current cell unit; S_t is the hidden-state output; σ represents the sigmoid activation function; W_o and b_o are the weight coefficient matrix and bias vector of the output gate.
The multilayer long-short term memory neural network model is shown in FIG. 2 and comprises an input layer, a hidden layer and an output layer. The input layer receives four groups of raw logging parameters, such as neutron, density, natural gamma and acoustic logs. After a normalization preprocessing operation, the input data are passed into the hidden layer. The Relu activation function adjusts the dimension of the parameters passed into the hidden layer: because the gradient of the Relu function over its non-negative interval is constant, the data dimension transformation causes no vanishing-gradient problem, and its role is to make the processed data meet the input dimension requirement of the neural network.
The hidden layer is formed by stacking multiple LSTMs: the previous layer's time-sequence outputs S_{t-1} and C_{t-1} are passed in as the current layer's cell-unit time-series input, and the tanh activation function is chosen for the cell-state output S_t. The output layer produces the porosity prediction from the last hidden layer through a fully connected layer.
The hidden layer in FIG. 2 initializes the unit information state through S_0 and C_0. C_{1,1} and S_{1,1} are the information outputs of the first unit in the first hidden layer of the LSTM; the prediction is made from the cell state, and S_1 is simultaneously output as the input to the next hidden layer. Calculation is passed on in sequence: the sequence prediction of each unit of each layer is based on the prediction result of the preceding unit, and the output sequence of the last hidden layer is P = {P_1, P_2, …, P_L}, whose dimension the fully connected layer maps to the target parameter dimension. Compared with the traditional LSTM, this layer-stacked construction better mines the changing trend of the sequence information, screens out useless information through the control of multiple gates, and retains the feature information of the logging parameters to the maximum extent, thereby improving the porosity prediction accuracy.
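The stacked arrangement above — each layer consuming the previous layer's hidden-output sequence, with a final fully connected layer mapping to porosity — can be sketched as follows. The layer count, unit counts and random weights are illustrative assumptions, not the patent's tuned configuration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_layer(xs, n_hid, W, b):
    """Run one LSTM layer over a whole sequence; return the hidden
    outputs s_1..s_L so they can feed the next stacked layer."""
    s, c = np.zeros(n_hid), np.zeros(n_hid)
    out = []
    for x in xs:
        z = np.concatenate([s, x])
        f = sigmoid(W["f"] @ z + b["f"])
        i = sigmoid(W["i"] @ z + b["i"])
        c = f * c + i * np.tanh(W["c"] @ z + b["c"])
        o = sigmoid(W["o"] @ z + b["o"])
        s = o * np.tanh(c)
        out.append(s)
    return np.array(out)

rng = np.random.default_rng(1)

def make_params(n_in, n_hid):
    return ({k: rng.normal(0, 0.1, (n_hid, n_hid + n_in)) for k in "fico"},
            {k: np.zeros(n_hid) for k in "fico"})

seq = rng.normal(size=(10, 4))     # 10 depth samples, 4 log parameters
h = seq
for n_in in [4, 8, 8]:             # three stacked LSTM layers, 8 units each
    W, b = make_params(n_in, 8)
    h = lstm_layer(h, 8, W, b)     # layer l+1 consumes layer l's outputs
W_fc = rng.normal(0, 0.1, (1, 8))
porosity = (h @ W_fc.T).ravel()    # fully connected output layer
print(porosity.shape)  # (10,)
```

One porosity value is produced per depth sample, matching the output sequence P = {P_1, …, P_L} mapped through the fully connected layer.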
For optimization in the supervised process, the Adam optimization algorithm with momentum is used for the adaptive learning rate. Adam combines the advantage of AdaGrad in handling sparse gradients with that of RMSprop in handling non-stationary objectives, ensuring the accuracy and stability of the gradient-descent process while reducing turbulence. At the same time, it increases the update amplitude of sparse parameters and reduces that of frequent parameters, so the network reaches the optimal effect in the fewest training iterations. For example, when the network depth increases during training and some parameters deviate from the set range and need debugging, the Adam algorithm computes an exponentially weighted average of the gradients during each training step and then updates the parameters with the obtained gradient values.
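The Adam update described above can be sketched as follows; the hyper-parameter defaults (β1 = 0.9, β2 = 0.999, ε = 1e-8) are the commonly used ones, not values stated in the patent, and the toy objective is only for demonstration:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: bias-corrected exponentially weighted first and
    second moments of the gradient scale the parameter step."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)          # bias correction, step t >= 1
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# toy check: minimize f(x) = x^2, gradient 2x
x, m, v = np.array([5.0]), np.zeros(1), np.zeros(1)
for t in range(1, 2001):
    x, m, v = adam_step(x, 2 * x, m, v, t, lr=0.05)
print(float(np.abs(x[0])))
```

The printed distance from the optimum shrinks toward zero, illustrating the fast, stable descent the text attributes to Adam.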
The porosity prediction method based on the multilayer long-short term memory neural network model comprises the following steps:
s1: constructing the multilayer long-short term memory neural network model;
s2: selecting several groups of logging parameters that can characterize the porosity of a known well as input data, randomly dividing them proportionally into a training set and a verification set, inputting these into the multilayer long-short term memory neural network model, and training the model;
s3: inputting the depth data of the known well in the non-logging section for testing, and predicting the porosity by using the multi-layer long-short term memory neural network model trained in S2.
The embodiment provided by the invention uses logging parameters of the M1 and M2 wells, whose logged depths are 1350–1760 m and 6650–6888 m respectively; the sampling interval of the M1 well is 0.125 m and that of the M2 well is 0.1524 m. For the M1 well, density (DEN), natural gamma (GR), acoustic (AC) and shale content (SH) are selected as inputs to the neural network, with porosity (POR) as the predicted output parameter. For the M2 well, density, natural gamma, acoustic and compensated neutron (CNL) are selected as inputs, with porosity (POR) as the predicted output parameter. The purpose of varying the input parameters of the two wells (M1: DEN, GR, AC, SH; M2: DEN, GR, AC, CNL) is to test how the porosity prediction accuracy changes under different input conditions. The experimental data define the original input parameters as X_o = {x_1, x_2, …, x_n}, the training set as X_train = {x_1, x_2, …, x_r} and the test set as X_test = {x_1, x_2, …, x_t}, divided in a 7:3 ratio. The training set X_train is used to train the network, and together with the test set X_test the hyper-parameters of the network are determined.
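The random 7:3 division of X_o into X_train and X_test can be sketched as follows; the array sizes and contents are synthetic stand-ins, not the M1/M2 well data:

```python
import numpy as np

rng = np.random.default_rng(42)
# synthetic stand-in for the well data: n samples, 4 log parameters + POR
n = 1000
X = rng.normal(size=(n, 4))
y = rng.uniform(0, 30, size=n)      # porosity values, illustrative only

idx = rng.permutation(n)            # random division, as in step S2
cut = int(0.7 * n)                  # 7 : 3 ratio
train_idx, test_idx = idx[:cut], idx[cut:]
X_train, y_train = X[train_idx], y[train_idx]
X_test, y_test = X[test_idx], y[test_idx]
print(len(X_train), len(X_test))  # 700 300
```

Shuffling the indices before cutting keeps the split random with respect to depth, so both sets sample the whole logged interval.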
Because the evaluation indexes of the features in the original data differ, different data have different dimensions and orders of magnitude. The divided data therefore need to be normalized; normalized data accelerate the convergence of the gradient-descent algorithm and improve the accuracy of the model. The normalization formula is:
o_i = (x_i − min x_i) / (max x_i − min x_i)

wherein: x_i and o_i are the original parameter value and the normalized parameter value, respectively; max x_i is the maximum of the parameter class; min x_i is the minimum of the parameter class.
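A direct implementation of the normalization formula above, applied column-wise so that each log-parameter class uses its own max x_i and min x_i; the sample values are illustrative, not measured data:

```python
import numpy as np

def min_max_normalize(x):
    """Column-wise min-max scaling:
    o_i = (x_i - min x_i) / (max x_i - min x_i)."""
    xmin = x.min(axis=0)
    xmax = x.max(axis=0)
    return (x - xmin) / (xmax - xmin)

# illustrative rows of DEN, GR, AC, SH readings
logs = np.array([[2.45, 60.0, 210.0, 12.0],
                 [2.60, 95.0, 180.0, 30.0],
                 [2.30, 40.0, 250.0,  5.0]])
o = min_max_normalize(logs)
print(o.min(), o.max())  # 0.0 1.0
```

Every column now lies in [0, 1], so no single log parameter dominates the gradient-descent updates merely because of its units.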
The comparative model used in the embodiment of the present invention is: BPNN, Deep Neural Network (DNN), Convolutional Neural Network (CNN).
The BPNN adopts a single-hidden-layer structure; the sigmoid activation function is used both from the input layer to the hidden layer and from the hidden layer to the output layer, and the hidden layer has 8 neurons. Adam is selected as the optimization function; the input layer has four neuron nodes, i.e. the four selected logging parameters, and the output is the porosity prediction for the target well.
The deep neural network (DNN) is structured as an input layer, hidden layers and an output layer; all layers are fully connected, and neurons in the same layer are unconnected and independent of one another. Taking reservoir porosity prediction as an example, four groups of feature vectors are input at the input layer, and the porosity prediction is produced through hidden-layer feature extraction, activation-function selection, hyper-parameter debugging, loss-function definition and optimizer selection. As for the number of neurons: in general, more neurons give higher model accuracy but slower training, and overfitting can occur. The invention trains a deep neural network with four hidden layers. The Relu function is used from the input layer to the hidden layers and between hidden layers, the sigmoid activation function is used at the output layer, and the parameters and hyper-parameter values are initialized before each training.
The convolutional neural network (CNN) is structured as convolutional layers and pooling layers. A convolutional layer contains several feature maps; data features are extracted through the connections of the neuron layers on the feature maps, and neurons connect to the previous layer's feature maps via convolution kernels. The pooling layer reduces the dimension of the convolved features by sampling, each feature map corresponding to the previous one. Because the feature dimension of the logging parameters is low when constructing the porosity prediction model, a two-convolutional-layer CNN model without pooling layers is designed. The logging parameters are randomly passed into the input layer with a pixel dimension of 4 × 4; the first convolutional layer has eight 3 × 3 × 1 kernels and the second has sixteen 2 × 2 × 8 kernels, both using the Relu activation function. Finally the fully connected layer is attached: the parameter dimension after convolution is changed and reduced, the number of fully connected units is set to 64, the three-dimensional data are stretched into a one-dimensional array, and the output layer uses a sigmoid function.
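Assuming both convolutions are "valid" (no padding, stride 1) — an assumption, since the text does not state the padding — the layer dimensions described above can be checked with simple shape arithmetic:

```python
def valid_out(n, k):
    """Output size along one axis of a stride-1, no-padding convolution."""
    return n - k + 1

h = w = 4                                        # 4 x 4 input pixel dimension
h, w, c = valid_out(h, 3), valid_out(w, 3), 8    # eight 3 x 3 x 1 kernels
h, w, c = valid_out(h, 2), valid_out(w, 2), 16   # sixteen 2 x 2 x 8 kernels
flat = h * w * c                                 # stretched to a 1-D array
print(h, w, c, flat)  # 1 1 16 16
```

Under this assumption the second convolution collapses the spatial dimensions to 1 × 1, leaving a 16-element vector for the 64-unit fully connected layer.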
In order to visually display the comparison of the prediction curves of different models, two wells are drawn to generate curves as shown in fig. 3 and 4. The graph is fitted with a training set of four models above the reference line and a porosity prediction curve generated based on the four models below the reference line, and compared with the actual porosity curve.
From the curve-fitting degree of the training set in FIG. 3, the correlation of the MLSTM is higher than that of the other models. In two segments within 1400–1500 m the actual porosity is 0; apart from the relatively consistent MLSTM training curve, the other models fluctuate there, with CNN and BPNN fluctuating most. Under-fitting generally occurs in the well log at 1715–1725 m of the test set. In the 1632–1710 m depth segment the actual porosity curve shows several step increases. The MLSTM model, based on multi-hidden-layer sequence-information prediction, can combine the influence of input feature information at different depths before and after the current point, and thus accurately predict the diverse trend changes of the porosity curve.
FIG. 4 shows the logs generated by multi-model prediction for the M2 well. As in FIG. 3, the MLSTM still maintains a high fit to the curve in the training set. For noisy data, the MLSTM model predicts more robustly, especially at the porosity depth inflection points, and can effectively predict the changing trend of the porosity.
From the prediction data of the M1 and M2 wells, the MLSTM model shows strong prediction accuracy and high stability. This capability benefits from the stacking of LSTMs, which takes the per-layer, per-unit sequence feature calculation as new information and passes the calculation onward. The multilayer mode not only effectively extracts hidden feature information from the input logging parameters, but also improves the effective use of input-parameter information in model training and prediction. In the inter-layer connections, the output information of each unit of each layer serves as the corresponding input of the next layer and next unit, and the prediction error is correspondingly transmitted to the next unit layer during information transfer.
Details not described in this specification are within the skill of the art that are well known to those skilled in the art.

Claims (6)

1. A multi-layered long-short term memory neural network-based model for predicting porosity, comprising:
an input layer configured to characterize raw logging parameters of porosity;
the hidden layer is formed by superposing a plurality of long and short term memory models;
and the output layer outputs the predicted value of the porosity through the full-connection layer from the last hidden layer.
2. The multi-layer long-short term memory-based neural network model of claim 1, wherein the long-short term memory model structure is:
the first layer is a forget gate structure; as shown in equation (1), the forget gate selects which memory information of the previous step C_{t-1} to retain into the current step C_t:

f_t = σ(W_f[s_{t-1}, x_t] + b_f)    (1)
f_t = σ(W_fs s_{t-1} + W_fx x_t + b_f)    (2)

in the formula: f_t represents the output of the forget gate; s_{t-1} the output of the previous hidden layer; W_f the weight coefficient matrix of the forget gate layer; b_f the bias vector of the forget gate layer; σ the activation function, here the sigmoid function; W_fs and W_fx are the weight coefficient matrices corresponding to the input information s_{t-1} and x_t;
the second layer is an input gate structure; the input gate controls the transmission of information into the cell state C and processes the input of the current sequence data; this layer consists of two parts, the first of which uses a sigmoid function to determine the updated information state, while the second uses a tanh layer to compute a new candidate vector C̃_t:

i_t = σ(W_i[s_{t-1}, x_t] + b_i)    (3)
C̃_t = tanh(W_c[s_{t-1}, x_t] + b_c)    (4)

in the formula: i_t represents the input gate; σ represents the sigmoid activation function; W_i and b_i are the weight coefficient matrix and bias vector of the input gate; C̃_t represents the current candidate cell state of the input layer; W_c and b_c are the weight coefficient matrix and bias vector of the current input layer.
The third layer is the update layer of the cell unit C, in which the addition and deletion of information are determined through the forget gate and the input gate; equation (5) multiplies the previous cell state by the forget gate f_t to decide which information of C_{t-1} to discard, then adds the input gate's information to finally update the cell:

C_t = f_t * C_{t-1} + i_t * C̃_t    (5)

wherein: C_t and C_{t-1} respectively represent the outputs of the cell-unit update layer at the current and previous steps.
The last layer is the output gate structure (Output Gate); first a sigmoid function determines the output of the cell unit, formula (6); then the cell state is processed with a tanh activation function and multiplied by the sigmoid gate output to obtain the hidden-state output, formula (7):

O_t = σ(W_o [s_{t-1}, x_t] + b_o)    (6)

S_t = O_t * tanh(C_t)    (7)

in the formulas: O_t denotes the output of the current cell unit; S_t denotes the output in the hidden state; σ denotes the sigmoid activation function; W_o and b_o are the weight coefficient matrix and bias vector of the output gate.
3. The porosity prediction method based on the multilayer long-short term memory neural network model as claimed in claim 1, characterized by comprising the following steps:
s1: constructing the multilayer long-short term memory neural network model;
s2: selecting a plurality of groups of logging parameters capable of characterizing the porosity of a known well as input data, randomly dividing them into a training set and a verification set in a set proportion, inputting both sets into the multilayer long-short term memory neural network model, and training the model;
s3: inputting the depth data of the non-logging section of the known well for testing, and predicting the porosity with the multilayer long-short term memory neural network model trained in step S2.
4. The porosity prediction method based on the multilayer long-short term memory neural network model according to claim 3, wherein: the input data in step S2 is divided into the training set and the test set in proportion.
5. The porosity prediction method based on the multilayer long-short term memory neural network model according to claim 3, wherein: preferably four groups of logging parameters are selected in step S2.
6. The porosity prediction method based on the multilayer long-short term memory neural network model according to claim 3 or 4, wherein: the data of the training set and the test set are normalized:

o_i = (x_i − min x_i) / (max x_i − min x_i)

in the formula: x_i and o_i are the original parameter value and the normalized parameter value, respectively; max x_i is the maximum of the parameter class; min x_i is the minimum of the parameter class.
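The min-max normalization of claim 6 maps each parameter class onto [0, 1]. A minimal sketch, with illustrative gamma-ray values as the example input:

```python
import numpy as np

def min_max_normalize(x):
    """Normalize one parameter class to [0, 1]:
    o_i = (x_i - min x_i) / (max x_i - min x_i)."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

# e.g. one gamma-ray log column (values are illustrative)
gr = np.array([45.0, 60.0, 120.0, 82.5])
gr_norm = min_max_normalize(gr)
```

Each logging curve would be normalized independently in this way, since the parameter classes (e.g. gamma ray, density, resistivity) have different units and value ranges.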
CN202010197833.7A 2020-03-19 2020-03-19 Porosity prediction method based on multilayer long-short term memory neural network model Pending CN111324990A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010197833.7A CN111324990A (en) 2020-03-19 2020-03-19 Porosity prediction method based on multilayer long-short term memory neural network model


Publications (1)

Publication Number Publication Date
CN111324990A true CN111324990A (en) 2020-06-23

Family

ID=71173458

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010197833.7A Pending CN111324990A (en) 2020-03-19 2020-03-19 Porosity prediction method based on multilayer long-short term memory neural network model

Country Status (1)

Country Link
CN (1) CN111324990A (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107562784A (en) * 2017-07-25 2018-01-09 同济大学 Short text classification method based on ResLCNN models
US20190080224A1 (en) * 2017-09-08 2019-03-14 Halliburton Energy Services, Inc. Optimizing Production Using Design of Experiment and Reservoir Modeling
CN109799533A (en) * 2018-12-28 2019-05-24 中国石油化工股份有限公司 A kind of method for predicting reservoir based on bidirectional circulating neural network
CN109840587A (en) * 2019-01-04 2019-06-04 长江勘测规划设计研究有限责任公司 Reservoir reservoir inflow prediction technique based on deep learning
CN110322009A (en) * 2019-07-19 2019-10-11 南京梅花软件系统股份有限公司 Image prediction method based on the long Memory Neural Networks in short-term of multilayer convolution
CN110415702A (en) * 2019-07-04 2019-11-05 北京搜狗科技发展有限公司 Training method and device, conversion method and device
CN110630256A (en) * 2019-07-09 2019-12-31 吴晓南 Low-gas-production oil well wellhead water content prediction system and method based on depth time memory network
US20200042799A1 (en) * 2018-07-31 2020-02-06 Didi Research America, Llc System and method for point-to-point traffic prediction


Non-Patent Citations (10)

* Cited by examiner, † Cited by third party
Title
FARAH ADEEBA, SARMAD HUSSAIN: "Native Language Identification in Very Short Utterances Using Bidirectional Long Short-Term Memory Network", 《IEEE》, vol. 7
JIANWEI LI, HAOYU CHEN, TING ZHOU, XIAOWEN LI: "Tailings Pond Risk Prediction Using Long Short-Term Memory Networks", 《IEEE》, vol. 7, pages 182527, XP011762915, DOI: 10.1109/ACCESS.2019.2959820
SHUYU YANG, DAWEN YANG, JINSONG CHEN, BAOXU ZHAO: "Real-time reservoir operation using recurrent neural networks and inflow forecast from a distributed hydrological model", 《ELSEVIER》, vol. 579
ZHANNING CAO, XIANG-YANG LI, JUN LIU, XILIN QIN, SHAOHAN SUN, ZONGJIE LI, ZHANYUAN CAO: "Carbonate fractured gas reservoir prediction based on P-wave azimuthal anisotropy and dispersion", vol. 15, no. 5, XP020331012, DOI: 10.1088/1742-2140/aabe58
宋辉, 陈伟, 李谋杰, 王浩懿: "Reservoir parameter prediction method based on a convolutional gated recurrent unit network" (in Chinese), 《油气地质与采收率》, vol. 26, no. 05, pages 73-78
张东晓, 陈云天, 孟晋: "Well-log curve generation method based on recurrent neural networks" (in Chinese), 《石油勘探与开发》, vol. 45, no. 04, pages 598-607
张明月: "Personalized Recommendation Considering Product Characteristics and Its Applications" (in Chinese), Enterprise Management Publishing House, 30 April 2019, pages 118-121
杨柳青, 陈伟, 查蓓: "Research and application of porosity prediction for reservoirs using convolutional neural networks" (in Chinese), 《地球物理学进展》, vol. 34, no. 4, pages 1548-1555

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111895923A (en) * 2020-07-07 2020-11-06 上海辰慧源科技发展有限公司 Method for fitting and measuring thickness of thin film
CN111894551A (en) * 2020-07-13 2020-11-06 太仓中科信息技术研究院 Oil-gas reservoir prediction method based on LSTM
CN112001482A (en) * 2020-08-14 2020-11-27 佳都新太科技股份有限公司 Vibration prediction and model training method and device, computer equipment and storage medium
CN112329983A (en) * 2020-09-30 2021-02-05 联想(北京)有限公司 Data processing method and device
CN112381316A (en) * 2020-11-26 2021-02-19 华侨大学 Electromechanical equipment health state prediction method based on hybrid neural network model
CN112381316B (en) * 2020-11-26 2022-11-25 华侨大学 Electromechanical equipment health state prediction method based on hybrid neural network model
CN112971769A (en) * 2021-02-04 2021-06-18 杭州慧光健康科技有限公司 Home personnel tumble detection system and method based on biological radar
CN115755204A (en) * 2021-02-11 2023-03-07 中国石油化工股份有限公司 Formation porosity using multiple dual function probes and neural networks
CN113326656B (en) * 2021-05-26 2022-11-01 东南大学 Digital integrated circuit technology corner time delay prediction method
CN113326656A (en) * 2021-05-26 2021-08-31 东南大学 Digital integrated circuit technology corner time delay prediction method
US11755807B2 (en) 2021-05-26 2023-09-12 Southeast University Method for predicting delay at multiple corners for digital integrated circuit
CN113359212A (en) * 2021-06-22 2021-09-07 中国石油天然气股份有限公司 Reservoir characteristic prediction method and model based on deep learning
CN113359212B (en) * 2021-06-22 2024-03-15 中国石油天然气股份有限公司 Reservoir characteristic prediction method and model based on deep learning
CN113541143A (en) * 2021-06-29 2021-10-22 国网天津市电力公司电力科学研究院 Harmonic prediction method based on ELM-LSTM
CN113642816A (en) * 2021-10-19 2021-11-12 西南石油大学 Well-seismic combined pre-drilling well logging curve prediction method based on gated cyclic unit
CN116170384A (en) * 2023-04-24 2023-05-26 北京智芯微电子科技有限公司 Edge computing service perception method and device and edge computing equipment
CN116956754A (en) * 2023-09-21 2023-10-27 中石化经纬有限公司 Crack type leakage pressure calculation method combined with deep learning
CN116956754B (en) * 2023-09-21 2023-12-15 中石化经纬有限公司 Crack type leakage pressure calculation method combined with deep learning
CN117272841A (en) * 2023-11-21 2023-12-22 西南石油大学 Shale gas dessert prediction method based on hybrid neural network
CN117272841B (en) * 2023-11-21 2024-01-26 西南石油大学 Shale gas dessert prediction method based on hybrid neural network

Similar Documents

Publication Publication Date Title
CN111324990A (en) Porosity prediction method based on multilayer long-short term memory neural network model
CN108510741B (en) Conv1D-LSTM neural network structure-based traffic flow prediction method
CN109993280B (en) Underwater sound source positioning method based on deep learning
CN102622418B (en) Prediction device and equipment based on BP (Back Propagation) nerve network
CN102622515B (en) A kind of weather prediction method
CN109799533A (en) A kind of method for predicting reservoir based on bidirectional circulating neural network
CN111292525B (en) Traffic flow prediction method based on neural network
CN112989708B (en) Well logging lithology identification method and system based on LSTM neural network
Huang et al. An integrated neural-fuzzy-genetic-algorithm using hyper-surface membership functions to predict permeability in petroleum reservoirs
CN107463966A (en) Radar range profile's target identification method based on dual-depth neutral net
CN108596327A (en) A kind of seismic velocity spectrum artificial intelligence pick-up method based on deep learning
CN110807544B (en) Oil field residual oil saturation distribution prediction method based on machine learning
CN111058840A (en) Organic carbon content (TOC) evaluation method based on high-order neural network
CN114723095A (en) Missing well logging curve prediction method and device
Du et al. Reconstruction of three-dimensional porous media using deep transfer learning
CN114548591A (en) Time sequence data prediction method and system based on hybrid deep learning model and Stacking
CN115185937A (en) SA-GAN architecture-based time sequence anomaly detection method
CN114091333A (en) Shale gas content artificial intelligence prediction method based on machine learning
CN114219139B (en) DWT-LSTM power load prediction method based on attention mechanism
CN111382840A (en) HTM design method based on cyclic learning unit and oriented to natural language processing
Verma et al. Quantification of sand fraction from seismic attributes using Neuro-Fuzzy approach
Deng et al. A hybrid machine learning optimization algorithm for multivariable pore pressure prediction
Dai et al. Nonlinear inversion for electrical resistivity tomography based on chaotic DE-BP algorithm
Asoodeh et al. NMR parameters determination through ACE committee machine with genetic implanted fuzzy logic and genetic implanted neural network
García Benítez et al. Neural networks for defining spatial variation of rock properties in sparsely instrumented media

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination