CN114065909A - Well logging curve completion method based on CNN_AB_Bi-LSTM - Google Patents

Well logging curve completion method based on CNN_AB_Bi-LSTM

Info

Publication number
CN114065909A
Authority
CN
China
Prior art keywords
data, lstm, cnn, well, model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111190971.3A
Other languages
Chinese (zh)
Inventor
潘少伟
牟昱辉
罗海宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Shiyou University
Original Assignee
Xian Shiyou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Shiyou University filed Critical Xian Shiyou University
Priority to CN202111190971.3A priority Critical patent/CN114065909A/en
Publication of CN114065909A publication Critical patent/CN114065909A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • EFIXED CONSTRUCTIONS
    • E21EARTH OR ROCK DRILLING; MINING
    • E21BEARTH OR ROCK DRILLING; OBTAINING OIL, GAS, WATER, SOLUBLE OR MELTABLE MATERIALS OR A SLURRY OF MINERALS FROM WELLS
    • E21B47/00Survey of boreholes or wells
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • EFIXED CONSTRUCTIONS
    • E21EARTH OR ROCK DRILLING; MINING
    • E21BEARTH OR ROCK DRILLING; OBTAINING OIL, GAS, WATER, SOLUBLE OR MELTABLE MATERIALS OR A SLURRY OF MINERALS FROM WELLS
    • E21B2200/00Special features related to earth drilling for obtaining oil, gas or water
    • E21B2200/22Fuzzy logic, artificial intelligence, neural networks or the like

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Geology (AREA)
  • Mining & Mineral Resources (AREA)
  • Geophysics (AREA)
  • Environmental & Geological Engineering (AREA)
  • Fluid Mechanics (AREA)
  • General Life Sciences & Earth Sciences (AREA)
  • Geochemistry & Mineralogy (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention provides a well log completion method based on CNN_AB_Bi-LSTM, which comprises the following steps: step 101: establishing the CNN_AB_Bi-LSTM model; step 102: inputting the training set to train the model; step 103: inputting the test set to complete the well log. The CNN_AB_Bi-LSTM-based method can be applied to geological tasks that require completing any well log parameter; it makes full use of the time-series character of well log data and remedies the shortcomings of existing log completion methods, so that good prediction results can be obtained.

Description

Well logging curve completion method based on CNN_AB_Bi-LSTM
Technical Field
The invention belongs to the technical field of well log completion in petroleum geology, and particularly relates to a well log completion method based on CNN_AB_Bi-LSTM.
Background
A well log is a curve produced by geophysical logging that indirectly and conditionally reflects some aspect of the geological properties of a formation. Geologists use well logs to explore subsurface hydrocarbon reservoirs, evaluate reservoir characteristics and locate oil and gas resources; a well log can be regarded as a "beacon" marking the distribution of subsurface hydrocarbon resources, and by studying and analysing well logs geologists deepen their understanding of subsurface reservoirs.
Although well log data play an important role in petroleum geology research, complete and valid log data are not easy to obtain. Because of instrument failure, borehole collapse, borehole enlargement and similar problems, part of the log data is lost or distorted in actual operations. Re-logging consumes large amounts of manpower and money, and for wells that have already been cemented and cased, re-measurement is difficult in practical engineering. Researchers have therefore tried to complete the missing intervals by artificially generating the missing log data from the existing log data.
There are four main approaches to generating well logs artificially. Early researchers inverted log data with physical models: the actual subsurface rock is idealised under certain assumptions and a general theoretical relationship is established by physical methods. However, this greatly simplifies the real geological conditions and the choice of model is strongly subjective, so the approach performs poorly on log completion tasks. Researchers have also regenerated logs with cross-plots or multiple regression. Because geological bodies are often strongly heterogeneous, the different logs recorded by logging tools have strongly nonlinear and complex mapping relationships, so cross-plots and multiple regression do not give good completion results either. In recent years, machine learning methods have been widely used in petroleum geology. For example, artificial neural networks have been used to generate well logs, but such networks are fully connected and essentially construct point-to-point mappings: the regenerated log value depends only on the log data at the same depth, ignoring the correlation between successive samples of the same well and the trend of the log with depth, so fully connected networks have clear limitations for log completion. LSTM networks from deep learning have also been used to complete well logs. The LSTM exploits the time-series character of log data; it improves on the RNN by introducing a gate mechanism, which alleviates the vanishing-gradient and exploding-gradient problems to some extent. With the sequence length of each data group set to 100, the log completion task must handle long input sequences, but an LSTM loses important information when the input sequence is too long, and it can only read the sequence in one direction.
Judged against the existing methods, the deep-learning LSTM model handles the log completion problem comparatively well, but a single LSTM model still has limitations: first, important information may be lost when the input sequence is too long; second, the mutual influence between preceding and following input samples is not considered, so the completed well log does not reach the desired quality.
Disclosure of Invention
To better solve the well log completion problem, the invention provides a well log completion method based on CNN_AB_Bi-LSTM, which comprises the following steps:
step 101: establishing a CNN _ AB _ Bi-LSTM model;
step 102: inputting the training set to train the model;
step 103: inputting the test set to complete the well log.
Further, the specific process of step 102, inputting the training set to train the model, is as follows:
step 201: inputting the training sequence data;
step 202: using the CNN to extract features from the input long-sequence data, capturing the local features of the log data;
step 203: feeding the resulting data into the pooling layer to extract the salient features of the different convolution feature maps;
step 204: using the well-depth-dimension attention mechanism to assign different weights to the well depths of the sequence data;
step 205: passing the data to the first Bi-LSTM network, whose forward and backward LSTM layers fully account for the influence of the neighbouring data points before and after each data point in the log data;
step 206: passing the data to the second Bi-LSTM network, whose forward and backward LSTM layers again account for this influence;
step 207: producing the prediction through the regression prediction layer.
Further, the specific process of step 103, inputting the test set to complete the well log, is as follows:
step 301: dividing the data set into a training set, a validation set and a test set;
step 302: after the model has been trained, inputting the test data into the model and using MAE and MSE as evaluation indices; once the model's error in fitting the test set meets the requirement, inputting the interval data of the well log to be completed into the qualifying model yields the completed well log.
Further, the CNN_AB_Bi-LSTM model mainly comprises: a CNN convolutional layer, a CNN pooling layer, a well-depth-dimension attention layer, Bi-LSTM layers and a regression prediction layer.
The advantage of the invention is that the well log completion method based on the CNN_AB_Bi-LSTM model can be applied to the task of completing any well log parameter; it makes full use of the characteristics of well log data and remedies the shortcomings of existing log completion methods, so that good prediction results can be obtained.
The present invention will be described in detail below with reference to the drawings, tables, and examples.
Drawings
FIG. 1 is a flow chart of the well log completion method based on CNN_AB_Bi-LSTM.
FIG. 2 is a model diagram of CNN_AB_Bi-LSTM.
FIG. 3 is a diagram showing effects of embodiment 1.
Detailed Description
To further explain the technical means and effects of the present invention adopted to achieve the predetermined purpose, the following detailed description is given to the specific embodiments and the effects of the structural features of the present invention with reference to the accompanying drawings and the embodiments.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it is to be understood that the terms "center", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "aligned", "overlapping", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on those shown in the drawings, and are used only for convenience of description and simplicity of description, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed in a particular orientation, and be operated, and thus, are not to be construed as limiting the present invention.
The terms "first", "second" and "first" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature; in the description of the present invention, "a plurality" means two or more unless otherwise specified.
Acronym interpretation
CNN: Convolutional Neural Network;
LSTM: Long Short-Term Memory neural network;
RNN: Recurrent Neural Network;
Bi-LSTM: Bidirectional Long Short-Term Memory neural network;
CNN_AB_Bi-LSTM: attention-based convolutional neural network combined with a bidirectional long short-term memory network;
AB_Bi-LSTM: Attention-Based Bidirectional Long Short-Term Memory, a bidirectional long short-term memory network with an attention mechanism;
CNN_Bi-LSTM: convolutional neural network combined with a bidirectional long short-term memory network;
CNN_AB_LSTM: attention-based convolutional neural network combined with a long short-term memory network;
BP: Back Propagation neural network;
MAE: Mean Absolute Error;
MSE: Mean Squared Error;
ReLU: Rectified Linear Unit activation function.
Example 1
To better solve the well log completion problem, this embodiment provides a well log completion method based on CNN_AB_Bi-LSTM as shown in FIG. 1 and FIG. 2, which comprises the following steps:
step 101: establishing a CNN _ AB _ Bi-LSTM model;
step 102: inputting the training set to train the model;
step 103: inputting the test set to complete the well log.
Further, the specific process of step 102, inputting the training set to train the model, is as follows:
step 201: inputting the training sequence data;
step 202: using the CNN to extract features from the input long-sequence data, capturing the local features of the log data;
step 203: feeding the resulting data into the pooling layer to extract the salient features of the different convolution feature maps;
step 204: using the well-depth-dimension attention mechanism to assign different weights to the well depths of the sequence data;
step 205: passing the data to the first Bi-LSTM network, whose forward and backward LSTM layers fully account for the influence of the neighbouring data points before and after each data point in the log data;
step 206: passing the data to the second Bi-LSTM network, whose forward and backward LSTM layers again account for this influence;
step 207: producing the prediction through the regression prediction layer.
Further, the specific process of step 103, inputting the test set to complete the well log, is as follows:
step 301: dividing the data set into a training set, a validation set and a test set;
step 302: after the model has been trained, inputting the test data into the model and using MAE and MSE as evaluation indices; once the model's error in fitting the test set meets the requirement, inputting the interval data of the well log to be completed into the qualifying model yields the completed well log.
Further, the CNN_AB_Bi-LSTM model mainly comprises: a CNN convolutional layer, a CNN pooling layer, a well-depth-dimension attention layer, Bi-LSTM layers and a regression prediction layer.
Example 2
The experimental data of this embodiment come from a fault block in an oil field in southern China and comprise the acoustic transit time (AC), spontaneous potential (SP), natural gamma ray (GR), deep induction resistivity (RILD), medium induction resistivity (RILM) and shallow lateral resistivity (RS) logs. The SP curve of one well is assumed to be the log to be completed, and the remaining complete data of the well to be completed and of its adjacent wells are used as training data, giving 14,355 groups of training data in total. Each group has a sequence length of 100, and the batch size (batch_size) during training is 256, i.e. 256 groups of data are drawn for each training step, so the training-set data format is [256, 100, 5]. The experimental model comprises one convolutional layer, one pooling layer, a well-depth-dimension attention layer, two bidirectional long short-term memory layers and one fully connected layer. The hidden state of the convolutional layer is 64-dimensional, and the hidden states of the two bidirectional long short-term memory layers are 128- and 64-dimensional, respectively. The model uses dropout with a probability of 20% to avoid overfitting; the number of iterations is 200, and the learning rate decays as training proceeds, decreasing by 90% every 50 iterations.
Taking the SP log of well Z2-15 as an example, the training data consist of the remaining log data of the well to be completed (Z2-15) and of its adjacent wells (Z2-9, Z2-10, Z2-11 and Z2-12). Several models with the same hyperparameter settings are compared experimentally: the CNN_AB_Bi-LSTM, CNN_Bi-LSTM, AB_Bi-LSTM, CNN_AB_LSTM, Bi-LSTM, LSTM and BP models. This embodiment uses MAE and MSE as evaluation indices for the completion effect: MAE is the mean of the absolute errors between the true and predicted values and reflects the actual magnitude of the prediction error, while MSE expresses the deviation between the predicted and true values. They are calculated as follows:
$$\mathrm{MAE}=\frac{1}{n}\sum_{i=1}^{n}\left|y_i-\hat{y}_i\right|$$

$$\mathrm{MSE}=\frac{1}{n}\sum_{i=1}^{n}\left(y_i-\hat{y}_i\right)^2$$

In the above formulas, $y_i$ is the predicted value, $\hat{y}_i$ is the actual value and $n$ is the number of samples. The smaller the MAE and MSE values, the better the model's completion of the well log.
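The two metrics can be computed directly; the following is only a minimal Python/NumPy sketch (the patent names no implementation, so the function names are illustrative):

```python
import numpy as np

def mae(y_pred, y_true):
    # Mean Absolute Error between predicted and measured log values
    return np.mean(np.abs(np.asarray(y_pred) - np.asarray(y_true)))

def mse(y_pred, y_true):
    # Mean Squared Error between predicted and measured log values
    return np.mean((np.asarray(y_pred) - np.asarray(y_true)) ** 2)

# Example: comparing 443 completed SP values against the measured SP curve
# sp_pred = model.predict(test_windows).ravel()
# print(mae(sp_pred, sp_true), mse(sp_pred, sp_true))
```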
The specific procedure for completing the SP log of well Z2-15 with the CNN_AB_Bi-LSTM-based well log completion method is as follows:
step 101: establishing the CNN_AB_Bi-LSTM model;
step 102: inputting the training set to train the model;
step 103: inputting the test set to complete the well log.
Further, step 101: the CNN_AB_Bi-LSTM network is built as a hierarchically stacked structure. The first layer is the CNN layer, consisting of a one-dimensional convolutional layer and a one-dimensional max-pooling layer: the convolutional layer has 64 convolution kernels of size 3, the activation function is ReLU and the padding mode is SAME; the max-pooling layer has a pooling window of size 5 and also uses SAME padding. The second layer is the well-depth-dimension attention layer. The third layer comprises two Bi-LSTM layers, the first with 128 hidden nodes and the second with 64. The fourth layer is a fully connected layer with one node. The whole model uses dropout with a probability of 20% to avoid overfitting.
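The patent does not name an implementation framework, so the following is only a minimal sketch of the stacked structure described above, written in Python with TensorFlow/Keras. The form of the depth-attention layer, the pooling stride and the initial learning rate are assumptions made for illustration, not details given in the patent.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN, N_FEATURES = 100, 5  # 100 depth samples per group, 5 input log curves

def depth_attention(x):
    # Illustrative stand-in for the well-depth-dimension attention layer:
    # score each depth step, softmax over the depth axis, reweight the sequence.
    scores = layers.Dense(1)(x)                       # (batch, depth, 1)
    weights = layers.Softmax(axis=1)(scores)          # attention weights over depth
    return layers.Lambda(lambda t: t[0] * t[1])([x, weights])

inputs = layers.Input(shape=(SEQ_LEN, N_FEATURES))
x = layers.Conv1D(64, kernel_size=3, padding="same", activation="relu")(inputs)
x = layers.MaxPooling1D(pool_size=5, strides=1, padding="same")(x)  # stride 1 is assumed
x = depth_attention(x)
x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)
x = layers.Dropout(0.2)(x)                            # 20% dropout as stated
x = layers.Bidirectional(layers.LSTM(64))(x)
x = layers.Dropout(0.2)(x)
outputs = layers.Dense(1)(x)                          # regression prediction layer

model = models.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),  # initial learning rate assumed
              loss="mse", metrics=["mae"])
model.summary()
```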
Further, the specific process of step 102, inputting the training set to train the model, is as follows:
Step 201: inputting the training sequence data. The training data come from five wells (Z2-9, Z2-10, Z2-11, Z2-12 and Z2-15) of a fault block in an oil field in southern China, and the log data of each well comprise six columns: acoustic transit time (AC), spontaneous potential (SP), natural gamma ray (GR), deep induction resistivity (RILD), medium induction resistivity (RILM) and shallow lateral resistivity (RS), with the SP curve being the log to be completed. The training data input to the model consist of the remaining log data of the intervals of the well to be completed (Z2-15) that do not need completion and the remaining log data of the adjacent wells (Z2-9, Z2-10, Z2-11 and Z2-12), giving 14,355 groups of training data in total. Each group has a sequence length of 100, the batch size (batch_size) during training is 256, i.e. 256 groups of data are drawn for each training step, and the training-set data format is [256, 100, 5].
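How the [256, 100, 5] groups are assembled from the raw curves is not spelled out in the patent; the following Python/NumPy fragment sketches one plausible windowing scheme in which each group is a 100-sample window of the five input curves and the label is the SP value at the window's last depth, an assumption made purely for illustration.

```python
import numpy as np

def make_windows(curves, target, seq_len=100):
    """Slide a seq_len-deep window over the input curves.

    curves: array of shape (n_depths, 5) with AC, GR, RILD, RILM, RS
    target: array of shape (n_depths,) with the SP values
    Returns X of shape (n_windows, seq_len, 5) and y of shape (n_windows,).
    Labelling each window with the SP value at its last depth is an assumption.
    """
    X, y = [], []
    for start in range(len(curves) - seq_len + 1):
        X.append(curves[start:start + seq_len])
        y.append(target[start + seq_len - 1])
    return np.asarray(X, dtype=np.float32), np.asarray(y, dtype=np.float32)

# Windows from the training wells would then be concatenated and fed to the
# model in batches of 256, giving batches of shape (256, 100, 5).
```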
Step 202: using the CNN to extract features from the input long-sequence data and capture the local features of the log data. Because the input sequences are long and carry a large amount of information, the CNN convolutional layer extracts the more important local features of the log data.
Step 203: feeding the resulting data into the pooling layer to extract the salient features of the different convolution feature maps, which further filters the information.
Step 204: using the well-depth-dimension attention mechanism to assign different weights to the well depths of the sequence data. The attention mechanism automatically analyses how important each historical depth point is to the depth point being predicted, selects the historical information of the key data points, increases the weight of the key depth points and helps the model make more accurate predictions.
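The patent does not give the exact form of the scoring function, so the following NumPy fragment only illustrates the idea of step 204: one score per depth step, a softmax over the depth axis, and a reweighted feature sequence in which key depths are emphasised. The dimensions and the random scoring weights are illustrative assumptions.

```python
import numpy as np

def softmax(z, axis=0):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# features: (100, 64) — one 64-dim feature vector per depth step after the CNN
features = np.random.randn(100, 64).astype(np.float32)
w = np.random.randn(64, 1).astype(np.float32)   # illustrative scoring weights

scores = features @ w                           # (100, 1): one score per depth step
alpha = softmax(scores, axis=0)                 # weights over the depth axis, sum to 1
weighted = features * alpha                     # key depth points are emphasised
```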
Step 205: the data reach the first Bi-LSTM network, whose forward and backward LSTM layers fully account for the influence of the neighbouring data points before and after each data point in the log data.
Step 206: the data reach the second Bi-LSTM network, whose forward and backward LSTM layers again account for this influence.
Step 207: the prediction is produced by the regression prediction layer.
Further, steps 201 to 207 above constitute one training iteration; the number of iterations in this experiment is set to 200, i.e. the steps are repeated 200 times. The internal parameters of the model are trained with the Adam algorithm and a dynamically adjusted learning rate: the learning rate decays as training proceeds, decreasing by 90% every 50 iterations.
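A hedged sketch of this training schedule with TensorFlow/Keras, reusing the `model` sketched under step 101. `X_train`, `y_train`, `X_val` and `y_val` stand for the windowed training and validation data and are assumptions of this sketch.

```python
import tensorflow as tf

def step_decay(epoch, lr):
    # Reduce the learning rate by 90% every 50 epochs (iterations of steps 201-207).
    return lr * 0.1 if epoch > 0 and epoch % 50 == 0 else lr

history = model.fit(
    X_train, y_train,                    # windows of shape (N, 100, 5) and SP targets
    validation_data=(X_val, y_val),
    epochs=200,
    batch_size=256,
    callbacks=[tf.keras.callbacks.LearningRateScheduler(step_decay)],
)
```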
Further, the specific process of step 103, inputting the test set to complete the well log, is as follows:
Step 301: the data set is divided into a training set, a validation set and a test set. The test set is the data of the interval of the well to be completed (Z2-15), specifically the AC, GR, RILD, RILM, RS and SP curves of well Z2-15. The model input, i.e. the AC, GR, RILD, RILM and RS curves of well Z2-15, yields 443 groups of test data, and the data format of the test set is [256, 100, 5].
Step 302: after the model has been trained, the test data are input into the model and 443 values are predicted; these are the SP curve data produced by the model. The experiment uses MAE and MSE as evaluation indices, computing them from the 443 completed SP values and the 443 measured SP values of well Z2-15. Once the model's error in fitting the test set meets the requirement, inputting the interval data of the well log to be completed into the qualifying model yields the completed well log.
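A hedged sketch of step 302, reusing `model` from step 101 and the `mae`/`mse` helpers sketched above. `X_test` and `sp_true` stand for the 443 windowed test groups of well Z2-15 and the corresponding measured SP values, and are assumptions of this sketch.

```python
# X_test: (443, 100, 5) windows built from the AC, GR, RILD, RILM and RS curves
# of the interval of well Z2-15 to be completed; sp_true: the 443 measured SP values.
sp_pred = model.predict(X_test, batch_size=256).ravel()

print("MAE:", mae(sp_pred, sp_true))
print("MSE:", mse(sp_pred, sp_true))
# If the errors meet the requirement, sp_pred is taken as the completed SP curve
# for the missing interval.
```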
The CNN extracts features from the input long-sequence data, capturing the more important local features of the log data; here the CNN mainly consists of a convolutional layer and a pooling layer.
An attention mechanism is then introduced to assign different weights to the important features extracted by the CNN, giving more attention to the important attributes. The input data have the format (batch_size, time_steps, input_dim); leaving batch_size aside, time_steps is the depth dimension of the sequence data. With time_steps set to 100, each sequence sample contains 100 data points corresponding to different well depths on the log. The well-depth-dimension attention mechanism automatically selects the historical information of the key data points, increases the weight of the key depth points and helps the model make more accurate predictions.
These two measures mitigate, to a certain extent, the loss of important information caused by overly long inputs. For the problem that an LSTM can only read the sequence in one direction and does not fully consider the influence of data in the opposite direction, this embodiment introduces the Bi-LSTM model, which has two LSTM hidden layers: the forward LSTM layer is responsible for forward feature extraction and the backward LSTM layer for backward feature extraction. The Bi-LSTM model therefore better accounts for the influence of the neighbouring data points before and after each data point in the sequence.
In the application to completing the SP curve of well Z2-15, the SP log of well Z2-15 is also completed with the other models besides the CNN_AB_Bi-LSTM model proposed in this embodiment; the results are shown in FIG. 3, where (a), (b), (c), (d), (e), (f) and (g) are the SP log completion results of the CNN_AB_Bi-LSTM, CNN_Bi-LSTM, AB_Bi-LSTM, CNN_AB_LSTM, Bi-LSTM, LSTM and BP models, respectively; the dark curve is the measured data and the light curve is the predicted data.
Further, the evaluation results of the SP log completion for well Z2-15 by the different models are listed in Table 1. Comparing the MAE and MSE values in Table 1 shows that those of the CNN_AB_Bi-LSTM model are clearly smaller than those of the other models, i.e. its log completion results are clearly better. The method provided in this embodiment can therefore be applied well to the log completion task.
TABLE 1. MAE and MSE for SP log completion of well Z2-15

Model            MAE     MSE
CNN_AB_Bi-LSTM   5.028   38.824
CNN_Bi-LSTM      6.510   58.280
AB_Bi-LSTM       7.066   89.017
CNN_AB_LSTM      5.366   46.425
Bi-LSTM          6.062   53.483
LSTM             6.677   69.023
BP               7.069   74.966
In summary, the well log completion method based on the CNN_AB_Bi-LSTM model can be applied to geological work that requires completing any well log parameter; the method makes full use of the time-series character of well log data and remedies the shortcomings of existing log completion methods, so that good prediction results can be obtained.
The foregoing is a more detailed description of the invention in connection with specific preferred embodiments and it is not intended that the invention be limited to these specific details. For those skilled in the art to which the invention pertains, several simple deductions or substitutions can be made without departing from the spirit of the invention, and all shall be considered as belonging to the protection scope of the invention.

Claims (4)

1. A well log completion method based on CNN_AB_Bi-LSTM, characterized by comprising the following steps:
step 101: establishing a CNN_AB_Bi-LSTM model;
step 102: inputting a training set to train the model;
step 103: inputting a test set to complete the well log.
2. The well log completion method based on CNN_AB_Bi-LSTM as claimed in claim 1, characterized in that the specific process of step 102, inputting the training set to train the model, is as follows:
step 201: inputting the training sequence data;
step 202: using the CNN to extract features from the input long-sequence data, capturing the local features of the log data;
step 203: feeding the resulting data into the pooling layer to extract the salient features of the different convolution feature maps;
step 204: using the well-depth-dimension attention mechanism to assign different weights to the well depths of the sequence data;
step 205: passing the data to the first Bi-LSTM network, whose forward and backward LSTM layers account for the influence of the neighbouring data points before and after each data point in the log data;
step 206: passing the data to the second Bi-LSTM network, whose forward and backward LSTM layers again account for this influence;
step 207: producing the prediction through the regression prediction layer.
3. The well log completion method based on CNN_AB_Bi-LSTM as claimed in claim 1, characterized in that the specific process of step 103, inputting the test set to complete the well log, is as follows:
step 301: dividing the data set into a training set, a validation set and a test set;
step 302: after the model has been trained, inputting the test data into the model and using MAE and MSE as evaluation indices; once the model's error in fitting the test set meets the requirement, inputting the interval data of the well log to be completed into the qualifying model yields the completed well log.
4. The CNN_AB_Bi-LSTM-based well log completion method as claimed in claim 3, characterized in that the CNN_AB_Bi-LSTM model mainly comprises: a CNN convolutional layer, a CNN pooling layer, a well-depth-dimension attention layer, Bi-LSTM layers and a regression prediction layer.
CN202111190971.3A 2021-10-13 2021-10-13 Well logging curve completion method based on CNN_AB_Bi-LSTM Pending CN114065909A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111190971.3A CN114065909A (en) 2021-10-13 2021-10-13 Well logging curve completion method based on CNN_AB_Bi-LSTM

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111190971.3A CN114065909A (en) 2021-10-13 2021-10-13 Well logging curve completion method based on CNN_AB_Bi-LSTM

Publications (1)

Publication Number Publication Date
CN114065909A true CN114065909A (en) 2022-02-18

Family

ID=80234655

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111190971.3A Pending CN114065909A (en) 2021-10-13 2021-10-13 Well logging curve completion method based on CNN_AB_Bi-LSTM

Country Status (1)

Country Link
CN (1) CN114065909A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116108368A (en) * 2023-02-21 2023-05-12 北京金阳普泰石油技术股份有限公司 Deposition microphase identification method and device based on deep learning mixed model
CN116108368B (en) * 2023-02-21 2023-09-01 北京金阳普泰石油技术股份有限公司 Deposition microphase identification method and device based on deep learning mixed model
CN116259168A (en) * 2023-05-16 2023-06-13 陕西天成石油科技有限公司 Alarm method and device for oilfield logging
CN116259168B (en) * 2023-05-16 2023-07-21 陕西天成石油科技有限公司 Alarm method and device for oilfield logging

Similar Documents

Publication Publication Date Title
CN113901681B (en) A three-dimensional compressibility evaluation method for double sweet spots in shale gas reservoirs with full life cycle
CN112901137B (en) Deep well drilling ROP prediction method based on deep neural network Sequential model
CN108573320B (en) Calculation method and system for ultimate recoverable reserves of shale gas reservoirs
US20220170366A1 (en) 3d in-situ characterization method for heterogeneity in generating and reserving performances of shale
CN105488583B (en) Method and device for predicting recoverable reserve of dense oil in region to be evaluated
CN112444841B (en) A seismic prediction method for lithology with thin layers based on subscale multi-input convolutional network
CN115049173B (en) Deep learning and Eaton method coupled to drive formation pore pressure prediction method
CN114065909A (en) Well logging curve completion method based on CNN_AB_Bi-LSTM
CN111027882A (en) A method for evaluating brittleness index using conventional logging data based on high-order neural network
CN114358434A (en) Prediction method of drilling machine ROP based on LSTM recurrent neural network model
CN110988997A (en) Hydrocarbon source rock three-dimensional space distribution quantitative prediction technology based on machine learning
CN117272841A (en) Shale gas dessert prediction method based on hybrid neural network
CN112100930A (en) Formation pore pressure calculation method based on convolutional neural network and Eaton formula
CN110671092A (en) Oil gas productivity detection method and system
CN112578475B (en) Data mining-based dense reservoir dual dessert identification method
CN117150875A (en) Pre-drilling logging curve prediction method based on deep learning
CN118428406A (en) A formation pore pressure logging prediction method based on LSTM-PINN method
CN114114414A (en) Artificial intelligence prediction method for 'dessert' information of shale reservoir
CN117633658B (en) Rock reservoir lithology identification method and system
CN117910358A (en) Lithology category and rock stratum thickness prediction method and system of drilling and electronic equipment
CN117251802A (en) Heterogeneous reservoir parameter prediction method and system based on transfer learning
CN117345208B (en) Quantitative characterization method and device for fracturing advantage area, electronic equipment and medium
CN117235628B (en) Well logging curve prediction method and system based on hybrid Bayesian deep network
CN115293462B (en) A method for predicting the size range of missing channels based on deep learning
CN114966838B (en) Seismic intelligent inversion method and system driven by geological modeling and seismic forward modeling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination