CN117590471A - Intelligent identification method for distinguishing bright crystals of shale - Google Patents

Intelligent identification method for distinguishing bright crystals of shale

Info

Publication number
CN117590471A
Authority
CN
China
Prior art keywords
bright
bright crystal
layer
logging
discriminator
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311443563.3A
Other languages
Chinese (zh)
Inventor
洪玉真
邓少贵
李志君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Petroleum East China
Original Assignee
China University of Petroleum East China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Petroleum East China filed Critical China University of Petroleum East China
Priority to CN202311443563.3A
Publication of CN117590471A
Legal status: Pending

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01V GEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V 1/00 Seismology; Seismic or acoustic prospecting or detecting
    • G01V 1/40 Seismology; Seismic or acoustic prospecting or detecting specially adapted for well-logging
    • G01V 1/28 Processing seismic data, e.g. analysis, for interpretation, for correction
    • G01V 1/282 Application of seismic models, synthetic seismograms
    • G01V 1/30 Analysis
    • G01V 1/307 Analysis for determining seismic attributes, e.g. amplitude, instantaneous phase or frequency, reflection strength or polarity
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/30 Assessment of water resources

Abstract

The invention discloses an intelligent identification method for discriminating bright crystals in shale, and relates to the technical field of geophysical well logging. Based on the mineral composition and structural characteristics of the shale reservoir, bright crystals are first manually calibrated on the logging data of a target well section; the logging data are then preprocessed, the logging parameters are analyzed by Pearson correlation analysis, redundant curves are removed, and a sample library is constructed. Samples are drawn from the sample library to build a training set and a verification set, and the bright crystal samples in the training set are expanded with the SMOTE method. A bright crystal discriminator is constructed from a CNN-BiLSTM-Attention combined model based on an attention mechanism, trained with the training set and verified with the verification set; once qualified, the discriminator is used to identify bright crystals in the well section to be identified. The method solves the problem that bright crystals in complex shale reservoirs are difficult to identify accurately, improves the shale bright crystal recognition rate, and benefits the exploration and development of complex shale reservoirs.

Description

Intelligent identification method for distinguishing bright crystals of shale
Technical Field
The invention relates to the technical field of geophysical well logging, in particular to an intelligent identification method for distinguishing bright crystals of shale.
Background
Bright crystal discrimination is an important basic task in petroleum exploration and development, and provides favorable evidence, from a new geological perspective, for targeted layer selection in shale oil exploration.
At present, common bright crystal discrimination methods include core description, plate methods and statistical analysis. These methods, however, are hampered by complex geological conditions: shale reservoirs show strong heterogeneity, rapid sequence change and high complexity, so the bright crystal structure maps only weakly onto the logging response and the available sample size is small, which poses great challenges for bright crystal discrimination. Core description, although detailed, is time-consuming and labor-intensive and yields few samples; the conventional plate method, although simple and intuitive, cannot discriminate bright crystals accurately because their logging response in shale reservoirs is not distinct; and statistical analysis, while able to identify some linearly separable bright crystals, has limitations when applied to complex reservoirs.
It is thus difficult to achieve fine discrimination of the bright crystal structure with the existing conventional methods. An intelligent recognition method for discriminating shale bright crystals is therefore needed that overcomes the difficulties of small sample size, difficult nonlinear feature extraction and rapid sequence change, and achieves accurate identification of bright crystals in a complex shale reservoir environment.
Disclosure of Invention
To address the difficulty of accurately discriminating bright crystals in complex shale reservoirs, and the limitation that bright crystal recognition training is constrained by the small number of bright crystal samples and the imbalance between sample classes, the invention provides an intelligent recognition method for discriminating shale bright crystals.
The invention adopts the following technical scheme:
an intelligent identification method for distinguishing bright crystals of shale, which specifically comprises the following steps:
step 1, acquiring a logging curve of a target well section in a shale reservoir, obtaining logging data of the target well section, analyzing mineral composition and structural characteristics in the shale reservoir by combining core data, manually determining a bright crystal layer section and a non-bright crystal layer section on the logging curve, and manually calibrating a label on the logging curve;
step 2, preprocessing the logging data of the target well section, performing Pearson correlation analysis on each logging parameter, screening out redundant curves in the logging curves of the target well section according to the Pearson correlation coefficients of the logging parameters, and constructing a sample library comprising a bright crystal sample set and a non-bright crystal sample set by combining the bright crystal labels and non-bright crystal labels manually calibrated on the logging curves of the target well section;
step 3, randomly extracting bright crystal samples and non-bright crystal samples from a sample library to serve as training samples and verification samples, forming a training set and a verification set, expanding the bright crystal samples in the training set based on a smote method, and enabling the number of the bright crystal samples in the training set to be equal to the number of the non-bright crystal samples;
step 4, constructing a bright crystal discriminator based on a CNN-BiLSTM-Attention combination model of an Attention mechanism;
step 5, training the bright crystal discriminator constructed by the CNN-BiLSTM-Attention combination model based on the Attention mechanism by using a training set to obtain a trained bright crystal discriminator;
step 6, verifying the bright crystal discriminator by using the verification set to obtain a verified bright crystal discriminator;
and 7, carrying out bright crystal identification on the well section to be identified by using the verified bright crystal discriminator, and inputting logging data extracted from the well section logging curve to be identified into the bright crystal discriminator according to a depth sequence to obtain a bright crystal identification result of the well section to be identified.
Preferably, the logging curves include a natural gamma curve, a natural potential curve, a sonic time difference curve, a density curve, a compensated neutron curve, and a resistivity curve.
Preferably, in the step 2, the method specifically includes the following steps:
step 2.1, preprocessing logging data of a target well section, wherein the preprocessing comprises depth correction and denoising;
step 2.2, carrying out Pearson correlation analysis on each logging parameter, wherein the Pearson correlation coefficient of two logging parameters is calculated as formula (1):

$$\rho_{X,Y}=\frac{\sum_{i=1}^{n}\left(X_i-\bar{X}\right)\left(Y_i-\bar{Y}\right)}{\sqrt{\sum_{i=1}^{n}\left(X_i-\bar{X}\right)^{2}}\sqrt{\sum_{i=1}^{n}\left(Y_i-\bar{Y}\right)^{2}}}\tag{1}$$

wherein $\rho_{X,Y}$ is the Pearson correlation coefficient, $X$ and $Y$ are logging parameters, $i$ is a serial number, $n$ is the total number of measurement points, $X_i$ is the $i$-th measurement of logging parameter $X$, $\bar{X}$ is the average value of logging parameter $X$, $Y_i$ is the $i$-th measurement of logging parameter $Y$, and $\bar{Y}$ is the average value of logging parameter $Y$;

when the Pearson correlation coefficient $\rho_{X,Y}$ is less than 0, there is a negative correlation between logging parameters $X$ and $Y$; when $\rho_{X,Y}$ is greater than 0, there is a positive correlation between $X$ and $Y$; and the closer $\rho_{X,Y}$ is to 0, the weaker the correlation between $X$ and $Y$;
step 2.3, screening the logging curves of the target well section according to the Pearson correlation coefficient $\rho_{X,Y}$: logging curves with $\left|\rho_{X,Y}\right|\geq 0.9$ are treated as redundant curves and removed from the logging curves of the target well section; then, using the logging data on the screened logging curves of the target well section and combining the bright crystal labels and non-bright crystal labels on the logging curves, all logging data located at the same depth point in a bright crystal layer section are taken as one bright crystal sample, a plurality of bright crystal samples are obtained and a bright crystal sample set is constructed, all logging data located at the same depth point in a non-bright crystal layer section are taken as one non-bright crystal sample, a plurality of non-bright crystal samples are obtained and a non-bright crystal sample set is constructed, giving a sample library comprising the bright crystal sample set and the non-bright crystal sample set.
Preferably, the number ratio between training samples in the training set and verification samples in the verification set is 9:1.
Preferably, the bright crystal discriminator is constructed based on a CNN-BiLSTM-Attention combination model of an Attention mechanism, wherein the CNN-BiLSTM-Attention combination model of the Attention mechanism comprises a convolutional neural network model and a BiLSTM network model which are sequentially connected;
the convolutional neural network model based on the attention mechanism comprises CNN branches, attention mechanism branches and a multiplexing layer which are arranged in parallel;
the CNN branch is configured to extract feature vectors of an input sample, and includes a first convolution layer, a first activation layer, a second convolution layer, and a second activation layer, where the first convolution layer and the second convolution layer are both 3×1 convolution layers, the first convolution layer is provided with 32 convolution kernels and the second convolution layer with 64 convolution kernels, and the convolution operation of the first convolution layer and the second convolution layer is shown in formula (2):

$$h_i=\sum_{i=1}^{N\times N} w_i x_i + b_i \tag{2}$$

wherein $h_i$ is the feature vector obtained by the convolution operation, $x_i$ is the input value of the $i$-th element in the convolution layer, $w_i$ is the weight matrix value corresponding to the $i$-th element, $b_i$ is the bias value of the $i$-th element, and $N\times N$ is the size of the convolution kernel;
the first activation layer and the second activation layer both use the ReLU function as the activation function, as shown in formula (3):

$$y_i=\max\left(0,h_i\right)\tag{3}$$

wherein $y_i$ is the activation function value, and $\max(0,h_i)$ is a maximum function used to select the larger of 0 and the feature vector $h_i$ obtained by the convolution operation;
the attention mechanism branch is used for assisting the convolutional neural network to extract feature vectors and comprises a global average pooling layer, a first full-connection layer, a first activation layer, a second full-connection layer and a second activation layer, wherein the number of channels of the first full-connection layer is set to 16, the number of channels of the second full-connection layer is set to 64, a ReLU function is adopted as an activation function by the first activation layer, and a sigmoid function is adopted as an activation function by the second activation layer;
the attention mechanism branch adopts an SE module with a soft attention mechanism to assist the convolutional neural network in extracting feature vectors, adding a weight that represents the degree of importance so as to focus on specific channels, wherein the SE module comprises a Squeeze part, an Excitation part and a Reweight part;
the global average pooling layer of the attention mechanism branch corresponds to the Squeeze part of the SE module; it performs a global aggregation operation on the feature vectors of the input sample, average-pooling the feature vector of each channel to obtain a global information vector, as shown in formula (4):

$$z_c=\frac{1}{W\times H}\sum_{I=1}^{W}\sum_{J=1}^{H}u_c\left(I,J\right)\tag{4}$$

wherein $z_c$ is the global information vector output by the global average pooling layer, $W$ is the maximum width of the input feature vector, $H$ is the maximum height of the input feature vector, $C$ is the number of channels of the global average pooling layer, and $u_c(I,J)$ is the input feature vector value at width $I$ and height $J$;
the first full-connection layer, the first activation layer, the second full-connection layer and the second activation layer of the attention mechanism branch correspond to the Excitation part of the SE module; the global information vector acquired by the global average pooling layer passes through the first full-connection layer, the first activation layer, the second full-connection layer and the second activation layer in sequence to generate a channel weight vector representing the importance of each channel, as shown in formula (5):

$$s_c=\sigma\left(W_2\,\delta\left(W_1 z_c\right)\right)\tag{5}$$

wherein $s_c$ is the channel weight vector, $\sigma(\cdot)$ is the sigmoid activation function, $\delta(\cdot)$ is the ReLU activation function, $W_1$ is the full-connection layer parameter used for compression, and $W_2$ is the full-connection layer parameter used to restore the dimension;
the second activation layer of the CNN branch and the second activation layer of the attention mechanism branch are both connected to the multiplexing layer; the multiplexing layer corresponds to the Reweight part of the SE module and is used to fuse the feature vector extracted by the CNN branch with the channel weight vector generated by the attention mechanism branch, multiplying the channel weight vector by the extracted feature vector to obtain feature-enhanced fused data, as shown in formula (6):

$$\tilde{x}_c=F_{scale}\left(u_c,s_c\right)=s_c\cdot u_c\tag{6}$$

wherein $\tilde{x}_c$ is the feature-enhanced fused data, $F_{scale}(\cdot)$ is the Reweight function, and $u_c$ is the input feature vector value;
the BiLSTM network model takes fusion data output by a convolutional neural network model based on an attention mechanism as input data, and is used for capturing and transmitting information of adjacent depth in the input data; the BiLSTM network model comprises a forward LSTM network for forward learning and a backward LSTM network for backward learning, wherein the forward LSTM network and the backward LSTM network comprise an input, a cell state, a temporary cell state, a hidden layer state, a forgetting gate, a memory gate and an output gate at the time t; the input data enter a BiLSTM network model and then sequentially enter a forward LSTM network and a backward LSTM network, the forward LSTM network processes the input data according to a forward sequence, the backward LSTM network processes the input data according to a reverse sequence, and after the hidden states at the current moment are calculated by the input data and the hidden states at the previous moment in the forward LSTM network and the backward LSTM network respectively, the hidden states of the forward LSTM network and the backward LSTM network are connected or combined to obtain the output of the BiLSTM network model;
forward propagation of the forward LSTM network is shown in equation (7):

$$\overrightarrow{h_t}=LSTM\left(x_t,\overrightarrow{h_{t-1}}\right)\tag{7}$$

wherein $x_t$ is the input at time $t$, $\overrightarrow{h_t}$ is the forward-propagated hidden state at time $t$, $\overrightarrow{h_{t-1}}$ is the forward-propagated hidden state at time $t-1$, and $LSTM(\cdot)$ is the long short-term memory neural network model;

backward propagation of the backward LSTM network is shown in equation (8):

$$\overleftarrow{h_t}=LSTM\left(x_t,\overleftarrow{h_{t+1}}\right)\tag{8}$$

wherein $\overleftarrow{h_t}$ is the backward-propagated hidden state at time $t$, and $\overleftarrow{h_{t+1}}$ is the backward-propagated hidden state at time $t+1$;

the output of the BiLSTM network model is shown in equation (9):

$$h_t=\left[\overrightarrow{h_t},\overleftarrow{h_t}\right]\tag{9}$$

wherein $h_t$ is the hidden layer state of the BiLSTM network model at time $t$.
Preferably, the bright crystal discriminator constructed from the CNN-BiLSTM-Attention combination model based on the attention mechanism is trained with the Adam gradient descent algorithm; during training the bright crystal discriminator adaptively adjusts the learning rate and updates the network weights, and a cross-entropy loss function is used as the loss function during training to evaluate the accuracy of the bright crystal discriminator in recognizing bright crystals, as shown in formula (10):

$$L(\theta)=-\frac{1}{m}\sum_{i=1}^{m}\left[y_i\log h\left(x_i\right)+\left(1-y_i\right)\log\left(1-h\left(x_i\right)\right)\right]\tag{10}$$

where $L(\cdot)$ is the minimization objective function, $m$ is the number of samples, $y$ is the true class label taking the value 0 or 1, $h(x)$ is the prediction probability of the bright crystal discriminator for sample $x$, and $\theta$ are the model parameters of the bright crystal discriminator.
Preferably, in the step 5, the method specifically includes the following steps:
step 5.1, setting the maximum training times and the initial learning rate of the bright crystal discriminator;
step 5.2, randomly selecting training samples from the training set, inputting them into the bright crystal discriminator, and using the bright crystal discriminator to identify whether each training sample is a bright crystal, wherein the bright crystal discriminator automatically marks a bright crystal label on a training sample identified as a bright crystal and a non-bright crystal label on a training sample identified as a non-bright crystal;
step 5.3, comparing the label marked by the bright crystal discriminator with the manual marking label in the training sample, if the label marked by the bright crystal discriminator is consistent with the manual marking label in the training sample, entering step 5.4, otherwise, adaptively adjusting the learning rate of the bright crystal discriminator and adjusting the network weight to update the bright crystal discriminator, and returning to step 5.2 to train the bright crystal discriminator continuously;
and 5.4, judging whether the current training count has reached the preset maximum number of training times; if it has, stopping the training of the bright crystal discriminator to obtain the trained bright crystal discriminator, otherwise returning to step 5.2 to continue training the bright crystal discriminator.
Preferably, the maximum training times are not less than 2000 times, the initial learning rate is set to be 0.001, and the loss function value of the bright crystal discriminator in the training process is gradually stabilized to be 0.2.
Preferably, in the step 6, the method specifically includes the following steps:
step 6.1, sequentially inputting the verification samples of the verification set into the bright crystal discriminator, using the bright crystal discriminator to identify whether each verification sample is a bright crystal, automatically marking a bright crystal label on a verification sample identified as a bright crystal and a non-bright crystal label on a verification sample identified as a non-bright crystal, and comparing the marked labels with the manually calibrated labels of the verification samples to determine whether the bright crystal identification results of the bright crystal discriminator are accurate;
and 6.2, after the bright crystal discriminator has completed bright crystal recognition of all verification samples, acquiring the accuracy, precision and recall of the bright crystal discriminator in the bright crystal recognition process and comparing them with the preset accuracy, precision and recall; if the accuracy, precision and recall of the bright crystal discriminator reach the preset values, proceeding to step 7, otherwise returning to step 5 to continue training the bright crystal discriminator.
The invention has the following beneficial effects:
the invention provides an intelligent recognition method for distinguishing bright crystals of shale, which comprises the steps of extracting sample data by carrying out Pearson correlation analysis on each logging parameter to remove redundant curves in logging curves, constructing a bright crystal discriminator based on a CNN-BiLSTM-Attention combined model of an Attention mechanism after expanding a bright crystal sample set in a sample library by matching with a smote method, and carrying out bright crystal recognition on a well section to be recognized by using the bright crystal discriminator after training the bright crystal discriminator and verifying the recognition accuracy of the bright crystal, so as to realize the accurate recognition of the bright crystal in a complex shale reservoir.
In the invention, the bright crystal sample data are expanded with the SMOTE method, which effectively alleviates the problems of the small number of shale bright crystal samples and the class imbalance in this field and provides data support for training the bright crystal discriminator. At the same time, the bright crystal discriminator is constructed on the basis of the CNN-BiLSTM-Attention combination model with an attention mechanism: the parallel attention-based convolutional neural network model performs feature extraction and recognition, effectively addressing the problem of nonlinear feature extraction, while the BiLSTM network model fully exploits the correlation along the depth sequence of the logging curves. Together these improve the shale bright crystal recognition rate, achieve accurate recognition of shale bright crystals, and lay a foundation for guiding the exploration and development of complex shale reservoirs.
Drawings
FIG. 1 is a flow chart of an intelligent identification method for discriminating bright crystals of shale.
Fig. 2 is a graph of Pearson correlation analysis results.
FIG. 3 is a schematic illustration of a bright crystal sample extended based on a smote method; in the figure, the graph (a) is a bright crystal sample set before expansion, and the graph (b) is a bright crystal sample set after expansion by a smote method.
Fig. 4 is a schematic structural diagram of a convolutional neural network model based on an attention mechanism.
Fig. 5 is a graph of the bright shale crystal discrimination effect of the X-well in the embodiment.
Fig. 6 is a graph of the effect of distinguishing the bright shale crystals of the Y well in the embodiment.
Detailed Description
The following uses the area of a certain shale reservoir as a research area, and the specific embodiment of the invention is further described with reference to the accompanying drawings:
the invention provides an intelligent identification method for distinguishing bright crystals of shale, which is shown in fig. 1 and specifically comprises the following steps:
Step 1, acquiring the logging curves of a target well section in the shale reservoir to obtain the logging data of the target well section, the logging curves including a natural gamma curve, a natural potential curve, a sonic time difference curve, a density curve, a compensated neutron curve and a resistivity curve; combining the core data, analyzing the mineral composition and structural characteristics of the shale reservoir by examining core samples and thin-section data together with the geological data of the research area; manually determining the bright crystal layer sections and non-bright crystal layer sections on the logging curves, and manually calibrating labels on the logging curves.
Step 2, preprocessing the logging data of the target well section, performing Pearson correlation analysis on the logging parameters, screening out redundant curves in the logging curves of the target well section according to the Pearson correlation coefficients of the logging parameters, and constructing a sample library comprising a bright crystal sample set and a non-bright crystal sample set by combining the manually calibrated bright crystal labels and non-bright crystal labels on the logging curves of the target well section, which specifically includes the following steps:
and 2.1, performing preprocessing such as depth correction and denoising on the logging data of the target well section to obtain preprocessed logging data.
And 2.2, carrying out Pearson correlation analysis on each logging parameter, obtaining Pearson correlation coefficients among the logging parameters as shown in fig. 2, and evaluating the correlation closeness among the logging parameters by using the Pearson correlation coefficients.
In this embodiment, the Pearson correlation coefficient of two logging parameters is calculated as formula (1):

$$\rho_{X,Y}=\frac{\sum_{i=1}^{n}\left(X_i-\bar{X}\right)\left(Y_i-\bar{Y}\right)}{\sqrt{\sum_{i=1}^{n}\left(X_i-\bar{X}\right)^{2}}\sqrt{\sum_{i=1}^{n}\left(Y_i-\bar{Y}\right)^{2}}}\tag{1}$$

wherein $\rho_{X,Y}$ is the Pearson correlation coefficient, $X$ and $Y$ are logging parameters, $i$ is a serial number, $n$ is the total number of measurement points, $X_i$ is the $i$-th measurement of logging parameter $X$, $\bar{X}$ is the average value of logging parameter $X$, $Y_i$ is the $i$-th measurement of logging parameter $Y$, and $\bar{Y}$ is the average value of logging parameter $Y$.

When the Pearson correlation coefficient $\rho_{X,Y}$ is less than 0, there is a negative correlation between logging parameters $X$ and $Y$; when $\rho_{X,Y}$ is greater than 0, there is a positive correlation between $X$ and $Y$; and the closer $\rho_{X,Y}$ is to 0, the weaker the correlation between $X$ and $Y$.
Step 2.3, screening the logging curves of the target well section according to the Pearson correlation coefficient $\rho_{X,Y}$: logging curves with $\left|\rho_{X,Y}\right|\geq 0.9$ are treated as redundant curves and removed from the logging curves of the target well section. Using the logging data on the screened logging curves of the target well section and combining the bright crystal labels and non-bright crystal labels on the logging curves, all logging data located at the same depth point in a bright crystal layer section are taken as one bright crystal sample, a plurality of bright crystal samples are obtained and a bright crystal sample set is constructed; all logging data located at the same depth point in a non-bright crystal layer section are taken as one non-bright crystal sample, a plurality of non-bright crystal samples are obtained and a non-bright crystal sample set is constructed, giving a sample library comprising the bright crystal sample set and the non-bright crystal sample set.
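For illustration, the redundancy screening of steps 2.2 and 2.3 can be sketched as follows. This is a minimal sketch assuming the preprocessed logging data are held in a pandas DataFrame with one row per depth point, one column per logging parameter and a manually calibrated "label" column (1 for bright crystal, 0 for non-bright crystal); the file name, column layout and the choice of which curve in a redundant pair to drop are assumptions, not part of the invention.

```python
import pandas as pd

def drop_redundant_curves(logs: pd.DataFrame, threshold: float = 0.9) -> pd.DataFrame:
    """Remove logging curves whose pairwise |Pearson rho| is >= threshold (formula (1))."""
    curves = logs.drop(columns=["label"])
    corr = curves.corr(method="pearson")          # Pearson correlation matrix of the logging parameters
    to_drop = set()
    cols = list(corr.columns)
    for i, x in enumerate(cols):
        for y in cols[i + 1:]:
            if x in to_drop or y in to_drop:
                continue
            if abs(corr.loc[x, y]) >= threshold:  # |rho_{X,Y}| >= 0.9: treat y as the redundant curve
                to_drop.add(y)
    return logs.drop(columns=sorted(to_drop))

# Usage sketch: build the bright / non-bright sample sets from the screened curves.
logs = pd.read_csv("target_interval_logs.csv")    # hypothetical file of depth-indexed, preprocessed logging data
screened = drop_redundant_curves(logs)
bright_samples = screened[screened["label"] == 1]
non_bright_samples = screened[screened["label"] == 0]
```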
And 3, randomly extracting bright crystal samples and non-bright crystal samples from a sample library to serve as training samples and verification samples, forming a training set and a verification set, wherein the number ratio between the training samples in the training set and the verification samples in the verification set is 9:1, expanding the bright crystal samples in the training set based on a smote method, so that the number of the bright crystal samples in the training set is equal to the number of the non-bright crystal samples, and balancing the bright crystal samples and the non-bright crystal samples in the training set.
In this embodiment, the process of expanding the bright crystal sample based on the smote method is shown in fig. 3, where the square block in fig. 3 represents the existing bright crystal sample, and the round dot represents the bright crystal sample expanded by the smote method.
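The SMOTE expansion and the 9:1 split of step 3 can be sketched with the imbalanced-learn implementation as follows; the array names continue from the previous sketch, and the random seeds are illustrative assumptions.

```python
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE

# Features and labels from the sample library built in step 2.3 (previous sketch).
X = screened.drop(columns=["label"]).to_numpy()
y = screened["label"].to_numpy()                 # 1 = bright crystal, 0 = non-bright crystal

# 9:1 split into training samples and verification samples.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.1, stratify=y, random_state=42)

# Synthesize bright crystal samples until both classes in the training set are equal in number.
smote = SMOTE(random_state=42)
X_train_bal, y_train_bal = smote.fit_resample(X_train, y_train)
```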
And 4, constructing a bright crystal discriminator based on a CNN-BiLSTM-Attention combination model of an Attention mechanism.
In this embodiment, the bright crystal discriminator is constructed based on a CNN-BiLSTM-Attention combination model of an Attention mechanism, where the CNN-BiLSTM-Attention combination model of an Attention mechanism includes a Convolutional Neural Network (CNN) model and a BiLSTM network model based on an Attention mechanism, and the convolutional neural network model and the BiLSTM network model based on an Attention mechanism are sequentially connected.
The convolutional neural network model based on the attention mechanism has excellent feature extraction and recognition capability, and comprises a CNN branch, an attention mechanism branch and a multiplexing layer arranged in parallel, as shown in Fig. 4.
The CNN branch is used to extract feature vectors of an input sample and comprises a first convolution layer, a first activation layer, a second convolution layer and a second activation layer, wherein the first convolution layer and the second convolution layer are both 3×1 convolution layers, the first convolution layer is provided with 32 convolution kernels and the second convolution layer with 64 convolution kernels. Because the bright crystal samples and non-bright crystal samples are one-dimensional data, one-dimensional convolution kernels are used to convolve the input sample, as shown in formula (2):

$$h_i=\sum_{i=1}^{N\times N} w_i x_i + b_i \tag{2}$$

wherein $h_i$ is the feature vector obtained by the convolution operation, $x_i$ is the input value of the $i$-th element in the convolution layer, $w_i$ is the weight matrix value corresponding to the $i$-th element, $b_i$ is the bias value of the $i$-th element, and $N\times N$ is the size of the convolution kernel;
In order to better address the nonlinear feature extraction of the input sample, the first activation layer and the second activation layer both use the ReLU function as the activation function, as shown in formula (3):

$$y_i=\max\left(0,h_i\right)\tag{3}$$

wherein $y_i$ is the activation function value, and $\max(0,h_i)$ is a maximum function used to select the larger of 0 and the feature vector $h_i$ obtained by the convolution operation.
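As an illustration only, the CNN branch described above might be written as follows; PyTorch, the padding choice and the input layout (one channel, one position per logging parameter) are assumptions, while the 3×1 kernels and the 32/64 kernel counts follow the text.

```python
import torch
import torch.nn as nn

class CNNBranch(nn.Module):
    """Feature-extraction branch: Conv1d(3x1, 32) -> ReLU -> Conv1d(3x1, 64) -> ReLU."""
    def __init__(self, in_channels: int = 1):
        super().__init__()
        self.conv1 = nn.Conv1d(in_channels, 32, kernel_size=3, padding=1)  # first convolution layer, 32 kernels
        self.conv2 = nn.Conv1d(32, 64, kernel_size=3, padding=1)           # second convolution layer, 64 kernels
        self.relu = nn.ReLU()                                               # formula (3): y_i = max(0, h_i)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_curves) -- each sample is the 1-D vector of curve values at one depth point
        x = self.relu(self.conv1(x))
        x = self.relu(self.conv2(x))
        return x                                                            # (batch, 64, n_curves)
```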
To improve the attention that the convolutional neural network pays to important features in the input samples, an attention mechanism branch is arranged in parallel to extract the salient features of the input samples. The attention mechanism, a machine learning component that imitates human prior attention, allows the network to focus on important features when processing the input samples, effectively avoiding the shortcoming of traditional neural networks, which cannot distinguish the importance of signals when processing information.
The attention mechanism branch is used for assisting the convolutional neural network to extract feature vectors and comprises a global average pooling layer, a first full-connection layer, a first activation layer, a second full-connection layer and a second activation layer, wherein the number of channels of the first full-connection layer is set to 16, the number of channels of the second full-connection layer is set to 64, a ReLU function is adopted as an activation function by the first activation layer, and a sigmoid function is adopted as an activation function by the second activation layer.
The attention mechanism branch adopts an SE module with a soft attention mechanism to assist the convolutional neural network in extracting feature vectors; by adding a weight that represents the degree of importance, the soft attention mechanism focuses on specific channels and improves the accuracy of feature extraction. The SE module comprises a Squeeze part, an Excitation part and a Reweight part.
The global average pooling layer of the attention mechanism branch corresponds to the Squeeze part of the SE module; it performs a global aggregation operation on the feature vectors of the input sample, average-pooling the feature vector of each channel to obtain a global information vector, as shown in formula (4):

$$z_c=\frac{1}{W\times H}\sum_{I=1}^{W}\sum_{J=1}^{H}u_c\left(I,J\right)\tag{4}$$

wherein $z_c$ is the global information vector output by the global average pooling layer, $W$ is the maximum width of the input feature vector, $H$ is the maximum height of the input feature vector, $C$ is the number of channels of the global average pooling layer, and $u_c(I,J)$ is the input feature vector value at width $I$ and height $J$.
The first full-connection layer, the first activation layer, the second full-connection layer and the second activation layer of the attention mechanism branch correspond to the Excitation part of the SE module; the global information vector acquired by the global average pooling layer passes through the first full-connection layer, the first activation layer, the second full-connection layer and the second activation layer in sequence to generate a channel weight vector representing the importance of each channel, as shown in formula (5):

$$s_c=\sigma\left(W_2\,\delta\left(W_1 z_c\right)\right)\tag{5}$$

wherein $s_c$ is the channel weight vector, $\sigma(\cdot)$ is the sigmoid activation function, $\delta(\cdot)$ is the ReLU activation function, $W_1$ is the full-connection layer parameter used for compression, and $W_2$ is the full-connection layer parameter used to restore the dimension.
The second activation layer of the CNN branch and the second activation layer of the attention mechanism branch are both connected to the multiplexing layer; the multiplexing layer corresponds to the Reweight part of the SE module and is used to fuse the feature vector extracted by the CNN branch with the channel weight vector generated by the attention mechanism branch, multiplying the channel weight vector by the extracted feature vector to obtain feature-enhanced fused data, as shown in formula (6):

$$\tilde{x}_c=F_{scale}\left(u_c,s_c\right)=s_c\cdot u_c\tag{6}$$

wherein $\tilde{x}_c$ is the feature-enhanced fused data, $F_{scale}(\cdot)$ is the Reweight function, and $u_c$ is the input feature vector value.
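A minimal sketch of the parallel attention mechanism branch and the multiplexing layer, corresponding to formulas (4) to (6): here the channel weights are computed from the CNN branch's 64-channel feature map, which is one reasonable reading of the parallel arrangement described above; PyTorch and the reuse of the CNNBranch class from the earlier sketch are assumptions.

```python
import torch
import torch.nn as nn

class AttentionCNN(nn.Module):
    """CNN branch plus SE attention branch, fused by a channel-wise multiply (multiplexing) layer."""
    def __init__(self, channels: int = 64, reduced: int = 16):
        super().__init__()
        self.cnn = CNNBranch()                         # CNN branch from the previous sketch
        self.squeeze = nn.AdaptiveAvgPool1d(1)         # Squeeze part: global average pooling, formula (4)
        self.fc1 = nn.Linear(channels, reduced)        # first full-connection layer, 16 channels
        self.fc2 = nn.Linear(reduced, channels)        # second full-connection layer, 64 channels
        self.relu = nn.ReLU()
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.cnn(x)                                         # feature vectors from the CNN branch
        z = self.squeeze(feats).squeeze(-1)                         # global information vector z_c
        s = self.sigmoid(self.fc2(self.relu(self.fc1(z))))         # channel weight vector s_c, formula (5)
        return feats * s.unsqueeze(-1)                              # feature-enhanced fused data, formula (6)
```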
Considering that the convolutional neural network model based on the attention mechanism cannot fully utilize the potential information of sedimentary rocks on the depth domain sequence in the bright crystal discrimination, the fusion data after feature enhancement is used as the input of the BiLSTM network model to continue the depth sequence modeling.
The BiLSTM network model is used for capturing and transmitting information of adjacent depth in input data, and comprises a forward LSTM network for forward learning and a backward LSTM network for backward learning, wherein the forward LSTM network and the backward LSTM network comprise an input, a cell state, a temporary cell state, a hidden layer state, a forgetting gate, a memory gate and an output gate at the moment t.
In the embodiment, the BiLSTM network model integrates the gating architecture and the bidirectional characteristics by adding the backward LSTM network and matching with the forward LSTM network, and fully considers the influence of history information and future information on the current state, so that the BiLSTM network model has better memory and robustness. Meanwhile, the BiLSTM network model regards logging data along the depth direction as an ordered sequence, and can further extract characteristic information in an input sample and capture characteristic information of adjacent depth propagation by utilizing the BiLSTM network model, and the long-term dependence characteristics of the input sample are fully utilized for learning.
When input data enter a BiLSTM network model and then sequentially enter a forward LSTM network and a backward LSTM network, the forward LSTM network processes the input data according to a forward sequence, the backward LSTM network processes the input data according to a reverse sequence, and after the hidden states at the current moment are calculated by the input data and the hidden states at the previous moment in the forward LSTM network and the backward LSTM network respectively, the hidden states of the forward LSTM network and the backward LSTM network are connected or combined to obtain the output of the BiLSTM network model.
Wherein,
forward propagation of the forward LSTM network is shown in equation (7):

$$\overrightarrow{h_t}=LSTM\left(x_t,\overrightarrow{h_{t-1}}\right)\tag{7}$$

wherein $x_t$ is the input at time $t$, $\overrightarrow{h_t}$ is the forward-propagated hidden state at time $t$, $\overrightarrow{h_{t-1}}$ is the forward-propagated hidden state at time $t-1$, and $LSTM(\cdot)$ is the long short-term memory neural network model.

The backward propagation of the backward LSTM network is shown in equation (8):

$$\overleftarrow{h_t}=LSTM\left(x_t,\overleftarrow{h_{t+1}}\right)\tag{8}$$

wherein $\overleftarrow{h_t}$ is the backward-propagated hidden state at time $t$, and $\overleftarrow{h_{t+1}}$ is the backward-propagated hidden state at time $t+1$.

The output of the BiLSTM network model is shown in equation (9):

$$h_t=\left[\overrightarrow{h_t},\overleftarrow{h_t}\right]\tag{9}$$

wherein $h_t$ is the hidden layer state of the BiLSTM network model at time $t$.
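A sketch of the BiLSTM stage that consumes the feature-enhanced fused data: the bidirectional structure and the concatenation of forward and backward hidden states follow formula (9), while the hidden size, the use of the last sequence position for classification and the two-class output layer are assumptions for illustration.

```python
import torch
import torch.nn as nn

class BiLSTMHead(nn.Module):
    """Bidirectional LSTM over the depth-ordered feature sequence, followed by a 2-class output layer."""
    def __init__(self, in_features: int = 64, hidden: int = 32, num_classes: int = 2):
        super().__init__()
        self.bilstm = nn.LSTM(in_features, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, num_classes)    # forward and backward hidden states concatenated

    def forward(self, fused: torch.Tensor) -> torch.Tensor:
        # fused: (batch, channels, length) from the attention-based CNN; treat the length axis as the sequence
        seq = fused.permute(0, 2, 1)                     # (batch, length, channels)
        out, _ = self.bilstm(seq)                        # h_t = [forward h_t, backward h_t], formula (9)
        return self.fc(out[:, -1, :])                    # class logits: bright crystal vs non-bright crystal
```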
In this embodiment, the bright crystal discriminator constructed from the CNN-BiLSTM-Attention combination model based on the attention mechanism is trained with the Adam gradient descent algorithm. The initial learning rate of the bright crystal discriminator is preset to 0.001 before training; during training the discriminator adaptively adjusts the learning rate and updates the network weights, and a cross-entropy loss function is used as the loss function to evaluate the accuracy of the bright crystal discriminator in recognizing bright crystals, as shown in formula (10):

$$L(\theta)=-\frac{1}{m}\sum_{i=1}^{m}\left[y_i\log h\left(x_i\right)+\left(1-y_i\right)\log\left(1-h\left(x_i\right)\right)\right]\tag{10}$$

where $L(\cdot)$ is the minimization objective function, $m$ is the number of samples, $y$ is the true class label taking the value 0 or 1, $h(x)$ is the prediction probability of the bright crystal discriminator for sample $x$, and $\theta$ are the model parameters of the bright crystal discriminator.
And step 5, training the bright crystal discriminator constructed by the CNN-BiLSTM-Attention combination model based on the Attention mechanism by using a training set to obtain the trained bright crystal discriminator, wherein the method specifically comprises the following steps of:
and 5.1, setting the maximum training frequency of the bright crystal discriminator to 2000 times, and setting the initial learning rate of the bright crystal discriminator to 0.001.
And 5.2, randomly selecting training samples from the training set, inputting them into the bright crystal discriminator, and using the bright crystal discriminator to identify whether each training sample is a bright crystal, wherein the bright crystal discriminator automatically marks a bright crystal label on a training sample identified as a bright crystal and a non-bright crystal label on a training sample identified as a non-bright crystal.
And 5.3, comparing the label marked by the bright crystal discriminator with the manual marking label in the training sample, if the label marked by the bright crystal discriminator is consistent with the manual marking label in the training sample, entering the step 5.4, otherwise, adaptively adjusting the learning rate of the bright crystal discriminator, adjusting the network weight, updating the bright crystal discriminator, and returning to the step 5.2 to train the bright crystal discriminator continuously.
And 5.4, judging whether the current training count has reached the preset maximum number of training times; if it has, stopping the training of the bright crystal discriminator to obtain the trained bright crystal discriminator, otherwise returning to step 5.2 to continue training the bright crystal discriminator.
In this embodiment, the loss function of the bright crystal discriminator gradually tends to be stable along with the increase of the training times in the training process, and finally the loss function value of the bright crystal discriminator gradually stabilizes at 0.2, so that the recognition effect of the bright crystal discriminator is best.
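A condensed training sketch for steps 5.1 to 5.4, reusing the modules from the earlier sketches: the Adam optimizer, the 0.001 initial learning rate and the 2000-iteration cap come from this embodiment, while the whole-batch loop, the model composition and the tensor shaping are assumptions that stand in for the per-sample label comparison of steps 5.2 and 5.3.

```python
import torch
import torch.nn as nn

# Bright crystal discriminator assembled from the earlier sketches (assumed composition).
model = nn.Sequential(AttentionCNN(), BiLSTMHead())
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # Adam gradient descent, initial learning rate 0.001
criterion = nn.CrossEntropyLoss()                            # cross-entropy loss, formula (10)

# Training data from the SMOTE-balanced training set; one channel per sample.
X_t = torch.tensor(X_train_bal, dtype=torch.float32).unsqueeze(1)
y_t = torch.tensor(y_train_bal, dtype=torch.long)

max_iters = 2000                                              # preset maximum number of training times
for it in range(max_iters):
    optimizer.zero_grad()
    loss = criterion(model(X_t), y_t)                         # compare predicted labels with the manual labels
    loss.backward()
    optimizer.step()                                          # update the network weights
```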
And 6, verifying the bright crystal discriminator by using a verification set to obtain the verified bright crystal discriminator, wherein the method specifically comprises the following steps of:
and 6.1, sequentially inputting verification samples in the verification set into a bright crystal discriminator, identifying whether the verification sample is bright crystal or not by using the bright crystal discriminator, automatically marking a bright crystal label for the verification sample with the bright crystal identified and marking an non-bright crystal label for the verification sample without the non-bright crystal identified, comparing the marked non-bright crystal label with a manual calibration label in the verification sample, and determining whether the bright crystal identification result of the bright crystal discriminator is accurate or not.
And 6.2, after the bright crystal discriminator has completed bright crystal recognition of all verification samples, acquiring the accuracy, precision and recall of the bright crystal discriminator in the bright crystal recognition process and comparing them with the preset accuracy, precision and recall; if the accuracy, precision and recall of the bright crystal discriminator reach the preset values, proceeding to step 7, otherwise returning to step 5 to continue training the bright crystal discriminator.
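The verification of step 6 can be sketched with scikit-learn metrics as follows; the preset thresholds below are placeholders, not values specified by the invention, and the model and arrays continue from the earlier sketches.

```python
import torch
from sklearn.metrics import accuracy_score, precision_score, recall_score

model.eval()
with torch.no_grad():
    X_v = torch.tensor(X_val, dtype=torch.float32).unsqueeze(1)
    pred = model(X_v).argmax(dim=1).numpy()        # labels assigned by the bright crystal discriminator

acc = accuracy_score(y_val, pred)
prec = precision_score(y_val, pred)
rec = recall_score(y_val, pred)

# Hypothetical preset thresholds; if any is not reached, return to step 5 and keep training.
qualified = acc >= 0.90 and prec >= 0.90 and rec >= 0.90
```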
And 7, carrying out bright crystal identification on the well section to be identified by using the verified bright crystal discriminator, and inputting logging data extracted from the well section logging curve to be identified into the bright crystal discriminator according to a depth sequence to obtain a bright crystal identification result of the well section to be identified.
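Step 7 then amounts to running the verified discriminator over the depth-ordered logging data of the well section to be identified; the file name, the DEPTH column and the assumption that the same screened curves are available for the new well section are all illustrative.

```python
import pandas as pd
import torch

new_logs = pd.read_csv("well_to_identify_logs.csv")            # hypothetical depth-sorted, screened logging data
features = torch.tensor(new_logs.drop(columns=["DEPTH"]).to_numpy(),
                        dtype=torch.float32).unsqueeze(1)

model.eval()
with torch.no_grad():
    is_bright = model(features).argmax(dim=1).numpy()           # 1 = bright crystal, 0 = non-bright crystal

result = pd.DataFrame({"DEPTH": new_logs["DEPTH"], "bright_crystal": is_bright})
```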
In this embodiment, in order to compare the bright crystal identification performance of a CNN network, a BiLSTM network and the bright crystal discriminator constructed by the invention (the CNN-BiLSTM-Attention combination model based on the attention mechanism), single-well logging data and adjacent-well logging data were input into the CNN network, the BiLSTM network and the discriminator constructed by the invention, and a single-well bright crystal identification experiment and an adjacent-well bright crystal identification experiment were carried out. The bright crystal identification results of the three models in the two experiments are shown in Fig. 5 and Fig. 6, and their accuracy, precision and recall in the two experiments are given in Table 1 and Table 2.
Table 1 Evaluation of the bright crystal recognition effect of each model in the single-well bright crystal recognition experiment
Table 2 evaluation table of effect of recognition of bright crystals of each model in the experiment of recognition of bright crystals of adjacent wells
Comparing Table 1 and Table 2 in terms of the accuracy, precision and recall of each model in the bright crystal identification process, and checking the results against the core photographs, core descriptions and thin-section data of the corresponding well sections, the bright crystal discriminator constructed by the method of the invention from the CNN-BiLSTM-Attention combination model based on the attention mechanism performs best in both the single-well bright crystal identification experiment and the adjacent-well bright crystal identification experiment, and the best bright crystal discrimination results are thus obtained.
It should be understood that the above description is not intended to limit the invention to the particular embodiments disclosed; the invention is intended to cover modifications, adaptations, additions and alternatives falling within its spirit and scope.

Claims (9)

1. An intelligent identification method for distinguishing bright crystals of shale is characterized by comprising the following steps:
step 1, acquiring a logging curve of a target well section in a shale reservoir, obtaining logging data of the target well section, analyzing mineral composition and structural characteristics in the shale reservoir by combining core data, manually determining a bright crystal layer section and a non-bright crystal layer section on the logging curve, and manually calibrating a label on the logging curve;
step 2, preprocessing the logging data of the target well section, performing Pearson correlation analysis on each logging parameter, screening out redundant curves in the logging curves of the target well section according to the Pearson correlation coefficients of the logging parameters, and constructing a sample library comprising a bright crystal sample set and a non-bright crystal sample set by combining the bright crystal labels and non-bright crystal labels manually calibrated on the logging curves of the target well section;
step 3, randomly extracting bright crystal samples and non-bright crystal samples from a sample library to serve as training samples and verification samples, forming a training set and a verification set, expanding the bright crystal samples in the training set based on a smote method, and enabling the number of the bright crystal samples in the training set to be equal to the number of the non-bright crystal samples;
step 4, constructing a bright crystal discriminator based on a CNN-BiLSTM-Attention combination model of an Attention mechanism;
step 5, training the bright crystal discriminator constructed by the CNN-BiLSTM-Attention combination model based on the Attention mechanism by using a training set to obtain a trained bright crystal discriminator;
step 6, verifying the bright crystal discriminator by using the verification set to obtain a verified bright crystal discriminator;
and 7, carrying out bright crystal identification on the well section to be identified by using the verified bright crystal discriminator, and inputting logging data extracted from the well section logging curve to be identified into the bright crystal discriminator according to a depth sequence to obtain a bright crystal identification result of the well section to be identified.
2. The intelligent identification method for discriminating bright crystals of shale according to claim 1 wherein the logging curve includes a natural gamma curve, a natural potential curve, a sonic moveout curve, a density curve, a compensated neutron curve and a resistivity curve.
3. The intelligent recognition method for distinguishing the bright crystals of the shale according to claim 1, wherein the step 2 specifically comprises the following steps:
step 2.1, preprocessing logging data of a target well section, wherein the preprocessing comprises depth correction and denoising;
step 2.2, carrying out Pearson correlation analysis on each logging parameter, wherein the Pearson correlation coefficient of two logging parameters is calculated as formula (1):

$$\rho_{X,Y}=\frac{\sum_{i=1}^{n}\left(X_i-\bar{X}\right)\left(Y_i-\bar{Y}\right)}{\sqrt{\sum_{i=1}^{n}\left(X_i-\bar{X}\right)^{2}}\sqrt{\sum_{i=1}^{n}\left(Y_i-\bar{Y}\right)^{2}}}\tag{1}$$

wherein $\rho_{X,Y}$ is the Pearson correlation coefficient, $X$ and $Y$ are logging parameters, $i$ is a serial number, $n$ is the total number of measurement points, $X_i$ is the $i$-th measurement of logging parameter $X$, $\bar{X}$ is the average value of logging parameter $X$, $Y_i$ is the $i$-th measurement of logging parameter $Y$, and $\bar{Y}$ is the average value of logging parameter $Y$;

when the Pearson correlation coefficient $\rho_{X,Y}$ is less than 0, there is a negative correlation between logging parameters $X$ and $Y$; when $\rho_{X,Y}$ is greater than 0, there is a positive correlation between $X$ and $Y$; and the closer $\rho_{X,Y}$ is to 0, the weaker the correlation between $X$ and $Y$;
step 2.3, screening the logging curves of the target well section according to the Pearson correlation coefficient $\rho_{X,Y}$: logging curves with $\left|\rho_{X,Y}\right|\geq 0.9$ are treated as redundant curves and removed from the logging curves of the target well section; then, using the logging data on the screened logging curves of the target well section and combining the bright crystal labels and non-bright crystal labels on the logging curves, logging data located at the same depth point in a bright crystal layer section are taken as one bright crystal sample, a plurality of bright crystal samples are obtained and a bright crystal sample set is constructed, logging data located at the same depth point in a non-bright crystal layer section are taken as one non-bright crystal sample, a plurality of non-bright crystal samples are obtained and a non-bright crystal sample set is constructed, giving a sample library comprising the bright crystal sample set and the non-bright crystal sample set.
4. The intelligent recognition method for distinguishing bright crystals of shale according to claim 1, wherein the number ratio between training samples in the training set and verification samples in the verification set is 9:1.
5. The intelligent recognition method for judging the bright crystals of the shale according to claim 1, wherein the bright crystal judging device is constructed based on a CNN-BiLSTM-Attention combination model of an Attention mechanism, the CNN-BiLSTM-Attention combination model of the Attention mechanism comprises a convolutional neural network model and a BiLSTM network model which are sequentially connected;
the convolutional neural network model based on the attention mechanism comprises CNN branches, attention mechanism branches and a multiplexing layer which are arranged in parallel;
the CNN branch is configured to extract feature vectors of an input sample, and includes a first convolution layer, a first activation layer, a second convolution layer, and a second activation layer, where the first convolution layer and the second convolution layer are both 3×1 convolution layers, the first convolution layer is provided with 32 convolution kernels and the second convolution layer with 64 convolution kernels, and the convolution operation of the first convolution layer and the second convolution layer is shown in formula (2):

$$h_i=\sum_{i=1}^{N\times N} w_i x_i + b_i \tag{2}$$

wherein $h_i$ is the feature vector obtained by the convolution operation, $x_i$ is the input value of the $i$-th element in the convolution layer, $w_i$ is the weight matrix value corresponding to the $i$-th element, $b_i$ is the bias value of the $i$-th element, and $N\times N$ is the size of the convolution kernel;
the first activation layer and the second activation layer both use the ReLU function as the activation function, as shown in formula (3):

$$y_i=\max\left(0,h_i\right)\tag{3}$$

wherein $y_i$ is the activation function value, and $\max(0,h_i)$ is a maximum function used to select the larger of 0 and the feature vector $h_i$ obtained by the convolution operation;
the attention mechanism branch is used for assisting the convolutional neural network to extract feature vectors and comprises a global average pooling layer, a first full-connection layer, a first activation layer, a second full-connection layer and a second activation layer, wherein the number of channels of the first full-connection layer is set to 16, the number of channels of the second full-connection layer is set to 64, a ReLU function is adopted as an activation function by the first activation layer, and a sigmoid function is adopted as an activation function by the second activation layer;
the attention mechanism branch adopts an SE module with a soft attention mechanism to assist the convolutional neural network in extracting feature vectors, adding a weight that represents the degree of importance so as to focus on specific channels, wherein the SE module comprises a Squeeze part, an Excitation part and a Reweight part;
the global average pooling layer of the attention mechanism branch corresponds to the Squeeze part of the SE module; it performs a global aggregation operation on the feature vectors of the input sample, average-pooling the feature vector of each channel to obtain a global information vector, as shown in formula (4):

$$z_c=\frac{1}{W\times H}\sum_{I=1}^{W}\sum_{J=1}^{H}u_c\left(I,J\right)\tag{4}$$

wherein $z_c$ is the global information vector output by the global average pooling layer, $W$ is the maximum width of the input feature vector, $H$ is the maximum height of the input feature vector, $C$ is the number of channels of the global average pooling layer, and $u_c(I,J)$ is the input feature vector value at width $I$ and height $J$;
the first full-connection layer, the first activation layer, the second full-connection layer and the second activation layer of the attention mechanism branch correspond to the Excitation part of the SE module; the global information vector acquired by the global average pooling layer passes through the first full-connection layer, the first activation layer, the second full-connection layer and the second activation layer in sequence to generate a channel weight vector representing the importance of each channel, as shown in formula (5):

$$s_c=\sigma\left(W_2\,\delta\left(W_1 z_c\right)\right)\tag{5}$$

wherein $s_c$ is the channel weight vector, $\sigma(\cdot)$ is the sigmoid activation function, $\delta(\cdot)$ is the ReLU activation function, $W_1$ is the full-connection layer parameter used for compression, and $W_2$ is the full-connection layer parameter used to restore the dimension;
the second activation layer of the CNN branch and the second activation layer of the attention mechanism branch are both connected with a multiplexing layer, and the multiplexing layer corresponds to a weight part of the SE module and is used for carrying out data fusion on the feature vector extracted by the CNN branch and the channel weight vector generated by the attention mechanism branch, and the channel weight vector is multiplied by the extracted feature vector to obtain fused data after feature enhancement, as shown in formula (6):
x̃_c = F_scale(u_c, s_c) = s_c · u_c  (6)
where x̃_c is the feature-enhanced fused data, F_scale(·) is the Reweight function, and u_c is the input feature vector value;
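A minimal sketch of this SE attention branch (Squeeze, Excitation and Reweight) is given below in PyTorch, assuming a 64-channel CNN-branch feature map to match the 16-channel and 64-channel full-connection layers; all other shapes are illustrative assumptions:

import torch
import torch.nn as nn

class SEBranch(nn.Module):
    """Sketch of the attention branch of formulas (4)-(6): Squeeze = global average
    pooling, Excitation = FC(16) -> ReLU -> FC(64) -> sigmoid, Reweight = channel-wise
    multiplication with the CNN-branch features."""
    def __init__(self, channels: int = 64, squeeze_channels: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool1d(1)             # z_c, formula (4)
        self.fc = nn.Sequential(
            nn.Linear(channels, squeeze_channels),      # W_1, dimension compression
            nn.ReLU(),                                  # δ(·)
            nn.Linear(squeeze_channels, channels),      # W_2, dimension restoration
            nn.Sigmoid(),                               # σ(·), yields s_c of formula (5)
        )

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        b, c, _ = u.shape
        z = self.pool(u).view(b, c)                     # Squeeze
        s = self.fc(z).view(b, c, 1)                    # Excitation -> channel weights s_c
        return u * s                                    # Reweight, formula (6)

features = torch.randn(1, 64, 32)                      # hypothetical CNN-branch output
enhanced = SEBranch()(features)                        # feature-enhanced fused data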
the BiLSTM network model takes fusion data output by a convolutional neural network model based on an attention mechanism as input data, and is used for capturing and transmitting information of adjacent depth in the input data; the BiLSTM network model comprises a forward LSTM network for forward learning and a backward LSTM network for backward learning, wherein the forward LSTM network and the backward LSTM network comprise an input, a cell state, a temporary cell state, a hidden layer state, a forgetting gate, a memory gate and an output gate at the time t; the input data enter a BiLSTM network model and then sequentially enter a forward LSTM network and a backward LSTM network, the forward LSTM network processes the input data according to a forward sequence, the backward LSTM network processes the input data according to a reverse sequence, and after the hidden states at the current moment are calculated by the input data and the hidden states at the previous moment in the forward LSTM network and the backward LSTM network respectively, the hidden states of the forward LSTM network and the backward LSTM network are connected or combined to obtain the output of the BiLSTM network model;
the forward propagation of the forward LSTM network is as shown in equation (7):
f⃗_t = LSTM(x_t, f⃗_{t-1})  (7)
where x_t is the input at time t, f⃗_t is the forward-propagated forgetting gate at time t, f⃗_{t-1} is the forward-propagated forgetting gate at time t-1, and LSTM(·) is the bidirectional long short-term memory neural network model;
the backward propagation of the backward LSTM network is as shown in equation (8):
f⃖_t = LSTM(x_t, f⃖_{t+1})  (8)
where f⃖_t is the backward-propagated forgetting gate at time t and f⃖_{t+1} is the backward-propagated forgetting gate at time t+1;
the output of the BiLSTM network model is as follows:
h_t = [f⃗_t, f⃖_t]  (9)
where h_t is the hidden-layer state of the BiLSTM network model at time t.
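The bidirectional recurrence of equations (7) to (9) can be approximated with a standard bidirectional LSTM layer; the hidden size and tensor shapes in the sketch below are illustrative assumptions, not values fixed by the claims:

import torch
import torch.nn as nn

# Sketch of the BiLSTM stage: the forward LSTM processes the fused features in
# depth order, the backward LSTM in reverse order, and the two hidden states are
# concatenated to form h_t.
bilstm = nn.LSTM(input_size=64, hidden_size=32, batch_first=True, bidirectional=True)

fused = torch.randn(1, 16, 64)   # (batch, depth steps, fused feature channels), assumed shape
h, _ = bilstm(fused)             # h[..., :32] is the forward state, h[..., 32:] the backward state
print(h.shape)                   # torch.Size([1, 16, 64]): concatenated forward/backward states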
6. The intelligent recognition method for distinguishing the bright crystals of the shale according to claim 5, wherein the bright crystal discriminator constructed based on the CNN-BiLSTM-Attention combination model of an Attention mechanism is trained by adopting an Adam gradient descent algorithm, the bright crystal discriminator adaptively adjusts a learning rate and updates a network weight in the training process, and a cross entropy loss function is adopted as a loss function in the training process, so as to evaluate the accuracy of the bright crystal discriminator in distinguishing the bright crystals, as shown in a formula (10):
L(θ) = −(1/m) Σ_{i=1}^{m} [ y_i log h_θ(x_i) + (1 − y_i) log(1 − h_θ(x_i)) ]  (10)
where L(·) is the minimization objective function, m is the number of samples, y is the true category label taking the value 0 or 1, h(x) is the prediction probability of the bright crystal discriminator for sample x, and θ is the model parameter of the bright crystal discriminator.
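As a non-authoritative sketch of this training objective, the Adam optimiser and a binary cross-entropy loss equivalent to formula (10) can be set up as follows; the placeholder model, batch size and tensor shapes are assumptions standing in for the real CNN-BiLSTM-Attention discriminator:

import torch
import torch.nn as nn

discriminator = nn.Sequential(nn.Linear(64, 1))           # placeholder for the real discriminator
optimizer = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()                           # cross-entropy over the 0/1 labels y

x = torch.randn(8, 64)                                     # hypothetical batch of samples
y = torch.randint(0, 2, (8, 1)).float()                    # manual bright-crystal labels (0 or 1)

optimizer.zero_grad()
loss = loss_fn(discriminator(x), y)                        # L(θ), formula (10)
loss.backward()
optimizer.step()                                           # Adam adapts the learning rate and updates the weights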
7. The intelligent recognition method for distinguishing the bright crystals of the shale according to claim 1, wherein the step 5 specifically comprises the following steps:
step 5.1, setting the maximum training times and the initial learning rate of the bright crystal discriminator;
step 5.2, randomly selecting training samples from the training set and inputting them into the bright crystal discriminator, and using the bright crystal discriminator to identify whether each training sample contains bright crystals, wherein the bright crystal discriminator automatically marks a bright crystal label on training samples in which bright crystals are identified and a non-bright crystal label on training samples in which no bright crystals are identified;
step 5.3, comparing the label marked by the bright crystal discriminator with the manual marking label in the training sample, if the label marked by the bright crystal discriminator is consistent with the manual marking label in the training sample, entering step 5.4, otherwise, adaptively adjusting the learning rate of the bright crystal discriminator and adjusting the network weight to update the bright crystal discriminator, and returning to step 5.2 to train the bright crystal discriminator continuously;
and 5.4, judging whether the current training count reaches the preset maximum number of training iterations; if it does, stopping training the bright crystal discriminator to obtain the trained bright crystal discriminator, otherwise returning to step 5.2 to continue training the bright crystal discriminator.
8. The intelligent recognition method for discriminating bright crystals of shale according to claim 7, wherein the maximum number of training iterations is not less than 2000, the initial learning rate is set to 0.001, and the loss function value of the bright crystal discriminator gradually stabilizes at 0.2 during training.
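A minimal sketch of steps 5.1 to 5.4 under the claim-8 settings (at most 2000 iterations, initial learning rate 0.001) is shown below; the placeholder network, sample tensors and labels stand in for the real discriminator and training set and are assumptions only:

import torch
import torch.nn as nn

discriminator = nn.Sequential(nn.Linear(64, 1))                 # placeholder model
loss_fn = nn.BCEWithLogitsLoss()
MAX_ITERS, INIT_LR = 2000, 1e-3                                 # step 5.1: maximum iterations and initial learning rate
optimizer = torch.optim.Adam(discriminator.parameters(), lr=INIT_LR)

train_x = torch.randn(256, 64)                                  # hypothetical training samples
train_y = torch.randint(0, 2, (256, 1)).float()                 # manual bright-crystal labels

for step in range(MAX_ITERS):                                   # step 5.4: stop at the maximum training count
    idx = torch.randint(0, len(train_x), (32,))                 # step 5.2: randomly drawn training samples
    pred = discriminator(train_x[idx])
    loss = loss_fn(pred, train_y[idx])                          # step 5.3: predicted vs. manual labels
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                                            # Adam adjusts the weights adaptively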
9. The intelligent recognition method for distinguishing the bright crystals of the shale according to claim 1, wherein the step 6 specifically comprises the following steps:
step 6.1, sequentially inputting the verification samples of the verification set into the bright crystal discriminator, using the bright crystal discriminator to identify whether each verification sample contains bright crystals, automatically marking a bright crystal label on verification samples in which bright crystals are identified and a non-bright crystal label on verification samples in which no bright crystals are identified, and comparing the marked labels with the manual calibration labels of the verification samples to determine whether the bright crystal recognition result of the bright crystal discriminator is accurate;
and 6.2, after the bright crystal discriminator has finished bright crystal recognition on all verification samples, acquiring the accuracy, precision and recall of the bright crystal discriminator over the recognition process and comparing them with the preset accuracy, precision and recall; if the accuracy, precision and recall of the bright crystal discriminator reach the preset values, proceeding to step 7, otherwise returning to step 5 to continue training the bright crystal discriminator.
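Step 6.2 amounts to a standard classification evaluation; the sketch below computes accuracy, precision and recall with scikit-learn, where the label arrays and the acceptance thresholds are illustrative assumptions rather than values given by the claims:

from sklearn.metrics import accuracy_score, precision_score, recall_score

manual_labels    = [1, 0, 0, 1, 1, 0, 0, 1]     # manual calibration labels of the verification samples
predicted_labels = [1, 0, 1, 1, 1, 0, 0, 0]     # labels assigned by the bright crystal discriminator

accuracy  = accuracy_score(manual_labels, predicted_labels)
precision = precision_score(manual_labels, predicted_labels)
recall    = recall_score(manual_labels, predicted_labels)

# Accept the discriminator only if all three metrics reach the preset thresholds,
# otherwise return to step 5 and continue training; the 0.9 thresholds are assumptions.
qualified = accuracy >= 0.9 and precision >= 0.9 and recall >= 0.9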
CN202311443563.3A 2023-11-01 2023-11-01 Intelligent identification method for distinguishing bright crystals of shale Pending CN117590471A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311443563.3A CN117590471A (en) 2023-11-01 2023-11-01 Intelligent identification method for distinguishing bright crystals of shale

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311443563.3A CN117590471A (en) 2023-11-01 2023-11-01 Intelligent identification method for distinguishing bright crystals of shale

Publications (1)

Publication Number Publication Date
CN117590471A true CN117590471A (en) 2024-02-23

Family

ID=89910597

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311443563.3A Pending CN117590471A (en) 2023-11-01 2023-11-01 Intelligent identification method for distinguishing bright crystals of shale

Country Status (1)

Country Link
CN (1) CN117590471A (en)

Similar Documents

Publication Publication Date Title
US11010629B2 (en) Method for automatically extracting image features of electrical imaging well logging, computer equipment and non-transitory computer readable medium
CN106405640A (en) Automatic microseismic signal arrival time picking method based on depth belief neural network
CN107356958A (en) A kind of fluvial depositional reservoir substep seismic facies Forecasting Methodology based on geological information constraint
CN103336305B (en) A kind of method dividing Sandstone Gas Reservoir high water cut based on gray theory
CN109800954B (en) Reservoir evaluation method based on logging data
CN108897975A (en) Coalbed gas logging air content prediction technique based on deepness belief network
CN115061219B (en) Fracture type reservoir prediction identification method and system based on petroleum and natural gas detection
CN108447057A (en) SAR image change detection based on conspicuousness and depth convolutional network
CN111783616B (en) Nondestructive testing method based on data-driven self-learning
CN115393656B (en) Automatic classification method for stratum classification of logging-while-drilling image
CN117292148B (en) Tunnel surrounding rock level assessment method based on directional drilling and test data
CN111079783A (en) Method for identifying stratum lithology parameters based on multi-core ensemble learning
CN116168224A (en) Machine learning lithology automatic identification method based on imaging gravel content
CN111462037B (en) Geological exploration drilling quality detection method
CN105787045A (en) Precision enhancing method for visual media semantic indexing
CN117093922A (en) Improved SVM-based complex fluid identification method for unbalanced sample oil reservoir
CN112837269A (en) Rock mass quality evaluation method based on deep learning model
CN117590471A (en) Intelligent identification method for distinguishing bright crystals of shale
CN110552693A (en) layer interface identification method of induction logging curve based on deep neural network
CN114881171A (en) Continental facies shale oil and rock facies type identification method and system based on convolutional neural network
CN114972906A (en) Soil quality type identification method for excavation surface of soil pressure balance shield
CN112987091A (en) Reservoir detection method and device, electronic equipment and storage medium
Wang et al. Auto recognition of carbonate microfacies based on an improved back propagation neural network
CN116990865B (en) Microseism event detection method and system based on deep migration learning
CN110969272B (en) Method for predicting type of oil reservoir flow unit based on logging data probability statistics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination