CN112329804A - Naive Bayes lithofacies classification integrated learning method and device based on feature randomness - Google Patents


Info

Publication number
CN112329804A
Authority
CN
China
Prior art keywords
lithofacies
training
feature
classifier
base
Prior art date
Legal status
Pending
Application number
CN202010613340.7A
Other languages
Chinese (zh)
Inventor
玉龙飞雪
宋先知
李根生
黄中伟
田守嶒
肖立志
廖广志
Current Assignee
China University of Petroleum Beijing
Original Assignee
China University of Petroleum Beijing
Priority date
Filing date
Publication date
Application filed by China University of Petroleum Beijing filed Critical China University of Petroleum Beijing
Priority to CN202010613340.7A
Publication of CN112329804A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/25 Integrating or interfacing systems involving database management systems
    • G06F16/254 Extract, transform and load [ETL] procedures, e.g. ETL data flows in data warehouses
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/29 Graphical models, e.g. Bayesian networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G06N20/20 Ensemble learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The specification provides a naive Bayes lithofacies classification ensemble learning method and device based on feature randomness, wherein the method comprises the following steps: acquiring and preprocessing multiple kinds of logging data of a target work area; randomly dividing the preprocessed logging data into a training set and a test set according to a proportion; randomly generating a plurality of training subsets according to feature combinations randomly selected from the training set and their component numbers; training a plurality of first base classifiers in parallel on the training subsets to obtain a plurality of second base classifiers and their performance index values, the first base classifiers being naive Bayes classifiers; determining the voting weight of each second base classifier according to its performance index value; performing lithofacies classification on the test set in parallel with the second base classifiers to obtain a classification sub-result from each; and combining the classification sub-results by weighted voting to obtain the lithofacies classification result. The method and device can improve the classification accuracy and the learning efficiency of a naive Bayes-based lithofacies classifier.

Description

Naive Bayes lithofacies classification integrated learning method and device based on feature randomness
Technical Field
The specification relates to the technical field of oil and gas exploration and development, and in particular to a naive Bayes lithofacies classification ensemble learning method and device based on feature randomness.
Background
Lithofacies classification is not only an important task in formation evaluation and geological analysis, but also significant for reserve prediction and reservoir description in oil and gas exploration and development. At present, lithofacies are usually determined by experts analyzing exploratory well cuttings and cores, a process that is time-consuming, labor-intensive, expensive, and strongly subject to human factors.
For this reason, naive Bayes (NB) Gaussian lithofacies classification methods have emerged. In practical applications, to simplify the calculation of the joint conditional probability, the naive Bayes method introduces a feature conditional independence assumption. For continuous variables, the data samples are generally assumed to follow a Gaussian distribution. However, the real distribution of logging data is often complex and diverse, so a single Gaussian distribution fits it poorly. In addition, the feature conditional independence assumption often does not hold in practical tasks, so the classification accuracy of naive Bayes-based lithofacies classifiers is not high.
Disclosure of Invention
An object of an embodiment of the present specification is to provide a feature-random-based naive bayes lithofacies classification ensemble learning method and apparatus, so as to improve the classification accuracy and learning efficiency of a naive bayes-based lithofacies classifier.
In order to achieve the above object, in one aspect, an embodiment of the present specification provides a feature-random-based naive bayes lithofacies classification ensemble learning method, including:
acquiring various logging data of a target work area and preprocessing the logging data;
performing random stratified sampling on the preprocessed logging data, and forming a training set and a test set according to a preset proportion;
randomly generating a plurality of training subsets according to feature combinations randomly selected from the training set and the component numbers of the feature combinations;
training a plurality of first base classifiers correspondingly in parallel by using the plurality of training subsets to obtain a plurality of second base classifiers and performance index values thereof; the first base classifier is a naive Bayes classifier;
determining the voting weight of each second base classifier according to the performance index value of the second base classifier;
performing lithofacies classification on the test set in parallel by using the plurality of second base classifiers, and correspondingly obtaining lithofacies classification sub-results of each second base classifier;
and voting and combining the lithofacies classification sub-results according to the voting weight so as to obtain lithofacies classification results.
In one embodiment of the present description, when training the first base classifier, the probability distribution of each discrete feature in the training subset is calculated according to the formula

$$p(x_i \mid c) = \frac{|D_{c,x_i}|}{|D_c|}$$

and taken as the class conditional probability of the discrete feature;

wherein c denotes the lithofacies class; $x_i$ denotes the value of the i-th feature; $p(x_i \mid c)$ denotes the probability of $x_i$ occurring given that the lithofacies class is c; $|D_c|$ denotes the total number of samples whose lithofacies class is c; and $|D_{c,x_i}|$ denotes the number of samples whose lithofacies class is c and whose i-th feature takes the value $x_i$.
In one embodiment of the present specification, when training the first base classifier, the probability density of each continuous feature in the training subset is calculated according to the following formula and taken as the class conditional probability of the continuous feature:

$$p(x_i \mid c) = \sum_{k=1}^{K} \alpha_k \frac{1}{\sqrt{2\pi}\,\sigma_k} \exp\!\left( -\frac{(x_i - \mu_k)^2}{2\sigma_k^2} \right)$$

wherein $p(x_i \mid c)$ denotes the probability of $x_i$ occurring given that the lithofacies class is c; c denotes the lithofacies class; $x_i$ denotes the value of the i-th feature; K is the number of Gaussian components; $\mu_k$ is the mean of the k-th Gaussian component; $\alpha_k$ is the weight coefficient of the k-th Gaussian component; and $\sigma_k$ is the standard deviation of the k-th Gaussian component.
In one embodiment of the present specification, the objective function when training the first base classifier is:

$$h^*(x) = \arg\max_{c} P(c) \prod_{i=1}^{n} p(x_i \mid c)$$

wherein $h^*(x)$ is the objective function; x is a group of input feature values to be classified; $x_i$ denotes the value of the i-th feature; c denotes the lithofacies class; P(c) is the prior probability of lithofacies class c; $p(x_i \mid c)$ denotes the probability of $x_i$ occurring given that the lithofacies class is c; and n is the number of randomly selected features.
In an embodiment of the present specification, the determining the voting weight according to the performance index value of each second base classifier includes: determining the voting weight w of each second base classifier from its performance index value $w_{ori}$ and a weight decay factor a.
In another aspect, an embodiment of the present specification further provides a feature-random-based naive Bayes lithofacies classification ensemble learning apparatus, including:
the acquisition module is used for acquiring various logging data of a target work area and preprocessing the logging data;
the dividing module is used for randomly sampling the preprocessed various logging data in a layered manner and forming a training set and a testing set according to a preset proportion;
the generating module is used for randomly generating a plurality of training subsets according to feature combinations randomly selected from the training set and the component numbers thereof;
the training module is used for training a plurality of first base classifiers correspondingly in parallel by using the training subsets to obtain a plurality of second base classifiers and performance index values thereof; the first base classifier is a naive Bayes classifier;
the determining module is used for determining the voting weight of each second base classifier according to the performance index value of the second base classifier;
the testing module is used for carrying out lithofacies classification on the test set in parallel by utilizing the plurality of second base classifiers and correspondingly obtaining lithofacies classification sub-results of each second base classifier;
and the voting module is used for voting and combining the lithofacies classification sub-results according to the voting weight so as to obtain lithofacies classification results.
In one embodiment of the present description, when training the first base classifier, the probability distribution of each discrete feature in the training subset is calculated according to the formula

$$p(x_i \mid c) = \frac{|D_{c,x_i}|}{|D_c|}$$

and taken as the class conditional probability of the discrete feature;

wherein c denotes the lithofacies class; $x_i$ denotes the value of the i-th feature; $p(x_i \mid c)$ denotes the probability of $x_i$ occurring given that the lithofacies class is c; $|D_c|$ denotes the total number of samples whose lithofacies class is c; and $|D_{c,x_i}|$ denotes the number of samples whose lithofacies class is c and whose i-th feature takes the value $x_i$.
In one embodiment of the present specification, when training the first base classifier, the probability density of each continuous feature in the training subset is calculated according to the following formula and taken as the class conditional probability of the continuous feature:

$$p(x_i \mid c) = \sum_{k=1}^{K} \alpha_k \frac{1}{\sqrt{2\pi}\,\sigma_k} \exp\!\left( -\frac{(x_i - \mu_k)^2}{2\sigma_k^2} \right)$$

wherein $p(x_i \mid c)$ denotes the probability of $x_i$ occurring given that the lithofacies class is c; c denotes the lithofacies class; $x_i$ denotes the value of the i-th feature; K is the number of Gaussian components; $\mu_k$ is the mean of the k-th Gaussian component; $\alpha_k$ is the weight coefficient of the k-th Gaussian component; and $\sigma_k$ is the standard deviation of the k-th Gaussian component.
In one embodiment of the present specification, the objective function when training the first base classifier is:

$$h^*(x) = \arg\max_{c} P(c) \prod_{i=1}^{n} p(x_i \mid c)$$

wherein $h^*(x)$ is the objective function; x is a group of input feature values to be classified; $x_i$ denotes the value of the i-th feature; c denotes the lithofacies class; P(c) is the prior probability of lithofacies class c; $p(x_i \mid c)$ denotes the probability of $x_i$ occurring given that the lithofacies class is c; and n is the number of randomly selected features.
In an embodiment of the present specification, the determining the voting weight according to the performance index value of each second base classifier includes: determining the voting weight w of each second base classifier from its performance index value $w_{ori}$ and a weight decay factor a.
According to the technical solutions provided by the embodiments of the specification, ensemble learning with a weighted-voting combination strategy gives the final lithofacies classifier better generalization performance and resistance to overfitting, improving the classification accuracy of the naive Bayes-based lithofacies classifier. In addition, because the embodiments can randomly generate a plurality of training subsets according to feature combinations randomly selected from the training set and their component numbers, the time and labor of manually selecting training subsets are avoided, improving the learning efficiency of the naive Bayes-based lithofacies classifier. Moreover, the embodiments use parallel processing in both training and testing, further improving the learning efficiency of the naive Bayes-based lithofacies classifier.
Drawings
In order to more clearly illustrate the embodiments of the present specification or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some of the embodiments described in the specification, and that other drawings can be obtained from them by those skilled in the art without inventive labor. In the drawings:
FIG. 1 is a flow diagram of a naive Bayes lithofacies classification ensemble learning method based on feature randomness in an embodiment provided in the present specification;
FIG. 2 is a partial tabular representation of well log data after ETL cleaning in an embodiment provided herein;
FIG. 3 is a schematic diagram illustrating a training set and test set partition ratio of pre-processed well log data according to an embodiment provided herein;
FIG. 4 is a Gaussian mixture fit of selected random features when training a base classifier in the embodiments provided herein;
FIG. 5 is a confusion matrix of predicted results of integrated classifiers in an embodiment provided herein;
FIG. 6 is a ROC curve of the prediction results for the ensemble classifier in an embodiment provided herein;
fig. 7 is a block diagram of a feature-random-based naive bayesian lithofacies classification ensemble learning apparatus in an embodiment provided in the present specification.
Detailed Description
In order to make the technical solutions in the present specification better understood, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is apparent that the described embodiments are only a part of the embodiments of the present specification, but not all of the embodiments. All other embodiments obtained by a person skilled in the art without making creative efforts based on the embodiments in the present specification shall fall within the protection scope of the present specification.
Referring to FIG. 1, in some embodiments of the present description, a feature-random-based naive Bayes lithofacies classification ensemble learning method may comprise the following steps:
s101, obtaining various logging data of a target work area and preprocessing the logging data.
In embodiments of the present description, the logging data that may be used for lithofacies interpretation is typically stored in a logging software database in a particular format. Therefore, the logging data first needs to be preprocessed before it can be mined and learned.
In one embodiment of the present description, the preprocessing may include: first performing ETL cleaning (i.e., extract, transform, load) on the logging data, processing it into structured data that a machine learning model (i.e., the naive Bayes algorithm) can handle, and storing it in a unified, standard format. The ETL cleaning process comprises: reading the logging data source file line by line and identifying the valid information in it, using keywords in the file as identifiers; the valid information includes feature names and feature values. After the valid information is formatted and outliers are filtered, the data may be saved in the form of Comma-Separated Values (CSV) tables.
For example, in an exemplary embodiment of the present description, the well log data come from 10 wells in a work area, all stored in LAS format (a well-log data format used by PETREL software). The well log data may be preprocessed as shown in FIG. 2 (only a portion of the data is shown). In FIG. 2, DEPTH represents depth, Facies represents lithofacies, and Wellname identifies the well. After preprocessing, the logging data form a table of approximately 50000 data items containing 9 available logging features (i.e., the input space is a 9-dimensional vector): natural gamma (GR), photoelectric index (PEF), permeability (PERM_KLINK), effective porosity (PHIE), total porosity (PHIT), true formation resistivity (RT), total water saturation (SWT), shale content (VSH), and sand porosity (PHI_SAND). The preprocessed logging data also carry 3 lithofacies labels (i.e., the output space comprises 3 classes, such as facies 1, facies 2, and facies 3 in FIG. 3).
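As a minimal illustrative sketch of such an ETL cleaning step (the data-section marker, null placeholder, and column layout below are assumptions for illustration, not details taken from the patent), a LAS-like source file could be reduced to a CSV table as follows:

```python
import csv

# Assumed layout for illustration: real LAS files mark the data section
# with a "~A" line and often use -999.25 as a null placeholder.
DATA_MARKER = "~A"
NULL_VALUE = -999.25
COLUMNS = ["DEPTH", "GR", "PEF", "PERM_KLINK", "PHIE", "PHIT",
           "RT", "SWT", "VSH", "PHI_SAND", "Facies", "Wellname"]

def etl_clean(las_path: str, csv_path: str) -> None:
    """Read a LAS-like source file line by line and save valid rows as CSV."""
    rows, in_data = [], False
    with open(las_path) as src:
        for line in src:
            if line.startswith(DATA_MARKER):
                in_data = True            # valid data starts after the marker
                continue
            if not in_data:
                continue                  # skip header / metadata sections
            values = line.split()
            if len(values) != len(COLUMNS):
                continue                  # drop malformed lines
            try:
                if any(float(v) == NULL_VALUE for v in values[:-2]):
                    continue              # filter null placeholders
            except ValueError:
                continue                  # non-numeric log value: drop row
            rows.append(values)
    with open(csv_path, "w", newline="") as dst:
        writer = csv.writer(dst)
        writer.writerow(COLUMNS)          # feature names as the CSV header
        writer.writerows(rows)
```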
S102, performing random stratified sampling on the preprocessed logging data, and forming a training set and a test set according to a preset proportion.
In embodiments of the present description, training a lithofacies classifier on logging data is a supervised learning process. Before training begins, the preprocessed data need to be divided into a training set and a test set; the training set is used to estimate model parameters, and the test set is used to evaluate model performance. When dividing the data set, care should be taken to keep the data distributions consistent, to avoid introducing extra bias into the final result during the split. Therefore, in some embodiments of the present disclosure, to ensure the representativeness of the data, the samples of each lithofacies may be randomly divided into two parts according to a specified ratio (e.g., 7:3) by random stratified sampling, and used as the training set and the test set respectively.
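A minimal sketch of this random stratified 7:3 split, assuming the cleaned CSV and column names from the example above and using scikit-learn (an implementation choice, not specified by the patent):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("logs_cleaned.csv")          # output of the ETL step
feature_cols = ["GR", "PEF", "PERM_KLINK", "PHIE", "PHIT",
                "RT", "SWT", "VSH", "PHI_SAND"]
X, y = df[feature_cols], df["Facies"]

# stratify=y realizes the random stratified split: each facies keeps the
# same class proportion in the 70% training part and the 30% test part
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)
```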
S103, randomly generating a plurality of training subsets according to feature combinations randomly selected from the training set and the component numbers of the feature combinations.
Because the embodiments of the specification can randomly generate a plurality of training subsets from randomly selected feature combinations and their component numbers, the time-consuming, labor-intensive, and expensive manual selection of training subsets is avoided. The randomness is mainly embodied in the following aspects:
(1) Feature randomness
Each base classifier is trained on a random subset of the feature space; the randomly selected features may not repeat, and the number of selected features is greater than 0 and at most a preset upper limit.
(2) Gaussian mixture component randomness
When each base classifier is trained, the selected features are divided into discrete features and continuous features. For continuous features, a Gaussian mixture model is used to fit the probability density of the feature; during fitting, the number of Gaussian components of the mixture is random, greater than 0 and at most a preset upper limit.
(3) Random partitioning of the check set
In the embodiments of the present specification, when each base classifier is trained, the whole training set is not handed directly to the model; instead, part of the data is reserved from the training set as a check set. The check set is split off by random stratified sampling, and the split ratio can be set manually.
For example, in an exemplary embodiment of the present specification, if the maximum feature number is set to 4 (max_features = 4) and the maximum component number is set to 6 (max_components = 6, the component number being the number of Gaussian mixture components), then with the 9 available features the total number of candidate schemes for randomly generated training subsets is

$$\sum_{n=1}^{4} \binom{9}{n} \, 6^{n} = 182790.$$

If the number of base classifiers is set to 500, then 500 schemes, one per base classifier's training subset, are randomly drawn from these candidates; the drawn schemes can be stored in system memory as an array. For example, in an exemplary embodiment, one of the randomly generated feature combinations is [PHIE, RT, SWT, VSH] with corresponding component numbers [2, 5, 4, 6]; the corresponding log data may be as shown in FIG. 4. In FIG. 4, PHIE denotes effective porosity, RT true formation resistivity, SWT total water saturation, and VSH shale content.
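A sketch of how such random schemes could be drawn, assuming the 9 features listed earlier and the limits max_features = 4 and max_components = 6 (function and variable names are illustrative):

```python
import random

FEATURES = ["GR", "PEF", "PERM_KLINK", "PHIE", "PHIT",
            "RT", "SWT", "VSH", "PHI_SAND"]
MAX_FEATURES, MAX_COMPONENTS, N_ESTIMATORS = 4, 6, 500

def random_scheme(rng):
    """One training-subset scheme: a non-repeating feature combination plus
    a random Gaussian component number (1..MAX_COMPONENTS) per feature."""
    n = rng.randint(1, MAX_FEATURES)
    feats = rng.sample(FEATURES, n)       # sampling without replacement
    comps = [rng.randint(1, MAX_COMPONENTS) for _ in feats]
    return feats, comps

rng = random.Random(0)
schemes = [random_scheme(rng) for _ in range(N_ESTIMATORS)]
# a scheme looks like (['PHIE', 'RT', 'SWT', 'VSH'], [2, 5, 4, 6])
```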
S104, training a plurality of first base classifiers correspondingly in parallel by using the training subsets to obtain a plurality of second base classifiers and their performance index values; the first base classifier is a naive Bayes classifier, i.e., the initial model to be trained.
In the embodiments of the present specification, since the first base classifiers are independent of each other, multiple threads can be started to train them in parallel, improving the efficiency of the training process. Training of the first base classifiers is based on the naive Bayes algorithm: the input space X is a set of n-dimensional vectors, n being the number of randomly selected features, and the output space Y is a set of m class labels (e.g., {c1, c2, c3}). The naive Bayes method introduces a feature conditional independence assumption, and the objective function for the classification problem can be expressed as:

$$h^*(x) = \arg\max_{c} P(c) \prod_{i=1}^{n} p(x_i \mid c)$$

wherein $h^*(x)$ is the objective function; $x = (x_1, x_2, \ldots, x_n)$ is a group of input feature values to be classified; $x_i$ denotes the value of the i-th feature; c denotes the lithofacies class; P(c) is the prior probability of lithofacies class c; $p(x_i \mid c)$ denotes the probability of $x_i$ occurring given that the lithofacies class is c; and n is the number of randomly selected features.
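In a concrete implementation, the product in $h^*(x)$ is usually evaluated in log space to avoid floating-point underflow; a sketch of the decision rule under that standard transformation (names are illustrative, not from the patent):

```python
import numpy as np

def classify(priors, cond_probs, x):
    """Return the facies c maximizing log P(c) + sum_i log p(x_i | c).

    priors:      dict mapping facies c -> prior probability P(c)
    cond_probs:  dict mapping facies c -> list of per-feature functions,
                 each returning p(x_i | c) for a feature value x_i
    x:           sequence of the n randomly selected feature values
    """
    best_c, best_score = None, -np.inf
    for c, prior in priors.items():
        score = np.log(prior) + sum(
            np.log(p(xi)) for p, xi in zip(cond_probs[c], x))
        if score > best_score:
            best_c, best_score = c, score
    return best_c
```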
When training the first base classifier, for each discrete feature in the training subset, its probability distribution can be calculated according to the formula

$$p(x_i \mid c) = \frac{|D_{c,x_i}|}{|D_c|}$$

and taken as the class conditional probability of the discrete feature. Wherein c denotes the lithofacies class; $x_i$ denotes the value of the i-th feature; $p(x_i \mid c)$ denotes the probability of $x_i$ occurring given that the lithofacies class is c; $|D_c|$ denotes the total number of samples whose lithofacies class is c; and $|D_{c,x_i}|$ denotes the number of samples whose lithofacies class is c and whose i-th feature takes the value $x_i$.
When training the first base classifier, for each continuous feature in the training subset, its probability density is calculated according to the following formula and taken as the class conditional probability of the continuous feature:

$$p(x_i \mid c) = \sum_{k=1}^{K} \alpha_k \frac{1}{\sqrt{2\pi}\,\sigma_k} \exp\!\left( -\frac{(x_i - \mu_k)^2}{2\sigma_k^2} \right)$$

wherein $p(x_i \mid c)$ denotes the probability of $x_i$ occurring given that the lithofacies class is c; c denotes the lithofacies class; $x_i$ denotes the value of the i-th feature; K is the number of Gaussian components; $\mu_k$ is the mean of the k-th Gaussian component; $\alpha_k$ is the weight coefficient of the k-th Gaussian component; and $\sigma_k$ is the standard deviation of the k-th Gaussian component. It can be seen that the Gaussian mixture model has 3K parameters to estimate, which can be solved with the Expectation-Maximization (EM) algorithm.
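A sketch of fitting this per-class, per-feature mixture with scikit-learn's EM-based GaussianMixture, one possible realization of the step described above:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_continuous_cond_prob(values, labels, c, n_components):
    """Fit p(x_i | c) for one continuous feature with a 1-D Gaussian
    mixture; EM estimates the alpha_k, mu_k and sigma_k parameters."""
    x_c = np.asarray(values, dtype=float)[np.asarray(labels) == c]
    gmm = GaussianMixture(n_components=n_components,
                          random_state=0).fit(x_c.reshape(-1, 1))
    # score_samples returns log density; exponentiate to get p(x_i | c)
    return lambda xi: float(np.exp(gmm.score_samples([[xi]])[0]))
```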
In embodiments of the present description, the second base classifier is the trained model obtained after training a first base classifier; one second base classifier is obtained when the training of each first base classifier finishes. The check set can be used to evaluate a specified performance index of the second base classifier, providing a reference for the voting weight selection in the subsequent ensemble learning.
In some embodiments of the present description, the specified performance index may be precision, recall, F1, total recall (total_recall), micro F1 (micro_f1), macro F1 (macro_f1), weighted F1 (weighted_f1), or accuracy, among others.
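Several of these indices map directly onto scikit-learn metrics; the helper below is an illustrative assumption, not the patent's own code:

```python
from sklearn.metrics import accuracy_score, f1_score

def performance_index(y_check, y_pred, metric="weighted_f1"):
    """Evaluate a trained (second) base classifier on the check set."""
    if metric == "accuracy":
        return accuracy_score(y_check, y_pred)
    # "micro_f1" / "macro_f1" / "weighted_f1" map onto sklearn's average=
    return f1_score(y_check, y_pred, average=metric.replace("_f1", ""))
```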
S105, determining the voting weight of each second base classifier according to its performance index value.
In the embodiments of the present specification, to improve the accuracy of ensemble learning, each second base classifier may be assigned a voting weight with reference to the performance index value obtained during its training. For example, in one embodiment of the present description, the voting weight w of each second base classifier is determined from its performance index value $w_{ori}$ and a weight decay factor a.
S106, performing lithofacies classification on the test set in parallel by using the plurality of second base classifiers, and correspondingly obtaining lithofacies classification sub-results of each second base classifier.
Similar to training, because the second base classifiers are independent of each other, multiple threads can be started to test them on the test set in parallel, improving the efficiency of the test processing. After the parallel test completes, each second base classifier yields one lithofacies classification sub-result.
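One possible realization of this parallel test step, using joblib threads (the patent specifies multithreading but no particular library):

```python
from joblib import Parallel, delayed

def parallel_sub_results(classifiers, X_test, n_jobs=-1):
    """Run every second base classifier on the test set in parallel;
    element j of the result is classifier j's facies sub-result array."""
    return Parallel(n_jobs=n_jobs, prefer="threads")(
        delayed(clf.predict)(X_test) for clf in classifiers)
```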
S107, combining the lithofacies classification sub-results by voting according to the voting weights, thereby obtaining the lithofacies classification result.
In the embodiments of the present specification, because the voting weights of the second base classifiers differ, two identical lithofacies classification sub-results from two second base classifiers contribute differently to the voting combination if their voting weights differ; the voting combination of the second base classifiers forms the ensemble classifier. For example, in an exemplary embodiment, suppose there are second base classifiers N1, N2, N3, N4 with corresponding voting weights 0.5, 0.8, 1, 0.9. After the four second base classifiers are tested in parallel on the test set, their lithofacies classification sub-results are facies 1 ($c_1$), facies 1 ($c_1$), facies 2 ($c_2$), facies 2 ($c_2$). Then after the votes are combined:

the vote for facies 1 is: $0.5c_1 + 0.8c_1 = 1.3c_1$;

the vote for facies 2 is: $1.0c_2 + 0.9c_2 = 1.9c_2$.

Obviously the vote for facies 2 is higher, so the voting combination finally yields facies 2 as the lithofacies classification result.
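The same weighted vote, rendered as a small helper that reproduces the N1..N4 example (illustrative code, not from the patent):

```python
from collections import defaultdict

def weighted_vote(sub_results, weights):
    """Sum the voting weight received by each facies; return the winner."""
    tally = defaultdict(float)
    for facies, w in zip(sub_results, weights):
        tally[facies] += w
    return max(tally, key=tally.get)

# the N1..N4 example: sub-results [c1, c1, c2, c2], weights 0.5, 0.8, 1, 0.9
print(weighted_vote(["c1", "c1", "c2", "c2"], [0.5, 0.8, 1.0, 0.9]))
# -> 'c2' (1.9 votes vs 1.3)
```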
For example, in an exemplary embodiment of the present specification, the confusion matrix and ROC curve of the prediction results of one ensemble classifier are shown in FIG. 5 and FIG. 6, respectively. In FIG. 6, the abscissa is the false positive rate and the ordinate is the true positive rate. As can be seen from FIG. 5 and FIG. 6, the ensemble classifier has a high lithofacies recognition capability.
Therefore, the embodiments of the specification perform ensemble learning with a weighted-voting combination strategy, so that the final lithofacies classifier has better generalization performance and resistance to overfitting, improving the classification accuracy of the naive Bayes-based lithofacies classifier. In addition, because the embodiments can randomly generate a plurality of training subsets according to feature combinations randomly selected from the training set and their component numbers, the time and labor of manually selecting training subsets are avoided, improving the learning efficiency of the naive Bayes-based lithofacies classifier. Moreover, the embodiments use parallel processing in both training and testing, further improving the learning efficiency of the naive Bayes-based lithofacies classifier.
Corresponding to the above naive Bayes lithofacies classification ensemble learning method based on feature randomness, the specification also provides an electronic device. In some embodiments of the present description, the electronic device may include a memory, a processor, and a computer program stored on the memory; when executed by the processor, the computer program may perform the following steps:
acquiring various logging data of a target work area and preprocessing the logging data;
performing random stratified sampling on the preprocessed logging data, and forming a training set and a test set according to a preset proportion;
randomly generating a plurality of training subsets according to feature combinations randomly selected from the training set and the component numbers of the feature combinations;
training a plurality of first base classifiers correspondingly in parallel by using the plurality of training subsets to obtain a plurality of second base classifiers and performance index values thereof; the first base classifier is a naive Bayes classifier;
determining the voting weight of each second base classifier according to the performance index value of the second base classifier;
performing lithofacies classification on the test set in parallel by using the plurality of second base classifiers, and correspondingly obtaining lithofacies classification sub-results of each second base classifier;
and voting and combining the lithofacies classification sub-results according to the voting weight so as to obtain lithofacies classification results.
While the process flows described above include operations occurring in a particular order, it should be appreciated that the processes may include more or fewer operations, and that these operations may be performed sequentially or in parallel (e.g., using parallel processors or a multi-threaded environment).
Corresponding to the above naive Bayes lithofacies classification ensemble learning method based on feature randomness, the specification also provides a naive Bayes lithofacies classification ensemble learning apparatus based on feature randomness. Referring to FIG. 7, in some embodiments of the present description, the apparatus may comprise:
the acquisition module 71 may be configured to acquire and preprocess a plurality of kinds of logging data of a target work area;
the dividing module 72 may be configured to perform random hierarchical sampling on the preprocessed multiple kinds of logging data, and form a training set and a test set according to a preset ratio;
a generating module 73, configured to randomly generate a plurality of training subsets according to feature combinations randomly selected from the training set and the component numbers thereof;
a training module 74, configured to train a plurality of first base classifiers in parallel correspondingly by using the plurality of training subsets, so as to obtain a plurality of second base classifiers and performance index values thereof; the first base classifier is a naive Bayes classifier;
a determining module 75, configured to determine a voting weight of each second base classifier according to the performance index value of the second base classifier;
the testing module 76 may be configured to perform facies classification on the test set in parallel by using the plurality of second base classifiers, and correspondingly obtain a facies classification sub-result of each second base classifier;
the voting module 77 may be configured to perform voting combination on the lithofacies classification sub-results according to the voting weights, so as to obtain lithofacies classification results.
In some embodiments of the present description, when training the first base classifier, the probability distribution of each discrete feature in the training subset may be calculated according to the formula

$$p(x_i \mid c) = \frac{|D_{c,x_i}|}{|D_c|}$$

and taken as the class conditional probability of the discrete feature;

wherein c denotes the lithofacies class; $x_i$ denotes the value of the i-th feature; $p(x_i \mid c)$ denotes the probability of $x_i$ occurring given that the lithofacies class is c; $|D_c|$ denotes the total number of samples whose lithofacies class is c; and $|D_{c,x_i}|$ denotes the number of samples whose lithofacies class is c and whose i-th feature takes the value $x_i$.
In some embodiments of the present description, when training the first base classifier, the probability density of each continuous feature in the training subset may be calculated according to the following formula and taken as the class conditional probability of the continuous feature:

$$p(x_i \mid c) = \sum_{k=1}^{K} \alpha_k \frac{1}{\sqrt{2\pi}\,\sigma_k} \exp\!\left( -\frac{(x_i - \mu_k)^2}{2\sigma_k^2} \right)$$

wherein $p(x_i \mid c)$ denotes the probability of $x_i$ occurring given that the lithofacies class is c; c denotes the lithofacies class; $x_i$ denotes the value of the i-th feature; K is the number of Gaussian components; $\mu_k$ is the mean of the k-th Gaussian component; $\alpha_k$ is the weight coefficient of the k-th Gaussian component; and $\sigma_k$ is the standard deviation of the k-th Gaussian component.
In some embodiments of the present description, the objective function when training the first base classifier may be:

$$h^*(x) = \arg\max_{c} P(c) \prod_{i=1}^{n} p(x_i \mid c)$$

wherein $h^*(x)$ is the objective function; $x = (x_1, x_2, \ldots, x_n)$ is a group of input feature values to be classified; $x_i$ denotes the value of the i-th feature; c denotes the lithofacies class; P(c) is the prior probability of lithofacies class c; $p(x_i \mid c)$ denotes the probability of $x_i$ occurring given that the lithofacies class is c; and n is the number of randomly selected features.
In some embodiments of the present description, the determining the voting weight according to the performance index value of each second base classifier may include: determining the voting weight w of each second base classifier from its performance index value $w_{ori}$ and a weight decay factor a.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functions of the various elements may be implemented in the same one or more software and/or hardware implementations of the present description.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the specification. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
As will be appreciated by one skilled in the art, embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The described embodiments may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for system embodiments, because they are substantially similar to process embodiments, the description is relatively simple, and reference may be made to some descriptions of process embodiments for related points. In the description of the specification, reference to the description of the term "one embodiment", "some embodiments", "an example", "a specific example", or "some examples", etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the embodiments of the specification. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the various embodiments or examples and features of the various embodiments or examples described in this specification can be combined and combined by those skilled in the art without contradiction.
The above description is only an embodiment of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. A naive Bayes lithofacies classification ensemble learning method based on feature randomness is characterized by comprising the following steps:
acquiring various logging data of a target work area and preprocessing the logging data;
performing random stratified sampling on the preprocessed logging data, and forming a training set and a test set according to a preset proportion;
randomly generating a plurality of training subsets according to feature combinations randomly selected from the training set and the component numbers of the feature combinations;
training a plurality of first base classifiers correspondingly in parallel by using the plurality of training subsets to obtain a plurality of second base classifiers and performance index values thereof; the first base classifier is a naive Bayes classifier;
determining the voting weight of each second base classifier according to the performance index value of the second base classifier;
performing lithofacies classification on the test set in parallel by using the plurality of second base classifiers, and correspondingly obtaining lithofacies classification sub-results of each second base classifier;
and voting and combining the lithofacies classification sub-results according to the voting weight so as to obtain lithofacies classification results.
2. The feature-random-based naive Bayes lithofacies classification ensemble learning method of claim 1, wherein, when training the first base classifier, the probability distribution of each discrete feature in the training subset is calculated according to the formula

$$p(x_i \mid c) = \frac{|D_{c,x_i}|}{|D_c|}$$

and taken as the class conditional probability of the discrete feature;

wherein c denotes the lithofacies class; $x_i$ denotes the value of the i-th feature; $p(x_i \mid c)$ denotes the probability of $x_i$ occurring given that the lithofacies class is c; $|D_c|$ denotes the total number of samples whose lithofacies class is c; and $|D_{c,x_i}|$ denotes the number of samples whose lithofacies class is c and whose i-th feature takes the value $x_i$.
3. The feature-random-based naive Bayes lithofacies classification ensemble learning method of claim 1, wherein, when training the first base classifier, the probability density of each continuous feature in the training subset is calculated according to the following formula and taken as the class conditional probability of the continuous feature:

$$p(x_i \mid c) = \sum_{k=1}^{K} \alpha_k \frac{1}{\sqrt{2\pi}\,\sigma_k} \exp\!\left( -\frac{(x_i - \mu_k)^2}{2\sigma_k^2} \right)$$

wherein $p(x_i \mid c)$ denotes the probability of $x_i$ occurring given that the lithofacies class is c; c denotes the lithofacies class; $x_i$ denotes the value of the i-th feature; K is the number of Gaussian components; $\mu_k$ is the mean of the k-th Gaussian component; $\alpha_k$ is the weight coefficient of the k-th Gaussian component; and $\sigma_k$ is the standard deviation of the k-th Gaussian component.
4. The feature-random-based naive Bayes lithofacies classification ensemble learning method of claim 1, wherein the objective function when training the first base classifier is:

$$h^*(x) = \arg\max_{c} P(c) \prod_{i=1}^{n} p(x_i \mid c)$$

wherein $h^*(x)$ is the objective function; x is a group of input feature values to be classified; $x_i$ denotes the value of the i-th feature; c denotes the lithofacies class; P(c) is the prior probability of lithofacies class c; $p(x_i \mid c)$ denotes the probability of $x_i$ occurring given that the lithofacies class is c; and n is the number of randomly selected features.
5. The feature-random-based naive Bayes lithofacies classification ensemble learning method of claim 1, wherein said determining the voting weight according to the performance index value of each second base classifier comprises: determining the voting weight w of each second base classifier from its performance index value $w_{ori}$ and a weight decay factor a.
6. A naive Bayes lithofacies classification ensemble learning apparatus based on feature randomness, characterized by comprising:
the acquisition module is used for acquiring various logging data of a target work area and preprocessing the logging data;
the dividing module is used for randomly sampling the preprocessed various logging data in a layered manner and forming a training set and a testing set according to a preset proportion;
the generating module is used for randomly generating a plurality of training subsets according to feature combinations randomly selected from the training set and the component numbers thereof;
the training module is used for training a plurality of first base classifiers correspondingly in parallel by using the training subsets to obtain a plurality of second base classifiers and performance index values thereof; the first base classifier is a naive Bayes classifier;
the determining module is used for determining the voting weight of each second base classifier according to the performance index value of the second base classifier;
the testing module is used for carrying out lithofacies classification on the test set in parallel by utilizing the plurality of second base classifiers and correspondingly obtaining lithofacies classification sub-results of each second base classifier;
and the voting module is used for voting and combining the lithofacies classification sub-results according to the voting weight so as to obtain lithofacies classification results.
7. The feature-random-based naive Bayes lithofacies classification ensemble learning apparatus of claim 6, wherein, when training the first base classifier, the probability distribution of each discrete feature in the training subset is calculated according to the formula

$$p(x_i \mid c) = \frac{|D_{c,x_i}|}{|D_c|}$$

and taken as the class conditional probability of the discrete feature;

wherein c denotes the lithofacies class; $x_i$ denotes the value of the i-th feature; $p(x_i \mid c)$ denotes the probability of $x_i$ occurring given that the lithofacies class is c; $|D_c|$ denotes the total number of samples whose lithofacies class is c; and $|D_{c,x_i}|$ denotes the number of samples whose lithofacies class is c and whose i-th feature takes the value $x_i$.
8. The feature-random-based naive Bayes lithofacies classification ensemble learning apparatus of claim 6, wherein, when training the first base classifier, the probability density of each continuous feature in the training subset is calculated according to the following formula and taken as the class conditional probability of the continuous feature:

$$p(x_i \mid c) = \sum_{k=1}^{K} \alpha_k \frac{1}{\sqrt{2\pi}\,\sigma_k} \exp\!\left( -\frac{(x_i - \mu_k)^2}{2\sigma_k^2} \right)$$

wherein $p(x_i \mid c)$ denotes the probability of $x_i$ occurring given that the lithofacies class is c; c denotes the lithofacies class; $x_i$ denotes the value of the i-th feature; K is the number of Gaussian components; $\mu_k$ is the mean of the k-th Gaussian component; $\alpha_k$ is the weight coefficient of the k-th Gaussian component; and $\sigma_k$ is the standard deviation of the k-th Gaussian component.
9. The feature-random-based naive Bayes lithofacies classification ensemble learning apparatus of claim 6, wherein the objective function when training the first base classifier is:

$$h^*(x) = \arg\max_{c} P(c) \prod_{i=1}^{n} p(x_i \mid c)$$

wherein $h^*(x)$ is the objective function; x is a group of input feature values to be classified; $x_i$ denotes the value of the i-th feature; c denotes the lithofacies class; P(c) is the prior probability of lithofacies class c; $p(x_i \mid c)$ denotes the probability of $x_i$ occurring given that the lithofacies class is c; and n is the number of randomly selected features.
10. The feature-random-based naive Bayes lithofacies classification ensemble learning apparatus of claim 6, wherein said determining the voting weight according to the performance index value of each second base classifier comprises: determining the voting weight w of each second base classifier from its performance index value $w_{ori}$ and a weight decay factor a.
CN202010613340.7A 2020-06-30 2020-06-30 Naive Bayes lithofacies classification integrated learning method and device based on feature randomness Pending CN112329804A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010613340.7A CN112329804A (en) 2020-06-30 2020-06-30 Naive Bayes lithofacies classification integrated learning method and device based on feature randomness


Publications (1)

Publication Number Publication Date
CN112329804A true CN112329804A (en) 2021-02-05

Family

ID=74304329

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010613340.7A Pending CN112329804A (en) 2020-06-30 2020-06-30 Naive Bayes lithofacies classification integrated learning method and device based on feature randomness

Country Status (1)

Country Link
CN (1) CN112329804A (en)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050246307A1 (en) * 2004-03-26 2005-11-03 Datamat Systems Research, Inc. Computerized modeling method and a computer program product employing a hybrid Bayesian decision tree for classification
WO2017082897A1 (en) * 2015-11-11 2017-05-18 Halliburton Energy Services Inc. Method for computing lithofacies probability using lithology proximity models
CN106683122A (en) * 2016-12-16 2017-05-17 华南理工大学 Particle filtering method based on Gaussian mixture model and variational Bayes
CN106980822A (en) * 2017-03-14 2017-07-25 北京航空航天大学 Rotating machinery fault diagnosis method based on selective ensemble learning
CN107967452A (en) * 2017-11-24 2018-04-27 广州博进信息技术有限公司 Video-based deep-sea mineral distribution identification method and system
CN108062448A (en) * 2017-12-25 2018-05-22 济南大学 Predictive modeling and analysis method, device, and storage medium for slope stability
CN108388921A (en) * 2018-03-05 2018-08-10 中国石油集团工程技术研究院有限公司 Real-time overflow and loss identification method based on random forest
CN109036568A (en) * 2018-09-03 2018-12-18 浪潮软件集团有限公司 Method for establishing prediction model based on naive Bayes algorithm
CN109164491A (en) * 2018-10-15 2019-01-08 中国石油大学(北京) Seismic facies identification method and system based on category support vector machines
CN109611087A (en) * 2018-12-11 2019-04-12 中国石油大学(北京) Intelligent prediction method and system for volcanic reservoir parameters
US20200183047A1 (en) * 2018-12-11 2020-06-11 Exxonmobil Upstream Research Company Automated Reservoir Modeling Using Deep Generative Networks
CN109919184A (en) * 2019-01-28 2019-06-21 中国石油大学(北京) Multi-well complex lithology intelligent identification method and system based on well log data
CN110222744A (en) * 2019-05-23 2019-09-10 成都信息工程大学 Naive Bayes classification model improvement method based on attribute weighting

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
XUEFENG ZHANG: "New method for radar HRRP recognition and rejection based on weighted majority voting combination of multiple classifiers", 2011 IEEE INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING, COMMUNICATIONS AND COMPUTING (ICSPCC), 27 October 2011 (2011-10-27), pages 1 - 4 *
WU ZETAI: "Research and system design of WiFi indoor positioning algorithms based on location fingerprints", China Masters' Theses Full-text Database (Information Science and Technology), no. 2018, 15 June 2018 (2018-06-15), pages 136 - 545 *
FANG KUANGNAN: "Application of Random Forest Combination Forecasting Theory in Finance", vol. 2012, 31 May 2012, Xiamen University Press, pages 1 - 228 *
YULONG FEIXUE: "Research on lithology identification methods based on deep neural networks", China Masters' Theses Full-text Database (Basic Sciences), no. 2023, pages 011 - 202 *
QU XIAOTING: "Research on imbalanced data classification algorithms for complex reservoir lithology identification", China Masters' Theses Full-text Database (Basic Sciences), no. 2018, pages 011 - 727 *
ZHAO MING: "Naive Bayes lithology identification based on EM and GMM", Computer Systems & Applications, no. 2019, pages 38 - 44 *
CHEN SONGFENG: "Building a Bayes-based ensemble classifier using PCA and AdaBoost", China Masters' Theses Full-text Database (Information Science and Technology), no. 2011, pages 138 - 274 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112926680A (en) * 2021-03-29 2021-06-08 成都理工大学 Microbial rock sedimentary microfacies identification method based on a Bayesian neural network
CN113344359A (en) * 2021-05-31 2021-09-03 西南石油大学 Random-forest-based method for quantitatively evaluating the main controlling factors of tight sandstone gas reservoir quality

Similar Documents

Publication Publication Date Title
Ross et al. P wave arrival picking and first‐motion polarity determination with deep learning
Saporetti et al. Machine learning approaches for petrographic classification of carbonate-siliciclastic rocks using well logs and textural information
CN107678059B (en) Method, apparatus, and system for reservoir gas-bearing identification
Liu et al. Deep classified autoencoder for lithofacies identification
CN111783825A (en) Well logging lithology identification method based on convolutional neural network learning
Wei et al. Characterizing rock facies using machine learning algorithm based on a convolutional neural network and data padding strategy
CN109113729B (en) Lithology identification method and device based on well logging curve
CN113344050A (en) Intelligent lithology identification method and system based on deep learning
CN110837115B (en) Seismic identification method and device for the lithology of continental mixed-rock tight reservoirs
CN110097069A (en) Support vector machine lithofacies identification method and device based on deep multiple kernel learning
Wrona et al. 3D seismic interpretation with deep learning: A brief introduction
CN112329804A (en) Naive Bayes lithofacies classification integrated learning method and device based on feature randomness
Kim et al. Selection of augmented data for overcoming the imbalance problem in facies classification
Brown et al. Machine learning on Crays to optimize petrophysical workflows in oil and gas exploration
CN117272841A (en) Shale gas sweet spot prediction method based on a hybrid neural network
CN116427915A (en) Fracture density prediction method and system from conventional well logging curves based on random forest
CN117408167A (en) Debris flow disaster vulnerability prediction method based on deep neural network
CN114064459A (en) Software defect prediction method based on generation countermeasure network and ensemble learning
CN111832636A (en) Naive Bayes lithofacies classification method and device based on feature combination
CN112990567A (en) Method, device, terminal and storage medium for establishing coal bed gas content prediction model
Hong et al. A novel approach to the automatic classification of wireline log-predicted sedimentary microfacies based on object detection
CN115660221B (en) Oil and gas reservoir economic recoverable reserve assessment method and system based on hybrid neural network
Kurniadi et al. Local mean imputation for handling missing value to provide more accurate facies classification
Saikia et al. Reservoir facies classification using convolutional neural networks
CN111580179A (en) Method, device and system for determining organic carbon content

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination