CN113392924A - Method for identifying acoustoelectric imaging log and related equipment - Google Patents

Method for identifying acoustoelectric imaging log and related equipment

Info

Publication number
CN113392924A
CN113392924A (application CN202110731946.5A)
Authority
CN
China
Prior art keywords
imaging log
neural network
acoustoelectric imaging
acoustoelectric
log
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110731946.5A
Other languages
Chinese (zh)
Other versions
CN113392924B (en)
Inventor
黄琳
侯振学
张国强
范川
高永德
徐大年
黄瑞
张宏伟
钱玉萍
王晓飞
向威
韩东春
蔡瑞豪
刘世伟
杨福林
Current Assignee
China Oilfield Services Ltd
Original Assignee
China Oilfield Services Ltd
Priority date
Filing date
Publication date
Application filed by China Oilfield Services Ltd filed Critical China Oilfield Services Ltd
Priority to CN202110731946.5A
Publication of CN113392924A
Application granted
Publication of CN113392924B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/30 Assessment of water resources

Abstract

The application relates to the technical field of well logging, and provides a method for identifying an acoustoelectric imaging log and related equipment. The method comprises: acquiring an acoustoelectric imaging log; and performing geological feature recognition on the acoustoelectric imaging log with a neural network model to obtain the geological feature type corresponding to the log. The neural network model is trained on sample acoustoelectric imaging logs and their corresponding labels, where each label indicates the geological feature type actually shown by the sample log; the geological feature types include a bedding type, a fracture type, and a hole type. By this method, geological feature types are identified automatically, which improves identification efficiency.

Description

Method for identifying acoustoelectric imaging log and related equipment
Technical Field
The application relates to the technical field of well logging, in particular to a recognition method of an acoustoelectric imaging log and related equipment.
Background
Imaging logging is a method of imaging the borehole wall and the objects surrounding the borehole with physical parameters, based on observation of the geophysical field in the borehole. Borehole wall imaging is an important component of imaging logging, and specifically comprises borehole wall acoustic imaging and formation micro-resistivity scanning imaging (FMI). Images obtained by either technique may be collectively referred to as acoustoelectric imaging logs. An acoustoelectric imaging log provides rich information about the borehole wall and its surroundings; through image processing, it can be analyzed quantitatively and qualitatively to interpret the lithology, structure, and bedding of different geological features and to locate oil and gas layers, so acoustoelectric imaging logs play an important role in oil and gas exploration.
In the related art, before an acoustoelectric imaging log can be analyzed, a technician must manually identify the geological feature type it corresponds to; the geological feature types include a bedding type for indicating bedding, a fracture type for indicating fractures, and a hole type for indicating holes. Relying on technicians for geological feature type identification entails a large workload and low efficiency.
Disclosure of Invention
The embodiments of the application provide a method for identifying an acoustoelectric imaging log and related equipment, aiming to solve the problems of large workload and low efficiency in the related art, where technicians manually identify the geological feature types corresponding to acoustoelectric imaging logs.
Other features and advantages of the present application will be apparent from the following detailed description, or may be learned by practice of the application.
According to an aspect of an embodiment of the present application, there is provided an identification method of an acoustoelectric imaging log, including: acquiring an acoustoelectric imaging log; carrying out geological feature recognition by a neural network model according to the acoustoelectric imaging log map to obtain a geological feature type corresponding to the acoustoelectric imaging log map; the neural network model is obtained by training a sample acoustoelectric imaging log and a label corresponding to the sample acoustoelectric imaging log, wherein the label is used for indicating the geological feature type corresponding to the sample acoustoelectric imaging log; the geological feature types include a bedding type, a fracture type, and a hole type.
In some embodiments of the present application, the neural network model includes an input layer, a convolutional neural network, a fully-connected neural network, and an output layer;
the geological feature recognition is carried out by the neural network model according to the acoustoelectric imaging log map, and the geological feature type corresponding to the acoustoelectric imaging log map is obtained, and the method comprises the following steps:
inputting the sono-electric imaging log into the input layer;
performing convolution feature extraction on the output of the input layer by the convolution neural network to obtain a convolution feature vector corresponding to the acoustoelectric imaging log;
fully connecting the convolution characteristic vectors by the fully-connected neural network to obtain fully-connected characteristic vectors corresponding to the acoustoelectric imaging log;
and classifying by the output layer according to the fully-connected feature vector, and outputting a geological feature label corresponding to the acoustoelectric imaging log, wherein the geological feature label is used for indicating the geological feature type corresponding to the acoustoelectric imaging log.
In some embodiments of the present application, the convolutional neural network comprises one or more cascaded first neural network layers comprising a cascaded convolutional layer, a first activation function layer, and a pooling layer; the fully-connected neural network includes one or more second neural network layers including a cascaded fully-connected layer, a second activation function layer, and a Dropout layer.
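As a concrete illustration, the described stack of cascaded first neural network layers (convolution, activation, pooling) followed by second neural network layers (fully-connected, activation, Dropout) can be sketched in PyTorch. All channel counts, kernel sizes, the 64×64 input resolution, the dropout rate, and the class ordering below are illustrative assumptions, not values specified by the application:

```python
import torch
import torch.nn as nn

class GeoFeatureNet(nn.Module):
    """Sketch of the described model: input layer -> cascaded first neural
    network layers (convolution, activation, pooling) -> second neural
    network layer (fully-connected, activation, Dropout) -> output layer
    with one score per geological feature type (bedding / fracture / hole).
    All sizes here are illustrative assumptions."""

    def __init__(self, num_classes: int = 3):
        super().__init__()
        # Two cascaded "first neural network layers"
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # One "second neural network layer" plus the output layer
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(self.conv(x))

model = GeoFeatureNet().eval()             # eval() disables Dropout for inference
logits = model(torch.randn(1, 1, 64, 64))  # one 64x64 single-channel log image
predicted_type = logits.argmax(dim=1)      # index of the predicted geological feature type
```

Each 64×64 input is halved twice by the pooling layers, giving the 32 × 16 × 16 flattened size assumed by the first fully-connected layer.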
In some embodiments of the present application, the method further comprises:
acquiring a training sample set, wherein the training sample set comprises one or more sample acoustoelectric imaging well logs and label labels corresponding to the sample acoustoelectric imaging well logs, and the label labels are used for indicating geological feature types actually corresponding to the sample acoustoelectric imaging well logs;
performing geological feature type prediction on the sample acoustoelectric imaging log by the neural network model to obtain a prediction label corresponding to the sample acoustoelectric imaging log;
adjusting parameters of the neural network model according to the prediction label corresponding to the sample acoustoelectric imaging log and the labeling label corresponding to the sample acoustoelectric imaging log;
and when the convergence condition of the neural network model is reached, finishing the training of the neural network model.
In some embodiments of the present application, the acquiring an acoustoelectric imaging log includes:
acquiring an original acoustoelectric imaging log;
and preprocessing the original acoustoelectric imaging log to obtain the acoustoelectric imaging log, wherein the preprocessing comprises at least one of filtering processing and image enhancement processing.
In some embodiments of the present application, after the performing, by the neural network model, geological feature recognition according to the acoustoelectric imaging log to obtain a geological feature type corresponding to the acoustoelectric imaging log, the method further includes:
carrying out image segmentation on the acoustoelectric imaging log to obtain a binary image;
performing edge detection according to the gray value of each pixel point in the binary image, and determining a contour line in the acoustoelectric imaging log;
and calculating geological characteristic parameters according to the determined pixel points where the contour lines in the acoustoelectric imaging log are located, wherein the geological characteristic parameters are used for describing geological characteristics shown by the acoustoelectric imaging log, and the geological characteristics comprise bedding, cracks and holes.
In some embodiments of the present application, the performing edge detection according to the gray-scale value of each pixel point in the binary image to determine a contour line in the acoustoelectric imaging log includes:
marking the entity object in the binary image according to the gray value of each pixel point in the binary image;
carrying out contour tracking and extraction on the marked entity object in the binary image, and determining the boundary point of the entity object;
and performing curve fitting according to the boundary points of the entity object to obtain a contour line in the acoustoelectric imaging log.
According to an aspect of an embodiment of the present application, there is provided an apparatus for identifying an acoustoelectric imaging log, including: the acquisition module is used for acquiring an acoustoelectric imaging log; the geological feature recognition module is used for carrying out geological feature recognition by the neural network model according to the acoustoelectric imaging log map to obtain a geological feature type corresponding to the acoustoelectric imaging log map; the neural network model is obtained by training a sample acoustoelectric imaging log and a label corresponding to the sample acoustoelectric imaging log, wherein the label is used for indicating the geological feature type corresponding to the sample acoustoelectric imaging log; the geological feature types include a bedding type, a fracture type, and a hole type.
In some embodiments of the present application, the neural network model includes an input layer, a convolutional neural network, a fully-connected neural network, and an output layer; the geological feature recognition module comprises: an input unit for inputting the acoustoelectric imaging log into the input layer; the convolution feature extraction unit is used for performing convolution feature extraction on the output of the input layer by the convolutional neural network to obtain a convolution feature vector corresponding to the acoustoelectric imaging log; the full-connection unit is used for fully connecting the convolution feature vectors by the fully-connected neural network to obtain a fully-connected feature vector corresponding to the acoustoelectric imaging log; and the classification unit is used for classifying, by the output layer, according to the fully-connected feature vector, and outputting a geological feature label corresponding to the acoustoelectric imaging log, wherein the geological feature label is used for indicating the geological feature type corresponding to the acoustoelectric imaging log.
In some embodiments of the present application, the convolutional neural network comprises one or more cascaded first neural network layers comprising a cascaded convolutional layer, a first activation function layer, and a pooling layer; the fully-connected neural network includes one or more second neural network layers including a cascaded fully-connected layer, a second activation function layer, and a Dropout layer.
In some embodiments of the present application, the apparatus for identifying an acoustoelectric imaging log further comprises: a training sample set acquisition module, used for acquiring a training sample set, wherein the training sample set comprises one or more sample acoustoelectric imaging well logs and labeling labels corresponding to the sample acoustoelectric imaging well logs, and the labeling labels are used for indicating geological feature types actually corresponding to the sample acoustoelectric imaging well logs; the prediction module is used for performing geological feature type prediction on the sample acoustoelectric imaging log by the neural network model to obtain a prediction label corresponding to the sample acoustoelectric imaging log; the parameter adjusting module is used for adjusting parameters of the neural network model according to the prediction label and the labeling label corresponding to the sample acoustoelectric imaging log; and the training ending module is used for ending the training of the neural network model when the convergence condition of the neural network model is reached.
In some embodiments of the present application, the obtaining module comprises: the acquisition unit is used for acquiring an original acoustoelectric imaging log; and the preprocessing unit is used for preprocessing the original acoustoelectric imaging log to obtain the acoustoelectric imaging log, and the preprocessing comprises at least one of filtering processing and image enhancement processing.
In some embodiments of the present application, the apparatus for identifying an acoustoelectric imaging log further comprises: the image segmentation module is used for carrying out image segmentation on the acoustoelectric imaging log to obtain a binary image; the edge detection module is used for carrying out edge detection according to the gray value of each pixel point in the binary image and determining a contour line in the acoustoelectric imaging log map; and the geological characteristic parameter calculation module is used for calculating geological characteristic parameters according to the determined pixel points where the contour lines in the acoustoelectric imaging log are located, wherein the geological characteristic parameters are used for describing geological characteristics shown by the acoustoelectric imaging log, and the geological characteristics comprise bedding, cracks and holes.
In some embodiments of the present application, the edge detection module comprises: the marking unit is used for marking the entity object in the binary image according to the gray value of each pixel point in the binary image; the boundary point determining unit is used for carrying out contour tracking and extraction on the marked entity object in the binary image and determining the boundary point of the entity object; and the curve fitting unit is used for performing curve fitting according to the boundary points of the entity object to obtain a contour line in the acoustoelectric imaging log.
According to an aspect of an embodiment of the present application, there is provided an electronic device including: a processor; a memory having computer readable instructions stored thereon which, when executed by the processor, implement a method of identification of an acoustoelectric imaging log as described above.
According to an aspect of embodiments herein, there is provided a computer readable storage medium having stored thereon computer readable instructions which, when executed by a processor, implement a method of identification of an acoustoelectric imaging log as described above.
According to the scheme, after the neural network model is obtained through training, the geological feature type corresponding to the acoustoelectric imaging log map can be automatically identified through the neural network model, and the acoustoelectric imaging log map can be conveniently subjected to targeted processing subsequently according to the geological feature type corresponding to the acoustoelectric imaging log map. Because the geological feature type corresponding to the acoustoelectric imaging log is not required to be identified and determined by technicians, the workload can be greatly reduced, and the identification efficiency and speed are improved. The neural network model is used for identifying the geological feature types after training, and the accuracy of the identified result can be ensured.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort. In the drawings:
FIG. 1 is a schematic diagram illustrating an environment for implementing the present application, according to one embodiment.
FIG. 2 is a flow diagram illustrating a method of identification of an acoustoelectric imaging log according to one embodiment of the present application.
FIG. 3 is a flow diagram illustrating a method of identification of an acoustoelectric imaging log according to another embodiment of the present application.
FIG. 4 is a flow diagram of step 220 in one embodiment.
FIG. 5 is a flowchart illustrating steps subsequent to step 220, according to one embodiment.
FIG. 6 is a block diagram illustrating an identification device of an acoustoelectric imaging log according to one embodiment.
FIG. 7 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the application. One skilled in the relevant art will recognize, however, that the subject matter of the present application can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the application.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
It should be noted that: reference herein to "a plurality" means two or more. "and/or" describe the association relationship of the associated objects, meaning that there may be three relationships, e.g., A and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
FIG. 1 is a schematic diagram illustrating an environment for implementing the present application, according to one embodiment. As shown in fig. 1, the implementation environment includes an acoustoelectric imaging logging device 110 and a computer device 120, wherein the acoustoelectric imaging logging device 110 is used for performing imaging logging to obtain a raw acoustoelectric imaging log;
the computer device 120 may be a server or a server cluster, and is configured to identify the acquired original sonoelectric imaging log according to the method of the present application, so as to determine a geological feature type corresponding to the original imaging log. The geological feature types comprise a bedding type for indicating bedding, a crack type for indicating a crack and a hole type for indicating a hole. From the identified type of geologic feature, it can be determined which geologic feature (also referred to as a geologic structure) the sono-electric imaging log shows. Correspondingly, geological features include bedding, fractures, and holes.
In some embodiments of the present application, since identifying an acoustoelectric imaging log involves a large amount of image processing, and a GPU (Graphics Processing Unit) processes images in parallel at high speed, a GPU may be deployed in the computer device 120 to identify acoustoelectric imaging logs based on the method of the present application.
In some embodiments of the present application, the computer device 120 may further include a display device for displaying the sono-electric imaging log, a recognition result obtained by recognizing the sono-electric imaging log, and the like.
The implementation details of the technical solution of the embodiment of the present application are set forth in detail below:
fig. 2 is a flow chart illustrating a method for identifying an acoustoelectric imaging log according to an embodiment of the present application, which may be performed by a computer device with processing capability, such as a terminal, a server, etc., and is not particularly limited herein. Referring to fig. 2, the method includes at least steps 210 to 220, which are described in detail as follows:
step 210, acquiring an acoustoelectric imaging log.
The acoustoelectric imaging log may be an original image (also called an original acoustoelectric imaging log) obtained by an acoustoelectric imaging logging device, or an image obtained by preprocessing such an original log.
The acoustoelectric imaging log may include images from borehole-wall acoustic imaging and images from formation micro-resistivity scanning imaging.
In some embodiments of the present application, step 210 further comprises: acquiring an original acoustoelectric imaging log; and preprocessing the original acoustoelectric imaging log to obtain the acoustoelectric imaging log, wherein the preprocessing comprises at least one of filtering processing and image enhancement processing.
Wherein the filtering process may be median filtering. Median filtering replaces the gray value of a pixel in the digital image with the median gray value over a neighborhood of that pixel, so that the value stays close to those of the surrounding pixels, thereby eliminating isolated noise points. The median filtering process can be expressed as:
g(x, y) = median{ f(x − k, y − l) | (k, l) ∈ W };  (Formula 1)
where (x, y) are the coordinates of the target pixel to be median-filtered, W is a neighborhood window of length k and width l centered on the target pixel, and median{·} takes the median gray value of all pixels in that window.
Preprocessing the original acoustoelectric imaging log yields a clean image of suitable clarity, preventing noise points or poor definition in the original log from affecting the geological feature recognition result.
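Formula 1 can be sketched directly in numpy; the window size default and the edge-replication border handling are implementation choices not specified by the application:

```python
import numpy as np

def median_filter(img: np.ndarray, k: int = 3, l: int = 3) -> np.ndarray:
    """Median filtering per Formula 1: each pixel's gray value is replaced
    by the median gray value over its k x l neighborhood W. Borders are
    handled by replicating edge pixels (an implementation choice)."""
    pad_y, pad_x = k // 2, l // 2
    padded = np.pad(img, ((pad_y, pad_y), (pad_x, pad_x)), mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            window = padded[y:y + k, x:x + l]  # neighborhood W of pixel (x, y)
            out[y, x] = np.median(window)
    return out

# An isolated noise point is removed while the flat background survives
img = np.full((5, 5), 10.0)
img[2, 2] = 255.0          # isolated noise point
filtered = median_filter(img)
```

On this toy image, every 3×3 window contains at most one noisy value among nine, so the median restores the background value everywhere.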
And step 220, carrying out geological feature recognition by the neural network model according to the acoustoelectric imaging log so as to obtain a geological feature type corresponding to the acoustoelectric imaging log.
The neural network model is obtained by training a sample acoustoelectric imaging log and a label corresponding to the sample acoustoelectric imaging log, wherein the label is used for indicating the geological feature type corresponding to the sample acoustoelectric imaging log; the geological feature types include a bedding type, a fracture type, and a hole type.
The bedding type is used for indicating that the geological features shown in the acoustoelectric imaging log are bedding; the fracture type is used to indicate that the geological feature shown in the acoustoelectric imaging log is a fracture; the hole type is used to indicate that the geological feature shown in the sono-electric imaging log is a hole.
According to the scheme, the geological feature type to which the geological feature shown in the acoustic-electric imaging log is attributed is automatically identified by means of the trained neural network model, so that the acoustic-electric imaging log is conveniently subjected to targeted processing after the geological feature type corresponding to the acoustic-electric imaging log is identified.
The neural network model may be constructed based on a convolutional neural network, a fully-connected neural network, a long short-term memory (LSTM) network, and the like.
In order to ensure the accuracy of the neural network model in geological feature recognition, after the neural network model is constructed, the neural network model needs to be trained through training samples.
The specific process of training the neural network model may be as shown in fig. 3. As shown in fig. 3, includes:
step 310, a training sample set is obtained, wherein the training sample set comprises a plurality of sample acoustoelectric imaging well logs and labeling labels corresponding to the sample acoustoelectric imaging well logs, and the labeling labels are used for indicating geological feature types actually corresponding to the sample acoustoelectric imaging well logs.
The sample acoustoelectric imaging log refers to an acoustoelectric imaging log used for training a built neural network model. Similarly, the sample sonoelectrical imaging log may be a pre-processed image of the original sonoelectrical imaging log.
And the corresponding label of the sample acoustoelectric imaging log is used for indicating the geological characteristics marked on the sample acoustoelectric imaging log by a technician. Geological features may include cracks, holes, and bedding, among others.
And 320, performing geological feature type prediction on the sample acoustoelectric imaging log by the neural network model to obtain a prediction label corresponding to the sample acoustoelectric imaging log.
The prediction label corresponding to the sample acoustoelectric imaging log indicates the geological feature type that the neural network model predicts for that sample log.
And 330, adjusting parameters of the neural network model according to the prediction label corresponding to the sample acoustoelectric imaging log and the labeling label corresponding to the sample acoustoelectric imaging log.
Specifically, if the geological feature type indicated by the prediction label does not match the geological feature type indicated by the labeling label for the sample acoustoelectric imaging log, the parameters of the neural network model are adjusted. After the parameters are adjusted, the neural network model predicts the geological feature type of the sample acoustoelectric imaging log again, until the type indicated by the prediction label matches the type indicated by the labeling label.
And if the geological feature type indicated by the prediction label corresponding to the sample acoustoelectric imaging log is consistent with the geological feature type indicated by the label corresponding to the sample acoustoelectric imaging log, continuing to train the neural network model by using the next sample acoustoelectric imaging log.
Step 340, ending the training of the neural network model when the convergence condition of the neural network model is reached.
In a specific embodiment, the convergence condition of the neural network model may be convergence of a loss function set for the model; the loss function may be a mean square error function, a cross-entropy error function, or the like, chosen according to actual needs.
The convergence condition may also be that the prediction accuracy of the trained neural network model meets a set accuracy requirement. To obtain the prediction accuracy, the neural network model is tested on acoustoelectric imaging logs distinct from the sample acoustoelectric imaging logs (for ease of description, an acoustoelectric imaging log used to test the neural network model is referred to as a test acoustoelectric imaging log).
Specifically, a test acoustoelectric imaging log is input into the trained neural network model, and the model predicts its geological feature type to obtain a corresponding prediction label; the prediction accuracy of the neural network model is then calculated from the geological features indicated by the prediction labels of a plurality of test acoustoelectric imaging logs and those indicated by the corresponding annotation labels.
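The accuracy calculation described here can be sketched in a few lines of Python; the function name and the label values below are illustrative, not from the patent:

```python
def prediction_accuracy(predicted_labels, annotation_labels):
    """Share of test acoustoelectric imaging logs whose predicted
    geological feature type matches the annotated one."""
    correct = sum(p == a for p, a in zip(predicted_labels, annotation_labels))
    return correct / len(annotation_labels)

# Four test logs, three of which are predicted correctly
acc = prediction_accuracy(["fracture", "hole", "bedding", "hole"],
                          ["fracture", "hole", "bedding", "fracture"])
```

The computed accuracy would then be compared against the set accuracy requirement to decide whether training has converged.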
In other embodiments, the convergence condition of the neural network model may also be that the number of iterations of the neural network model reaches a set number.
Through the above training process, it can be ensured that the trained neural network model accurately predicts the geological feature type corresponding to an acoustoelectric imaging log.
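Steps 320-340 amount to a predict-compare-adjust-repeat control flow. The sketch below shows only that control flow; `ToyModel` is a deliberately trivial stand-in (the patent's model is a neural network, not a lookup table), and all names are illustrative:

```python
class ToyModel:
    """Stand-in classifier for illustration only: memorizes one label
    per image key."""
    def __init__(self):
        self.table = {}
    def predict(self, log_image):
        return self.table.get(log_image, "bedding")
    def adjust_parameters(self, log_image, true_label):
        self.table[log_image] = true_label

def train(model, sample_logs, annotation_labels, max_epochs=100):
    """Predict (step 320), adjust on mismatch (step 330), stop when every
    prediction matches its annotation label or the epoch cap is reached
    (a convergence condition, step 340)."""
    for _ in range(max_epochs):
        all_correct = True
        for log_image, true_label in zip(sample_logs, annotation_labels):
            if model.predict(log_image) != true_label:
                model.adjust_parameters(log_image, true_label)
                all_correct = False
        if all_correct:
            break
    return model

model = train(ToyModel(), ["log_a", "log_b"], ["fracture", "hole"])
```

In the real model, `adjust_parameters` would be a gradient-based update driven by the loss function rather than a table write.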
In some embodiments of the present application, the neural network model includes an input layer, a convolutional neural network, a fully-connected neural network, and an output layer; in this embodiment, as shown in fig. 4, step 220 further includes:
Step 410, inputting the acoustoelectric imaging log into the input layer.
Step 420, performing convolution feature extraction on the output of the input layer by the convolutional neural network to obtain a convolution feature vector corresponding to the acoustoelectric imaging log.
In some embodiments of the present application, the convolutional neural network may include several cascaded first neural network layers. Performing convolution feature extraction in multiple stages through cascaded first neural network layers allows the resulting convolution feature vector to express the features of the acoustoelectric imaging log more comprehensively. In other embodiments, the convolutional neural network may instead include a single first neural network layer.
In some embodiments of the present application, the first neural network layer comprises a cascaded convolutional layer, first activation function layer, and pooling layer. The convolutional layer performs a convolution operation on the output of the input layer using the convolution kernels set therein. The first activation function layer improves the nonlinear expression capability of the first neural network layer. In a specific embodiment, the activation function in the first activation function layer may be the ReLU function, whose expression is:
f(x) = max(0, x); (formula 2)
Of course, the activation function in the first activation function layer is not limited to the ReLU function, but may be other activation functions, such as a sigmoid function.
The main purpose of the pooling layer is to compress the image by down-sampling, reducing the number of parameters without materially affecting the image content.
Step 430, fully connecting the convolution feature vectors by the fully-connected neural network to obtain a fully-connected feature vector corresponding to the acoustoelectric imaging log.
In some embodiments of the present application, the fully-connected neural network may include a single second neural network layer, or several cascaded second neural network layers; with cascaded second neural network layers, full connection is performed in multiple stages.
In some embodiments of the present application, the second neural network layer includes a cascaded fully-connected layer, a second activation function layer, and a Dropout layer.
The core operation of the fully-connected layer is matrix-vector multiplication, which is essentially a linear transformation from one feature space to another; the fully-connected layer weights the local features extracted by the preceding neural network layers with a weight matrix.
Like the first activation function layer, the second activation function layer improves the nonlinear expression capability of the second neural network layer. The activation function in the second activation function layer may be the same as or different from that in the first activation function layer, and is not specifically limited here.
The Dropout layer addresses the tendency of the neural network model to overfit; specifically, during training some of its neurons are discarded with a certain probability. The Dropout layer operates as follows: first, half of the hidden neurons in the layer are temporarily removed at random, while the input and output neurons are kept unchanged; the input is then propagated forward through the modified network, and the resulting loss is propagated backward through it. After a mini-batch of training samples completes this process, the parameters of the undeleted neurons are updated by stochastic gradient descent, and these two steps are repeated.
Because some neurons are randomly disabled, features cannot take effect only under fixed combinations; the network is deliberately made to learn common, shared characteristics rather than peculiarities of particular training samples, which effectively prevents overfitting.
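The training-time behaviour of a Dropout layer can be sketched with NumPy. This is the common "inverted dropout" variant, in which surviving activations are rescaled by 1/(1 − rate) so the expected sum is unchanged — a detail the patent does not specify:

```python
import numpy as np

def dropout(activations, rate=0.5, rng=None, training=True):
    """Zero each unit with probability `rate` during training and
    rescale survivors; pass activations through unchanged at inference."""
    if not training:
        return activations
    rng = rng or np.random.default_rng(0)   # fixed seed for reproducibility
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)

out = dropout(np.ones(1000), rate=0.5)
```

At inference time (`training=False`) no neurons are dropped, which is why the rescaling during training is needed.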
Step 440, classifying by the output layer according to the fully-connected feature vector, and outputting a geological feature label corresponding to the acoustoelectric imaging log, wherein the geological feature label indicates the geological feature type corresponding to the acoustoelectric imaging log. The output layer may classify the fully-connected feature vector by a softmax function.
In this embodiment, the convolutional neural network and the fully-connected neural network may be collectively referred to as the hidden layers of the neural network model; the hidden layers generally comprise a plurality of layers.
In one embodiment of the present application, the neural network model includes eight layers: the first layer is the input layer; the second, third, fourth and fifth layers are hidden layers, each comprising a convolutional layer, an activation function layer and a pooling layer; the sixth and seventh layers are hidden layers, each comprising a fully-connected layer, an activation function layer and a Dropout layer; the eighth layer is the output layer, which comprises a softmax layer.
In this embodiment, the second, third, fourth and fifth layers may each serve as a first neural network layer, and together they constitute the convolutional neural network. The sixth and seventh layers may each serve as a second neural network layer, and together they constitute the fully-connected neural network; the eighth layer serves as the output layer.
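The layer stack enumerated above (input, four convolutional hidden layers, two fully-connected hidden layers, and a softmax output) can be written out as a configuration together with a simple spatial-size trace. The filter counts, unit counts, kernel size 3 with 'valid' convolution, pool size 2, and 128-pixel input side are all illustrative assumptions — the patent does not specify them:

```python
# Illustrative layer configuration; all sizes are assumptions.
ARCHITECTURE = [
    ("input",      {}),                                          # layer 1
    ("conv_block", {"filters": 16,  "kernel": 3, "pool": 2}),    # layer 2
    ("conv_block", {"filters": 32,  "kernel": 3, "pool": 2}),    # layer 3
    ("conv_block", {"filters": 64,  "kernel": 3, "pool": 2}),    # layer 4
    ("conv_block", {"filters": 128, "kernel": 3, "pool": 2}),    # layer 5
    ("fc_block",   {"units": 256, "dropout": 0.5}),              # layer 6
    ("fc_block",   {"units": 128, "dropout": 0.5}),              # layer 7
    ("softmax",    {"classes": 3}),  # layer 8: bedding / fracture / hole
]

def output_spatial_size(side, blocks=4, kernel=3, pool=2):
    """Trace one spatial dimension through the conv blocks:
    'valid' convolution shrinks the side by kernel-1, pooling divides it."""
    for _ in range(blocks):
        side = (side - kernel + 1) // pool
    return side

side = output_spatial_size(128)   # 128 -> 63 -> 30 -> 14 -> 6
```

Such a trace fixes the flattened vector size that the first fully-connected layer must accept (here 6 × 6 × 128 values under the assumed sizes).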
Through the process, the neural network model can output the geological feature label corresponding to the acoustoelectric imaging log map based on the input acoustoelectric imaging log map, and further determine the geological feature to which the acoustoelectric imaging log belongs based on the geological feature label.
According to the above scheme, once the neural network model has been trained, the geological feature type corresponding to an acoustoelectric imaging log can be identified automatically by the model, which facilitates subsequent targeted processing of the log according to its geological feature type. In the related art, technicians must identify and determine the geological feature types of acoustoelectric imaging logs; when many logs need to be identified, the workload is heavy, the process is slow, and the determined types may be erroneous due to the technicians' subjective factors. In the present scheme, the geological feature type is identified automatically by the trained neural network model rather than determined by technicians, which greatly reduces the workload and improves identification efficiency and speed. Because the model performs identification after training, the accuracy of the results can be ensured and is not affected by subjective factors.
In some embodiments of the present application, the method of the present application may be used to perform geological feature type identification on each acoustoelectric imaging log in a set of acoustoelectric imaging logs; once the geological feature type corresponding to each log has been determined, the logs in the set may be classified by geological feature type, and the logs belonging to each geological feature type may then be processed according to the processing method corresponding to that type.
In some embodiments of the present application, a processing procedure for performing geologic feature parameter calculation is set for each geologic feature type, so after the above classification procedure is completed, the geologic feature parameter calculation may be performed based on the acoustoelectric imaging log according to the processing procedure for the corresponding geologic feature parameter calculation.
In some embodiments of the present application, as shown in fig. 5, after step 220, the method further comprises:
Step 510, performing image segmentation on the acoustoelectric imaging log to obtain a binary image.
In step 510, the acoustoelectric imaging log may be segmented based on a set threshold. The matrix corresponding to the binary image contains only the values 0 and 1; pixel points with value 0 are displayed in one color and pixel points with value 1 in another. Accordingly, during image segmentation, pixel points in the acoustoelectric imaging log whose gray value is greater than the set threshold are assigned the value 1, and pixel points whose gray value is not greater than the set threshold are assigned the value 0.
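In NumPy terms the threshold segmentation of step 510 is a one-line comparison; the threshold value 128 below is illustrative:

```python
import numpy as np

def binarize(gray, threshold):
    """Gray values above the threshold map to 1, the rest to 0."""
    return (gray > threshold).astype(np.uint8)

gray = np.array([[ 10, 200],
                 [128, 129]])
bw = binarize(gray, 128)   # 128 itself is 'not greater', so it maps to 0
```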
In some embodiments of the present application, since noise may appear during image segmentation, the binary image may further be denoised before step 520. Specifically, the binary image is subjected in sequence to an opening operation (erosion followed by dilation) and a closing operation (dilation followed by erosion): the opening operation removes isolated noise in the binary image, and the closing operation fills small holes. Performing the opening and closing operations thus eliminates noise in the binary image and realizes denoising.
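Opening and closing can be sketched with plain NumPy erosion and dilation; the 3×3 square structuring element is an illustrative choice:

```python
import numpy as np

def erode(b):
    """3x3 erosion: a pixel stays 1 only if its whole 3x3 patch is 1."""
    p = np.pad(b, 1)
    return np.min([p[i:i + b.shape[0], j:j + b.shape[1]]
                   for i in range(3) for j in range(3)], axis=0)

def dilate(b):
    """3x3 dilation: a pixel becomes 1 if any 3x3 neighbour is 1."""
    p = np.pad(b, 1)
    return np.max([p[i:i + b.shape[0], j:j + b.shape[1]]
                   for i in range(3) for j in range(3)], axis=0)

def denoise(binary):
    opened = dilate(erode(binary))   # opening: removes isolated specks
    return erode(dilate(opened))     # closing: fills small holes

noisy = np.zeros((9, 9), dtype=np.uint8)
noisy[2:7, 2:7] = 1                  # a solid 5x5 entity object
noisy[0, 8] = 1                      # an isolated noise pixel
clean = denoise(noisy)
```

The isolated pixel is erased by the opening while the 5×5 object survives both operations intact.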
Step 520, performing edge detection according to the gray value of each pixel point in the binary image, and determining the contour lines in the acoustoelectric imaging log.
In the binary image, contour lines correspond to positions where the gray value changes abruptly: along the direction of a contour line the gray value changes gradually, while perpendicular to it the gray value changes sharply. Therefore, the contour lines in the acoustoelectric imaging log can be determined by performing edge detection through analysis of gray-value gradients or discontinuities in the binary image.
In some embodiments of the present application, step 520 further comprises:
Step 521, marking the entity objects in the binary image according to the gray value of each pixel point in the binary image.
In some embodiments of the present application, a target numbering notation may be employed to mark the entity objects in the binary image. Specifically, if two pixel points in the binary image are eight-connected, they are considered to belong to the same entity object. In an acoustoelectric imaging log, the solid matter represented in the geological formation may be referred to as entity objects, while void regions in the formation, such as identified fractures and holes, may be regarded as non-entity objects.
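Eight-connected labelling of entity objects (step 521) can be sketched as a flood fill; `label_objects` is an illustrative name, not from the patent:

```python
import numpy as np

def label_objects(binary):
    """Assign each 8-connected region of 1-pixels its own label number."""
    labels = np.zeros(binary.shape, dtype=int)
    count = 0
    h, w = binary.shape
    for r in range(h):
        for c in range(w):
            if binary[r, c] and not labels[r, c]:
                count += 1
                stack = [(r, c)]             # flood-fill one entity object
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < h and 0 <= x < w
                            and binary[y, x] and not labels[y, x]):
                        labels[y, x] = count
                        stack += [(y + dy, x + dx)
                                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                                  if (dy, dx) != (0, 0)]
    return labels, count

img = np.array([[1, 0, 0, 1],
                [0, 1, 0, 1],
                [0, 0, 0, 0],
                [1, 1, 0, 0]], dtype=np.uint8)
labels, n = label_objects(img)   # diagonal pixels join, giving 3 objects
```

Because the neighbourhood includes the diagonals, the two diagonally touching pixels in the top-left form a single object — the eight-connectivity criterion described above.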
Step 522, performing contour tracing and extraction on the marked entity object in the binary image, and determining the boundary point of the entity object.
Once the entity objects in the binary image have been marked, they can be identified; the binary image may contain one or more entity objects. Since solids, voids and the like in a geological formation are distributed in an interlaced manner, regions of entity objects and non-entity objects coexist in the binary image. By performing contour tracing and extraction on the marked entity objects, the boundary points between each entity object and the adjacent non-entity objects can be determined.
In some embodiments of the present application, a pixel point belonging to the entity object may be pre-selected as the initial boundary point, and a search is then performed in a preset order (e.g., left to right, top to bottom) to determine the next boundary point; following this process, all boundary points between each entity object and the non-entity objects in the binary image can be determined.
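Boundary-point detection can be sketched directly: scanning left to right, top to bottom, a solid pixel is taken as a boundary point when at least one 4-neighbour is void or off-image (the 4-neighbourhood criterion is an illustrative simplification of contour tracing):

```python
import numpy as np

def boundary_points(binary):
    """Collect pixels of entity objects that touch a non-entity pixel,
    scanning in the preset left-to-right, top-to-bottom order."""
    pts = []
    h, w = binary.shape
    for r in range(h):
        for c in range(w):
            if not binary[r, c]:
                continue
            nbrs = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            if any(not (0 <= y < h and 0 <= x < w) or binary[y, x] == 0
                   for y, x in nbrs):
                pts.append((r, c))
    return pts

obj = np.zeros((5, 5), dtype=np.uint8)
obj[1:4, 1:4] = 1                    # 3x3 object; only its centre is interior
pts = boundary_points(obj)
```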
Step 523, performing curve fitting according to the boundary points of the entity objects to obtain the contour lines in the acoustoelectric imaging log.
In some embodiments of the present application, once the boundary points are determined, they may be transformed into a parameter space by the Hough transform, so as to determine the description parameters of the contour line formed by the boundary points.
Further, once the description parameters of the contour formed by the boundary points have been obtained, curve fitting is performed on those boundary points to obtain the corresponding contour line.
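As a simple stand-in for the Hough-transform parameter estimation, a least-squares fit recovers the description parameters of a straight contour from its boundary points. The point values are illustrative; in practice, planar fractures in an unrolled borehole image trace sinusoids rather than straight lines, so the fitted model would differ:

```python
import numpy as np

# Boundary points lying roughly on a straight contour y ~= 2x + 1
pts = np.array([[0, 1.0], [1, 3.1], [2, 4.9], [3, 7.0]])
slope, intercept = np.polyfit(pts[:, 0], pts[:, 1], deg=1)
```

The pair (slope, intercept) plays the role of the contour's description parameters; evaluating the fitted polynomial then yields the contour line itself.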
Step 530, calculating geological feature parameters according to the pixel points where the determined contour lines in the acoustoelectric imaging log are located, wherein the geological feature parameters describe the geological features shown by the acoustoelectric imaging log, and the geological features include bedding, fractures and holes.
The geological feature parameters used to describe a fracture may include: fracture area, fracture length, average fracture width, maximum fracture width, fracture fractal dimension, dip angle, fracture face porosity, and the like.
The geologic feature parameters used to describe the hole may include: number of holes, length, width, area, diameter, hole area porosity, etc.
Geologic feature parameters used to describe bedding may include: thickness, inclination, etc.
Once the contours of the entity objects in the acoustoelectric imaging log are determined, the regions occupied by non-entity objects can be determined correspondingly; on this basis, the geological feature parameters are calculated from the pixel points in the regions occupied by the non-entity objects.
It is understood that the calculation processes for different geological feature parameters may differ across geological features, but in each case the corresponding feature parameters are calculated based on the determined contour lines.
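With the entity-object regions known, a simple area-ratio parameter such as face porosity follows by pixel counting. This is an illustrative stand-in — the patent does not give the parameter formulas:

```python
import numpy as np

def face_porosity(solid_mask):
    """Fraction of the evaluated window occupied by non-solid (void)
    pixels, i.e. fractures or holes."""
    return 1.0 - float(solid_mask.mean())

solid = np.array([[1, 1, 0, 0],
                  [1, 1, 0, 0]], dtype=float)
phi = face_porosity(solid)   # 4 void pixels out of 8
```

Length, width, dip angle and similar parameters would instead be computed from the geometry of the fitted contour lines rather than from a raw pixel count.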
Embodiments of the apparatus of the present application are described below, which may be used to perform the methods of the above-described embodiments of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the above-described embodiments of the method of the present application.
FIG. 6 is a block diagram illustrating an apparatus for identification of an acoustoelectric imaging log, as shown in FIG. 6, according to one embodiment, including:
an obtaining module 610, configured to obtain an acoustoelectric imaging log;
a geological feature recognition module 620, configured to perform geological feature recognition by the neural network model according to the acoustoelectric imaging log, so as to obtain a geological feature type corresponding to the acoustoelectric imaging log; the neural network model is obtained by training a sample acoustoelectric imaging log and a label corresponding to the sample acoustoelectric imaging log, wherein the label is used for indicating the geological feature type corresponding to the sample acoustoelectric imaging log; the geological feature types include a bedding type, a fracture type, and a hole type.
In some embodiments of the present application, the neural network model includes an input layer, a convolutional neural network, a fully-connected neural network, and an output layer; the geological feature identification module 620 comprises:
an input unit, for inputting the acoustoelectric imaging log into the input layer;
the convolution characteristic extraction unit is used for performing convolution characteristic extraction on the output of the input layer by the convolution neural network to obtain a convolution characteristic vector corresponding to the acoustoelectric imaging log;
the full-connection unit is used for performing full connection on the convolution characteristic vectors by the full-connection neural network to obtain full-connection characteristic vectors corresponding to the acoustoelectric imaging log map;
and the classification unit is used for classifying the output layer according to the full-connection characteristic vector and outputting a geological characteristic label corresponding to the acoustoelectric imaging log, wherein the geological characteristic label is used for indicating the geological characteristic type corresponding to the acoustoelectric imaging log.
In some embodiments of the present application, the convolutional neural network comprises a number of cascaded first neural network layers, the first neural network layers comprising cascaded convolutional layers, first activation function layers, and pooling layers; the fully-connected neural network comprises a plurality of second neural network layers, and the second neural network layers comprise a fully-connected layer, a second activation function layer and a Dropout layer which are cascaded.
In some embodiments of the present application, the apparatus for identifying an acoustoelectric imaging log further comprises:
the system comprises a training sample set acquisition module, a data acquisition module and a data processing module, wherein the training sample set acquisition module is used for acquiring a training sample set, the training sample set comprises a plurality of sample acoustoelectric imaging well logs and label labels corresponding to the sample acoustoelectric imaging well logs, and the label labels are used for indicating geological feature types actually corresponding to the sample acoustoelectric imaging well logs;
the prediction module is used for predicting the geological feature type of the sample acoustoelectric imaging log by the neural network model to obtain a prediction label corresponding to the sample acoustoelectric imaging log;
the parameter adjusting module is used for adjusting parameters of the neural network model according to the prediction label corresponding to the sample acoustoelectric imaging log and the labeling label corresponding to the sample acoustoelectric imaging log;
and the training ending module is used for ending the training of the neural network model when the convergence condition of the neural network model is reached.
In some embodiments of the present application, the obtaining module 610 includes:
the acquisition unit is used for acquiring an original acoustoelectric imaging log;
and the preprocessing unit is used for preprocessing the original acoustoelectric imaging log to obtain the acoustoelectric imaging log, and the preprocessing comprises at least one of filtering processing and image enhancement processing.
In some embodiments of the present application, the apparatus for identifying an acoustoelectric imaging log further comprises:
the image segmentation module is used for carrying out image segmentation on the acoustoelectric imaging log to obtain a binary image;
the edge detection module is used for carrying out edge detection according to the gray value of each pixel point in the binary image and determining a contour line in the acoustoelectric imaging log map;
and the geological characteristic parameter calculation module is used for calculating geological characteristic parameters according to the determined pixel points where the contour lines in the acoustoelectric imaging log are located, wherein the geological characteristic parameters are used for describing geological characteristics shown by the acoustoelectric imaging log, and the geological characteristics comprise bedding, cracks and holes.
In some embodiments of the present application, the edge detection module comprises:
the marking unit is used for marking the entity object in the binary image according to the gray value of each pixel point in the binary image;
the boundary point determining unit is used for carrying out contour tracking and extraction on the marked entity object in the binary image and determining the boundary point of the entity object;
and the curve fitting unit is used for performing curve fitting according to the boundary points of the entity object to obtain a contour line in the acoustoelectric imaging log.
FIG. 7 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.
It should be noted that the computer system 700 of the electronic device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 7, the computer system 700 includes a Central Processing Unit (CPU)701, which can perform various appropriate actions and processes, such as executing the methods in the above-described embodiments, according to a program stored in a Read-Only Memory (ROM) 702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data necessary for system operation are also stored. The CPU701, the ROM702, and the RAM 703 are connected to each other via a bus 704. An Input/Output (I/O) interface 705 is also connected to the bus 704.
The following components are connected to the I/O interface 705: an input portion 706 including a keyboard, a mouse, and the like; an output section 707 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and a speaker; a storage section 708 including a hard disk and the like; and a communication section 709 including a Network interface card such as a LAN (Local Area Network) card, a modem, or the like. The communication section 709 performs communication processing via a network such as the internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 710 as necessary, so that a computer program read out therefrom is mounted into the storage section 708 as necessary.
In particular, according to embodiments of the application, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 709, and/or installed from the removable medium 711. The computer program executes various functions defined in the system of the present application when executed by a Central Processing Unit (CPU) 701.
It should be noted that the computer readable medium shown in the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash Memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software, or may be implemented by hardware, and the described units may also be disposed in a processor. Wherein the names of the elements do not in some way constitute a limitation on the elements themselves.
As another aspect, the present application also provides a computer-readable storage medium, which may be contained in the electronic device described in the above embodiments; or may exist separately without being assembled into the electronic device. The computer readable storage medium carries computer readable instructions which, when executed by a processor, implement the method of any of the embodiments described above.
According to an aspect of the present application, there is also provided an electronic device, including: a processor; a memory having computer readable instructions stored thereon which, when executed by the processor, implement the method of any of the above embodiments.
According to an aspect of an embodiment of the present application, there is provided a computer program product or a computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method of any of the above embodiments.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the application. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present application can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which can be a personal computer, a server, a touch terminal, or a network device, etc.) to execute the method according to the embodiments of the present application.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the embodiments disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (10)

1. A method of identifying an acousto-electric imaging log, comprising:
acquiring an acoustoelectric imaging log;
performing geological feature recognition by a neural network model according to the acoustoelectric imaging log to obtain a geological feature type corresponding to the acoustoelectric imaging log; wherein the neural network model is obtained by training with a sample acoustoelectric imaging log and a label corresponding to the sample acoustoelectric imaging log, the label indicating the geological feature type corresponding to the sample acoustoelectric imaging log; and the geological feature types include a bedding type, a fracture type, and a hole type.
2. The method of claim 1, wherein the neural network model comprises an input layer, a convolutional neural network, a fully-connected neural network, and an output layer;
wherein the performing geological feature recognition by the neural network model according to the acoustoelectric imaging log to obtain the geological feature type corresponding to the acoustoelectric imaging log comprises:
inputting the acoustoelectric imaging log into the input layer;
performing convolution feature extraction on the output of the input layer by the convolutional neural network to obtain a convolution feature vector corresponding to the acoustoelectric imaging log;
fully connecting the convolution feature vector by the fully-connected neural network to obtain a fully-connected feature vector corresponding to the acoustoelectric imaging log;
and classifying by the output layer according to the fully-connected feature vector, and outputting a geological feature label corresponding to the acoustoelectric imaging log, wherein the geological feature label indicates the geological feature type corresponding to the acoustoelectric imaging log.
3. The method of claim 2, wherein the convolutional neural network comprises one or more cascaded first neural network layers comprising a cascaded convolutional layer, a first activation function layer, and a pooling layer; the fully-connected neural network includes one or more second neural network layers including a cascaded fully-connected layer, a second activation function layer, and a Dropout layer.
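By way of illustration only, the layer cascade recited in claims 2 and 3 (convolution, activation, pooling, then fully-connected classification into the three geological feature types) can be sketched in plain NumPy. The image size, kernel, weights, and class ordering below are assumptions for the example, not part of the claims:

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D convolution of a single-channel image with one kernel."""
    kh, kw = k.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def relu(x):                       # first activation function layer
    return np.maximum(x, 0.0)

def max_pool(x, s=2):              # pooling layer
    h, w = (x.shape[0] // s) * s, (x.shape[1] // s) * s
    return x[:h, :w].reshape(h // s, s, w // s, s).max(axis=(1, 3))

def softmax(z):                    # classification at the output layer
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
image = rng.random((16, 16))                  # stand-in for a log image (input layer)
kernel = rng.standard_normal((3, 3))          # one illustrative convolution kernel
feat = max_pool(relu(conv2d(image, kernel)))  # one first neural network layer
vec = feat.ravel()                            # convolution feature vector
W = 0.1 * rng.standard_normal((3, vec.size))  # fully-connected layer, 3 classes
probs = softmax(W @ vec)
label = ("bedding", "fracture", "hole")[int(np.argmax(probs))]
```

A practical implementation would stack several such cascaded layers and add the Dropout regularization of claim 3 during training; this sketch shows only the data flow from input image to geological feature label.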
4. The method of claim 1, further comprising:
acquiring a training sample set, wherein the training sample set comprises one or more sample acoustoelectric imaging logs and annotation labels corresponding to the sample acoustoelectric imaging logs, the annotation labels indicating the geological feature types actually corresponding to the sample acoustoelectric imaging logs;
performing geological feature type prediction on the sample acoustoelectric imaging log by the neural network model to obtain a prediction label corresponding to the sample acoustoelectric imaging log;
adjusting parameters of the neural network model according to the prediction label corresponding to the sample acoustoelectric imaging log and the annotation label corresponding to the sample acoustoelectric imaging log;
and when a convergence condition of the neural network model is reached, completing the training of the neural network model.
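The training procedure of claim 4 (predict, compare the prediction against the annotation label, adjust parameters, stop when a convergence condition is reached) can be sketched as a minimal gradient-descent loop on a softmax classifier. The feature vectors, labels, learning rate, and convergence threshold are illustrative assumptions:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(1)
X = rng.random((20, 8))             # 20 sample feature vectors (illustrative)
y = rng.integers(0, 3, size=20)     # annotation labels: 0=bedding, 1=fracture, 2=hole
W = np.zeros((3, 8))                # model parameters to be adjusted

prev_loss = np.inf
for epoch in range(500):
    P = softmax(X @ W.T)                              # prediction probabilities
    loss = -np.mean(np.log(P[np.arange(len(y)), y]))  # prediction vs. annotation gap
    G = P.copy()
    G[np.arange(len(y)), y] -= 1.0                    # gradient w.r.t. the logits
    W -= 0.1 * (G.T @ X) / len(y)                     # adjust parameters
    if abs(prev_loss - loss) < 1e-6:                  # convergence condition reached
        break
    prev_loss = loss
```

In the patented method the parameters adjusted would be those of the full convolutional and fully-connected network via backpropagation; the loop above isolates the predict/compare/adjust/converge cycle itself.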
5. The method of claim 1, wherein the acquiring an acousto-electric imaging log comprises:
acquiring an original acoustoelectric imaging log;
and preprocessing the original acoustoelectric imaging log to obtain the acoustoelectric imaging log, wherein the preprocessing comprises at least one of filtering processing and image enhancement processing.
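The preprocessing of claim 5 can be sketched with one simple choice for each step: a 3x3 median filter for the filtering processing and a linear contrast stretch for the image enhancement processing. Both choices are assumptions, since the claim covers any such filtering or enhancement:

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter (border pixels kept as-is): the filtering processing."""
    out = img.copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.median(img[i - 1:i + 2, j - 1:j + 2])
    return out

def contrast_stretch(img):
    """Linear stretch of gray values to [0, 1]: one form of image enhancement."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else img

raw = np.random.default_rng(2).random((8, 8))   # stand-in for an original log image
log_image = contrast_stretch(median_filter3(raw))
```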
6. The method of claim 1, wherein after the geological feature recognition is performed by the neural network model according to the acoustoelectric imaging log to obtain the geological feature type corresponding to the acoustoelectric imaging log, the method further comprises:
carrying out image segmentation on the acoustoelectric imaging log to obtain a binary image;
performing edge detection according to the gray value of each pixel point in the binary image, and determining a contour line in the acoustoelectric imaging log;
and calculating geological characteristic parameters according to the determined pixel points where the contour lines in the acoustoelectric imaging log are located, wherein the geological characteristic parameters are used for describing geological characteristics shown by the acoustoelectric imaging log, and the geological characteristics comprise bedding, cracks and holes.
7. The method of claim 6, wherein the performing edge detection according to the gray value of each pixel point in the binary image to determine the contour line in the acoustoelectric imaging log comprises:
marking the entity object in the binary image according to the gray value of each pixel point in the binary image;
carrying out contour tracking and extraction on the marked entity object in the binary image, and determining the boundary point of the entity object;
and performing curve fitting according to the boundary points of the entity object to obtain a contour line in the acoustoelectric imaging log.
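The segmentation, edge detection, and curve fitting of claims 6 and 7 can be sketched as follows. The synthetic object, the 4-neighbour boundary test standing in for contour tracking, and the least-squares line fit standing in for the curve fitting are all illustrative assumptions:

```python
import numpy as np

img = np.zeros((12, 12))
img[3:9, 4:8] = 1.0                       # synthetic object standing in for a hole

binary = (img > 0.5).astype(np.uint8)     # image segmentation -> binary image

def boundary_points(b):
    """Marked-object pixels with at least one 4-neighbour in the background."""
    pts = []
    for i in range(b.shape[0]):
        for j in range(b.shape[1]):
            if b[i, j]:
                nbrs = [b[i - 1, j] if i > 0 else 0,
                        b[i + 1, j] if i < b.shape[0] - 1 else 0,
                        b[i, j - 1] if j > 0 else 0,
                        b[i, j + 1] if j < b.shape[1] - 1 else 0]
                if min(nbrs) == 0:        # touches background: a boundary point
                    pts.append((i, j))
    return np.array(pts)

pts = boundary_points(binary)             # edge detection on the gray values
# Curve fitting over the boundary points (a least-squares line for illustration).
coeffs = np.polyfit(pts[:, 1], pts[:, 0], deg=1)
```

From a fitted contour line, geological characteristic parameters such as fracture dip or hole diameter could then be computed as claim 6 describes.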
8. An apparatus for identifying an acousto-electric imaging log, comprising:
the acquisition module is used for acquiring an acoustoelectric imaging log;
the geological feature recognition module is used for performing geological feature recognition by the neural network model according to the acoustoelectric imaging log to obtain a geological feature type corresponding to the acoustoelectric imaging log; the neural network model is obtained by training with a sample acoustoelectric imaging log and a label corresponding to the sample acoustoelectric imaging log, wherein the label indicates the geological feature type corresponding to the sample acoustoelectric imaging log; and the geological feature types include a bedding type, a fracture type, and a hole type.
9. An electronic device, comprising:
a processor;
a memory having computer-readable instructions stored thereon which, when executed by the processor, implement the method of any one of claims 1-7.
10. A computer readable storage medium having computer readable instructions stored thereon which, when executed by a processor, implement the method of any one of claims 1-7.
CN202110731946.5A 2021-06-29 2021-06-29 Identification method of acoustic-electric imaging log and related equipment Active CN113392924B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110731946.5A CN113392924B (en) 2021-06-29 2021-06-29 Identification method of acoustic-electric imaging log and related equipment

Publications (2)

Publication Number Publication Date
CN113392924A true CN113392924A (en) 2021-09-14
CN113392924B CN113392924B (en) 2023-05-02

Family

ID=77624772

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110731946.5A Active CN113392924B (en) 2021-06-29 2021-06-29 Identification method of acoustic-electric imaging log and related equipment

Country Status (1)

Country Link
CN (1) CN113392924B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109212617A (en) * 2018-08-24 2019-01-15 中国石油天然气股份有限公司 Electric imaging logging phase automatic identifying method and device
CN109389128A (en) * 2018-08-24 2019-02-26 中国石油天然气股份有限公司 Electric imaging logging image characteristic automatic extraction method and device
CN109900617A (en) * 2019-03-21 2019-06-18 西南石油大学 A kind of fractured reservoir permeability curve calculation method based on acoustic-electric imaging logging map
CN110208859A (en) * 2019-05-07 2019-09-06 长江大学 Oil-base mud well crack quantitative parameter intelligence computation method based on ultrasonic imaging
CN110264459A (en) * 2019-06-24 2019-09-20 江苏开放大学(江苏城市职业学院) A kind of interstices of soil characteristics information extraction method
US10750036B1 (en) * 2019-08-27 2020-08-18 Kyocera Document Solutions, Inc. Rapid workflow design using machine learning
CN111783825A (en) * 2020-05-26 2020-10-16 中国石油天然气集团有限公司 Well logging lithology identification method based on convolutional neural network learning
CN112862139A (en) * 2019-11-27 2021-05-28 北京国双科技有限公司 Fluid type prediction model construction method, fluid type prediction method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XU LIANFENG, Beijing: Ordnance Industry Press *

Also Published As

Publication number Publication date
CN113392924B (en) 2023-05-02

Similar Documents

Publication Publication Date Title
CN110148130B (en) Method and device for detecting part defects
CN107808138B (en) Communication signal identification method based on FasterR-CNN
CN108564085B (en) Method for automatically reading of pointer type instrument
Ayed et al. Unsupervised variational image segmentation/classification using a Weibull observation model
CN111475613A (en) Case classification method and device, computer equipment and storage medium
CN114862838A (en) Unsupervised learning-based defect detection method and equipment
CN108681689B (en) Frame rate enhanced gait recognition method and device based on generation of confrontation network
US20230417700A1 (en) Automated analysis of analytical gels and blots
CN112001362A (en) Image analysis method, image analysis device and image analysis system
CN113177456A (en) Remote sensing target detection method based on single-stage full convolution network and multi-feature fusion
CN116894985B (en) Semi-supervised image classification method and semi-supervised image classification system
CN110472673B (en) Parameter adjustment method, fundus image processing device, fundus image processing medium and fundus image processing apparatus
CN113011528B (en) Remote sensing image small target detection method based on context and cascade structure
CN110751170A (en) Panel quality detection method, system, terminal device and computer readable medium
CN206897873U (en) A kind of image procossing and detecting system based on detection product performance
CN112052730A (en) 3D dynamic portrait recognition monitoring device and method
CN111461152B (en) Cargo detection method and device, electronic equipment and computer readable medium
CN113392924B (en) Identification method of acoustic-electric imaging log and related equipment
CN114898362A (en) Mushroom image classification method based on neural network
CN115423802A (en) Automatic classification and segmentation method for squamous epithelial tumor cell picture based on deep learning
CN112346126B (en) Method, device, equipment and readable storage medium for identifying low-order faults
CN114782822A (en) Method and device for detecting state of power equipment, electronic equipment and storage medium
CN114114457A (en) Fracture characterization method, device and equipment based on multi-modal logging data
CN113762120A (en) Insulator image segmentation method and device, electronic equipment and storage medium
CN116844143B (en) Embryo development stage prediction and quality assessment system based on edge enhancement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant