CN115035371A - Borehole wall crack identification method based on multi-scale feature fusion neural network - Google Patents

Borehole wall crack identification method based on multi-scale feature fusion neural network

Info

Publication number: CN115035371A (application CN202210524515.6A)
Authority: CN (China)
Prior art keywords: shallow, scale, neural network, module, wall crack
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN115035371B (granted publication)
Inventors: 夏文鹤, 朱喆昊, 李皋, 韩玉娇, 林永学, 李大奇
Current Assignee: Southwest Petroleum University
Original Assignee: Southwest Petroleum University
Application filed by Southwest Petroleum University
Priority to CN202210524515.6A
Publication of CN115035371A; application granted, publication of CN115035371B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements using pattern recognition or machine learning
    • G06V 10/764: Arrangements using classification, e.g. of video objects
    • G06V 10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; blind source separation
    • G06V 10/80: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/806: Fusion of extracted features
    • G06V 10/82: Arrangements using neural networks
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/30: Assessment of water resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a borehole wall crack identification method based on a multi-scale feature fusion neural network, comprising the following steps. S1: extract shallow features from the input resistivity logging image using a shallow feature extraction module. S2: extract deep features from the shallow features using a multi-scale dilated convolution attention module. S3: splice and fuse the shallow and deep features with a feature fusion upsampling module, and restore the result to the original input image size. S4: adjust the number of channels to the number of label categories, and compute the probability that each pixel belongs to the background or the foreground from the background/foreground feature maps and the loss function, thereby automatically identifying borehole wall crack regions. While preserving the accuracy and speed of intelligent crack identification, the method reduces the workload of manually detecting cracks in resistivity logging images, provides a strong safety guarantee for drilling work, and meets the pressing need to cut costs and raise efficiency in current drilling.

Description

Borehole wall crack identification method based on multi-scale feature fusion neural network
Technical Field
The invention relates to the technical field of deep learning, and in particular to a method for identifying borehole wall cracks in resistivity logging images based on a multi-scale feature fusion neural network.
Background
In recent years, with the rapid development of big data, artificial intelligence, cloud computing, and related technologies, a fourth industrial revolution represented by artificial intelligence has arrived, and intelligent oil and gas exploration and development has become an inevitable trend in the global oil and gas industry. Well logging engineering has always been an indispensable technology in oil and gas exploration and development, underpinning safe, efficient, and economical exploration. At present, China's oil and gas industry is gradually advancing intelligent logging analysis and, by combining emerging theories and technologies such as big data and artificial intelligence, expects to achieve accurate analysis of complex well conditions and intelligent closed-loop regulation of wellbore stability.
Existing crack identification techniques based on human-computer interaction involve an enormous processing workload, strong human subjectivity, and poor stability, and struggle to characterize and analyze cracks stably and accurately. It is therefore necessary to use a deep learning framework to build a multi-scale feature fusion network model for intelligent crack identification: extract deep features of the borehole wall image from the resistivity logging image, combine the shallow and deep features into new features with stronger representational power, and meet the demands of the intelligent crack identification task for high accuracy, high speed, and strong interference resistance. While preserving identification accuracy and speed, this reduces the workload of manual crack detection, provides a strong safety guarantee for drilling work, meets the pressing need to cut costs and raise efficiency in current drilling, and offers a useful reference for subsequent wellbore stability analysis and regulation.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a borehole wall crack identification method based on a multi-scale feature fusion neural network. For the problem of intelligently extracting and identifying crack profiles from a resistivity imaging logging data set, an existing network architecture is optimized and improved so that the method is highly practical and retains high identification accuracy even with a small number of samples.
The purpose of the invention is realized by the following technical scheme:
the well wall crack identification method based on the multi-scale feature fusion neural network comprises the following specific steps:
s1: performing shallow feature extraction on the input image by using a shallow feature extraction module;
s2: carrying out deep feature extraction on the shallow features by using a multi-scale void convolution attention module;
s3: adopting a characteristic fusion upper sampling module to restore the characteristics after splicing and fusing the shallow layer characteristics and the deep layer characteristics to the original image input size;
s4: and adjusting the number of channels as the number of label categories, calculating the probability of each pixel point belonging to the background and the foreground according to the background and foreground characteristic graphs and the loss function, and realizing the automatic identification of the well wall crack area.
The shallow feature extraction module in step S1 comprises convolutional layers, BN layers, activation functions, and pooling layers. All convolution kernels are 3×3 with stride 1 and padding 1; the activation function is ReLU; the max-pooling window is 2×2 with stride 2. The module as a whole performs downsampling.
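As a concrete illustration, the layer specification above can be sketched in PyTorch. Only the kernel, stride, padding, activation, and pooling settings are fixed by the text; the channel widths (3 in, 64 out) and the single-block depth are assumptions for illustration.

```python
import torch
import torch.nn as nn

class ShallowFeatureExtractor(nn.Module):
    """Sketch of one block of the shallow feature extraction module:
    3x3 conv (stride 1, padding 1) -> BN -> ReLU -> 2x2 max pool (stride 2).
    Channel widths are illustrative assumptions, not values from the patent."""
    def __init__(self, in_ch=3, out_ch=64):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(out_ch),                 # normalizes the conv output
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2, stride=2),  # halves the spatial size
        )

    def forward(self, x):
        return self.block(x)
```

Stacking several such blocks yields the overall downsampling behavior the module describes: each block halves the spatial resolution while preserving the 3×3 receptive-field growth.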
The multi-scale dilated convolution attention module comprises a multi-scale dilated convolution and an attention mechanism module.
The multi-scale dilated convolution consists of three parallel convolutions of different scales: one is a 1×1 convolution for obtaining global features, and the other two are dilated convolutions with different dilation rates.
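A minimal PyTorch sketch of this three-branch layout follows. The specific dilation rates (2 and 4) and the channel width are assumptions for illustration; this passage does not fix them.

```python
import torch
import torch.nn as nn

class MultiScaleDilatedConv(nn.Module):
    """Three parallel branches as described: a 1x1 convolution for global
    features plus two dilated convolutions with different dilation rates.
    Dilation rates 2 and 4 are assumed, not stated in this passage."""
    def __init__(self, ch=64):
        super().__init__()
        self.branch1 = nn.Conv2d(ch, ch, kernel_size=1)
        # padding = dilation keeps the spatial size unchanged for a 3x3 kernel
        self.branch2 = nn.Conv2d(ch, ch, kernel_size=3, padding=2, dilation=2)
        self.branch3 = nn.Conv2d(ch, ch, kernel_size=3, padding=4, dilation=4)

    def forward(self, x):
        # splice the branch outputs along the channel axis ("concat")
        return torch.cat([self.branch1(x), self.branch2(x), self.branch3(x)], dim=1)
```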
The attention mechanism module comprises a global pooling layer, fully connected layers, a ReLU function, and a Sigmoid function. It first applies global average pooling to the input feature layer and then performs two fully connected operations: the first fully connected layer has fewer neurons, while the second has the same number of neurons as there are input feature channels. After the two fully connected operations, a Sigmoid function maps the values into the range 0 to 1, and the output is multiplied back onto the original input, completing the learning of per-channel weights.
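The channel-attention flow described above (global pooling, two fully connected layers, Sigmoid, channel reweighting) can be sketched as follows; the reduction ratio r=16 is an assumption, since the text only says the first fully connected layer is smaller.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """SE-style attention as described: global average pooling, a smaller
    first FC layer, a second FC layer matching the input channel count,
    Sigmoid, then channel-wise reweighting. The ratio r=16 is assumed."""
    def __init__(self, channels, r=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)       # global average pooling
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // r),   # first FC: fewer neurons
            nn.ReLU(inplace=True),
            nn.Linear(channels // r, channels),   # second FC: back to C channels
            nn.Sigmoid(),                         # weights mapped into (0, 1)
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                              # reweight each channel
```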
The step S3 specifically includes the following steps:
s301: reducing the deep characteristic size to be consistent with the shallow characteristic size by adopting an up-sampling operation;
s302: splicing the reduced deep features and the thousand-layer features by using a direct connection mode to form a new feature tensor;
s303: the new feature tensor, upsampled to restore it to the input image size, is again continuously subjected to a convolution operation of 3 x 3 to improve its characterization capability.
The number of label categories is 2, i.e., the two channels are the background feature map and the foreground feature map.
The step S4 specifically includes the following steps:
s401: using 1 × 1 convolution to adjust the number of channels to be the number of label categories, wherein the number of label categories is 2;
s402: calculating the probability that each pixel point belongs to the background and the foreground according to the background and foreground characteristic graphs and the loss function;
s403: and setting the background pixel points to be black and the foreground pixel points to be label colors, so as to realize the automatic identification of the well wall crack area.
Beneficial effects of the invention:
The invention builds a borehole wall crack region identification network from a multi-scale dilated convolution attention module and a feature fusion upsampling module. For a given resistivity logging image, a shallow feature extraction module first extracts features such as the texture, shape, and color of cracks in the image. A multi-scale dilated convolution module is then applied to the shallow features to acquire high-level semantic information about the cracks at different scales, and an attention mechanism adjusts the per-channel weights of these semantic features to form the final deep features. Finally, an upsampling module restores the deep crack features to the same size as the shallow features, the two are directly connected, and the fused features are upsampled back to the original input image size. To improve the generalization ability of the network, the model does not use a fully connected layer to predict the final output; instead, a 1×1 convolution adjusts the number of channels to the number of label categories. The two resulting channels are the background and foreground (crack region) feature maps; the probability that each pixel belongs to the background or the foreground is computed from these features and the loss function, background pixels are set to black, and foreground pixels are set to the label color. This reduces the number of model parameters while greatly reducing the likelihood of overfitting. Automatic identification of borehole wall crack regions is thus achieved by computing the probability that each pixel belongs to a crack or to the background.
While preserving the accuracy and speed of intelligent crack identification, the method reduces the workload of manual crack detection, provides a strong safety guarantee for drilling work, and meets the pressing need to cut costs and raise efficiency in current drilling.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in their description are briefly introduced below. The drawings described below are obviously only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of the network architecture of the present invention;
FIG. 2 is a block diagram of the shallow feature extraction module of the present invention;
FIG. 3 is a schematic structural diagram of the multi-scale dilated convolution module of the present invention;
FIG. 4 is a block diagram of the processing flow of the attention mechanism module of the present invention;
FIG. 5 is a schematic diagram of the structure of the feature fusion upsampling module of the present invention.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the present invention; all other embodiments derived by those skilled in the art without creative effort fall within the protection scope of the present invention. The technical solutions in the embodiments may be combined with one another, but only insofar as such a combination can be realized by those skilled in the art; where solutions are contradictory or cannot be realized, the combination should be considered absent and outside the protection scope of the present invention.
The invention designs an end-to-end borehole wall crack region identification method based on multi-scale features, comprising two main parts. The first part uses a multi-scale dilated convolution module to obtain more comprehensive contextual features of crack regions in the resistivity imaging log, while an attention mechanism adjusts the weight of each channel. The second part introduces a feature fusion upsampling module that fuses the deep features extracted in the first part with the shallow features, recombining them across hierarchy levels to form more representative crack region features.
The borehole wall crack identification method based on the multi-scale feature fusion neural network comprises the following steps:
S1: perform shallow feature extraction on the input image using a shallow feature extraction module;
S2: perform deep feature extraction on the shallow features using a multi-scale dilated convolution attention module;
S3: use a feature fusion upsampling module to splice and fuse the shallow and deep features, and restore the result to the original input image size;
S4: adjust the number of channels to the number of label categories, and compute the probability that each pixel belongs to the background or the foreground from the background/foreground feature maps and the loss function, thereby automatically identifying the borehole wall crack regions.
Shallow feature extraction module:
In general, the shallow features of an image resemble the input image and carry more pixel-level information, such as color, texture, and edges. Manual crack identification still requires some analysis of the distribution pattern and filling condition of the cracks, and this information is reflected in the resistivity imaging log as image texture and color. Therefore, before deep feature extraction, a module must be designed to extract the shallow features of the input efficiently.
With the development of artificial intelligence in computer vision, many image detection, classification, and segmentation networks now have good image feature extraction capability, and many of them build their shallow feature extraction on improvements to the U-Net architecture. The shallow feature extraction module designed here is likewise inspired by the U-Net model; its structure is shown in FIG. 2.
The module consists of convolutional layers, BN layers, activation functions, and pooling layers. All convolution kernels are 3×3 with stride 1 and padding 1; the activation function is ReLU; the max-pooling window is 2×2 with stride 2. The module as a whole performs downsampling, and because the BN layers normalize the data produced by each convolution, training and convergence are accelerated while overfitting is suppressed and exploding or vanishing gradients are controlled. The detailed architecture of the module is shown in Table 1, where eps is the small constant added to the variance in the BN layer for numerical stability, and padding is the number of zeros added around the image matrix.
TABLE 1
(The table is provided as an image in the original patent document and is not reproduced here.)
Multi-scale dilated convolution attention module:
1. Multi-scale dilated convolution
In a conventional convolutional network, convolution kernels identify highly discriminative target regions from the top down, but classification ability is lost in weakly discriminative regions. Deep convolutional networks therefore use kernels with receptive fields of different sizes to gather and mix information around the target region, improving classification ability. Dilated (hole) convolution is a special form of ordinary convolution that effectively enlarges the receptive field of a kernel by inserting zeros between its weight parameters.
For an n×n convolution kernel, the receptive field is

R = n + (n − 1) × (D_n − 1)

where D_n is the dilation rate of the kernel (a rate of D_n corresponds to D_n − 1 zeros inserted between adjacent weights). Because only the n×n weight parameters participate in the computation, dilated convolution enlarges the receptive field without increasing the computational cost, while preserving the original resolution. The multi-scale dilated convolution module in this network is inspired by DeepLab's use of an atrous spatial pyramid to capture multi-scale information. The original input data first passes through ordinary convolution and pooling to form shallow image features, which are then fed into the multi-scale dilated convolution module. The module consists of three parallel convolutions of different scales: one 1×1 convolution for obtaining global features and two dilated convolutions with different dilation rates. The specific structure is shown in FIG. 3.
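The receptive-field formula above can be checked numerically; this helper is a direct transcription of the equation.

```python
def receptive_field(n, dilation):
    """R = n + (n - 1) * (D - 1): effective receptive field of an n x n
    convolution kernel with dilation rate D."""
    return n + (n - 1) * (dilation - 1)

# A 3x3 kernel with dilation 2 covers a 5x5 area, and with dilation 4 a
# 9x9 area, while still using only 3x3 = 9 weight parameters.
```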
2. Attention mechanism
After features have been extracted by the three convolutions of different scales, they are spliced by direct connection (concat). Compared with traditional feature fusion by weighted multiplication, direct connection preserves the feature information at each scale to the greatest extent. Because the shallow features produced by ordinary convolution have many channels, an attention mechanism module (FIG. 4) is added after the concatenation to actively learn the weight of each channel, improving training quality and shortening each training pass.
The attention mechanism module first applies global average pooling to the input feature layer and then performs two fully connected operations: the first fully connected layer has fewer neurons, while the second has the same number of neurons as there are input feature channels. After the two fully connected operations, a Sigmoid function maps the values into the range 0 to 1, and the output is multiplied back onto the original input, completing the learning of per-channel weights. A 1×1 convolution kernel then adjusts the number of channels to equal the number of shallow feature channels.
This completes the design of the multi-scale dilated convolution attention module; its specific frame structure is given in Table 2, where dilation is the dilation rate of the dilated convolution, and the input of the conv_cat module is the combined output of the branch1, branch2, and branch3 modules. Since the attention mechanism (SELayer) only assigns a weight to each channel of the feature map, the input and output feature sizes are unchanged.
TABLE 2
(The table is provided as an image in the original patent document and is not reproduced here.)
After processing by the multi-scale dilated convolution module described above, the original input image has been converted into rich semantic visual features, but their spatial resolution is coarse, so they cannot be used directly to extract crack regions. As noted earlier, current mainstream practice identifies cracks through human-computer interaction with a computer, and manual identification relies heavily on information such as crack shape and color; such information typically belongs to the shallow features in computer vision. A feature tensor with stronger representational power is therefore formed by splicing the deep and shallow features (FIG. 5). The upsampling module accordingly has two inputs: the shallow features extracted from the image by ordinary convolution, and the deep features extracted by multi-scale dilated convolution. Because convolution and pooling shrink the image data, the shallow and deep features differ in size and cannot be spliced directly; the deep features must first be restored to the shallow feature size by an upsampling operation. Bilinear interpolation is chosen here for the upsampling: compared with other upsampling methods, it is computationally simple and requires no learned parameters.
The deep features restored by upsampling have the same size as the shallow features, and the two are spliced by direct connection to form a new feature tensor. This tensor is still much smaller than the original input image, so it is upsampled to restore the input image size, with successive 3×3 convolutions again applied to improve its representational power.
The specific frame structure of the module is given in Table 3, where p is the dropout probability (the probability of randomly discarding neurons), an operation that reduces the network's overfitting to some extent and shortens training. The input of the cat_conv layer is the output of the short_conv layer and the output of the SELayer. After training, this module extracts highly representative features from the resistivity imaging log.
TABLE 3
(The table is provided as an image in the original patent document and is not reproduced here.)
The invention constructs a borehole wall crack region identification network as shown in FIG. 1.
For a given borehole wall crack image, a shallow feature extraction module first extracts features such as texture, shape, and color. A multi-scale dilated convolution module is then applied to the shallow features to acquire high-level semantic information about the cracks at different scales, and an attention mechanism adjusts the per-channel weights of these semantic features to form the final deep features. Finally, an upsampling module restores the deep crack features to the same size as the shallow features, the two are directly connected, and the fused features are upsampled back to the original input image size. To improve generalization, the network does not use a fully connected layer to predict the final output; a 1×1 convolution adjusts the number of channels to the number of label categories (since the task only distinguishes background from crack regions, this number is 2). The two channels are the background and foreground (crack region) feature maps; the probability that each pixel belongs to the background or the foreground is computed from these features and the loss function, background pixels are set to black, and foreground pixels are set to the label color. This reduces the number of model parameters while greatly reducing the likelihood of overfitting. Automatic identification of borehole wall crack regions is thus achieved by computing the probability that each pixel belongs to a crack or to the background.
The multi-scale information fusion crack region identification network designed here uses a cross-entropy function as its loss function and adaptive moment estimation (Adam) as its optimizer. Adam differs from conventional stochastic gradient descent in that it establishes an independent, adaptive learning rate for each weight parameter by computing first- and second-order moment estimates of the gradient. It is computationally efficient, requires little memory, and is well suited to large-scale parameter optimization, which is why it was selected for this network.
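A sketch of this training setup in PyTorch: per-pixel cross-entropy loss over the two classes with the Adam optimizer. The stand-in model, learning rate, and tensor sizes are placeholders, not values given in the patent.

```python
import torch
import torch.nn as nn

# Stand-in for the full identification network: any module that maps an
# image to 2-channel per-pixel logits works for this illustration.
model = nn.Conv2d(3, 2, kernel_size=1)

# Cross-entropy loss and Adam optimizer, as stated in the description.
# The learning rate 1e-3 is an assumed placeholder.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

images = torch.randn(2, 3, 16, 16)            # dummy input batch
labels = torch.randint(0, 2, (2, 16, 16))     # per-pixel class labels (0/1)

loss = criterion(model(images), labels)       # per-pixel cross entropy
optimizer.zero_grad()
loss.backward()
optimizer.step()                              # one Adam update step
```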
The foregoing describes preferred embodiments of this invention. It is to be understood that the invention is not limited to the precise forms disclosed herein, and that various other combinations, modifications, and environments falling within the scope of the inventive concept, whether described above or apparent to those skilled in the relevant art, may be resorted to. Modifications and variations may be effected by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (8)

1. A borehole wall crack identification method based on a multi-scale feature fusion neural network, characterized by comprising the following steps:
S1: performing shallow feature extraction on the input resistivity logging image using a shallow feature extraction module;
S2: performing deep feature extraction on the shallow features using a multi-scale dilated convolution attention module;
S3: using a feature fusion upsampling module to splice and fuse the shallow and deep features and restore the result to the original input image size;
S4: adjusting the number of channels to the number of label categories, and computing the probability that each pixel belongs to the background or the foreground from the background/foreground feature maps and the loss function, thereby automatically identifying the borehole wall crack regions.
2. The borehole wall crack identification method based on the multi-scale feature fusion neural network as claimed in claim 1, wherein the shallow feature extraction module in step S1 comprises a convolutional layer, a batch normalization (BN) layer, an activation function and a pooling layer; the activation function is the ReLU function; and the shallow feature extraction module as a whole performs downsampling.
3. The borehole wall crack identification method based on the multi-scale feature fusion neural network as claimed in claim 1, wherein the multi-scale dilated convolution attention module comprises a multi-scale dilated convolution and an attention mechanism module.
4. The borehole wall crack identification method based on the multi-scale feature fusion neural network as claimed in claim 3, wherein the multi-scale dilated convolution consists of three parallel convolutions at different scales used to obtain global features, two of which are dilated convolutions with different dilation rates.
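The effect of different dilation rates can be illustrated with a minimal 1-D sketch (the patent's convolutions are 2-D; the kernel, rates and 1-D setting here are assumptions for illustration only):

```python
import numpy as np

def dilated_conv1d(x, kernel, rate):
    """'Valid' 1-D convolution with dilation: kernel taps are spaced `rate`
    apart, enlarging the receptive field without adding parameters."""
    k = len(kernel)
    span = (k - 1) * rate + 1               # effective receptive field of the kernel
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * rate] for j in range(k))
    return out

def multi_scale_branch(x, kernel, rates=(1, 2, 4)):
    """Run the same kernel at several dilation rates in parallel, in the spirit
    of claim 4 (rate 1 is an ordinary convolution, the others are dilated)."""
    return [dilated_conv1d(x, kernel, r) for r in rates]
```

A rate of 1 reduces to the ordinary convolution; larger rates read the same number of samples from a wider window, which is how the parallel branches capture features at multiple scales.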
5. The borehole wall crack identification method based on the multi-scale feature fusion neural network as claimed in claim 3, wherein the attention mechanism module comprises a global pooling layer, a fully connected layer, a ReLU function and a Sigmoid function.
6. The borehole wall crack identification method based on the multi-scale feature fusion neural network as claimed in claim 1, wherein step S3 specifically comprises the following steps:
S301: restoring the deep feature size to match the shallow feature size by an upsampling operation;
S302: splicing the restored deep features and the shallow features via a direct (skip) connection to form a new feature tensor;
S303: upsampling the new feature tensor to restore it to the input image size, and applying successive 3 × 3 convolution operations to improve its representational capability.
7. The borehole wall crack identification method based on the multi-scale feature fusion neural network as claimed in claim 1, wherein the number of label categories is 2, that is, the channels are the background and foreground feature maps.
8. The borehole wall crack identification method based on the multi-scale feature fusion neural network as claimed in claim 1, wherein step S4 specifically comprises the following steps:
S401: adjusting the number of channels to the number of label categories, namely 2, using a 1 × 1 convolution;
S402: calculating the probability that each pixel belongs to the background or the foreground from the background and foreground feature maps and the loss function;
S403: setting background pixels to black and foreground pixels to the label color, thereby achieving automatic identification of the borehole wall crack region.
CN202210524515.6A 2022-05-13 2022-05-13 Well wall crack identification method based on multi-scale feature fusion neural network Active CN115035371B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210524515.6A CN115035371B (en) 2022-05-13 2022-05-13 Well wall crack identification method based on multi-scale feature fusion neural network

Publications (2)

Publication Number Publication Date
CN115035371A true CN115035371A (en) 2022-09-09
CN115035371B CN115035371B (en) 2023-09-12

Family

ID=83120672

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210524515.6A Active CN115035371B (en) 2022-05-13 2022-05-13 Well wall crack identification method based on multi-scale feature fusion neural network

Country Status (1)

Country Link
CN (1) CN115035371B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104730596A (en) * 2015-01-25 2015-06-24 中国石油大学(华东) Discrete fracture modeling method based on multiscale factor restraint
CN107609530A (en) * 2017-09-25 2018-01-19 常州工学院 One kind is with brill orientation electromagnetic resistivity imaging features extracting method
CN108956622A (en) * 2018-07-04 2018-12-07 安徽理工大学 A kind of acquisition method and device of shaft wall image
CN109614883A (en) * 2018-11-21 2019-04-12 瑾逸科技发展扬州有限公司 A kind of tight sand crack intelligent identification Method based on convolutional neural networks
CN111105377A (en) * 2019-12-19 2020-05-05 西南石油大学 Method for repairing micro-resistivity imaging image
CN111597932A (en) * 2020-04-30 2020-08-28 汕头大学 Road crack image identification method, device and system based on convolutional neural network
US20200364442A1 (en) * 2019-05-15 2020-11-19 Getac Technology Corporation System for detecting surface pattern of object and artificial neural network-based method for detecting surface pattern of object
CN112364911A (en) * 2020-11-06 2021-02-12 东北石油大学 Resistivity imaging inversion method and device and storage medium
CN112465748A (en) * 2020-11-10 2021-03-09 西南科技大学 Neural network based crack identification method, device, equipment and storage medium
CN112489023A (en) * 2020-12-02 2021-03-12 重庆邮电大学 Pavement crack detection method based on multiple scales and multiple layers
CN113255678A (en) * 2021-06-17 2021-08-13 云南航天工程物探检测股份有限公司 Road crack automatic identification method based on semantic segmentation
CN114114457A (en) * 2021-11-09 2022-03-01 中国石油大学(北京) Fracture characterization method, device and equipment based on multi-modal logging data

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WAN Jinbin et al., "Logging evaluation and progress of shale gas reservoirs", Well Logging Technology, vol. 36, no. 5, pp. 441-447 *
LI Gao et al., "Safety risk identification and monitoring while drilling in gas drilling", Natural Gas Industry, vol. 35, no. 7, pp. 66-72 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116122802A (en) * 2022-12-23 2023-05-16 中国科学院地质与地球物理研究所 Drilling logging feature extraction method and system based on Unet double-channel output
CN116122802B (en) * 2022-12-23 2023-07-14 中国科学院地质与地球物理研究所 Drilling logging feature extraction method and system based on Unet double-channel output
CN117292193A (en) * 2023-10-08 2023-12-26 无锡科维智能物流装备有限公司 Multi-station intelligent logistics conveying system
CN117211758A (en) * 2023-11-07 2023-12-12 克拉玛依市远山石油科技有限公司 Intelligent drilling control system and method for shallow hole coring
CN117211758B (en) * 2023-11-07 2024-04-02 克拉玛依市远山石油科技有限公司 Intelligent drilling control system and method for shallow hole coring
CN118247784A (en) * 2024-05-28 2024-06-25 南昌大学第二附属医院 Cryptococcus image recognition method based on deep convolutional neural network

Also Published As

Publication number Publication date
CN115035371B (en) 2023-09-12

Similar Documents

Publication Publication Date Title
CN115035371A (en) Borehole wall crack identification method based on multi-scale feature fusion neural network
CN111461083A (en) Rapid vehicle detection method based on deep learning
CN111950453A (en) Optional-shape text recognition method based on selective attention mechanism
CN112699937A (en) Apparatus, method, device, and medium for image classification and segmentation based on feature-guided network
CN113920468B (en) Multi-branch pedestrian detection method based on cross-scale feature enhancement
CN111738054B (en) Behavior anomaly detection method based on space-time self-encoder network and space-time CNN
CN113283356B (en) Multistage attention scale perception crowd counting method
CN110020658B (en) Salient object detection method based on multitask deep learning
CN111882620A (en) Road drivable area segmentation method based on multi-scale information
CN113487610B (en) Herpes image recognition method and device, computer equipment and storage medium
CN112381030A (en) Satellite optical remote sensing image target detection method based on feature fusion
CN110852199A (en) Foreground extraction method based on double-frame coding and decoding model
CN115565043A (en) Method for detecting target by combining multiple characteristic features and target prediction method
CN115908793A (en) Coding and decoding structure semantic segmentation model based on position attention mechanism
CN115240259A (en) Face detection method and face detection system based on YOLO deep network in classroom environment
CN117557775A (en) Substation power equipment detection method and system based on infrared and visible light fusion
CN117197746A (en) Safety monitoring system and method based on deep learning
CN115063833A (en) Machine room personnel detection method based on image layered vision
CN110728186A (en) Fire detection method based on multi-network fusion
CN118172308A (en) Hub surface defect detection method and device integrating attention mechanism and deformable convolution, electronic equipment and storage medium
CN113344005B (en) Image edge detection method based on optimized small-scale features
CN115761438A (en) Depth estimation-based saliency target detection method
CN110427920B (en) Real-time pedestrian analysis method oriented to monitoring environment
CN111402223B (en) Transformer substation defect problem detection method using transformer substation video image
CN114565764A (en) Port panorama sensing system based on ship instance segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant