CN115035371B - Well wall crack identification method based on multi-scale feature fusion neural network - Google Patents


Publication number
CN115035371B
CN115035371B (application CN202210524515A / CN202210524515.6A; earlier publication CN115035371A)
Authority
CN
China
Prior art keywords
shallow
features
feature
module
convolution
Prior art date
Legal status: Active (assumed; not a legal conclusion)
Application number
CN202210524515.6A
Other languages
Chinese (zh)
Other versions
CN115035371A (en)
Inventor
夏文鹤
朱喆昊
李皋
韩玉娇
林永学
李大奇
Current Assignee
Southwest Petroleum University
Original Assignee
Southwest Petroleum University
Priority date
Filing date
Publication date
Application filed by Southwest Petroleum University filed Critical Southwest Petroleum University
Priority to CN202210524515.6A
Publication of CN115035371A
Application granted
Publication of CN115035371B
Legal status: Active

Classifications

    • G — Physics › G06 — Computing; calculating or counting › G06V — Image or video recognition or understanding › G06V 10/806 — Fusion, i.e. combining data from various sources at the sensor, preprocessing, feature-extraction or classification level, of extracted features
    • G › G06 › G06N — Computing arrangements based on specific computational models › G06N 3/08 — Neural networks; learning methods
    • G › G06 › G06V › G06V 10/764 — Recognition using pattern recognition or machine learning, using classification, e.g. of video objects
    • G › G06 › G06V › G06V 10/82 — Recognition using pattern recognition or machine learning, using neural networks
    • Y — General tagging of new technological developments › Y02 — Technologies for mitigation or adaptation against climate change › Y02A 90/30 — Assessment of water resources


Abstract

The invention discloses a method for identifying well wall cracks based on a multi-scale feature fusion neural network, which comprises the following steps. S1: a shallow feature extraction module extracts shallow features from the input resistivity logging imaging graph. S2: a multi-scale cavity (dilated) convolution attention module extracts deep features from the shallow features. S3: a feature fusion upsampling module splices and fuses the shallow and deep features and restores the fused features to the original image input size. S4: the number of channels is adjusted to the number of label categories, and the probability that each pixel belongs to the background or the foreground is calculated from the background and foreground feature maps and the loss function, thereby realizing automatic identification of well wall crack regions. On the premise of guaranteeing the accuracy and speed of intelligent crack identification, the invention reduces the workload of manually detecting cracks in resistivity logging imaging graphs, provides a strong safety guarantee for drilling work, and meets the urgent demand of current drilling for cost reduction and efficiency improvement.

Description

Well wall crack identification method based on multi-scale feature fusion neural network
Technical Field
The invention relates to the technical field of deep learning, in particular to a resistivity logging imaging graph well wall crack identification method based on a multi-scale feature fusion neural network.
Background
In recent years, with the rapid development of big data, artificial intelligence, cloud computing and related technologies, the fourth industrial revolution, represented by artificial intelligence, has arrived, and intelligent oil and gas exploration and development has become an inevitable trend in the global oil and gas industry. Logging engineering has always been an indispensable technology in the field of oil and gas exploration and development, and underpins its safe, efficient and economical development. At present, the Chinese oil and gas industry is gradually advancing intelligent logging analysis and, by combining emerging theories and technologies such as big data and artificial intelligence, is expected to achieve accurate analysis of complex well conditions and intelligent closed-loop regulation of wellbore stability.
Existing crack identification techniques based on human-computer interaction involve a huge processing workload, strong human subjectivity and poor stability, and can hardly deliver stable and accurate characterization and analysis of cracks. It is therefore necessary to construct a multi-scale feature fusion network model for intelligent crack identification using a deep learning framework, extract deep features of the well wall image in the resistivity logging imaging graph, and combine shallow and deep features into new features with better characterization capability, so as to meet the demands of the intelligent crack identification task for high accuracy, high speed and strong anti-interference capability. On the premise of guaranteeing the accuracy and speed of intelligent crack identification, this reduces the workload of manual crack detection, provides a strong safety guarantee for drilling work, meets the urgent demand of current drilling for cost reduction and efficiency improvement, and offers a useful reference for subsequent wellbore stability analysis and regulation.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a method for identifying well wall cracks based on a multi-scale feature fusion neural network. Aiming at the problem of intelligently extracting and identifying crack contours in resistivity imaging logging data sets, the existing network architecture is optimized and improved so that the method achieves the best practicability and guarantees high identification accuracy even with few samples.
The aim of the invention is realized by the following technical scheme:
a method for identifying a well wall crack based on a multi-scale feature fusion neural network comprises the following specific steps:
s1: shallow feature extraction is carried out on an input image by using a shallow feature extraction module;
s2: deep feature extraction is carried out on shallow features by using a multi-scale cavity convolution attention module;
s3: the feature fusion upsampling module is adopted to restore the features after the shallow features and the deep features are spliced and fused to the original image input size;
s4: and adjusting the number of the channels to be the number of label categories, and calculating the probability that each pixel point belongs to the background and the foreground according to the background and the foreground feature map and the loss function, so as to realize the automatic identification of the well wall crack region.
The shallow feature extraction module in step S1 comprises a convolution layer, a BN layer, an activation function and a pooling layer. All convolution kernels are 3 × 3 with stride 1 and padding 1; the activation function is a ReLU function; the max pooling layer has a 2 × 2 kernel with stride 2. The shallow feature extraction module as a whole performs downsampling.
The multi-scale cavity convolution attention module comprises a multi-scale cavity convolution and an attention mechanism module.
The multi-scale cavity convolution consists of three parallel convolutions at different scales: one is a 1 × 1 convolution used to acquire global features, and the other two are cavity (dilated) convolutions with different dilation rates.
The attention mechanism module comprises a global pooling layer, fully connected layers, a ReLU function and a Sigmoid function. It first applies global average pooling to the input feature maps and then performs two fully connected operations: the first fully connected layer has fewer neurons, while the second has the same number of neurons as there are input feature channels. After the two fully connected layers, a Sigmoid function maps the output into the range 0–1, and this value is multiplied with the original input as a per-channel weight, completing the learning of each channel's weight.
The step S3 specifically comprises the following steps:
s301: reducing the deep characteristic size to be consistent with the shallow characteristic size by adopting an up-sampling operation;
s302: splicing the restored deep features and the thousand-layer features by using a direct connection mode to form a new feature tensor;
s303: a 3 x 3 convolution operation is again used successively on the new feature tensor to increase its characterizations capability, up-sampling the new feature tensor to restore the new feature tensor to the input image size.
The number of label categories is 2, i.e. the two channels are the background and foreground feature maps.
The step S4 specifically includes the following steps:
s401: using a 1 × 1 convolution to adjust the number of channels to the number of label categories, which is 2;
s402: calculating, from the background and foreground feature maps and the loss function, the probability that each pixel belongs to the background or the foreground;
s403: and setting background pixel points as black and foreground pixel points as label colors, so as to realize automatic identification of a well wall crack region.
The invention has the beneficial effects that:
the invention constructs a well wall crack region identification network by utilizing a multi-scale cavity convolution attention module and a feature fusion up-sampling module. For a given resistivity logging imaging graph, firstly, a shallow feature extraction module is adopted to extract features of image textures, forms, colors and the like in a crack form in the imaging graph. And then, a multi-scale cavity convolution module is used for shallow features to acquire high-level semantic information of the crack under different scales, and the weight of each channel is adjusted for the semantic features through a attention mechanism to form the final deep features. And finally, restoring the deep features of the cracks to be consistent with the shallow features in size by using an up-sampling module, directly connecting the deep features with the shallow features, and restoring the fused features to the original image input size by using the up-sampling module. In order to improve the generalization capability of the network model, the network model designed by the invention does not use a full connection layer to predict the final output result, and the number of channels is adjusted to the number of label categories by using 1X 1 convolution. At this time, the two channels are background and foreground (crack area) feature graphs, the probability that each pixel belongs to the background and the foreground is calculated according to the feature and the loss function, and finally the background pixel is set to be black, and the foreground pixel is set to be the label color. This operation reduces the number of model parameters while greatly reducing the likelihood of overfitting. Finally, the automatic identification of the well wall crack area is realized by calculating the probability that each pixel belongs to the crack and the background. 
On the premise of guaranteeing the accuracy and speed of intelligent crack identification, the invention reduces the workload of manual crack detection, provides a strong safety guarantee for drilling work, and meets the urgent demand of current drilling for cost reduction and efficiency improvement.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to the structures shown in these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of the network architecture of the present invention;
FIG. 2 is a schematic diagram of a shallow feature extraction module according to the present invention;
FIG. 3 is a schematic diagram of a multi-scale cavity convolution module of the present invention;
FIG. 4 is a block diagram of the attention mechanism module process flow of the present invention;
fig. 5 is a schematic diagram of a feature fusion up-sampling module structure according to the present invention.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention. In addition, the technical solutions of the embodiments may be combined with each other, but it is necessary to base that the technical solutions can be realized by those skilled in the art, and when the technical solutions are contradictory or cannot be realized, the technical solutions should be considered that the combination does not exist and is not within the scope of protection claimed by the present invention.
The invention mainly designs an end-to-end well wall crack region identification method based on multi-scale features, which comprises two parts. The first part uses a multi-scale cavity convolution module to acquire more comprehensive contextual features of the crack region in the resistivity imaging log, while an attention mechanism adjusts the weight of each channel. The second part introduces a feature fusion upsampling module, which fuses the deep features extracted by the first part with the shallow features and recombines them across multiple hierarchical levels to form more representative crack region features.
A method for identifying a well wall crack based on a multi-scale feature fusion neural network comprises the following specific steps:
s1: shallow feature extraction is carried out on an input image by using a shallow feature extraction module;
s2: deep feature extraction is carried out on shallow features by using a multi-scale cavity convolution attention module;
s3: the feature fusion upsampling module is adopted to restore the features after the shallow features and the deep features are spliced and fused to the original image input size;
s4: and adjusting the number of the channels to be the number of label categories, and calculating the probability that each pixel point belongs to the background and the foreground according to the background and the foreground feature map and the loss function, so as to realize the automatic identification of the well wall crack region.
It should be noted that, the shallow feature extraction module:
in general, the shallow features of an image are similar to the input image, and the shallow features include more pixel information, for example: color, texture, edges, etc. At present, the distribution form, filling condition and the like of the cracks are also needed to be analyzed to a certain extent by manually identifying the cracks, and the reaction of the information in the resistivity imaging log is texture and color information of the image. Therefore, a module needs to be designed to effectively extract shallow features of the input before deep features are extracted from the input.
Artificial intelligence has developed rapidly in the field of computer vision, and many image detection, classification and segmentation networks have good image feature extraction capabilities; for shallow feature extraction, most of these are improvements on the Unet network. The shallow feature extraction module designed here is therefore inspired by the Unet model; its specific structure is shown in fig. 2.
The module consists of a convolutional layer, a BN layer, an activation function, and a pooling layer. All convolution kernels are 3 × 3 with stride 1 and padding 1; the activation function is ReLU; the max pooling layer has a 2 × 2 kernel with stride 2. The module as a whole performs downsampling, and the BN layer normalizes the data produced by the convolutions, accelerating network training and convergence while preventing overfitting, controlling gradient explosion and preventing vanishing gradients. The detailed architecture of the module is shown in Table 1, where eps is the current learning rate and padding is the number of zeros padded around the image matrix.
TABLE 1
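The downsampling block described above (3 × 3 convolution with stride 1 and padding 1, batch normalization, ReLU, then 2 × 2 max pooling with stride 2) can be sketched in plain NumPy. This is an illustrative single-sample sketch, not the patent's actual implementation: the function names are ours, and `bn_relu` normalizes per channel over one sample as a stand-in for a trained BN layer.

```python
import numpy as np

def conv3x3(x, w, b):
    """3x3 convolution, stride 1, zero padding 1.
    x: (C_in, H, W), w: (C_out, C_in, 3, 3), b: (C_out,)."""
    c_out = w.shape[0]
    _, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))  # padding value 1 on H and W
    out = np.zeros((c_out, h, wd))
    for o in range(c_out):
        for i in range(h):
            for j in range(wd):
                out[o, i, j] = np.sum(xp[:, i:i + 3, j:j + 3] * w[o]) + b[o]
    return out

def bn_relu(x, eps=1e-5):
    """Per-channel normalization (single-sample stand-in for BN), then ReLU."""
    mean = x.mean(axis=(1, 2), keepdims=True)
    var = x.var(axis=(1, 2), keepdims=True)
    return np.maximum((x - mean) / np.sqrt(var + eps), 0.0)

def maxpool2x2(x):
    """2x2 max pooling with stride 2; H and W are assumed even."""
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))
```

Note that the 3 × 3 convolution with padding 1 preserves spatial size, so only the pooling layer halves H and W — this is what makes the module "present a downsampling process" overall.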
It should be noted that, the multi-scale cavity convolution attention module:
1. multi-scale hole convolution
In a conventional convolution network, a convolution kernel is generally used to identify the discriminative regions of a target, but classification ability is lost in weakly discriminative regions. In deep convolution networks, convolution kernels with receptive fields of different sizes are used to acquire and blend information around the target region, thereby improving classification ability. Cavity (hole, or dilated) convolution is a special form of ordinary convolution that effectively enlarges the receptive field of the kernel by inserting zero values between its weight parameters.
For a convolution kernel of size n × n, the receptive field is given by

R = n + (n − 1) × (D_n − 1)

where D_n is the void (dilation) rate of the convolution kernel, i.e. the number of zero values inserted between the weight parameters. Because only n × n parameters are involved throughout the convolution, cavity convolution enlarges the receptive field without increasing the amount of computation, while preserving the original resolution. Inspired by DeepLab, which captures multi-scale information with an atrous (hole) spatial pyramid, the multi-scale cavity convolution module of this network was designed. The original input data are processed by ordinary convolution and pooling into shallow image features, which are then passed to the multi-scale cavity convolution module. The module consists of three parallel convolutions at different scales: one is a 1 × 1 convolution used to acquire global features, and the other two are cavity convolutions with different dilation rates; the specific structure is shown in fig. 3.
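As a quick check of the receptive-field formula above, a small helper (illustrative only; the function name is ours) evaluates R = n + (n − 1) × (D_n − 1) for an n × n kernel with void rate D_n:

```python
def dilated_kernel_receptive_field(n, d):
    """Effective receptive field of an n x n kernel with void (dilation) rate d:
    R = n + (n - 1) * (d - 1). With d = 1 this reduces to an ordinary kernel."""
    return n + (n - 1) * (d - 1)
```

For example, a 3 × 3 kernel with void rate 2 covers a 5 × 5 region, and with void rate 4 a 9 × 9 region, while still using only nine weights — which is the computational advantage the text describes.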
2. Attention mechanism
After features have been extracted by the convolutions at three different scales, they are spliced in a direct-connection (concat) manner. Compared with traditional feature fusion by weighted multiplication, direct connection preserves the feature information at every scale to the greatest extent. Because the shallow features produced by ordinary convolution have many channels, an attention mechanism module (shown in fig. 4) is added afterwards to actively learn the weight of each channel, improving training quality and shortening each training pass.
The attention mechanism module first applies global average pooling to the input feature maps and then performs two fully connected operations: the first fully connected layer has fewer neurons, while the second has the same number of neurons as there are input feature channels. After the two fully connected layers, a Sigmoid function maps the output into the range 0–1, and this value is multiplied with the original input as a per-channel weight, completing the learning of each channel's weight. The number of channels is then adjusted by a 1 × 1 convolution kernel to equal the number of shallow feature channels.
This completes the design of the multi-scale cavity convolution attention module; its specific framework is shown in Table 2. The dilation value in the table is the void rate of the cavity convolution, and the input of the conv_cat module is the outputs of the branch1, branch2 and branch3 modules. Since the attention mechanism (SELayer) merely weights each channel of the feature map, the input and output feature sizes are unchanged.
TABLE 2
After processing by the multi-scale cavity convolution module designed above, the original input image is converted into rich semantic visual features, but the spatial resolution of these features is coarse, so the crack region cannot be extracted from them directly. As described earlier, crack identification is currently realized mainly through manual human-computer interaction, and manual identification relies heavily on information such as the morphology and color of the crack, which in computer vision belongs to the shallow features. The deep and shallow features are therefore spliced to form a feature tensor with better characterization capability (as shown in fig. 5). The upsampling module here has two inputs: the shallow image features extracted by ordinary convolution, and the deep features extracted by the multi-scale cavity convolution. Because convolution and pooling operations reduce the data size, the shallow and deep features have inconsistent sizes and cannot be spliced directly; the deep features must first be restored to the shallow feature size by an upsampling operation. Upsampling is implemented with bilinear interpolation, which is computationally simple and, unlike other upsampling methods, requires no parameter learning.
The deep features restored by the upsampling operation have the same size as the shallow features, and the two are spliced in a direct-connection manner to form a new feature tensor. Successive 3 × 3 convolution operations are then applied to the new feature tensor to improve its characterization capability. At this point the feature tensor is much smaller than the original input image, so it is upsampled to restore it to the input image size.
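The fusion step — bilinear upsampling of the deep features to the shallow size, followed by channel-wise concatenation — can be sketched like this. It is an illustrative NumPy version under our own naming (`bilinear_upsample`, `fuse`); the sampling grid uses an align-corners-style mapping, which may differ in detail from the framework the authors used.

```python
import numpy as np

def bilinear_upsample(x, out_h, out_w):
    """Resize x: (C, H, W) -> (C, out_h, out_w) by bilinear interpolation."""
    c, h, w = x.shape
    rows = np.linspace(0, h - 1, out_h)       # sample positions in source rows
    cols = np.linspace(0, w - 1, out_w)       # sample positions in source cols
    r0 = np.floor(rows).astype(int); r1 = np.minimum(r0 + 1, h - 1)
    c0 = np.floor(cols).astype(int); c1 = np.minimum(c0 + 1, w - 1)
    fr = (rows - r0)[None, :, None]           # fractional row offsets
    fc = (cols - c0)[None, None, :]           # fractional col offsets
    top = x[:, r0][:, :, c0] * (1 - fc) + x[:, r0][:, :, c1] * fc
    bot = x[:, r1][:, :, c0] * (1 - fc) + x[:, r1][:, :, c1] * fc
    return top * (1 - fr) + bot * fr

def fuse(shallow, deep):
    """Upsample deep features to the shallow size, then concat on channels."""
    up = bilinear_upsample(deep, shallow.shape[1], shallow.shape[2])
    return np.concatenate([shallow, up], axis=0)
```

Note that, as the text says, bilinear interpolation introduces no learnable parameters: the output is a fixed weighted average of the four nearest source pixels.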
The specific framework of the module is shown in Table 3, where p is the probability of temporarily and randomly discarding neurons (dropout); this operation reduces the probability of network overfitting to some extent and shortens training time. The input of the cat_conv layer is the outputs of the shortcut_conv layer and the SELayer. After training, the module can extract features with high characterization capability from the resistivity imaging log.
TABLE 3
The invention constructs a well wall crack area identification network as shown in figure 1.
For a given well wall crack image, a shallow feature extraction module first extracts features such as texture, morphology and color. A multi-scale cavity convolution module is then applied to the shallow features to acquire high-level semantic information about the crack at different scales, and an attention mechanism adjusts the weight of each channel of these semantic features to form the final deep features. Finally, an upsampling module restores the deep crack features to the same size as the shallow features, the two are directly connected, and the fused features are restored to the original image input size by the upsampling module. To enhance the generalization capability of the network model, the designed network does not use a fully connected layer to predict the final output; instead, a 1 × 1 convolution adjusts the number of channels to the number of label categories (2 here, because the task only needs to distinguish the background from the crack region). The two resulting channels are the background and foreground (crack region) feature maps; the probability that each pixel belongs to the background or the foreground is calculated from these features and the loss function, and finally background pixels are set to black and foreground pixels to the label color. This operation reduces the number of model parameters while greatly reducing the likelihood of overfitting. Automatic identification of the well wall crack region is thus realized by calculating the probability that each pixel belongs to a crack or the background.
The multi-scale information fusion crack region identification network designed here uses a cross-entropy function as its loss and adaptive moment estimation (Adam) as its optimizer. Unlike the traditional stochastic gradient descent algorithm, Adam establishes an independent, adaptive learning rate for each weight parameter by computing first- and second-moment estimates of the gradient. It is computationally efficient, requires little memory and is well suited to large-scale parameter optimization, which is why this optimizer was selected for the designed network.
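The prediction head described above — a two-channel map turned into per-pixel background/foreground probabilities, scored with cross-entropy, and thresholded into a black/label-color mask — can be sketched as follows. This is an illustrative NumPy version with our own function names; "white" (255) stands in for the label color, and a single two-channel logit map is assumed.

```python
import numpy as np

def pixel_softmax(logits):
    """logits: (2, H, W) -> per-pixel probabilities over background/foreground."""
    z = logits - logits.max(axis=0, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=0, keepdims=True)

def cross_entropy(probs, labels, eps=1e-12):
    """Mean per-pixel cross-entropy; labels: (H, W) with 0 = background, 1 = crack."""
    h, w = labels.shape
    picked = probs[labels, np.arange(h)[:, None], np.arange(w)[None, :]]
    return -np.log(picked + eps).mean()

def segment(logits):
    """Black background, label color (white here) for predicted crack pixels."""
    pred = pixel_softmax(logits).argmax(axis=0)
    return np.where(pred == 1, 255, 0).astype(np.uint8)
```

Replacing a fully connected prediction head with this 1 × 1-convolution-plus-softmax scheme is what lets the parameter count stay small regardless of image size, which matches the overfitting argument made in the description.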
The foregoing is merely a preferred embodiment of the invention. It should be understood that the invention is not limited to the form disclosed herein and is not to be construed as excluding other embodiments; it is capable of numerous other combinations, modifications and environments, and may be altered within the scope of the inventive concept described herein, whether by the above teachings or by the skill or knowledge of the relevant art. Modifications and variations that do not depart from the spirit and scope of the invention are intended to fall within the scope of the appended claims.

Claims (3)

1. The method for identifying the well wall cracks based on the multi-scale feature fusion neural network is characterized by comprising the following specific steps of:
s1: shallow feature extraction is carried out on the input resistivity logging imaging graph by using a shallow feature extraction module; the shallow feature extraction module comprises a convolution layer, a BN layer, an activation function and a pooling layer; the activation function is a ReLU function; the shallow feature extraction module as a whole performs downsampling;
s2: deep feature extraction is carried out on the shallow features by using a multi-scale cavity convolution attention module; the multi-scale cavity convolution attention module comprises a multi-scale cavity convolution and an attention mechanism module; the multi-scale cavity convolution consists of three parallel convolutions at different scales, one of which is a 1 × 1 convolution used for acquiring global features while the other two are cavity convolutions with different dilation rates; the attention mechanism module comprises a global pooling layer, a fully connected layer, a ReLU function and a Sigmoid function;
s3: the feature fusion upsampling module is adopted to restore the features after the shallow features and the deep features are spliced and fused to the original image input size; the step S3 specifically comprises the following steps:
s301: reducing the deep characteristic size to be consistent with the shallow characteristic size by adopting an up-sampling operation;
s302: splicing the restored deep features and shallow features by using a direct connection mode to form a new feature tensor;
s303: the new feature tensor is continuously used for 3 multiplied by 3 convolution operation again to improve the characterization capability, and the new feature tensor is up-sampled to restore the new feature tensor to the input image size;
s4: and adjusting the number of the channels to be the number of label categories, and calculating the probability that each pixel point belongs to the background and the foreground according to the background and the foreground feature map and the loss function, so as to realize the automatic identification of the well wall crack region.
2. The method for identifying well wall cracks based on the multi-scale feature fusion neural network according to claim 1, wherein the number of label categories is 2, i.e., the two channels are the background and foreground feature maps.
3. The method for identifying well wall cracks based on the multi-scale feature fusion neural network according to claim 1, wherein step S4 specifically comprises the following steps:
S401: using a 1 × 1 convolution to adjust the number of channels to the number of label categories, the number of label categories being 2;
S402: calculating, from the background and foreground feature maps and the loss function, the probability that each pixel belongs to the background or the foreground;
S403: setting background pixels to black and foreground pixels to the label color, thereby realizing automatic identification of the well wall crack region.
CN202210524515.6A 2022-05-13 2022-05-13 Well wall crack identification method based on multi-scale feature fusion neural network Active CN115035371B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210524515.6A CN115035371B (en) 2022-05-13 2022-05-13 Well wall crack identification method based on multi-scale feature fusion neural network

Publications (2)

Publication Number Publication Date
CN115035371A CN115035371A (en) 2022-09-09
CN115035371B true CN115035371B (en) 2023-09-12

Family

ID=83120672

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210524515.6A Active CN115035371B (en) 2022-05-13 2022-05-13 Well wall crack identification method based on multi-scale feature fusion neural network

Country Status (1)

Country Link
CN (1) CN115035371B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116122802B (en) * 2022-12-23 2023-07-14 中国科学院地质与地球物理研究所 Drilling logging feature extraction method and system based on Unet double-channel output
CN117292193B (en) * 2023-10-08 2024-07-09 无锡科维智能物流装备有限公司 Multi-station intelligent logistics conveying system
CN117211758B (en) * 2023-11-07 2024-04-02 克拉玛依市远山石油科技有限公司 Intelligent drilling control system and method for shallow hole coring
CN118247784B (en) * 2024-05-28 2024-08-06 南昌大学第二附属医院 Cryptococcus image recognition method based on deep convolutional neural network

Citations (11)

Publication number Priority date Publication date Assignee Title
CN104730596A (en) * 2015-01-25 2015-06-24 中国石油大学(华东) Discrete fracture modeling method based on multiscale factor restraint
CN107609530A (en) * 2017-09-25 2018-01-19 常州工学院 One kind is with brill orientation electromagnetic resistivity imaging features extracting method
CN108956622A (en) * 2018-07-04 2018-12-07 安徽理工大学 A kind of acquisition method and device of shaft wall image
CN109614883A (en) * 2018-11-21 2019-04-12 瑾逸科技发展扬州有限公司 A kind of tight sand crack intelligent identification Method based on convolutional neural networks
CN111105377A (en) * 2019-12-19 2020-05-05 西南石油大学 Method for repairing micro-resistivity imaging image
CN111597932A (en) * 2020-04-30 2020-08-28 汕头大学 Road crack image identification method, device and system based on convolutional neural network
CN112364911A (en) * 2020-11-06 2021-02-12 东北石油大学 Resistivity imaging inversion method and device and storage medium
CN112465748A (en) * 2020-11-10 2021-03-09 西南科技大学 Neural network based crack identification method, device, equipment and storage medium
CN112489023A (en) * 2020-12-02 2021-03-12 重庆邮电大学 Pavement crack detection method based on multiple scales and multiple layers
CN113255678A (en) * 2021-06-17 2021-08-13 云南航天工程物探检测股份有限公司 Road crack automatic identification method based on semantic segmentation
CN114114457A (en) * 2021-11-09 2022-03-01 中国石油大学(北京) Fracture characterization method, device and equipment based on multi-modal logging data

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US11216687B2 (en) * 2019-05-15 2022-01-04 Getac Technology Corporation Image detection scanning method for object surface defects and image detection scanning system thereof

Non-Patent Citations (1)

Title
"Logging Evaluation of Shale Gas Reservoirs and Its Progress"; Wan Jinbin et al.; Well Logging Technology (《测井技术》); Vol. 36, No. 5; pp. 441-447 *

Similar Documents

Publication Publication Date Title
CN115035371B (en) Well wall crack identification method based on multi-scale feature fusion neural network
CN109299274B (en) Natural scene text detection method based on full convolution neural network
CN107945204B (en) Pixel-level image matting method based on generation countermeasure network
CN110675368B (en) Cell image semantic segmentation method integrating image segmentation and classification
CN110796009A (en) Method and system for detecting marine vessel based on multi-scale convolution neural network model
CN111950453A (en) Optional-shape text recognition method based on selective attention mechanism
CN109903339B (en) Video group figure positioning detection method based on multi-dimensional fusion features
CN111882620A (en) Road drivable area segmentation method based on multi-scale information
CN113487610B (en) Herpes image recognition method and device, computer equipment and storage medium
CN113763327B (en) Power plant pipeline high-pressure steam leakage detection method based on CBAM-Res_Unet
CN112288084B (en) Deep learning target detection network compression method based on feature map channel importance
CN114092793B (en) End-to-end biological target detection method suitable for complex underwater environment
CN112381030A (en) Satellite optical remote sensing image target detection method based on feature fusion
CN109815957A (en) A kind of character recognition method based on color image under complex background
CN110490155A (en) A kind of no-fly airspace unmanned plane detection method
CN117456330A (en) MSFAF-Net-based low-illumination target detection method
CN110728186A (en) Fire detection method based on multi-network fusion
CN118230354A (en) Sign language recognition method based on improvement YOLOv under complex scene
CN116977280A (en) Rail surface defect detection method based on improved UPerNet and connected domain analysis
CN116740572A (en) Marine vessel target detection method and system based on improved YOLOX
CN108460772B (en) Advertisement harassment fax image detection system and method based on convolutional neural network
CN113344005B (en) Image edge detection method based on optimized small-scale features
CN110427920B (en) Real-time pedestrian analysis method oriented to monitoring environment
CN111402223B (en) Transformer substation defect problem detection method using transformer substation video image
CN114565764A (en) Port panorama sensing system based on ship instance segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant