CN115880505B - Low-order fault intelligent identification method for target edge detection neural network - Google Patents


Info

Publication number
CN115880505B
CN115880505B (application number CN202310214221.8A)
Authority
CN
China
Prior art keywords
data
network
fault
low
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310214221.8A
Other languages
Chinese (zh)
Other versions
CN115880505A (en)
Inventor
丁仁伟
韩天娇
赵硕
张玉洁
赵俐红
刘一霖
李建平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University of Science and Technology
Original Assignee
Shandong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University of Science and Technology filed Critical Shandong University of Science and Technology
Priority to CN202310214221.8A priority Critical patent/CN115880505B/en
Publication of CN115880505A publication Critical patent/CN115880505A/en
Application granted granted Critical
Publication of CN115880505B publication Critical patent/CN115880505B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Geophysics And Detection Of Objects (AREA)

Abstract

The invention discloses a low-order fault intelligent identification method based on a target edge detection neural network, which belongs to the technical field of geophysics and is used for the intelligent identification of low-order faults. The method comprises the following steps: an encoder is constructed with dilated convolution to enlarge the network receptive field so that the network fully learns low-order fault information; the decoder introduces an attention mechanism to strengthen the capture of shallow-network position information and deep-network semantic information; and the fault information of different scales output by the decoder is fused, further improving the recognition accuracy of the network for low-order faults. The trained model is applied to simulated data and actual seismic data, and the results show that the method can effectively identify low-order faults, reduce false fault identification, increase fault continuity and improve fault accuracy, providing technical support for the exploration and development of residual oil and improving the recovery efficiency of old oil fields.

Description

Low-order fault intelligent identification method for target edge detection neural network
Technical Field
The invention discloses a low-order fault intelligent identification method of a target edge detection neural network, and belongs to the technical field of geophysics.
Background
The development of residual oil is important for improving the later-stage production of an oil field, and the identification of low-order faults has a great influence on the exploration and development of residual oil. High-order faults have long been identifiable from seismic profiles, while low-order faults are very difficult to interpret using conventional interpretation methods. Low-order faults derive from high-order faults or from the bending deformation of rock strata; they have various forms and a discrete structure, are strongly concealed, and have a small displacement with a throw of only a few meters. They show an unclear response on conventional seismic sections, and it is difficult to determine fault strike only from individual well stratum correlation and conventional seismic data. From a two-dimensional perspective, when the fault model is projected onto a single seismic section, fault lines cross and cause longitudinal confusion; fault spatial information is easily lost, fault identification is difficult and time-consuming, and manual interpretation cannot meet the requirements of actual production. With the development of deep learning, many scholars have used complex convolutional neural networks to predict faults, and the accuracy and efficiency of convolutional neural networks in fault identification have been shown to far exceed those of traditional methods, pushing the oil and gas exploration industry toward automation.
The faults studied by the invention are structures in which the rock formations on the two sides of a fracture surface undergo significant relative displacement. Faults typically exist in two-dimensional and three-dimensional seismic data in the form of continuous fault lines and fault planes rather than isolated fault points. The variety of actual faults, together with complex seismic noise, makes fault interpretation more difficult. Traditional single-model techniques extract hand-designed fault features that lack generality, so the fault identification effect is often poor. When training a deep network model, the huge number of network parameters requires a huge amount of data, and the larger the training sample size, the better the training effect. In practice, however, very little actual data is available for training, and no corresponding fault labels exist. In three-dimensional fault identification, multi-level features are extracted from the five stages of a three-dimensional HED backbone network to obtain the corresponding side outputs. Observation shows that shallow features usually contain a large amount of detailed structural information accompanied by many noise points, while deep features usually contain rich semantic information; because of the repeated pooling in the backbone network, the resolution of the deep features is lower and their edge expression is more blurred. Although multi-scale features allow shallow and deep networks to complement each other, a linear combination of scale features cannot adequately mine the complex relations among multi-layer edge features, because the individual side outputs are independent. The Unet encoder-decoder structure only uses the features of the last convolution layer, so the information is relatively limited, and for some smaller faults the features of a single convolution layer are insufficient for a correct judgment.
Disclosure of Invention
The invention aims to provide a low-order fault intelligent identification method of a target edge detection neural network, in order to solve the problems of thick edge contour lines, low detection precision and scarce training data in the prior art.
A low-order fault intelligent identification method of a target edge detection neural network comprises the following steps:
step 1: constructing a training set, and generating 200 pairs of simulated three-dimensional seismic data and corresponding labels;
step 2: building a network structure SH-Unet according to the characteristics of the low-order fault;
step 3: preprocessing all data;
step 4: model training, namely putting data into a built network, and performing model training by using class balance cross entropy as a loss function to obtain a trained SH-UNet network model;
step 5: model trial calculation, namely predicting the simulation data and actual seismic data that did not participate in training by using the network model obtained in step 4 to obtain a prediction result.
The building of the network structure SH-Unet comprises the following steps:
constructing an encoder by means of dilated convolution to expand the receptive field of the convolution kernel, wherein the size of the convolution kernel is 3 × 3, k represents the size of the dilated convolution kernel, d represents the dilation coefficient, and the receptive field k′ is calculated as follows:

$$k' = k + (k - 1) \times (d - 1)$$
and adding an SE attention module before each side output to fuse the deep-network features and guide the shallow network to remove weak edges and noise; the fault information of different scales output by the decoder is fused through a concat operation, improving the recognition accuracy of the network for low-order faults.
The SE attention module includes:
performing a Squeeze operation on the feature map obtained by convolution to obtain channel-level global features; performing an Excitation operation on the global features to learn the relations among channels and obtain the weights of the different channels; and multiplying these weights with the original features to obtain the final features, where a channel refers to a channel in the convolutional network.
The Squeeze operation includes:
The input of size H × W × D × C is compressed to 1 × 1 × 1 × C by global average pooling, i.e. the H × W × D spatial dimensions are compressed to a single value per channel, giving a global field of view with a wider sensing area; the result of the Squeeze operation $z_c$ is as follows:

$$z_c = F_{sq}(u_c) = \frac{1}{H \times W \times D}\sum_{i=1}^{H}\sum_{j=1}^{W}\sum_{m=1}^{D} u_c(i, j, m)$$

where $F_{sq}$ represents the Squeeze operation, $u_c$ represents the feature map of channel c, C represents the channel, H the height, W the width, D the depth, and $(i, j, m)$ are the traversal indices;
The above converts the input into an output. For the result z of the Squeeze operation, the correlation between channels is learned with two 1 × 1 convolution layers: the first convolution layer $W_1$ reduces the dimension and is followed by a ReLU activation, and the second convolution layer $W_2$ restores the original dimension; the output result s is then obtained through a Sigmoid function:

$$s = F_{ex}(z, W) = \sigma\bigl(W_2\,\delta(W_1 z)\bigr)$$

where $F_{ex}$ represents the Excitation operation, $\delta$ represents the ReLU function, $\sigma$ represents the Sigmoid function, $W_1$ represents the parameters of the first 1 × 1 convolution operation, whose aim is to reduce the dimension, and $W_2$ represents the parameters of the second 1 × 1 convolution operation, which restores the dimension to the input dimension;
The original features in the channel dimension are recalibrated by weighting the features of each channel obtained above, as follows:

$$\tilde{x}_c = F_{scale}(u_c, s_c) = s_c \cdot u_c$$

where $F_{scale}$ represents the Scale operation, $u_c$ represents the three-dimensional feature matrix of channel c, and $s_c$ represents its weight.
The network structure SH-Unet leads out one side output at the end of each scale, and leads out 5 different scale side outputs in total;
when the input data matrix size is 128 × 128 × 128 × 1, the feature maps of the 5 side outputs are O1, O2, O3, O4 and O5, with sizes 128 × 128 × 128 × 1, 64 × 64 × 64 × 1, 32 × 32 × 32 × 1, 16 × 16 × 16 × 1 and 8 × 8 × 8 × 1, respectively. The feature maps O2–O5 are upsampled by factors of 2–16, respectively, while the size of O1 is kept unchanged, so that SH-Unet obtains feature maps of the same size from every scale; the 5 feature maps are superimposed to obtain a 128 × 128 × 128 × 5 feature data set, and a convolution operation with 1 output channel and a convolution kernel size of 1 × 1 × 1 is introduced.
The data set is Z-Score normalized, and during model training the intermediate outputs of the network are adjusted with the mini-batch mean and variance so that the data become more stable; the Z-Score normalization formula is as follows:

$$z = \frac{x - \mu}{\sigma}$$

where x represents the original data, $\mu$ represents the mean of all the data, and $\sigma$ represents the standard deviation.
Step 4 introduces a class-balanced cross entropy loss function to balance the loss between positive and negative samples; the class-balanced BCE loss is calculated as follows:

$$L = -\frac{1}{N}\sum_{i=1}^{N}\Bigl[\beta\, y_i \log \hat{y}_i + (1-\beta)(1-y_i)\log(1-\hat{y}_i)\Bigr]$$

where $\beta$ is the ratio of non-fault data points to the total data points, $1-\beta$ is the ratio of fault data points in the three-dimensional seismic data, N is the number of input three-dimensional seismic data points, $y_i$ is the fault binary label value, and $\hat{y}_i$ is the fault binary predicted value.
The SH-UNet model trained in step 4 is used to predict untrained data and actual seismic data.
Compared with the prior art, the invention has the following beneficial effects: the invention combines two deep learning frameworks, a semantic segmentation framework (Unet) and an edge detection framework (HED); deep and shallow features are fused in the network through a symmetric encoder-decoder structure, the training-set data are fully utilized, and the network still performs well with a small data set. By applying deep supervision to the edge predictions at multiple resolutions, the training efficiency and generalization capability of the model are improved. Finally, an attention mechanism (SENet) is introduced to adaptively combine these multi-scale predictions into the final model output. According to the characteristics of faults, an SH-Unet model is proposed that exploits the advantages of segmentation and edge detection methods, increases the receptive field of the network, reduces the loss of image detail information during downsampling, and introduces an attention mechanism to achieve high-precision segmentation of faults and non-faults. The feature information of different scales output by the decoder is fused so that multi-scale feature information is complementary, which facilitates the combination of shallow position information and deep semantic information, yields optimal fault edge information, and improves the fault identification accuracy of the model. The amount of training data is increased through data simulation; with very little actual training data available, the simulated training data both increase the data volume and are close to the actual data, further improving the network training accuracy.
Drawings
FIG. 1 is a training flow diagram of the present invention.
Fig. 2 is an overall structure diagram of the SEnet of the present invention.
Fig. 3 is a diagram of the SH-Unet network structure of the present invention.
FIG. 4 is a diagram of the effect of the simulated three-dimensional seismic data of the invention.
FIG. 5 is a graph of label effects corresponding to simulated three-dimensional seismic data in accordance with the present invention.
FIG. 6 is a graph of the predicted effect of the simulated seismic data of the invention.
Fig. 7 is a fault effect graph of the F3 actual data of the present invention.
Fig. 8 is a graph showing the fault prediction effect on the F3 actual data of the present invention.
FIG. 9 is a process diagram of constructing a sample set according to the present invention, where a is the three-dimensional reflectivity model diagram, b is the diagram after adding the fold structure, c is the diagram after adding the planar shear structure, d is the diagram after adding faults to the model, e is the synthetic seismic record diagram, and f is the diagram of e after adding noise.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions in the present invention will be clearly and completely described below, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
A low-order fault intelligent identification method of a target edge detection neural network comprises the following steps:
step 1: constructing a training set, and generating 200 pairs of simulated three-dimensional seismic data and corresponding labels;
step 2: building a network structure SH-Unet according to the characteristics of the low-order fault;
step 3: preprocessing all data;
step 4: model training, namely putting data into a built network, and performing model training by using class balance cross entropy as a loss function to obtain a trained SH-UNet network model;
step 5: model trial calculation, namely predicting the simulation data and actual seismic data that did not participate in training by using the network model obtained in step 4 to obtain a prediction result.
The building of the network structure SH-Unet comprises the following steps:
constructing an encoder by means of dilated convolution to expand the receptive field of the convolution kernel, wherein the size of the convolution kernel is 3 × 3, k represents the size of the dilated convolution kernel, d represents the dilation coefficient, and the receptive field k′ is calculated as follows:

$$k' = k + (k - 1) \times (d - 1)$$
and adding an SE attention module before each side output to fuse the deep-network features and guide the shallow network to remove weak edges and noise; the fault information of different scales output by the decoder is fused through a concat operation, improving the recognition accuracy of the network for low-order faults.
The SE attention module includes:
performing a Squeeze operation on the feature map obtained by convolution to obtain channel-level global features; performing an Excitation operation on the global features to learn the relations among channels and obtain the weights of the different channels; and multiplying these weights with the original features to obtain the final features, where a channel refers to a channel in the convolutional network.
The Squeeze operation includes:
The input of size H × W × D × C is compressed to 1 × 1 × 1 × C by global average pooling, i.e. the H × W × D spatial dimensions are compressed to a single value per channel, giving a global field of view with a wider sensing area; the result of the Squeeze operation $z_c$ is as follows:

$$z_c = F_{sq}(u_c) = \frac{1}{H \times W \times D}\sum_{i=1}^{H}\sum_{j=1}^{W}\sum_{m=1}^{D} u_c(i, j, m)$$

where $F_{sq}$ represents the Squeeze operation, $u_c$ represents the feature map of channel c, C represents the channel, H the height, W the width, D the depth, and $(i, j, m)$ are the traversal indices;
The above converts the input into an output. For the result z of the Squeeze operation, the correlation between channels is learned with two 1 × 1 convolution layers: the first convolution layer $W_1$ reduces the dimension and is followed by a ReLU activation, and the second convolution layer $W_2$ restores the original dimension; the output result s is then obtained through a Sigmoid function:

$$s = F_{ex}(z, W) = \sigma\bigl(W_2\,\delta(W_1 z)\bigr)$$

where $F_{ex}$ represents the Excitation operation, $\delta$ represents the ReLU function, $\sigma$ represents the Sigmoid function, $W_1$ represents the parameters of the first 1 × 1 convolution operation, whose aim is to reduce the dimension, and $W_2$ represents the parameters of the second 1 × 1 convolution operation, which restores the dimension to the input dimension;
The original features in the channel dimension are recalibrated by weighting the features of each channel obtained above, as follows:

$$\tilde{x}_c = F_{scale}(u_c, s_c) = s_c \cdot u_c$$

where $F_{scale}$ represents the Scale operation, $u_c$ represents the three-dimensional feature matrix of channel c, and $s_c$ represents its weight.
The network structure SH-Unet leads out one side output at the end of each scale, and leads out 5 different scale side outputs in total;
when the input data matrix size is 128 × 128 × 128 × 1, the feature maps of the 5 side outputs are O1, O2, O3, O4 and O5, with sizes 128 × 128 × 128 × 1, 64 × 64 × 64 × 1, 32 × 32 × 32 × 1, 16 × 16 × 16 × 1 and 8 × 8 × 8 × 1, respectively. The feature maps O2–O5 are upsampled by factors of 2–16, respectively, while the size of O1 is kept unchanged, so that SH-Unet obtains feature maps of the same size from every scale; the 5 feature maps are superimposed to obtain a 128 × 128 × 128 × 5 feature data set, and a convolution operation with 1 output channel and a convolution kernel size of 1 × 1 × 1 is introduced.
The data set is Z-Score normalized, and during model training the intermediate outputs of the network are adjusted with the mini-batch mean and variance so that the data become more stable; the Z-Score normalization formula is as follows:

$$z = \frac{x - \mu}{\sigma}$$

where x represents the original data, $\mu$ represents the mean of all the data, and $\sigma$ represents the standard deviation.
Step 4 introduces a class-balanced cross entropy loss function to balance the loss between positive and negative samples; the class-balanced BCE loss is calculated as follows:

$$L = -\frac{1}{N}\sum_{i=1}^{N}\Bigl[\beta\, y_i \log \hat{y}_i + (1-\beta)(1-y_i)\log(1-\hat{y}_i)\Bigr]$$

where $\beta$ is the ratio of non-fault data points to the total data points, $1-\beta$ is the ratio of fault data points in the three-dimensional seismic data, N is the number of input three-dimensional seismic data points, $y_i$ is the fault binary label value, and $\hat{y}_i$ is the fault binary predicted value.
The SH-UNet model trained in step 4 is used to predict untrained data and actual seismic data.
In the embodiment of the invention, a hybrid model is introduced that considers edge information and semantic information simultaneously to complete fault segmentation and edge detection; the invention thus combines the advantages of the two frameworks, retaining the position information of the shallow layers while adding the semantic information of the deep layers. With this architecture, deep features carrying semantic information are gradually transferred to shallow features and combined effectively. Details lost in the deep edge expression are progressively retrieved, and the blurred high-level feature maps are progressively refined throughout the process.
In the low-order fault identification task, in order to solve the problem of low-order fault information loss caused by the downsampling layers in fault segmentation, the receptive field of the convolution kernel is enlarged using dilated convolution, which improves the accuracy of fault segmentation. With the successive pooling layers in the fault-identification backbone network, deep features acquire a large receptive field, but the feature resolution decreases, and lower resolution affects dense prediction. A larger receptive field is therefore needed to better learn the fault features of the input, together with a larger feature resolution for pixel-by-pixel prediction. The kernel of a dilated convolution expands in size relative to an ordinary convolution kernel, but the number of kernel elements actually participating in the operation is unchanged.
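As an illustration of this idea, the following is a minimal sketch (not the patented implementation) of a dilated 3-D convolution encoder block; the PyTorch framework, channel counts and dilation rate are assumptions made only for the example.

```python
import torch
import torch.nn as nn

def effective_kernel(k: int, d: int) -> int:
    """Effective receptive field of a dilated kernel: k' = k + (k - 1) * (d - 1)."""
    return k + (k - 1) * (d - 1)

class DilatedEncoderBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, dilation: int = 2):
        super().__init__()
        # padding = dilation keeps the spatial size unchanged for a 3x3x3 kernel
        self.conv = nn.Conv3d(in_ch, out_ch, kernel_size=3,
                              padding=dilation, dilation=dilation)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.conv(x))

if __name__ == "__main__":
    print(effective_kernel(3, 2))          # 5: a 3x3x3 kernel with d=2 covers a 5x5x5 area
    block = DilatedEncoderBlock(1, 16, dilation=2)
    x = torch.zeros(1, 1, 128, 128, 128)   # one simulated seismic cube (N, C, D, H, W)
    print(block(x).shape)                  # torch.Size([1, 16, 128, 128, 128])
```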
Simple addition is not the optimal scheme for synthesizing the feature maps; as shown in fig. 2, the SE attention module is used to better combine the two parts of features, better integrate the information of the deep-network features, and guide the shallow network to remove weak edges and noise. Useful features are emphasized according to their importance, noise and other irrelevant features are suppressed, and each channel feature is weighted, so that the network learns the low-order fault features better, the segmentation accuracy of fault regions is improved, and the ability of the network to extract fault features is enhanced.
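The following is a minimal Squeeze-and-Excitation sketch for 3-D feature maps, again assuming PyTorch; the reduction ratio r = 8 is an illustrative choice rather than a value taken from the patent.

```python
import torch
import torch.nn as nn

class SEBlock3D(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool3d(1)   # Squeeze: H x W x D -> 1 x 1 x 1 per channel
        self.excite = nn.Sequential(             # Excitation: two 1x1x1 convolutions (reduce, restore)
            nn.Conv3d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        s = self.excite(self.squeeze(u))   # per-channel weights s in (0, 1)
        return u * s                       # Scale: reweight each channel of the original features

if __name__ == "__main__":
    se = SEBlock3D(32)
    u = torch.randn(1, 32, 16, 16, 16)
    print(se(u).shape)                     # torch.Size([1, 32, 16, 16, 16])
```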
The original encoder-decoder network structure only uses the features of the last convolution layer for the final segmentation; the information is limited and lacks rich detail, and low-order faults cannot be judged correctly from the features of a single convolution layer. Therefore, fusion weights are learned simultaneously during training through a weighted fusion layer, multi-scale learning is performed on the low-order faults, and a clearer edge detection result is finally obtained.
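A minimal sketch of such a fusion step is given below, assuming PyTorch: the five side outputs are upsampled to a common size, concatenated, and merged by a learnable 1 × 1 × 1 convolution. The trilinear upsampling mode and single-channel side outputs are assumptions of the example, not details taken from the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SideOutputFusion(nn.Module):
    def __init__(self, num_sides: int = 5):
        super().__init__()
        # learnable fusion: 5 stacked side outputs -> 1 fault probability channel
        self.fuse = nn.Conv3d(num_sides, 1, kernel_size=1)

    def forward(self, sides):
        target = sides[0].shape[2:]                          # e.g. (128, 128, 128)
        up = [sides[0]] + [F.interpolate(s, size=target, mode="trilinear",
                                         align_corners=False) for s in sides[1:]]
        stacked = torch.cat(up, dim=1)                       # N x 5 x 128 x 128 x 128
        return torch.sigmoid(self.fuse(stacked))             # fused fault probability volume

if __name__ == "__main__":
    sides = [torch.randn(1, 1, s, s, s) for s in (128, 64, 32, 16, 8)]
    print(SideOutputFusion()(sides).shape)                   # torch.Size([1, 1, 128, 128, 128])
```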
As shown in fig. 3, fault data make up only a very small part of the total data in the edge detection task, and this imbalance between faults and non-faults seriously affects network learning, causing many discontinuities in the final result; the SH-Unet network therefore introduces a per-pixel balance weight to balance the loss between positive and negative samples.
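The following is a minimal sketch of such a class-balanced cross entropy in PyTorch, where beta is computed as the fraction of non-fault voxels in the label volume; the epsilon value and the averaging over voxels are assumptions of the example.

```python
import torch

def class_balanced_bce(pred: torch.Tensor, label: torch.Tensor) -> torch.Tensor:
    """pred: fault probabilities in (0, 1); label: binary fault labels of the same shape."""
    eps = 1e-7
    beta = (label == 0).float().mean()                 # ratio of non-fault points to all points
    pos = beta * label * torch.log(pred + eps)         # fault term, up-weighted by beta
    neg = (1.0 - beta) * (1.0 - label) * torch.log(1.0 - pred + eps)
    return -(pos + neg).mean()

if __name__ == "__main__":
    pred = torch.rand(1, 1, 8, 8, 8)
    label = (torch.rand(1, 1, 8, 8, 8) > 0.95).float()  # faults are a small minority
    print(class_balanced_bce(pred, label).item())
```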
The low-order faults addressed by the method are branch (secondary, derivative) fractures generated by the activity of high-order faults, while the high-order faults themselves are caused by plate movement.
The training data samples used in the invention come from an open data set (Wu et al., 2019) that simulates realistic fold and fault structures through geologic modeling, forward modeling and the convolution of reflection coefficients with a seismic wavelet; each three-dimensional synthetic seismic data volume has a size of 128 × 128 × 128.
The network model of the invention requires a large number of fault samples. Manually labeling faults in three-dimensional seismic data is greatly influenced by human factors: incorrectly labeled or unlabeled faults may cause the network to learn incorrect feature information. To avoid such errors, a number of simulated seismic data volumes containing different types of faults are generated as the training and validation sample sets.
In order to make the seismic data of different work areas compatible with each other, the seismic data need to be standardized before network training. The amplitude values of the simulated data range between [-3, 3], while the amplitude values of actual data can be very large. When actual seismic data are used for intelligent fault interpretation, the amplitude ranges of seismic data from different work areas often differ. Z-Score standardization is a common data-processing method that improves the comparability of the data; it is adopted here so that the trained network model is compatible with the actual seismic data of multiple work areas, the amplitude range of the training data is consistent with that of the prediction data, and faults are interpreted more accurately.
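A minimal sketch of this standardization step is shown below, using NumPy; applying it per seismic volume and the small epsilon guard are assumptions of the example.

```python
import numpy as np

def z_score(volume: np.ndarray) -> np.ndarray:
    """Return (x - mean) / std computed over the whole volume."""
    mu = volume.mean()
    sigma = volume.std()
    return (volume - mu) / (sigma + 1e-12)   # small epsilon guards against a zero std

if __name__ == "__main__":
    cube = np.random.randn(128, 128, 128) * 1000.0 + 50.0   # field-data-like amplitudes
    z = z_score(cube)
    print(round(float(z.mean()), 6), round(float(z.std()), 6))  # ~0.0, ~1.0
```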
The process of constructing a sample set according to the present invention is shown in fig. 9, and includes:
1. randomly generating a three-dimensional reflectivity model (a in fig. 9);
2. in the reflectivity model, adding a fold structure through a vertical shear model (b in fig. 9);
3. to further increase the complexity of the model structure, adding a planar shear structure to the resulting fold model (c in fig. 9);
4. adding faults to the model, as shown in d in fig. 9: the faults are planar, and their orientations and displacements differ from one another; the orientation of a fault includes its dip angle and strike, and the displacement of each fault varies in space along the strike and dip directions; the range of the fault displacement is defined according to the faults of an actual work area as [10, 75], and the dip angle lies in the range [50°, 80°] for normal faults and [-80°, -50°] for reverse faults;
5. convolving the generated model with a Ricker wavelet to obtain the synthetic seismic record (e in fig. 9); the convolution with the Ricker wavelet blurs the sharp discontinuities at fault boundaries, so that the faults look more realistic;
6. to make the synthetic seismic data closer to real data, noise is added to it (f in fig. 9); a short sketch of steps 5 and 6 is given after this list.
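The following is a minimal sketch of steps 5 and 6 for a single trace, using NumPy; the 25 Hz peak frequency, 2 ms sampling interval and noise level are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

def ricker(freq: float, dt: float, length: float = 0.128) -> np.ndarray:
    """Ricker wavelet w(t) = (1 - 2*pi^2*f^2*t^2) * exp(-pi^2*f^2*t^2)."""
    t = np.arange(-length / 2, length / 2, dt)
    a = (np.pi * freq * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def synthesize_trace(reflectivity: np.ndarray, freq: float = 25.0, dt: float = 0.002) -> np.ndarray:
    """Step 5: convolve a reflectivity trace with the wavelet (blurs sharp fault discontinuities)."""
    return np.convolve(reflectivity, ricker(freq, dt), mode="same")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    r = rng.normal(0.0, 0.1, 128)                                       # one reflectivity trace
    trace = synthesize_trace(r)
    noisy = trace + rng.normal(0.0, 0.05 * trace.std(), trace.shape)    # step 6: add noise
    print(trace.shape, noisy.shape)
```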
The fault synthesis method is as follows: first, an initial model is constructed, and fold and fault structures are added in sequence; a folded model is simulated by shearing the plane model in the vertical direction, the plane model is further sheared with two displacement-field tilting and bending structures, an anisotropic Gaussian function is constructed from the estimated fault azimuth, dip angle and fault extent, and finally all the fault-oriented local Gaussian functions are superimposed to generate the three-dimensional simulated seismic record and the corresponding fault label shown in fig. 5.
As shown in FIG. 4, a set of simulated seismic data demonstrates that the strategy designed for the characteristics of low-order faults can effectively realize low-order fault prediction; as shown in FIG. 6, the invention predicts the approximate distribution of faults well and gives high fault continuity.
To verify the effect of the invention on fault identification, the F3 data were automatically interpreted using the trained SH-Unet model described above, as shown in FIG. 7. FIG. 8 shows the fault prediction result on the F3 actual data; it can be seen that the faults predicted by the method are clearer, the faults on the transverse and longitudinal lines in particular have fewer discontinuity points, the prediction at fault intersections is clearer, and the fault positioning is more accurate.
The above embodiments are only for illustrating the technical aspects of the present invention, not for limiting the same, and although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may be modified or some or all of the technical features may be replaced with other technical solutions, which do not depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (4)

1. The low-order fault intelligent identification method of the target edge detection neural network is characterized by comprising the following steps of:
step 1: constructing a training set, and generating 200 pairs of simulated three-dimensional seismic data and corresponding labels;
step 2: building a network structure SH-Unet according to the characteristics of the low-order fault;
step 3: preprocessing all data;
step 4: model training, namely putting data into a built network, and performing model training by using class balance cross entropy as a loss function to obtain a trained SH-UNet network model;
step 5: model trial calculation, namely predicting simulation data and actual seismic data which do not participate in training by using the network model obtained in the step 4 to obtain a prediction result;
the building of the network structure SH-Unet comprises the following steps:
constructing an encoder by means of dilated convolution to expand the receptive field of the convolution kernel, wherein the size of the convolution kernel is 3 × 3, k represents the size of the dilated convolution kernel, d represents the dilation coefficient, and the receptive field k′ is calculated as follows:

$$k' = k + (k - 1) \times (d - 1)$$
adding an SE attention module before each side output to fuse the deep-network features and guide the shallow network to remove weak edges and noise; the fault information of different scales output by the decoder is fused through a concat operation, improving the recognition accuracy of the network for low-order faults;
the SE attention module includes:
performing a Squeeze operation on the feature map obtained by convolution to obtain channel-level global features; performing an Excitation operation on the global features to learn the relations among channels and obtain the weights of the different channels; and multiplying these weights with the original features to obtain the final features, where a channel refers to a channel in the convolutional network;
the Squeeze operation includes:
The input of size H × W × D × C is compressed to 1 × 1 × 1 × C by global average pooling, i.e. the H × W × D spatial dimensions are compressed to a single value per channel, giving a global field of view with a wider sensing area; the result of the Squeeze operation $z_c$ is as follows:

$$z_c = F_{sq}(u_c) = \frac{1}{H \times W \times D}\sum_{i=1}^{H}\sum_{j=1}^{W}\sum_{m=1}^{D} u_c(i, j, m)$$

where $F_{sq}$ represents the Squeeze operation, $u_c$ represents the feature map of channel c, C represents the channel, H the height, W the width, D the depth, and $(i, j, m)$ are the traversal indices;
The above converts the input into an output. For the result z of the Squeeze operation, the correlation between channels is learned with two 1 × 1 convolution layers: the first convolution layer $W_1$ reduces the dimension and is followed by a ReLU activation, and the second convolution layer $W_2$ restores the original dimension; the output result s is then obtained through a Sigmoid function:

$$s = F_{ex}(z, W) = \sigma\bigl(W_2\,\delta(W_1 z)\bigr)$$

where $F_{ex}$ represents the Excitation operation, $\delta$ represents the ReLU function, $\sigma$ represents the Sigmoid function, $W_1$ represents the parameters of the first 1 × 1 convolution operation, whose aim is to reduce the dimension, and $W_2$ represents the parameters of the second 1 × 1 convolution operation, which restores the dimension to the input dimension;
The original features in the channel dimension are recalibrated by weighting the features of each channel obtained above, as follows:

$$\tilde{x}_c = F_{scale}(u_c, s_c) = s_c \cdot u_c$$

where $F_{scale}$ represents the Scale operation, $u_c$ represents the three-dimensional feature matrix of channel c, and $s_c$ represents its weight;
the network structure SH-Unet leads out one side output at the end of each scale, and leads out 5 different scale side outputs in total;
when the input data matrix size is 128 × 128 × 128 × 1, the feature maps of the 5 side outputs are O1, O2, O3, O4 and O5, with sizes 128 × 128 × 128 × 1, 64 × 64 × 64 × 1, 32 × 32 × 32 × 1, 16 × 16 × 16 × 1 and 8 × 8 × 8 × 1, respectively. The feature maps O2–O5 are upsampled by factors of 2–16, respectively, while the size of O1 is kept unchanged, so that SH-Unet obtains feature maps of the same size from every scale; the 5 feature maps are superimposed to obtain a 128 × 128 × 128 × 5 feature data set, and a convolution operation with 1 output channel and a convolution kernel size of 1 × 1 × 1 is introduced.
2. The method of claim 1, wherein Z-Score normalization is performed on all data sets, and the intermediate outputs of the network are adjusted with the mini-batch mean and variance during model training so that the data become more stable; the Z-Score normalization formula is as follows:

$$z = \frac{x - \mu}{\sigma}$$

where x represents the original data, $\mu$ represents the mean of all the data, and $\sigma$ represents the standard deviation.
3. The intelligent low-order fault identification method of the target edge detection neural network according to claim 2, wherein step 4 introduces a class-balanced cross entropy loss function to balance the loss between positive and negative samples; the class-balanced BCE loss is calculated as follows:

$$L = -\frac{1}{N}\sum_{i=1}^{N}\Bigl[\beta\, y_i \log \hat{y}_i + (1-\beta)(1-y_i)\log(1-\hat{y}_i)\Bigr]$$

where $\beta$ is the ratio of non-fault data points to the total data points, $1-\beta$ is the ratio of fault data points in the three-dimensional seismic data, N is the number of input three-dimensional seismic data points, $y_i$ is the fault binary label value, and $\hat{y}_i$ is the fault binary predicted value.
4. A low-order fault intelligent recognition method for a target edge detection neural network according to claim 3, wherein the SH-UNet model trained in step 4 is used to predict untrained data and actual seismic data.
CN202310214221.8A 2023-03-08 2023-03-08 Low-order fault intelligent identification method for target edge detection neural network Active CN115880505B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310214221.8A CN115880505B (en) 2023-03-08 2023-03-08 Low-order fault intelligent identification method for target edge detection neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310214221.8A CN115880505B (en) 2023-03-08 2023-03-08 Low-order fault intelligent identification method for target edge detection neural network

Publications (2)

Publication Number Publication Date
CN115880505A CN115880505A (en) 2023-03-31
CN115880505B true CN115880505B (en) 2023-05-09

Family

ID=85762019

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310214221.8A Active CN115880505B (en) 2023-03-08 2023-03-08 Low-order fault intelligent identification method for target edge detection neural network

Country Status (1)

Country Link
CN (1) CN115880505B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117217095B (en) * 2023-10-13 2024-05-28 西南石油大学 Method for obtaining variation function in geological attribute modeling based on deep learning

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114114408A (en) * 2020-08-27 2022-03-01 中国石油化工股份有限公司 Low-order fault identification method
CN115690772A (en) * 2021-07-26 2023-02-03 中国石油化工股份有限公司 Gravity fault automatic identification method based on deep learning

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019040288A1 (en) * 2017-08-25 ExxonMobil Upstream Research Company Automated seismic interpretation using fully convolutional neural networks
CN108415077B (en) * 2018-02-11 2021-02-26 中国石油化工股份有限公司 Edge detection low-order fault identification method
CN113902769A (en) * 2021-08-18 2022-01-07 南方海洋科学与工程广东省实验室(广州) Seismic fault identification method based on deep learning semantic segmentation
CN113703048B (en) * 2021-08-30 2022-08-23 中国科学院地质与地球物理研究所 Method and system for detecting high-resolution earthquake fault of antagonistic neural network
CN115601750A (en) * 2022-09-14 2023-01-13 河北地质大学(Cn) Seismic facies recognition semantic segmentation method and system for improving edge accuracy
CN115457066A (en) * 2022-09-22 2022-12-09 闽江学院 Retinal vessel segmentation method fusing UNet and edge detection model
CN115639605B (en) * 2022-10-28 2024-05-28 中国地质大学(武汉) Automatic identification method and device for high-resolution fault based on deep learning

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114114408A (en) * 2020-08-27 2022-03-01 中国石油化工股份有限公司 Low-order fault identification method
CN115690772A (en) * 2021-07-26 2023-02-03 中国石油化工股份有限公司 Gravity fault automatic identification method based on deep learning

Also Published As

Publication number Publication date
CN115880505A (en) 2023-03-31

Similar Documents

Publication Publication Date Title
CN111259930B (en) General target detection method of self-adaptive attention guidance mechanism
CN110705457B (en) Remote sensing image building change detection method
CN109709603B (en) Seismic horizon identification and tracking method and system
CN112541572B (en) Residual oil distribution prediction method based on convolutional encoder-decoder network
CN109190626A (en) A kind of semantic segmentation method of the multipath Fusion Features based on deep learning
CN114972384A (en) Tunnel rock intelligent rapid regional grading method based on deep learning
CN108399248A (en) A kind of time series data prediction technique, device and equipment
CN108710777B (en) Diversified anomaly detection identification method based on multi-convolution self-coding neural network
CN108447057A (en) SAR image change detection based on conspicuousness and depth convolutional network
CN115880505B (en) Low-order fault intelligent identification method for target edge detection neural network
CN103020649A (en) Forest type identification method based on texture information
CN116168295B (en) Lithology remote sensing intelligent interpretation model establishment method and interpretation method
CN113160246A (en) Image semantic segmentation method based on depth supervision
CN117079048B (en) Geological disaster image recognition method and system based on CLIP model
CN115471467A (en) High-resolution optical remote sensing image building change detection method
CN113703045A (en) Seismic facies identification method based on lightweight network
CN113420619A (en) Remote sensing image building extraction method
CN111639067A (en) Multi-feature fusion convolution self-coding multivariate geochemical anomaly identification method
CN115308799A (en) Seismic imaging free gas structure identification method and system
CN114494870A (en) Double-time-phase remote sensing image change detection method, model construction method and device
CN114565594A (en) Image anomaly detection method based on soft mask contrast loss
CN112731522A (en) Intelligent recognition method, device and equipment for seismic stratum and storage medium
CN115690772A (en) Gravity fault automatic identification method based on deep learning
CN113313077A (en) Salient object detection method based on multi-strategy and cross feature fusion
CN117152630A (en) Optical remote sensing image change detection method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant