CN115146667A - Multi-scale seismic noise suppression method based on curvelet transform and multi-branch deep self-coding - Google Patents
- Publication number: CN115146667A (application CN202210428762.6A)
- Authority
- CN
- China
- Prior art keywords
- data
- noise
- seismic
- noise suppression
- seismic data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01V—GEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
- G01V1/00—Seismology; Seismic or acoustic prospecting or detecting
- G01V1/28—Processing seismic data, e.g. for interpretation or for event detection
- G01V1/36—Effecting static or dynamic corrections on records, e.g. correcting spread; Correlating seismic signals; Eliminating effects of unwanted energy
- G01V1/364—Seismic filtering
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
- G—PHYSICS
- G01—MEASURING; TESTING
- G01V—GEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
- G01V2210/00—Details of seismic processing or analysis
- G01V2210/30—Noise handling
- G01V2210/32—Noise reduction
- G01V2210/324—Filtering
Abstract
The invention discloses a multi-scale seismic noise suppression method based on curvelet transform and multi-branch deep self-coding, belonging to the field of seismic signal processing. The method dynamically fuses a multi-branch convolutional autoencoder with a curvelet-transform denoising algorithm to achieve accurate noise suppression and weak-signal extraction in seismic data. First, a training set containing real noise and a clean-data label set are generated; a blocking algorithm cuts both into matrices of equal size, which are fed into the constructed multi-branch convolutional self-coding noise suppression network for iterative training, and the model parameters are saved once the iterations finish, completing network training. The data to be denoised are processed into a prediction set in the same way and passed through the neural-network branch and the curvelet-transform branch respectively; a mean-square-loss criterion then dynamically weights and fuses the two branch outputs according to the characteristics of the data, yielding high-precision clean seismic data. The method markedly suppresses multi-scale noise in seismic data, enhances weak deep signals at multiple scales, and improves the overall signal-to-noise ratio. It performs better than traditional algorithms and offers a new approach to seismic noise suppression.
Description
Technical Field
The invention relates to the field of signal processing, in particular to a seismic signal processing method combining a traditional curvelet transform algorithm and a deep learning algorithm.
Background
In geophysical exploration, key technologies such as full-waveform inversion and prestack migration imaging depend on accurate extraction of seismic wavelets and place high demands on their signal-to-noise ratio. The Tarim Basin in western China is rich in petroleum and natural gas resources, but most reserves lie in deep or ultra-deep strata, where severe noise interference makes it difficult to guarantee the quality of the seismic signals recorded during exploration and seriously degrades the precision of full-waveform inversion and prestack migration imaging.
Conventional seismic noise suppression relies on traditional algorithms such as wavelet transform, multichannel singular spectrum analysis, and f-k two-dimensional filtering. These algorithms offer only mediocre noise suppression, can inadvertently damage effective signals, yield limited improvement in the overall signal-to-noise ratio, and cannot be tuned to a specific survey area, making an optimal denoising result hard to obtain.
Disclosure of Invention
To solve the problems of the prior art, the invention provides a neural-network model based on curvelet transform and a multi-branch deep autoencoder for seismic noise suppression. An improved curvelet-transform algorithm extracts edge information from the seismic data; a denoising self-coding network is constructed with convolutional layers, pooling layers, a multi-branch structure, residual connections, a data enhancement layer, and so on, improving the network's learning capacity and resistance to overfitting. The network is trained on pairs of noisy and clean seismic data, and its output is dynamically weighted, blocked, and fused with the curvelet-transform denoising result, achieving a marked noise suppression effect. The model can be trained on seismic data from different survey areas and store per-area parameters, enabling targeted noise suppression for each area.
The invention adopts the specific technical scheme that:
First, seismic data containing actual seismic noise are constructed as the training set, and noise-free seismic data as the label set; together they form the training data of the neural network.
Next, the multi-branch deep autoencoder network is constructed. For noise suppression under complex surface conditions, a self-supervised autoencoder architecture is designed; for stratigraphic and noise features of different scales, a multi-branch convolution structure learns noise suppression at each scale, effectively suppressing multi-scale noise features; to counter vanishing and exploding gradients, a multi-layer residual structure is designed; to enhance the weak signals that commonly remain after training, a post-processing data enhancement layer is added. Finally, to address edge-feature loss and overfitting in deep learning, a curvelet-transform noise reduction layer is designed, and the curvelet result is dynamically weighted and fused with the deep-learning result over small local regions.
After training, the model recognizes stratigraphic and noise features of different scales under complex geological structures and achieves multi-scale, high-precision seismic noise suppression; it can also be trained specifically on the differing geology of individual areas to obtain a targeted suppression effect.
Compared with the prior art: the two-dimensional convolution units effectively identify stratigraphic structure features in a section and accurately suppress its noise; the multi-branch convolutional network identifies stratigraphic features at different scales and suppresses seismic noise at each of them, avoiding small-scale feature loss and noise leakage, markedly raising the signal-to-noise ratio of the seismic data while speeding up computation; the residual connections avoid overfitting and local minima in deep training, improving network effectiveness; the data enhancement layer adaptively boosts weak signals according to the current data characteristics, improving the signal-to-noise ratio of the prediction; and the curvelet-transform layer extracts edge detail from the seismic data and restores weak effective signals suppressed by the network, raising the overall signal-to-noise ratio and denoising quality. The proposed seismic noise suppression network clearly outperforms both existing deep-learning algorithms and traditional seismic denoising algorithms.
Drawings
FIG. 1 is a flow chart of a method of the present invention;
FIG. 2 is a content diagram of a multi-scale seismic noise suppression method based on curvelet transform and multi-branch deep self-coding;
FIG. 3 is a diagram of a multi-scale seismic noise suppression network model based on curvelet transformation and multi-branch deep self-coding;
FIG. 4 is a comparative global profile of actual seismic data noise suppression results;
FIG. 5 is a cross-sectional view comparing the suppression of actual seismic data noise by different methods;
FIG. 6 is a graph of the suppression results versus the spectrum of actual seismic data noise for different methods.
Detailed Description
The invention discloses a multi-scale seismic noise suppression method based on curvelet transform and multi-branch deep self-coding, belonging to the field of seismic data processing. A multi-branch deep-autoencoder network is constructed; a curvelet-transform noise suppression layer is designed to extract edge features of the stratigraphic structure, compensating for the network's weakness at edges; and a data enhancement layer strengthens weak signals, achieving high-precision seismic noise suppression and an improved signal-to-noise ratio.
The invention is described in detail below with reference to the accompanying drawings and specific embodiments.
FIG. 1 is a flow chart of the method of the present invention, which comprises the following steps:
step 101: and constructing a neural network training sample set through the seismic data. The training set is seismic data containing real noise, and the noise of the training set is obviously distributed; the sample set is a data set without noise, can be constructed by relatively pure seismic data, and is used for enabling the neural network to learn mapping logic between the noise-containing data and the noise-free data in the seismic data so as to suppress the noise of the seismic data.
After the training and label data are obtained, preprocessing operations such as normalization, blocking, and random shuffling are applied so that the data format matches the input format of the neural network.
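The preprocessing above can be sketched as follows; the 64-sample patch size, peak normalization, and fixed shuffle seed are illustrative assumptions, not choices stated in the patent:

```python
import numpy as np

def make_patches(section, patch=64, seed=0):
    """Normalize a 2-D seismic section to [-1, 1], cut it into equal-size
    square patches, and randomly shuffle them along the first axis."""
    peak = np.max(np.abs(section))
    norm = section / peak if peak > 0 else section
    h, w = norm.shape
    arr = np.stack([norm[i:i + patch, j:j + patch]
                    for i in range(0, h - patch + 1, patch)
                    for j in range(0, w - patch + 1, patch)])
    rng = np.random.default_rng(seed)
    rng.shuffle(arr)          # shuffle patch order for training
    return arr

patches = make_patches(np.random.randn(256, 256))
```

A 256x256 section yields sixteen 64x64 patches, all with amplitudes inside [-1, 1].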
To demonstrate the noise suppression effect, real CMP seismic data and simulated F3 seismic data serve as base data, from which a training set and label set suited to the neural network are constructed using the above procedure and used as model input.
Step 102: and (5) constructing and training a neural network.
Step 102.1: network construction. Seismic data acquired in the field contain both effective signals and interfering noise and can be described approximately by the following expression:
r(n)=o(n)+n(n) (1)
where r(n) represents the noisy seismic data, o(n) the noise-free original seismic data, and n(n) the contained noise. Clearly o(n) is an idealization that does not exist in practice, so an estimate o'(n) approximating it must be obtained through a reconstruction algorithm, satisfying:
o'(n)≈o(n) (2)
the neural network type adopted by the method is a noise reduction self-encoder DAE, and belongs to the category of a self-supervision learning model. Pure data and noisy data are input into the neural network, parameters of learning units such as a weight matrix and a bias matrix in the neural network are iteratively optimized for multiple times through a back propagation and gradient descent algorithm, the learning of the mapping relation between the noisy seismic data and the pure seismic data is achieved, and therefore the capacity of reconstructing idealized seismic data o' (n) approximately free of noise is obtained.
The autoencoder consists internally of an encoder, a hidden layer, and a decoder. The seismic-data encoding step from the encoder to the hidden layer is:
h = g_encode(x) = σ(W1·x + b1)  (3)
where x is the input data (here r(n)), W1 the layer's weight matrix, b1 its bias matrix, and h the encoder output of that layer. Stacking several such layers gradually encodes the input seismic data into a deep multi-dimensional abstract representation h', which captures the shared patterns of the seismic data across different dimensions, scales, and domains. The abstraction h' is then passed to a hidden layer for further feature abstraction and logical processing:
h'' = g_hidden(h') = relu(W2·h' + b2)  (4)
where W2 is the layer's weight matrix, b2 its bias matrix, h' the input from the encoding layers, and relu the nonlinear function. The rectified linear unit makes the otherwise linear transformations in the network nonlinear, improves the efficiency of gradient-descent back-propagation, and lets the network model the nonlinear relationships common in nature, raising learning precision.
The data processed by the hidden layers are passed to the decoder, which reconstructs the input represented by the abstract code h'' through multiple convolutional layers, producing data of the same size as the original input:
x' = g_decode(h'') = σ(W3·h'' + b3)  (5)
where W3 is the layer's weight matrix, b3 its bias matrix, and h'' the hidden-layer output. The decoded output has the same scale as the input data. A loss function computes the Euclidean distance between output and label, from which the gradient is obtained and back-propagated so the model parameters are optimized in the direction of decreasing loss. The process is as follows:
L = (1/n) Σ_{i=1}^{n} (y_i - ŷ_i)²  (6)
where y_i is the output value of the network model, ŷ_i the corresponding label value, and n the number of values in the sum; L is the Euclidean distance between the training-set prediction and the label set, measuring the difference between them.
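The mean-square loss described above can be sketched as:

```python
import numpy as np

def mse_loss(y_pred, y_true):
    """Mean-square loss: the averaged squared Euclidean distance between
    network output values and label values."""
    return float(np.mean((np.asarray(y_pred) - np.asarray(y_true)) ** 2))

loss = mse_loss([1.0, 2.0], [1.0, 0.0])  # (0 + 4) / 2 = 2.0
```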
The basic unit of the model is the two-dimensional convolutional neural network (CNN). A convolutional layer combines multiple input feature maps; each feature map contains many neurons, and each neuron is connected to the feature maps of the preceding and following layers through a convolution kernel (also called a weight matrix). The kernel acts as a filter that extracts stratigraphic and noise features from the seismic data. The core operation of the convolutional layer is:
x_j^l = f( Σ_{k∈Γ_j} x_k^{l-1} ⊛ w_{k,j}^l + b_j^l )  (7)
where l is the layer index, x_j^l the j-th feature map of layer l, x_k^{l-1} the k-th feature map of the previous layer, Γ_j the set of input features, w_{k,j}^l the convolution kernel, b_j^l the bias matrix, and ⊛ the convolution operation. The kernel size may be set to 5x5, 3x3, or 1x1.
The model also contains basic units such as activation and normalization layers, which increase the nonlinearity of the convolutional layers, reduce computational complexity, shorten the time needed to approach the gradient optimum, and help avoid over-training.
The model uses ReLU as the CNN nonlinearity, transforming the input data and improving the effect of back-propagation and gradient descent: if the input x > 0 the output equals the input, and if x < 0 the output is 0, giving the layer structure:
f(x)=max(0,x) (8)
in order to accelerate the gradient process of the model to reach the optimum and normalize the distribution of seismic data, a neural network unit batch normalization layer BatchNormalization is added at the beginning of each Block of the model and used as a pretreatment layer of CNN (convolutional neural network) for slowing down internal variable transfer, reducing the sensitivity to initialization weight, reducing dependence on overfitting and dropout, accelerating convergence and simultaneously rapidly improving the training accuracy, and the process is as follows:
y_i = γ (x_i - μ) / √(σ² + ε) + β  (9)
where x_i is the current input row vector, μ the mean of the row vector, σ its standard deviation, ε a small constant introduced to prevent a zero denominator, γ the scale parameter of the row vector, and β its shift parameter.
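A minimal sketch of the per-row normalization step described above (in practice γ and β are learned; the values here are placeholders):

```python
import numpy as np

def batch_norm_row(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a row vector to zero mean and unit variance, then
    rescale by gamma and shift by beta."""
    mu = x.mean()
    sigma = x.std()
    return gamma * (x - mu) / np.sqrt(sigma ** 2 + eps) + beta

y = batch_norm_row(np.array([1.0, 2.0, 3.0, 4.0]))
```

The normalized row has mean ~0 and standard deviation ~1 regardless of the input's original scale.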
For the different scales of stratigraphic structure present in seismic data, the model uses multi-branch convolution-pooling structures; each branch encodes and decodes stratigraphic features and noise at its own scale step by step, enabling accurate multi-scale learning. The Block-A structure is defined as:

T1 = relu(maxpool(Conv(I)))  (10)
T2 = relu(maxpool(Conv(I)))  (11)
T3 = relu(Conv(maxpool(I)))  (12)
T_out = concat(T1, T2, T3)  (13)

where I is the input from the previous layer, concat the data-merging layer, relu the nonlinear layer, maxpool the max-pooling layer, and Conv the convolutional layer. The Block-B structure is defined as:

T1 = relu(maxpool(Conv(I)))  (14)
T2 = relu(maxpool(Conv(I)))  (15)
T_out = concat(T1, T2)  (16)

with the same notation as above.
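Block-A of equations (10)-(13) can be illustrated with a minimal numpy sketch; the naive single-channel convolution, the uniform placeholder kernels, and the 2x2 pooling size stand in for the learned layers and are assumptions, not the patent's actual configuration:

```python
import numpy as np

def conv2d_same(img, kernel):
    """Naive 2-D cross-correlation with zero 'same' padding; stands in
    for a learned Conv layer."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def maxpool2(img):
    """2x2 max pooling (assumes even height and width)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def relu(x):
    return np.maximum(0.0, x)

def block_a(I, k1, k2, k3):
    """Two conv-then-pool branches plus one pool-then-conv branch,
    concatenated along a new channel axis."""
    t1 = relu(maxpool2(conv2d_same(I, k1)))
    t2 = relu(maxpool2(conv2d_same(I, k2)))
    t3 = relu(conv2d_same(maxpool2(I), k3))
    return np.stack([t1, t2, t3])

I = np.random.default_rng(1).standard_normal((8, 8))
k = np.ones((3, 3)) / 9.0   # placeholder kernel; real kernels are learned
out = block_a(I, k, k, k)
```

An 8x8 input produces three 4x4 branch outputs stacked into a (3, 4, 4) tensor, all non-negative after ReLU.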
Unlike conventional image data, adjacent seismic amplitude values can differ greatly, often by more than 10000, so the overfitting, gradient explosion, and gradient degradation that arise in deep learning must be guarded against. The model therefore places several residual network structures in the encoder and decoder, following the principle:
F(x)=ConvBlock(x)+Maxpool(x) (17)
where ConvBlock(x) is the output of the multi-branch convolution-pooling structure and Maxpool(x) the output of the max-pooling shortcut. When ConvBlock(x) = 0, F(x) reduces to Maxpool(x), so the block can always fall back to a pooled identity-like path, which mitigates degradation and helps avoid problems such as overfitting.
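Equation (17) can be sketched as follows; `conv_block` stands in for the multi-branch convolution-pooling path and is assumed to return the same shape as the pooled shortcut:

```python
import numpy as np

def maxpool2(img):
    """2x2 max-pooling shortcut path."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def residual_block(x, conv_block):
    """F(x) = ConvBlock(x) + Maxpool(x): the convolution-path output
    plus a pooled shortcut of the input."""
    return conv_block(x) + maxpool2(x)

x = np.arange(16.0).reshape(4, 4)
# With a zero convolution path the block falls back to the pooled shortcut:
y = residual_block(x, lambda t: np.zeros((2, 2)))
```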
Step 102.2: and (5) training a model.
During training, the network input is the training set generated in step 101, fed into the network batch by batch. The encoder and hidden layer perform high-dimensional feature extraction and logical processing on the input, and the decoder reconstructs the input represented by the abstract code, producing output of the same scale as the input. The loss between the decoder output and the corresponding label is computed, and an Adam optimizer back-propagates the gradients and updates the network parameters, completing a single training pass. Iterating this process for about one hundred epochs brings the model parameters near their optimum, giving the network accurate seismic noise suppression capability and completing training, as shown on the left of FIG. 2.
Step 102.3: and saving the training model.
Once training is finished the network has noise suppression capability, and its model parameters, such as the weight and bias matrices, are saved locally for noise suppression on prediction sets.
Step 103: seismic data prediction set generation
With the network parameters saved locally, the network can be used to process actual seismic data. The data are first partitioned into blocks to generate a data-block set of uniform size; the block size should match the block size used during training.
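The blocking step, and the reassembly of processed blocks back into a full section, can be sketched as follows (non-overlapping blocks and dimensions divisible by the block size are assumptions; the patent does not specify overlap handling):

```python
import numpy as np

def split_blocks(section, size):
    """Cut a section into non-overlapping size x size blocks."""
    h, w = section.shape
    return [section[i:i + size, j:j + size]
            for i in range(0, h, size) for j in range(0, w, size)]

def merge_blocks(blocks, h, w, size):
    """Inverse of split_blocks: tile processed blocks back into a
    section of the original shape."""
    out = np.zeros((h, w))
    idx = 0
    for i in range(0, h, size):
        for j in range(0, w, size):
            out[i:i + size, j:j + size] = blocks[idx]
            idx += 1
    return out

sec = np.random.randn(128, 128)
blocks = split_blocks(sec, 64)
rebuilt = merge_blocks(blocks, 128, 128, 64)
```

Splitting then merging without any processing reproduces the original section exactly.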
Step 104: noise suppression and data enhancement by multi-branch deep self-coding seismic noise suppression network
Step 104.1: noise suppression by multi-branch deep self-coding seismic noise suppression network
The processed prediction block set is fed into the multi-branch deep self-coding seismic noise suppression network. The network model loads the trained parameters and initializes all layers with them, including the convolution kernels in the convolutional layers and the weight and bias parameters in the fully-connected layers. The prediction set then passes through the encoder, hidden layer, and decoder in turn, undergoing data encoding, noise suppression, and feature decoding, and yielding the final noise-suppressed prediction result.
Step 104.2: the adaptive data enhancement layer improves the prediction set signal-to-noise ratio.
The denoised data produced by the network are essentially noise-free, but deep signals remain weak and the overall signal-to-noise ratio needs further improvement. Because the prediction set now contains little noise, weak signals can be amplified by a data enhancement algorithm, raising the overall signal-to-noise ratio. The method uses dynamic adaptive enhancement based on a Sigmoid function: the dimensionality and scale of the data are identified, the seismic values at a given depth are extracted, and they are passed through a specially designed enhancement function:
where k is the numerator coefficient, η the denominator base, α the weight coefficient, β the bias coefficient, and b a correction term for fine-tuning the enhancement strength; x is the input signal value and x_m the maximum of the target enhancement range, which sets the degree of weak-signal enhancement. The curve resembles a conventional Sigmoid curve, so smaller data values are adaptively amplified, weak seismic signals are effectively extracted, and the signal-to-noise ratio of the whole seismic data set is enhanced.
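The patent's exact enhancement expression is not reproduced in this text, so the Sigmoid-shaped gain below, and every parameter value in it (k, alpha, beta, b, x_m), are illustrative assumptions only; it shows the intended behavior, a larger boost for weak amplitudes than for strong ones:

```python
import numpy as np

def enhance(x, x_m=1.0, k=2.0, alpha=6.0, beta=0.5, b=0.0):
    """Sigmoid-shaped adaptive gain: amplitudes well below x_m receive a
    gain near 1 + k + b, amplitudes near x_m a gain near 1 + b."""
    gain = k / (1.0 + np.exp(alpha * (np.abs(x) / x_m - beta))) + b
    return x * (1.0 + gain)

weak_ratio = float(enhance(np.array([0.05]))[0]) / 0.05   # boost of a weak sample
strong_ratio = float(enhance(np.array([0.9]))[0]) / 0.9   # boost of a strong sample
```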
Step 105: seismic data noise suppression algorithm processing prediction set based on curvelet transform
The curvelet transform is an improvement of the wavelet transform with strong anisotropy, locality, and directionality when processing two-dimensional signals; it sparsely represents image edge features and is widely used in data noise suppression.
Let the clean seismic signal of N sampling points be o(n); the recorded seismic signal r(n) containing it satisfies:
r(n)=o(n)+n(n) (19)
where r(n) is the noisy seismic data, o(n) the noise-free original data, and n(n) the contained noise, assumed to follow a normal distribution. The number of curvelet decomposition layers L and the corresponding number of directions S are determined from the seismic data; operations such as a fast Fourier transform yield the curvelet coefficients W, and the noise variance of each sub-band is computed according to the following expression:
wherein χ is a constant. The variance in each subband is estimated by the following equation:
calculating a threshold value of each sub-band, wherein the relation is as follows:
finally, correcting the curvelet coefficient:
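The sub-band variance, threshold, and coefficient-correction expressions are not reproduced in this text. As a stand-in, the transform-domain thresholding idea can be sketched with FFT coefficients in place of curvelet coefficients and a single global threshold in place of the per-sub-band thresholds; both substitutions are simplifying assumptions, since a full curvelet implementation is beyond a short sketch:

```python
import numpy as np

def fft_threshold_denoise(signal, tau):
    """Hard thresholding in a transform domain: coefficients whose
    magnitude falls below tau are treated as noise and zeroed, then the
    signal is reconstructed by the inverse transform."""
    coeffs = np.fft.fft(signal)
    coeffs[np.abs(coeffs) < tau] = 0.0   # suppress small (noisy) coefficients
    return np.real(np.fft.ifft(coeffs))

t = np.linspace(0.0, 1.0, 256, endpoint=False)
clean = np.sin(2 * np.pi * 5 * t)                 # stand-in effective signal
rng = np.random.default_rng(0)
noisy = clean + 0.3 * rng.standard_normal(256)    # additive normal noise
denoised = fft_threshold_denoise(noisy, tau=30.0)
```

The sinusoid's two large coefficients survive the threshold while the scattered noise coefficients are removed, so the reconstruction is much closer to the clean signal than the noisy input is.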
and performing inverse curvelet transformation on W to convert curvelet coefficients into de-noised seismic data of an original scale, thereby realizing curvelet transformation noise suppression. Step 106: the fusion layer performs local loss function weighting and block dynamic fusion on the double branch result
The background noise of the original data is extracted by amplitude-limiting filtering and adaptively enhanced, as follows:
where V is the matrix block of extracted noise, X_real(n) the original seismic data, and w a weight value. V is represented as follows:
after the prediction set is subjected to neural network and curvelet transformation, the result is subjected to blocking processing, loss function solving is carried out on each data block and actual noise under the global condition, and Euclidean distance between the prediction set and the actual noise under two branches is obtained, wherein the process is as follows:
where w_{n,n} is a single element of the blocked curvelet-transform output, y_{n,n} is a single element of the blocked neural-network output, and x_{i,j} is a single element of the extracted noise data block. The loss function values L_w(i,j,k) and L_ae(i,j,k) calculated for each block are then dynamically weighted and fused, as follows:
where k_0 is the number of data blocks, W is the blocked output of the curvelet transform, Y is the blocked output of the neural network, and X_out is the final noise-suppressed seismic data.
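The blockwise loss weighting and dynamic fusion can be sketched as follows; inverse-loss weighting is one plausible reading, since the patent's exact weighting expression is not reproduced in the text.

```python
import numpy as np

def block_mse(a, b):
    # Mean squared (Euclidean) distance between a predicted block and the
    # extracted-noise block (stands in for L_w and L_ae).
    return float(np.mean((a - b) ** 2))

def fuse_blocks(W_blocks, Y_blocks, X_blocks):
    """Dynamically weight curvelet blocks W against network blocks Y.

    Each block's weight is inversely proportional to its loss against the
    extracted noise X, so whichever branch matches the noise estimate
    better dominates that block.
    """
    fused = []
    for w, y, x in zip(W_blocks, Y_blocks, X_blocks):
        lw, lae = block_mse(w, x), block_mse(y, x)
        alpha = lae / (lw + lae + 1e-12)  # lower loss -> larger weight
        fused.append(alpha * w + (1.0 - alpha) * y)
    return fused
```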
The overall model structure of the method is shown in FIG. 3 and mainly comprises a multi-branch deep self-encoder network and an adaptive data enhancement layer.
Example:
The practice of the invention is demonstrated below with reference to a specific example.
Step 1: and combining a training set and a label set by adopting a seismic data Model94 generated by simulation and actual seismic data CMP.
Step 2: and adaptively cutting the generated training set and the label set into blocks, and converting the blocks into a format suitable for the neural network.
And step 3: and constructing a multi-scale seismic noise suppression network of the multi-branch deep self-encoder.
And 4, step 4: and inputting the training block set into a neural network, and solving a MSE-based loss function by the network output and the corresponding label block set so as to perform network gradient back propagation and realize the training of the network.
And 5: the trained model inputs data to be predicted, and the actual seismic data CMP _ INL2000_ stk _ org _ gain is adopted in the example.
Step 6: and the network output result is subjected to data enhancement through the data enhancement layer.
And 7: and inputting the prediction set into a noise reduction curvelet transform layer to obtain a seismic data denoising result with edge information reserved. The result and the result of the data enhancement layer are input into the gradient dynamic fusion layer to obtain a fused result.
Step 8: The result is input into the self-adaptive combiner to obtain synthetic data blocks, from which the complete seismic prediction result is constructed. A middle-depth comparison of the original and predicted seismic data profiles is shown in FIG. 4. The method is compared with the curvelet-transform seismic noise suppression algorithm in FIG. 5, where (1-1), (1-2) and (1-3) are profiles of the original files, (2-1), (2-2) and (2-3) are profiles after denoising by the curvelet transform algorithm, and (3-1), (3-2) and (3-3) are profiles after denoising by the present method; the values shown are taken from the middle-depth region. Compared with the prior art, the method suppresses noise markedly and effectively recovers weak signals in the strata; in particular, in areas with strong noise the original noise essentially disappears after processing, leaving the stratum structure clearly visible.
Step 9: Spectrograms of the whole seismic data are obtained and compared. As shown in FIG. 6, the method effectively suppresses noise at low and middle frequencies and enhances effective signals at middle and high frequencies.
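Steps 2 and 8 above rely on cutting a section into fixed-size blocks and later recombining them. A minimal sketch of such a splicer/combiner pair, assuming zero-padding and non-overlapping tiles (the patent's adaptive strategy is not detailed here):

```python
import numpy as np

def split_blocks(data, size):
    """Pad a 2-D seismic section and cut it into fixed-size blocks.

    Stands in for the 'self-adaptive splicer'; plain zero-padding and
    non-overlapping tiling are assumed.
    """
    h, w = data.shape
    ph, pw = (-h) % size, (-w) % size          # padding to a multiple of size
    padded = np.pad(data, ((0, ph), (0, pw)))
    blocks = [padded[i:i + size, j:j + size]
              for i in range(0, padded.shape[0], size)
              for j in range(0, padded.shape[1], size)]
    return blocks, padded.shape

def merge_blocks(blocks, padded_shape, size, orig_shape):
    # Inverse of split_blocks: tile the blocks back and crop the padding,
    # standing in for the 'self-adaptive combiner'.
    H, W = padded_shape
    out = np.zeros(padded_shape)
    k = 0
    for i in range(0, H, size):
        for j in range(0, W, size):
            out[i:i + size, j:j + size] = blocks[k]
            k += 1
    return out[:orig_shape[0], :orig_shape[1]]
```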
As described above, the invention provides a multi-scale seismic noise suppression method based on the curvelet transform and multi-branch deep self-coding. By fully considering the characteristics of noisy seismic data and combining multi-branch convolution, denoising self-coding, a residual network and the curvelet transform, the method performs multi-scale joint noise suppression and weak stratum-signal extraction on seismic data, markedly improving the signal-to-noise ratio and far exceeding the performance of traditional algorithms.
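The MSE-driven training of Step 4, reduced to a toy single-layer stand-in; the actual multi-branch residual autoencoder is far larger, and the sizes, learning rate and synthetic data here are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy denoising "network": one linear layer trained to map noisy inputs
# back to clean targets by minimizing the mean squared error, mirroring
# the forward / MSE-loss / back-propagation cycle of Step 4.
n_feat, n_samples = 16, 256
clean = rng.standard_normal((n_samples, n_feat))
noisy = clean + 0.3 * rng.standard_normal((n_samples, n_feat))

weights = np.eye(n_feat)   # start near the identity mapping
lr = 0.05
for _ in range(200):
    pred = noisy @ weights                                  # forward pass
    grad = 2.0 * noisy.T @ (pred - clean) / n_samples       # dMSE/dW
    weights -= lr * grad                                    # gradient step
```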
Claims (4)
1. A multi-scale seismic noise suppression method based on curvelet transform and multi-branch deep self-coding, characterized in that: the edge information of the seismic data is extracted by improving the traditional curvelet transform algorithm; a denoising self-coding network is constructed, and convolution layers, pooling layers, a multi-branch structure, a residual structure, a data enhancement layer and a curvelet transform layer are added to the network, improving the learning capability and over-fitting resistance of the neural network; the neural network is trained with noisy seismic data and noise-free clean data and is dynamically weighted, blocked and fused with a curvelet-transform noise reduction algorithm, achieving a marked seismic data noise suppression effect; the model can be trained on seismic data from different regions and save region-specific parameters, thereby realizing targeted noise suppression.
The method mainly comprises the following steps: first, a seismic data training set containing noise and a corresponding noise-free seismic data label set are generated; the generated training set and label set are passed through a self-adaptive splicer to obtain block sets suitable for the neural network; the training block set and label block set are input into the constructed residual convolutional self-coding multi-scale seismic noise suppression network for training, and the model is saved; finally, the seismic data to be denoised are input into the trained network model for prediction, and the prediction result is enhanced by the data enhancement layer. The seismic data to be denoised are also input into the curvelet transform model for noise suppression, yielding a noise suppression result with edge information extracted; the neural network and curvelet transform results are weighted and fused by a mean-square loss algorithm, and the fused result is input into the self-adaptive combiner to synthesize data blocks, giving the complete denoised seismic prediction data.
2. The multi-scale seismic noise suppression method based on curvelet transform and multi-branch deep self-coding of claim 1, characterized in that: a training set of seismic data containing noise and a corresponding label set of noise-free seismic data need to be established.
3. The multi-scale seismic noise suppression method based on curvelet transform and multi-branch deep self-coding of claim 1, characterized in that: the training set and the label set are processed into data blocks of fixed size before being input into the neural network, for use by the convolutional network layers in deep learning.
4. The multi-scale seismic noise suppression method based on curvelet transform and multi-branch deep self-coding of claim 1, characterized in that: the method can markedly suppress noise features of different scales and extract deep weak signals of different scales, with performance far exceeding that of traditional algorithms.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210428762.6A CN115146667A (en) | 2022-04-22 | 2022-04-22 | Multi-scale seismic noise suppression method based on curvelet transform and multi-branch deep self-coding |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115146667A true CN115146667A (en) | 2022-10-04 |
Family
ID=83406148
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN115836867A * | 2023-02-14 | 2023-03-24 | University of Science and Technology of China | Dual-branch fusion deep learning electroencephalogram noise reduction method, device and medium
CN115836867B * | 2023-02-14 | 2023-06-16 | University of Science and Technology of China | Dual-branch fusion deep learning electroencephalogram noise reduction method, device and medium
CN117876692A * | 2024-03-11 | 2024-04-12 | China University of Petroleum (East China) | Feature weighted connection guided single-image remote sensing image denoising method
CN117876692B * | 2024-03-11 | 2024-05-17 | China University of Petroleum (East China) | Feature weighted connection guided single-image remote sensing image denoising method
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |