CN113935467A - DAS (distributed acoustic sensing) borehole exploration data noise suppression method based on iterative multi-scale attention network - Google Patents
- Publication number
- CN113935467A (application CN202111218720.1A)
- Authority
- CN
- China
- Prior art keywords
- feature
- network
- data
- attention
- channel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06N3/045—Combinations of networks
- G06F18/213—Feature extraction, e.g. by transforming the feature space
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/253—Fusion techniques of extracted features
- G06N3/048—Activation functions
- G06N3/08—Learning methods
Abstract
The invention relates to a DAS borehole exploration data noise suppression method based on an iterative multi-scale attention network, and belongs to the technical field of geophysics. The method comprises the steps of establishing an iterative multi-scale attention network, constructing a training set, training the network, and denoising DAS data. Denoising results on both synthetic and field data show that, compared with a band-pass filter and a feedforward denoising convolutional neural network, the method effectively suppresses the complex noise in DAS data, improves the signal-to-noise ratio, and recovers signals that are very close to the clean signals in frequency and amplitude, which facilitates subsequent data processing and interpretation.
Description
Technical Field
The invention belongs to the technical field of geophysics, and particularly relates to a DAS (distributed acoustic sensing) borehole exploration data noise suppression method based on an iterative multi-scale attention network.
Background
Seismic exploration is a basic mode of oil and gas resource exploration and comprises three links: field data acquisition, indoor data processing, and data interpretation. Distributed acoustic sensing (DAS) is a new distributed sensing technology for seismic signal acquisition. Its acquisition principle is to measure the axial strain of an optical fiber by detecting the phase change of the Rayleigh backscattering signals generated by laser pulses on scatterers inside the fiber, thereby obtaining the seismic wave signals around the fiber. Compared with conventional electronic geophones, DAS offers low cost, resistance to high temperature, high pressure, and corrosion, savings in labor and material costs, a wider application range, higher efficiency and convenience, no need for manual intervention, and more accurate acquisition. DAS has therefore been progressively applied to vertical seismic profile data acquisition in recent years. However, field-acquired DAS seismic data contain a large amount of background noise, including random noise, abnormal background interference, horizontal noise, checkerboard noise, and fading noise, so their signal-to-noise ratio is lower than that of data acquired by conventional electronic geophones, which hinders subsequent processing. How to suppress the complex background noise in DAS seismic data and improve the signal-to-noise ratio has become an important and urgent problem in DAS data processing.
Over the years, researchers at home and abroad have proposed many solutions for suppressing the various noises in seismic records, such as band-pass filters, wavelet transforms, Wiener filters, time-frequency peak filters, empirical mode decomposition (EMD), variational mode decomposition (VMD), and robust principal component analysis. These methods have achieved good results on specific problems but still have limitations. For example, a band-pass filter retains components in a specific frequency range and attenuates the remaining frequencies to a low level; its principle is simple, its cost is low, and it is widely used in actual oilfield production, but it can hardly separate effective signals from noise that share a frequency band. Wavelet thresholding relies on finding an optimal threshold that separates effective signals from noise, and obtaining that threshold is difficult. EMD preserves signal amplitude poorly when processing seismic data with heavy noise and a low signal-to-noise ratio. In addition, there are multi-scale decomposition methods such as the curvelet, contourlet, and shearlet transforms, as well as compressive sensing, dictionary learning, and low-rank matrix decomposition, but these methods require prior information, parameter optimization, and the like, which limits their denoising effect on DAS seismic data.
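To make the band-pass limitation concrete, here is a minimal frequency-domain band-pass sketch (not from the patent; all frequencies and the ideal rectangular mask are illustrative; practical seismic band-pass filters taper the band edges): out-of-band noise is removed, but any noise sharing the signal's pass band would survive unchanged.

```python
import numpy as np

def bandpass_fft(trace, fs, f_lo, f_hi):
    """Ideal band-pass: zero FFT components outside [f_lo, f_hi] Hz."""
    spectrum = np.fft.rfft(trace)
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return np.fft.irfft(spectrum * mask, n=len(trace))

# A 50 Hz "signal" plus 200 Hz "noise": the out-of-band noise is removed,
# but noise inside the 40-90 Hz band would pass through untouched.
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 50 * t)
noisy = signal + 0.5 * np.sin(2 * np.pi * 200 * t)
denoised = bandpass_fft(noisy, fs, 40.0, 90.0)
```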
In recent years, deep learning has shown great potential on geophysical problems; convolutional neural networks can learn automatically by extracting deep features of the data without prior knowledge, and several network methods have been successfully applied to seismic data denoising. The feedforward denoising convolutional neural network (DnCNN) combines residual learning and batch normalization to perform an end-to-end denoising task, establishing a mapping from noisy data to clean data, and can suppress random noise and low-frequency noise in DAS data. The generative adversarial network (GAN), a classic game-theoretic deep learning algorithm, generates clean data from noisy data through the adversarial training of a discriminator and a generator, and has been applied to desert seismic data denoising. The unsupervised attention-based generative adversarial network (U-GAT-IT) incorporates an attention module that guides the model to better distinguish noise from effective signals, effectively suppressing desert seismic noise. However, these methods are limited when processing DAS data: they struggle to attenuate the many types of background noise effectively, the recovered effective signal inevitably suffers losses, and the results remain unsatisfactory in frequency and amplitude.
Disclosure of Invention
The invention provides a DAS (distributed acoustic sensing) borehole exploration data noise suppression method based on an iterative multi-scale attention network, aiming to solve the problems of complex, multi-type background noise and low signal-to-noise ratio in current DAS seismic data processing.
The technical scheme adopted by the invention comprises the following steps:
(I) iterative multiscale attention network establishment
The iterative multi-scale attention network consists of three parts: shallow feature extraction, deep feature extraction, and feature-domain-to-signal-domain conversion. The noisy data Y and the denoising result Ŷ are the input and output of the network, respectively, and the network is designed on a recursive residual basis;
(1) constructing a shallow feature extraction section
The shallow feature extraction part extracts features from the input noisy data Y; this part is a convolutional layer Conv;
(2) constructing a deep layer feature extraction section
The multi-scale residual block is the main structure of the network and is used to extract deep features. It adopts a design of three parallel convolution streams, which ensures that the DAS seismic data keep an accurate high-resolution representation throughout the network while receiving rich feature information from the low-resolution representations. The block contains three key components: parallel multi-resolution convolution streams for extracting multi-scale features, a dual attention module for capturing information, and a selective feature fusion module;
construction of parallel multiresolution convolution streams
The iterative multi-scale attention network framework adopts a recursive residual design to simplify the information flow during learning, and constructs three parallel multi-resolution convolution streams at scales 1×, 2×, and 4×. To preserve the residual nature of the framework, a residual resampling module is introduced to perform the down-sampling and up-sampling operations. Within a multi-scale residual block, the feature map size stays unchanged along each convolution stream; across streams, the feature map size changes according to the input resolution index a and output resolution index b: if a < b, the input feature tensor is down-sampled; if a > b, the feature map is up-sampled. Antialiased down-sampling is integrated into the down-sampling module to improve the translation invariance of the network;
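The antialiased down-sampling mentioned above can be sketched as blur-then-subsample. The following is a hypothetical NumPy illustration (the 3×3 binomial kernel and edge padding are assumptions for the sketch; the patent's residual resampling module is a learned convolutional variant):

```python
import numpy as np

def blur_downsample(x, stride=2):
    """Anti-aliased 2x down-sampling: blur with a binomial kernel,
    then subsample, so high frequencies do not alias on the grid."""
    k1 = np.array([1.0, 2.0, 1.0])
    kernel = np.outer(k1, k1)
    kernel /= kernel.sum()                    # normalized 3x3 blur
    h, w = x.shape
    padded = np.pad(x, 1, mode="edge")
    blurred = np.zeros_like(x)
    for i in range(h):
        for j in range(w):
            blurred[i, j] = np.sum(padded[i:i+3, j:j+3] * kernel)
    return blurred[::stride, ::stride]

x = np.arange(16.0).reshape(4, 4)
y = blur_downsample(x)                        # shape (2, 2)
```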
② Constructing a dual attention module for capturing information
The feature M passes through channel attention and spatial attention to generate two new feature maps, which are concatenated along the channel dimension, and a convolutional layer is applied to obtain the new output. The learned channel attention coefficients d̂ and spatial attention feature map f̂ readjust the input features, enhancing important features and weakening unimportant ones, so that the output features have stronger directivity;
because the importance occupied by different channels is different, the channel attention utilizes the compression and excitation module to learn a group of weights, the characteristics are recalibrated, and the input characteristics M belong to RH×W×CApplying global average pooling to generate feature descriptors d ∈ R1×1×CApplying convolution layer Conv and PRelu activation functions to the filter, performing channel down-sampling to generate compact features, performing one-layer convolution, and generating coefficients through Sigmoid functionSigmoid functions to combine coefficientsLimited to [0:1 ]]Range of (1), coefficient ofMultiplying the channel corresponding to the characteristic M to realize readjustment of the characteristic M, and realizing attention of the channel through learning coefficientsThe weight of more useful channels is enhanced, and the influence of unimportant channels is suppressed;
the spatial attention is focused on the part of the feature with the most information quantity in the space, the global tie pooling and the global maximum pooling are respectively applied to the feature M in the channel dimension, and the obtained two results are combined based on the channel to obtain a new feature graph f epsilon RH×W×2The feature graph f is subjected to Conv dimension reduction by a layer of convolution to 1 channel, and then is subjected to Sigmoid function to generate a space attention feature graphFeature map of spatial attentionMultiplying by the feature M to rescale the feature M;
construction of selective feature fusion module
The design of three parallel convolution streams is adopted, and the three input scale features are fused:
L = L1 + L2 + L3,  L ∈ R^(H×W×C),  (1)
h, W and C respectively represent the height, width and channel of the feature, the feature graph L can be subjected to global average pooling through the global average pooling, all pixel values are added to calculate the average, the number of parameters is reduced, overfitting is avoided, and channel statistical data S belonging to R are obtained1×1×CAnd then, channel downsampling is carried out on the channel statistical data S to generate compact feature representation:
wherein HCDSDenotes channel down-sampling, the feature vector Z is corresponding to the convolution flow one by one through three parallel channel up-sampling convolution layers to generate three feature descriptors v1∈R1×1×C,v2∈R1×1×C,v3∈R1×1×CFor three feature descriptors v, respectively1、v2、v3Applying the softmax activation function yields three corresponding attention coefficients s1、s2、s3Respectively connecting them with multi-scale features L1、L2、L3The process of the whole selective feature fusion module can be described by the following formula:
U = s1·L1 + s2·L2 + s3·L3.  (3)
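Eqs. (1)-(3) can be traced end to end in a small NumPy sketch; the weight matrices replace the channel down-/up-sampling convolutions and are assumptions for illustration, not the patent's trained parameters:

```python
import numpy as np

def skff(L1, L2, L3, W_down, W1, W2, W3):
    """Selective feature fusion: sum the scale features, squeeze by
    global average pooling, compress, re-expand to three descriptors,
    and softmax over the branch axis to get attention coefficients."""
    L = L1 + L2 + L3                            # Eq. (1)
    S = L.mean(axis=(0, 1))                     # (C,) channel statistics
    Z = np.maximum(W_down @ S, 0.0)             # Eq. (2), compact features
    v = np.stack([W1 @ Z, W2 @ Z, W3 @ Z])      # three descriptors, (3, C)
    e = np.exp(v - v.max(axis=0))               # stabilized softmax
    s = e / e.sum(axis=0)                       # s1, s2, s3 sum to 1
    return s[0] * L1 + s[1] * L2 + s[2] * L3    # Eq. (3)

rng = np.random.default_rng(2)
C, r = 8, 4                                     # channels, compression
L1, L2, L3 = (rng.standard_normal((16, 16, C)) for _ in range(3))
W_down = rng.standard_normal((r, C))
W1, W2, W3 = (rng.standard_normal((C, r)) for _ in range(3))
U = skff(L1, L2, L3, W_down, W1, W2, W3)
```

Because the coefficients are a channel-wise convex combination, every value of U lies between the corresponding values of the three inputs.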
(3) constructing a feature domain to signal domain conversion section
The feature-domain-to-signal-domain conversion part converts the extracted deep features from the feature domain to the signal domain; this part is a convolutional layer Conv;
(II) construction of training set
The data set used to train the iterative multi-scale attention network is called the training set; it comprises a clean signal set and a noisy data set;
(1) clean signal set
A large number of clean DAS signals Y* are simulated by forward modeling; a 128 × 128 sliding window is used to cut 9000 blocks of effective signal from the clean records as the clean signal set;
(2) noisy data set
To construct the noisy data set, 9000 blocks of 128 × 128 noise are extracted from actually acquired DAS records and added one-to-one to the clean signals Y*, yielding the noisy data Y;
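The training-set construction (sliding-window patching plus one-to-one addition of field noise to clean signals) can be sketched as follows; the random arrays, patch count, and stride are stand-ins for the forward-modeled records and field noise, not the patent's data:

```python
import numpy as np

def extract_patches(record, patch=128, stride=64):
    """Cut overlapping patch x patch blocks from a 2-D record with a
    sliding window, as in the training-set construction above."""
    h, w = record.shape
    return np.array([record[i:i+patch, j:j+patch]
                     for i in range(0, h - patch + 1, stride)
                     for j in range(0, w - patch + 1, stride)])

# Semi-synthetic noisy set: clean forward-modeled patches plus noise
# patches taken from field DAS records, added one-to-one.
rng = np.random.default_rng(3)
clean_record = rng.standard_normal((256, 256))   # stands in for Y*
noise_record = rng.standard_normal((256, 256))   # stands in for field noise
clean_set = extract_patches(clean_record)
noise_set = extract_patches(noise_record)
noisy_set = clean_set + noise_set                # Y = Y* + noise, one-to-one
```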
(III) training of iterative multiscale attention networks
(1) Shallow feature extraction
First, a convolutional layer Conv is applied to extract the shallow features F_S from the noisy data Y; the process can be expressed as:
F_S = H_SF(Y),  (4)
where H_SF(·) denotes shallow feature extraction of the input noisy data Y using a convolution kernel;
(2) deep layer feature extraction
Deep features are extracted from the shallow features F_S by the multi-scale residual blocks, expressed in formula form as:

F_D = H_MSRB^(i)(H_MSRB^(i-1)(… H_MSRB^(1)(F_S) …)),  (5)

where H_MSRB^(1)(·) denotes the feature extraction function of the 1st multi-scale residual block, H_MSRB^(i)(·) denotes the feature extraction function of the i-th multi-scale residual block, and F_D is the result after i successive feature extractions;
(3) feature domain to signal domain conversion
Finally, the network applies a convolutional layer Conv to convert the output F_D of the last multi-scale residual block from the feature domain to the signal domain, and adopts residual learning to facilitate training, so the denoising result Ŷ of the network is:

Ŷ = Y + H_LF(F_D),  (6)

where H_LF(·) denotes the feature-domain-to-signal-domain conversion performed by the last convolutional layer. The output-input relationship of the whole network can therefore be expressed as:

Ŷ = H_Net(Y),  (7)

where H_Net(·) denotes the processing function of the whole denoising network;
(4) To give the denoised signal better amplitude preservation, a Charbonnier loss function is adopted to optimize the denoising result:

L_char = √(‖Ŷ − Y*‖² + ε²),  (8)

where Y* denotes the clean signal, Ŷ is the output of the network, and ε is a constant set to 10⁻³;
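A direct NumPy transcription of the Charbonnier loss as reconstructed in Eq. (8) (a smooth variant of the L1/L2 losses; the global-norm form is an assumption consistent with the surrounding formulas):

```python
import numpy as np

def charbonnier_loss(pred, target, eps=1e-3):
    """Charbonnier loss of Eq. (8): sqrt(||pred - target||^2 + eps^2),
    differentiable everywhere, which aids amplitude preservation."""
    return np.sqrt(np.sum((pred - target) ** 2) + eps ** 2)

y_true = np.zeros((4, 4))
y_pred = np.full((4, 4), 0.1)
loss = charbonnier_loss(y_pred, y_true)
```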
(IV) DAS data denoising processing
The DAS data are denoised with the model generated by the trained iterative multi-scale attention network: the actual DAS data are input into the network, and the denoised DAS data are obtained as its output.
In step (I)(1) of the invention, the shallow feature extraction part has input channel 1, output channel 32, and convolution kernel size 3 × 3.
In step (I)(2) of the invention, the construction of the deep feature extraction part, the iterative multi-scale attention network adopts a recursive residual design with a skip connection between input and output and several multi-scale residual blocks connected in series, which improves network performance and reduces the complexity of network learning.
In step (I)(2) of the invention, the compression ratio in the selective feature fusion module is 8.
In step (I)(3) of the invention, the feature-domain-to-signal-domain conversion part has input channel 32, output channel 1, and convolution kernel size 3 × 3.
The specific parameters of the clean signal set in step (II)(1) of the invention are set as follows: the number of plane layers is 3-6, the depth is 300-500 m, the wave velocity is 1000-4000 m/s, the density of the plane layers is 1500-2600 kg/m³, the signal type is the Ricker wavelet, the dominant frequency of the signal is 40-90 Hz, and the well-source distance is 100-.
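The Ricker wavelet named above can be generated as follows; the sampling interval and time window are assumed values for illustration, not taken from the patent:

```python
import numpy as np

def ricker(t, f0):
    """Ricker wavelet with dominant frequency f0 (Hz):
    (1 - 2*(pi*f0*t)^2) * exp(-(pi*f0*t)^2), peak 1 at t = 0."""
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

dt = 0.001                         # 1 ms sampling, an assumed value
t = np.arange(-0.1, 0.1, dt)       # assumed 0.2 s window around zero
w = ricker(t, f0=60.0)             # 60 Hz lies in the 40-90 Hz range
```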
In the deep feature extraction in the step (2) of the third step of the invention, i in the network structure is set to be 3.
The invention has the advantages that:
the iterative multi-scale attention network constructed by the invention combines a multi-scale thought and an attention mechanism, samples DAS seismic data to three different scales, and equivalently enlarges the receptive field. Channel attention and space attention are applied to each scale to extract channel and space information, the channel and space information share transmitted information, and the transmitted information is further extracted from the channel dimension and the space dimension, so that the utilization of the channel information and the space information by a network is ensured; and performing fusion and selection based on a self-attention mechanism, and realizing fusion and selection of seismic data features at different scales through two steps of fusion and selection. The DAS seismic data noise reduction method based on the network can effectively reduce various types of background noise in DAS seismic data, improves the signal-to-noise ratio of the DAS seismic data, enables the recovered signals to be very close to pure signals in frequency and amplitude, and is beneficial to subsequent data processing and interpretation.
Drawings
FIG. 1 is an overall architecture diagram of the iterative multi-scale attention network of the present invention;
FIG. 2(a) is a downsampling module of the present invention incorporating antialiased downsampling;
FIG. 2(b) is an upsampling module of the present invention, integrating bilinear upsampling;
FIG. 3(a) is a block diagram of a dual attention module in a multi-scale residual block of the present invention that can perform feature extraction on seismic data in both spatial and channel dimensions;
FIG. 3(b) is a channel attention module in the dual attention module of the present invention, which enhances the weighting of more useful channels, suppressing the impact of non-significant channels;
FIG. 3(c) is a spatial attention module of the dual attention module of the present invention, focusing on the portion of the feature with the most information content in space;
FIG. 4 is a block diagram of a selective feature fusion module in a multi-scale residual block according to the present invention, which implements fusion and selection of seismic data features at different scales;
FIG. 5(a) shows part of the clean signal set in the training data set of the present invention: 9000 blocks of size 128 × 128;
FIG. 5(b) shows part of the noisy data set in the training data set of the present invention: 9000 blocks of size 128 × 128, corresponding one-to-one to the clean effective signals;
FIG. 6(a) is a clean DAS signal synthesized by forward modeling in accordance with the present invention;
FIG. 6(b) is the actual data noise of the present invention;
FIG. 6(c) is semi-synthetic noisy data with a signal-to-noise ratio of 1.9dB according to the present invention;
FIG. 7(a) is a composite noisy data comprising horizontal noise, fading noise, checkerboard noise, optical noise and random noise;
FIG. 7(b) is the result of the bandpass filter process, with significant residual horizontal noise and checkerboard noise, reduced recovered signal frequency, and a signal-to-noise ratio of 7.6 dB;
FIG. 7(c) is a DnCNN process result with some suppression of various background noises, but with some attenuation of the recovered signal amplitude and a signal-to-noise ratio of 19.8 dB;
FIG. 7(d) is the result of iterative multi-scale attention network processing with almost no background noise residue, the recovered signal frequency and amplitude remain good, and the signal-to-noise ratio is 28.7 dB;
FIG. 7(e) is the actual noise added to the composite record;
FIG. 7(f) noise separated by the bandpass filter, with significant signal residual;
FIG. 7(g) shows the noise separated by DnCNN, with more various noise residuals and severe loss of effective signal energy;
FIG. 7(h) shows the noise separated by the denoising network of the present invention, which has substantially no signal residue, and the separated noise is substantially the same as the noise added in the synthesis record;
FIG. 8(a) is the F-K spectrum of a pure DAS signal;
FIG. 8(b) is the F-K spectrum of the band-pass filter denoising result; the signal frequency is significantly reduced, and the white box indicates residual noise in the denoising result;
FIG. 8(c) is the F-K spectrum of the DnCNN denoising result; the white box indicates residual noise in the denoising result;
FIG. 8(d) is the F-K spectrum of the denoising result of the iterative multi-scale attention network, which is closest to the clean data in frequency;
FIG. 9 is a waveform comparison of the 10th-channel signal of the synthetic record; the denoising result of the present invention essentially coincides with the clean signal waveform, and its amplitude is the closest among the three methods;
fig. 10(a) is an actual noisy recording, the DAS active signal is contaminated by a large amount of background noise, including horizontal noise, fading noise, optical noise, and random noise;
FIG. 10(b) shows the de-noising result of the band-pass filter processing actual record, the horizontal noise has obvious residual, other noises are all suppressed, and the recovered signal frequency is reduced;
FIG. 10(c) is a de-noising result of DnCNN processing of an actual recording, with various background noises suppressed to some extent, but with the recovered signal amplitude reduced;
FIG. 10(d) shows the de-noising result of the de-noising network processing actual record of the present invention, where the background noise is almost eliminated and the frequency and amplitude of the recovered signal are not significantly attenuated;
FIG. 11(a) shows the noise separated from the actual record by the band-pass filter; the horizontal noise is not completely separated;
FIG. 11(b) shows the noise separated from the actual record by DnCNN, with relatively significant signal residue;
FIG. 11(c) shows the noise separated from the actual record by the denoising network of the present invention; the background noise is separated with substantially no effective signal residue.
Detailed Description
The method comprises the following steps:
(I) iterative multiscale attention network establishment
The iterative multi-scale attention network consists of three parts: shallow feature extraction, deep feature extraction, and feature-domain-to-signal-domain conversion. The noisy data Y and the denoising result Ŷ are the input and output of the network, respectively, and the network is designed on a recursive residual basis;
(1) constructing a shallow feature extraction section
The shallow feature extraction part extracts features from the input noisy data Y; this part is a convolutional layer Conv with input channel 1, output channel 32, and convolution kernel size 3 × 3;
(2) constructing a deep layer feature extraction section
The multi-scale residual block is the main structure of the network and is used to extract deep features. It adopts a design of three parallel convolution streams, which ensures that the DAS seismic data keep an accurate high-resolution representation throughout the network while receiving rich feature information from the low-resolution representations. The block contains three key components: parallel multi-resolution convolution streams for extracting multi-scale features, a dual attention module for capturing information, and a selective feature fusion module;
construction of parallel multiresolution convolution streams
The new network framework adopts a recursive residual design (with skip connections) to simplify the information flow during learning, and constructs three parallel multi-resolution convolution streams at scales 1×, 2×, and 4×. To preserve the residual nature of the framework, the residual resampling module of FIG. 2 is introduced to perform the down-sampling and up-sampling operations. Within a multi-scale residual block, the feature map size stays unchanged along each convolution stream; across streams, the feature map size changes according to the input resolution index a and output resolution index b: if a < b, the input feature tensor undergoes the down-sampling of FIG. 2(a); if a > b, the feature map undergoes the up-sampling of FIG. 2(b). Antialiased down-sampling is integrated into the down-sampling module to improve the translation invariance of the network;
② construct a double attention Module for capturing information (FIG. 3(a))
The feature M passes through the channel attention and the spatial attention to generate two new feature maps; the two feature maps are concatenated along the channel dimension, and a convolution layer is applied to obtain the new output. The coefficients learned by the channel attention and the spatial attention feature map readjust the input features, enhancing important features and weakening unimportant ones, so that the output features have stronger directivity;
Because different channels carry different importance, the channel attention in Fig. 3(b) learns a set of weights with a squeeze-and-excitation module and recalibrates the features. For an input feature M ∈ R^(H×W×C), global average pooling is applied to generate a feature descriptor d ∈ R^(1×1×C); a convolution layer Conv with a PReLU activation function then performs channel down-sampling to produce a compact feature; after one more convolution layer, a Sigmoid function generates the coefficients, limiting them to the range [0, 1]. Multiplying the coefficients with the corresponding channels of M readjusts the feature M; through the learned coefficients, the channel attention enhances the weights of the more useful channels and suppresses the influence of unimportant channels;
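This squeeze-and-excitation style channel attention can be sketched in plain NumPy as follows; W_down and W_up are placeholder weight matrices for the learned channel down-/up-sampling convolutions, and a ReLU stands in for the PReLU:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(M, W_down, W_up):
    """M: (H, W, C) feature map; W_down: (C//r, C); W_up: (C, C//r)."""
    d = M.mean(axis=(0, 1))            # global average pooling -> descriptor (C,)
    z = np.maximum(W_down @ d, 0.0)    # channel down-sampling + activation
    s = sigmoid(W_up @ z)              # per-channel coefficients in (0, 1)
    return M * s                       # recalibrate each channel of M
```

Because the coefficients stay in (0, 1), each channel is scaled down in proportion to its learned importance rather than amplified.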
The spatial attention in Fig. 3(c) focuses on the most informative part of the feature in space. Global average pooling and global max pooling are applied to the feature M along the channel dimension, and the two results are concatenated channel-wise to obtain a new feature map f ∈ R^(H×W×2). The feature map f is reduced to 1 channel by a convolution layer Conv, and a Sigmoid function then generates the spatial attention feature map, which is multiplied with the feature M to rescale it;
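Likewise, the spatial attention branch can be sketched as below; the 2-channel-to-1-channel convolution is reduced to a placeholder 1 × 1 weighting w over the two pooled maps:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(M, w):
    """M: (H, W, C); w: (2,) weights of the 1x1 conv over [avg-pool, max-pool]."""
    f_avg = M.mean(axis=2)                 # global average pooling over channels
    f_max = M.max(axis=2)                  # global max pooling over channels
    f = np.stack([f_avg, f_max], axis=2)   # concatenated map (H, W, 2)
    f_hat = sigmoid(f @ w)                 # spatial attention map (H, W)
    return M * f_hat[..., None]            # rescale M pixel-wise
```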
construction of Selective feature fusion Module (FIG. 4)
Following the design of the three parallel convolution streams, the features of the three input scales are first fused:
L = L1 + L2 + L3, L ∈ R^(H×W×C), (1)
where H, W, and C denote the height, width, and channel count of the feature. Global average pooling is applied to the feature map L (all pixel values are summed and averaged, which reduces the number of parameters and helps avoid overfitting) to obtain the channel statistics S ∈ R^(1×1×C); channel down-sampling is then applied to S to generate a compact feature representation:
Z = H_CDS(S), (2)
where H_CDS(·) denotes channel down-sampling with a compression ratio of 8. Finally, three parallel channel up-sampling convolution layers, in one-to-one correspondence with the convolution streams, map the feature vector Z to three feature descriptors v1 ∈ R^(1×1×C), v2 ∈ R^(1×1×C), v3 ∈ R^(1×1×C). Applying the softmax activation function to v1, v2, v3 yields three corresponding attention coefficients s1, s2, s3, which are multiplied with the multi-scale features L1, L2, L3, respectively. The process of the whole selective feature fusion module can be described by the following formula:
U=s1·L1+s2·L2+s3·L3。 (3)
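The fusion-then-selection steps of formulas (1)–(3) can be sketched end-to-end as follows (plain NumPy; W_down and the three W_up matrices are placeholders for the learned channel down-/up-sampling convolutions):

```python
import numpy as np

def softmax(v, axis=0):
    e = np.exp(v - v.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def selective_fusion(L1, L2, L3, W_down, W_ups):
    """L1, L2, L3: (H, W, C) multi-scale features;
    W_down: (C//8, C); W_ups: three (C, C//8) matrices, one per stream."""
    L = L1 + L2 + L3                         # fuse the three scales
    S = L.mean(axis=(0, 1))                  # channel statistics (C,)
    Z = np.maximum(W_down @ S, 0.0)          # compact descriptor (C//8,)
    V = np.stack([W @ Z for W in W_ups])     # three feature descriptors (3, C)
    s = softmax(V, axis=0)                   # attention coefficients, sum to 1 per channel
    return s[0] * L1 + s[1] * L2 + s[2] * L3 # select and recombine
```

The softmax is taken across the three streams, so for every channel the coefficients s1, s2, s3 sum to 1; when all three up-sampling layers are identical, each stream contributes equally.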
(3) constructing a feature domain to signal domain conversion section
The feature-domain-to-signal-domain conversion part converts the extracted deep features from the feature domain to the signal domain; this part is a single convolutional layer Conv with an input channel of 32, an output channel of 1, and a convolution kernel size of 3 × 3;
(II) Construction of the training set
The data set used for training the iterative multi-scale attention network is called a training set, and the training set comprises a clean signal set and a noisy data set;
(1) clean signal set
A large number of DAS clean signals Y* are simulated by forward modeling, with the following parameter settings: the number of planar layers is 3-6, the depth is 300-500 m, the wave velocity is 1000-4000 m/s, the density of the planar layers is 1500-2600 kg/m³, the signal type is the Ricker wavelet, the dominant frequency of the signal is 40-90 Hz, the well-source distance is 100-200 m, the spatial sampling interval is 1 m, the length of the recording optical fiber is 2000-4000 m, and the sampling frequency is 2500 Hz. A 128 × 128 sliding window is used to intercept 9000 effective-signal blocks from the clean records as the clean signal set;
(2) noisy data set
To construct the noisy data set, 9000 blocks of 128 × 128 noise are extracted from actually acquired DAS records; adding them one-to-one to the clean signals Y* yields the noisy data Y;
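The training-pair construction (128 × 128 blocks cut from larger records, then added one-to-one) can be sketched as below. The sliding-window stride is an assumption, since the patent does not state it:

```python
import numpy as np

def extract_patches(record, size=128, stride=64):
    """Slide a size x size window over a 2-D record (time x channel)."""
    T, N = record.shape
    return np.stack([record[i:i + size, j:j + size]
                     for i in range(0, T - size + 1, stride)
                     for j in range(0, N - size + 1, stride)])

def make_noisy_set(clean_patches, noise_patches):
    """One-to-one addition of clean-signal blocks and field-noise blocks."""
    assert clean_patches.shape == noise_patches.shape
    return clean_patches + noise_patches
```

In practice `extract_patches` would be applied both to the forward-modeled clean records and to the field-noise records before the addition.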
(III) training of iterative multiscale attention networks
(1) Shallow feature extraction
First, a convolutional layer Conv (input channel 1, output channel 32, convolution kernel size 3 × 3) is applied to extract the shallow features F_S from the noisy data Y. The process can be expressed as:
F_S = H_SF(Y), (4)
where H_SF(·) represents shallow feature extraction of the input noisy data Y with a convolution kernel;
(2) deep layer feature extraction
Deep feature extraction is performed on the shallow features F_S by the multi-scale residual blocks, expressed in formula form as:
F_D = H_MSRB^i(···(H_MSRB^2(H_MSRB^1(F_S)))···), (5)
where H_MSRB^1(·) represents the feature-extraction function of the 1st multi-scale residual block, H_MSRB^i(·) represents that of the i-th multi-scale residual block, and F_D is the result obtained after i successive feature extractions. In theory, the larger i is, the deeper the network and the better the denoising performance, but the training time also increases; by experimental comparison, i is set to 3 in this network structure;
(3) feature domain to signal domain conversion
Finally, the network applies a convolutional layer Conv with an input channel of 32, an output channel of 1, and a convolution kernel size of 3 × 3 to convert the output F_D of the last multi-scale residual block from the feature domain to the signal domain; residual learning is adopted to promote network training, so the denoising result Ŷ of the network is:
Ŷ = H_LF(F_D) + Y, (6)
where H_LF(·) represents the feature-domain-to-signal-domain conversion performed by the last convolutional layer. The relationship between the output and the input of the entire network can then be expressed as:
Ŷ = H_Net(Y), (7)
where H_Net(·) represents the processing function of the whole denoising network;
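The overall forward pass (shallow Conv, a cascade of multi-scale residual blocks, final Conv, plus the skip connection from the input) can be sketched functionally; `shallow`, `blocks`, and `last_conv` are placeholder callables, not the patented layers:

```python
def denoise(Y, shallow, blocks, last_conv):
    """Recursive-residual composition of the denoising network (sketch).
    shallow, last_conv: callables standing in for the first/last Conv layers;
    blocks: list of multi-scale residual blocks (three in this network)."""
    F = shallow(Y)              # shallow feature extraction
    for block in blocks:        # deep feature extraction, blocks in series
        F = block(F)
    return last_conv(F) + Y     # residual learning: add the input back
```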
(4) To give the denoised signal better amplitude preservation, the Charbonnier loss function is adopted to optimize the denoising result:
L_char = sqrt(‖Y* − Ŷ‖² + ε²), (8)
where Y* denotes the clean signal, Ŷ is the output of the network, and ε is a constant, set to 10^(-3) according to experiments;
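The Charbonnier loss with ε = 10^(-3) can be sketched as follows; the element-wise mean form is assumed here, since the patent text does not preserve the exact expression:

```python
import numpy as np

def charbonnier_loss(y_clean, y_hat, eps=1e-3):
    """Smooth L1-like loss: mean of sqrt(diff^2 + eps^2).
    Differentiable at zero, approaches mean absolute error for large residuals."""
    diff = y_clean - y_hat
    return float(np.mean(np.sqrt(diff * diff + eps * eps)))
```

Unlike the plain L2 loss, this penalty grows roughly linearly for large residuals, which helps preserve the amplitudes of strong events during training.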
(IV) DAS data denoising processing
The DAS data are denoised with the model generated by training the iterative multi-scale attention network: the actual DAS data are input into the network, and its output is the denoised DAS data.
2. The DAS borehole survey data noise suppression method based on the iterative multi-scale attention network of claim 1, wherein: in step (I), the iterative multi-scale attention network adopts a recursive residual design, a skip connection is used between the input and the output, and a plurality of multi-scale residual blocks are connected in series, which improves network performance and reduces the complexity of network learning.
3. The DAS borehole survey data noise suppression method based on the iterative multi-scale attention network of claim 2, wherein: in step ②, channel attention and spatial attention are adopted simultaneously; the two share the transmitted information, which is further extracted in both the channel dimension and the spatial dimension, ensuring the network's utilization of channel and spatial information.
4. The DAS borehole survey data noise suppression method based on the iterative multi-scale attention network of claim 2, wherein: in step ③, multi-scale features are fused on the basis of a self-attention mechanism, and the seismic-data features at different scales are combined through the two stages of fusion and selection.
In order to demonstrate the performance of the iterative multi-scale attention network of the invention, both synthetic and actually acquired DAS data are used for verification. The actual DAS data can be obtained directly, while the synthetic records require modeling; the test data sets in the experiments are all generated by modeling. First, a clean signal is generated by forward modeling, as shown in Fig. 6(a). The forward model contains 6 planar layers with depths of 300-500 m, wave velocities of 1200, 1600, 2000, 2500, 3100, and 3800 m/s, and densities of 1727, 1978, 2010, 2200, and 2320 kg/m³. The signal type is the Ricker wavelet with a dominant frequency of 60 Hz; the well-source distance is 200 m, the length of the recording optical fiber is 2048 m, and the sampling frequency is 2500 Hz. The background noise, shown in Fig. 6(b), is extracted from actual DAS data and includes horizontal noise, fading noise, optical noise, checkerboard noise, and other noise types. The semi-synthetic noisy data in Fig. 6(c) are obtained by adding the clean signal in Fig. 6(a) to the actual noise in Fig. 6(b).
Example 1 synthetic records
In order to verify the processing effect of the denoising network on DAS seismic data, denoising is first performed on the semi-synthetic noisy DAS seismic record, and the result is compared with those of a band-pass filter and the feed-forward denoising convolutional neural network (DnCNN). Fig. 7(a) shows the noisy DAS seismic data; it clearly contains horizontal noise, fading noise, optical noise, checkerboard noise, random noise, and other noise types, with the effective signal buried in them at a low signal-to-noise ratio. The band-pass filter, DnCNN, and the denoising network of the invention are applied, with results shown in Figs. 7(b)-(d). Fig. 7(e) shows the noise contained in the noisy record, and the noise removed by the three methods is shown in Figs. 7(f)-(h) in sequence. From the band-pass filter result in Fig. 7(b) and the difference in Fig. 7(f), the band-pass filter can suppress random noise, fading noise, and optical noise, but handles poorly the horizontal and checkerboard noise whose frequencies are close to those of the signal. In terms of signal recovery, the band-pass filter largely recovers the effective signal, but some very weak effective signals are not retained. From the DnCNN result in Fig. 7(c) and the difference in Fig. 7(g), DnCNN can suppress almost all kinds of DAS background noise, but partial residuals remain and the effective signal is severely attenuated. In contrast, the result of the method of the invention in Fig. 7(d) and the difference in Fig. 7(h) show that the new method suppresses all the background noise to the greatest extent while completely retaining the effective signals: the up-going and down-going waves show almost no amplitude attenuation, and even extremely weak effective signals are preserved. This verifies the denoising and signal-recovery capabilities of the method.
To further illustrate the recovery of effective signals by the new method, Figs. 8(a)-(d) show the F-K spectra of the clean data and of the band-pass filter, DnCNN, and the method of the invention, respectively. The F-K spectra show that the effective signal recovered by the band-pass filter suffers a degree of frequency reduction, while the white regions indicate noise residue in the band-pass filter and DnCNN results; the result of the method of the invention is essentially indistinguishable from the clean data, and the recovered signal is closest to the clean data in frequency. Fig. 9 compares the waveforms of the 10th trace of the clean data and of the three denoising results: the band-pass filter result fluctuates strongly in amplitude, DnCNN clearly reduces the amplitude at the signal peaks, and the result of the method of the invention is closest to the clean data. The F-K spectra and the single-trace waveform comparison demonstrate that the signal recovered by the method is closest to the clean signal in both frequency and amplitude.
TABLE 1. Signal-to-noise ratio (SNR) of the denoising results of the three methods on the synthetic record

Method | Noisy record | Band-pass filter | DnCNN | Method of the invention
---|---|---|---|---
SNR (dB) | 1.9 | 7.6 | 19.8 | 28.7
Relative to the noisy record (1.9 dB), the band-pass filter, DnCNN, and the method of the invention improve the signal-to-noise ratio by 5.7 dB, 17.9 dB, and 26.8 dB, respectively; the data show that the method is clearly superior to the other two methods in noise suppression.
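The SNR values in Table 1 follow the usual definition of 10·log10 of the ratio of signal energy to residual-noise energy (assumed here, since the patent does not state its formula):

```python
import numpy as np

def snr_db(clean, estimate):
    """Signal-to-noise ratio (dB) of a denoising result against the clean reference."""
    residual = estimate - clean
    return float(10.0 * np.log10(np.sum(clean ** 2) / np.sum(residual ** 2)))
```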
Example 2 actual recording
To further verify the effectiveness of the method of the invention, it is applied to the denoising of the actual DAS data in Fig. 10(a). The figure shows that the weak effective signals are drowned by strong noise and are almost indistinguishable. Figs. 10(b)-(d) show the processing results of the band-pass filter, DnCNN, and the denoising network of the invention in sequence, and Figs. 11(a)-(c) show the noise separated by the three methods. From Figs. 10(b)-(c) and 11(a)-(b), both the band-pass filter and DnCNN have some noise-suppression capability, but the band-pass filter fails to suppress horizontal noise while lowering the frequency of the effective signal and thickening the event axes, and after DnCNN processing partial noise remains and the effective-signal amplitude is severely attenuated. In contrast, in Figs. 10(d) and 11(c) the various noise types are effectively suppressed by the new method, and the weak-energy signals submerged by strong noise are well recovered, showing that the denoising and signal-recovery capabilities of the method of the invention exceed those of the other two methods.
Claims (7)
1. A DAS borehole exploration data noise suppression method based on an iterative multi-scale attention network is characterized by comprising the following steps:
(I) iterative multiscale attention network establishment
The iterative multi-scale attention network consists of three parts: shallow feature extraction, deep feature extraction, and feature-domain-to-signal-domain conversion. The noisy data Y and the denoising result Ŷ are the input and the output of the network, respectively, and the network is designed on the basis of recursive residuals;
(1) constructing a shallow feature extraction section
The shallow feature extraction part performs feature extraction on the input noisy data Y; this part is a convolutional layer Conv;
(2) constructing a deep layer feature extraction section
The multi-scale residual block is the main structure of the network and is used to extract deep features. It adopts a design of three parallel convolution streams, which ensures that the DAS seismic data retain an accurate high-resolution representation throughout the network while receiving rich feature information from the low-resolution representations. The network contains the following key components: parallel multi-resolution convolution streams for extracting multi-scale features, a dual attention module for capturing information, and a selective feature fusion module;
construction of parallel multiresolution convolution streams
The iterative multi-scale attention network framework adopts a recursive residual design to simplify information flow during learning. Three parallel multi-resolution convolution streams (1×, 2×, and 4×) are constructed. To preserve the residual nature of the framework, a residual sampling module is introduced to perform the down-sampling and up-sampling operations. Within a multi-scale residual block, the feature-map size remains unchanged along each convolution stream; across streams, it changes according to the input resolution index a and the output resolution index b: if a < b, the input feature tensor is down-sampled; if a > b, the feature map is up-sampled. Anti-aliasing down-sampling is integrated into the down-sampling module to improve the translation invariance of the network;
② constructing a dual attention module for capturing information
The feature M passes through the channel attention and the spatial attention to generate two new feature maps; the two feature maps are concatenated along the channel dimension, and a convolution layer is applied to obtain the new output. The coefficients learned by the channel attention and the spatial attention feature map readjust the input features, enhancing important features and weakening unimportant ones, so that the output features have stronger directivity;
Because different channels carry different importance, the channel attention learns a set of weights with a squeeze-and-excitation module and recalibrates the features. For an input feature M ∈ R^(H×W×C), global average pooling is applied to generate a feature descriptor d ∈ R^(1×1×C); a convolution layer Conv with a PReLU activation function then performs channel down-sampling to produce a compact feature; after one more convolution layer, a Sigmoid function generates the coefficients, limiting them to the range [0, 1]. Multiplying the coefficients with the corresponding channels of M readjusts the feature M; through the learned coefficients, the channel attention enhances the weights of the more useful channels and suppresses the influence of unimportant channels;
The spatial attention focuses on the most informative part of the feature in space. Global average pooling and global max pooling are applied to the feature M along the channel dimension, and the two results are concatenated channel-wise to obtain a new feature map f ∈ R^(H×W×2). The feature map f is reduced to 1 channel by a convolution layer Conv, and a Sigmoid function then generates the spatial attention feature map, which is multiplied with the feature M to rescale it;
construction of selective feature fusion module
Following the design of the three parallel convolution streams, the features of the three input scales are first fused:
L = L1 + L2 + L3, L ∈ R^(H×W×C), (1)
where H, W, and C denote the height, width, and channel count of the feature. Global average pooling is applied to the feature map L (all pixel values are summed and averaged, which reduces the number of parameters and helps avoid overfitting) to obtain the channel statistics S ∈ R^(1×1×C); channel down-sampling is then applied to S to generate a compact feature representation:
Z = H_CDS(S), (2)
where H_CDS(·) denotes channel down-sampling. The feature vector Z is then mapped, through three parallel channel up-sampling convolution layers in one-to-one correspondence with the convolution streams, to three feature descriptors v1 ∈ R^(1×1×C), v2 ∈ R^(1×1×C), v3 ∈ R^(1×1×C). Applying the softmax activation function to v1, v2, v3 yields three corresponding attention coefficients s1, s2, s3, which are multiplied with the multi-scale features L1, L2, L3, respectively. The process of the whole selective feature fusion module can be described by the following formula:
U=s1·L1+s2·L2+s3·L3 (3)
(3) constructing a feature domain to signal domain conversion section
The feature-domain-to-signal-domain conversion part converts the extracted deep features from the feature domain to the signal domain; this part is a convolutional layer Conv;
(II) construction of training set
The data set used for training the iterative multi-scale attention network is called a training set, and the training set comprises a clean signal set and a noisy data set;
(1) clean signal set
A large number of DAS clean signals Y* are simulated by forward modeling, and a 128 × 128 sliding window is used to intercept 9000 effective-signal blocks from the clean records as the clean signal set;
(2) noisy data set
To construct the noisy data set, 9000 blocks of 128 × 128 noise are extracted from actually acquired DAS records; adding them one-to-one to the clean signals Y* yields the noisy data Y;
(III) training of iterative multiscale attention networks
(1) Shallow feature extraction
First, a convolutional layer Conv is applied to extract the shallow features F_S from the noisy data Y; the process can be expressed as:
F_S = H_SF(Y), (4)
where H_SF(·) represents shallow feature extraction of the input noisy data Y with a convolution kernel;
(2) deep layer feature extraction
Deep feature extraction is performed on the shallow features F_S by the multi-scale residual blocks, expressed in formula form as:
F_D = H_MSRB^i(···(H_MSRB^2(H_MSRB^1(F_S)))···), (5)
where H_MSRB^1(·) represents the feature-extraction function of the 1st multi-scale residual block, H_MSRB^i(·) represents that of the i-th multi-scale residual block, and F_D is the result obtained after i successive feature extractions;
(3) feature domain to signal domain conversion
Finally, the network applies a convolutional layer Conv to convert the output F_D of the last multi-scale residual block from the feature domain to the signal domain; residual learning is adopted to promote network training, so the denoising result Ŷ of the network is:
Ŷ = H_LF(F_D) + Y, (6)
where H_LF(·) represents the feature-domain-to-signal-domain conversion performed by the last convolutional layer. The relationship between the output and the input of the whole network can then be expressed as:
Ŷ = H_Net(Y), (7)
where H_Net(·) represents the processing function of the whole denoising network;
(4) To give the denoised signal better amplitude preservation, the Charbonnier loss function is adopted to optimize the denoising result:
L_char = sqrt(‖Y* − Ŷ‖² + ε²), (8)
where Y* denotes the clean signal, Ŷ is the output of the network, and ε is a constant, set to 10^(-3);
(IV) DAS data denoising processing
And denoising the DAS data by using a model generated based on the iterative multi-scale attention network, inputting the actual DAS data into the iterative multi-scale attention network, and obtaining denoised DAS data as an output result of the network.
2. The DAS borehole survey data noise suppression method based on the iterative multi-scale attention network of claim 1, wherein: in step (1) of step (I), the input channel of the shallow feature extraction part is 1, the output channel is 32, and the convolution kernel size is 3 × 3.
3. The DAS borehole survey data noise suppression method based on the iterative multi-scale attention network of claim 1, wherein: in the deep feature extraction part of step (2) of step (I), the iterative multi-scale attention network adopts a recursive residual design with a skip connection between the input and the output, and a plurality of multi-scale residual blocks are connected in series, which improves network performance and reduces the complexity of network learning.
4. The DAS borehole survey data noise suppression method based on the iterative multi-scale attention network of claim 1, wherein: in step ③ of step (2) of step (I), for the construction of the selective feature fusion module, the compression ratio is 8.
5. The DAS borehole survey data noise suppression method based on the iterative multi-scale attention network of claim 1, wherein: in step (3) of step (I), the input channel of the feature-domain-to-signal-domain conversion part is 32, the output channel is 1, and the convolution kernel size is 3 × 3.
6. The DAS borehole survey data noise suppression method based on the iterative multi-scale attention network of claim 1, wherein the specific parameter settings of the clean signal set in step (1) of step (II) are as follows: the number of the planar layers is 3-6, the depth is 300-500 m, the wave velocity is 1000-4000 m/s, the density of the planar layers is 1500-2600 kg/m³, the signal type is the Ricker wavelet, the dominant frequency of the signal is 40-90 Hz, the well-source distance is 100-200 m, the spatial sampling interval is 1 m, the length of the recording optical fiber is 2000-4000 m, and the sampling frequency is 2500 Hz.
7. The DAS borehole survey data noise suppression method based on the iterative multiscale attention network of claim 1, wherein: in the deep feature extraction in the step (2), i in the network structure is set to be 3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111218720.1A CN113935467B (en) | 2021-10-19 | 2021-10-19 | DAS (data acquisition system) well exploration data noise suppression method based on iterative multi-scale attention network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111218720.1A CN113935467B (en) | 2021-10-19 | 2021-10-19 | DAS (data acquisition system) well exploration data noise suppression method based on iterative multi-scale attention network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113935467A true CN113935467A (en) | 2022-01-14 |
CN113935467B CN113935467B (en) | 2024-05-07 |
Family
ID=79280444
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111218720.1A Active CN113935467B (en) | 2021-10-19 | 2021-10-19 | DAS (data acquisition system) well exploration data noise suppression method based on iterative multi-scale attention network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113935467B (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114879252A (en) * | 2022-07-11 | 2022-08-09 | 中国科学院地质与地球物理研究所 | DAS (data acquisition system) same-well monitoring real-time microseism effective event identification method based on deep learning |
CN115622626A (en) * | 2022-12-20 | 2023-01-17 | 山东省科学院激光研究所 | Distributed sound wave sensing voice information recognition system and method |
CN116594061A (en) * | 2023-07-18 | 2023-08-15 | 吉林大学 | Seismic data denoising method based on multi-scale U-shaped attention network |
CN116977651A (en) * | 2023-08-28 | 2023-10-31 | 河北师范大学 | Image denoising method based on double-branch and multi-scale feature extraction |
CN117675112A (en) * | 2024-02-01 | 2024-03-08 | 阳光凯讯(北京)科技股份有限公司 | Communication signal processing method, system, equipment and medium based on machine learning |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180240219A1 (en) * | 2017-02-22 | 2018-08-23 | Siemens Healthcare Gmbh | Denoising medical images by learning sparse image representations with a deep unfolding approach |
CN110648334A (en) * | 2019-09-18 | 2020-01-03 | 中国人民解放军火箭军工程大学 | Multi-feature cyclic convolution saliency target detection method based on attention mechanism |
CN111968195A (en) * | 2020-08-20 | 2020-11-20 | 太原科技大学 | Dual-attention generation countermeasure network for low-dose CT image denoising and artifact removal |
CN112233026A (en) * | 2020-09-29 | 2021-01-15 | 南京理工大学 | SAR image denoising method based on multi-scale residual attention network |
- 2021-10-19: application CN202111218720.1A filed in China; granted as CN113935467B (status: Active)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180240219A1 (en) * | 2017-02-22 | 2018-08-23 | Siemens Healthcare Gmbh | Denoising medical images by learning sparse image representations with a deep unfolding approach |
CN110648334A (en) * | 2019-09-18 | 2020-01-03 | 中国人民解放军火箭军工程大学 | Multi-feature cyclic convolution saliency target detection method based on attention mechanism |
CN111968195A (en) * | 2020-08-20 | 2020-11-20 | 太原科技大学 | Dual-attention generation countermeasure network for low-dose CT image denoising and artifact removal |
CN112233026A (en) * | 2020-09-29 | 2021-01-15 | 南京理工大学 | SAR image denoising method based on multi-scale residual attention network |
Non-Patent Citations (2)
Title |
---|
田雅男; 李月; 林红波; 吴宁: "Application of GNMF wavelet spectrum separation in noise suppression for seismic exploration", Chinese Journal of Geophysics (地球物理学报), no. 12, 31 December 2015 (2015-12-31) *
董新桐; 李月; 刘飞; 冯黔堃; 钟铁: "A new random-noise suppression technique for downhole distributed-optical-fiber seismic data based on convolutional neural networks", Chinese Journal of Geophysics (地球物理学报), vol. 64, no. 7, 7 July 2021 (2021-07-07) *
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114879252A (en) * | 2022-07-11 | 2022-08-09 | 中国科学院地质与地球物理研究所 | DAS (data acquisition system) same-well monitoring real-time microseism effective event identification method based on deep learning |
CN114879252B (en) * | 2022-07-11 | 2022-09-13 | 中国科学院地质与地球物理研究所 | DAS (data acquisition system) same-well monitoring real-time microseism effective event identification method based on deep learning |
CN115622626A (en) * | 2022-12-20 | 2023-01-17 | 山东省科学院激光研究所 | Distributed sound wave sensing voice information recognition system and method |
CN116594061A (en) * | 2023-07-18 | 2023-08-15 | 吉林大学 | Seismic data denoising method based on multi-scale U-shaped attention network |
CN116594061B (en) * | 2023-07-18 | 2023-09-22 | 吉林大学 | Seismic data denoising method based on multi-scale U-shaped attention network |
CN116977651A (en) * | 2023-08-28 | 2023-10-31 | 河北师范大学 | Image denoising method based on double-branch and multi-scale feature extraction |
CN116977651B (en) * | 2023-08-28 | 2024-02-23 | 河北师范大学 | Image denoising method based on double-branch and multi-scale feature extraction |
CN117675112A (en) * | 2024-02-01 | 2024-03-08 | 阳光凯讯(北京)科技股份有限公司 | Communication signal processing method, system, equipment and medium based on machine learning |
CN117675112B (en) * | 2024-02-01 | 2024-05-03 | 阳光凯讯(北京)科技股份有限公司 | Communication signal processing method, system, equipment and medium based on machine learning |
Also Published As
Publication number | Publication date |
---|---|
CN113935467B (en) | 2024-05-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113935467A (en) | DAS (data acquisition System) borehole exploration data noise suppression method based on iterative multi-scale attention network | |
CN110058305A (en) | A kind of DAS seismic data noise-reduction method based on convolutional neural networks | |
CN108845352B (en) | Desert Denoising of Seismic Data method based on VMD approximate entropy and multi-layer perception (MLP) | |
CN112946749B (en) | Method for suppressing seismic multiples based on data augmentation training deep neural network | |
CN113687414B (en) | Data-augmentation-based seismic interbed multiple suppression method for convolutional neural network | |
CN108985304B (en) | Automatic sedimentary layer structure extraction method based on shallow profile data | |
CN107144879A (en) | Seismic wave noise reduction method combining adaptive filtering with wavelet transform | |
CN115877461A (en) | Desert earthquake noise suppression method based on multi-scale attention interaction network | |
CN104849757A (en) | System and method for eliminating random noise in seismic signals | |
CN115905805A (en) | DAS data multi-scale noise reduction method based on a GAN with global-information discrimination | |
CN111708087A (en) | Method for suppressing seismic data noise based on DnCNN neural network | |
Li et al. | Distributed acoustic sensing vertical seismic profile data denoising based on multistage denoising network | |
CN117631028A (en) | Low-frequency reconstruction method for seismic data of multi-scale global information fusion neural network | |
CN115346112A (en) | Seismic data oil pumping unit noise suppression method based on multilayer feature fusion | |
CN109212608B (en) | Borehole microseismic signal denoising method based on 3D shearlet transform | |
CN116184502A (en) | Underground DAS noise suppression method based on subspace projection attention network | |
CN115236733A (en) | DAS-VSP data background noise suppression method based on deep learning | |
Ma et al. | A Global and Multi-Scale Denoising Method Based on Generative Adversarial Network for DAS VSP Data | |
CN115561817A (en) | Desert earthquake denoising method based on multiple attention mechanism | |
Saad et al. | Signal Enhancement in Distributed Acoustic Sensing Data Using a Guided Unsupervised Deep Learning Network | |
CN112213785B (en) | Seismic data desert noise suppression method based on feature-enhanced denoising network | |
CN113093282A (en) | Desert data denoising method based on geometric modal characteristic parallel network | |
CN109085649B (en) | Seismic data denoising method based on wavelet transformation optimization | |
Jinhuan et al. | Research on Application of Deep Learning Algorithm in Earthquake Noise Reduction | |
Saad et al. | Noise Attenuation in Distributed Acoustic Sensing Data Using a Guided Unsupervised Deep Learning Network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||