CN114662045B - Multi-dimensional seismic data denoising method based on p-order tensor deep learning of frame set - Google Patents

Multi-dimensional seismic data denoising method based on p-order tensor deep learning of frame set

Info

Publication number: CN114662045B
Authority: CN (China)
Application number: CN202210292980.1A
Other versions: CN114662045A (in Chinese)
Prior art keywords: frame, denoising, tensor, seismic data
Inventors: 钱峰, 郑丙伟, 王彦, 刘章波, 李惠敏, 胡光岷
Applicant and assignee: University of Electronic Science and Technology of China
Legal status: Active (granted)

Classifications

    • G06F17/14 — Fourier, Walsh or analogous domain transformations, e.g. Laplace, Hilbert, Karhunen-Loeve transforms
    • G06F17/15 — Correlation function computation including computation of convolution operations
    • G06F17/16 — Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G01V1/28 — Processing seismic data, e.g. for interpretation or for event detection
    • G01V1/36 — Effecting static or dynamic corrections on records; correlating seismic signals; eliminating effects of unwanted energy
    • G06N3/045 — Combinations of networks
    • G06N3/088 — Non-supervised learning, e.g. competitive learning


Abstract

The invention discloses a multi-dimensional seismic data denoising method based on p-order tensor deep learning of a frame set, which is applied to the field of seismic data denoising and aims to solve the problem that, in the prior art, the complete image structure cannot be captured when high-dimensional seismic data are processed in the absence of flattening operations. First, exploiting the advantages of the frame transform over the Fourier transform, the Fourier transform is replaced by the frame transform and a new definition of the p-order tensor-tensor product is given. Then, the p-order tNN framework extends the tNN of the standard tensor product directly to M-D seismic denoising by redefining the p-order tensor product. Using the property that the p-order tensor product can be computed by matrix multiplication in the frame domain, the optimal weighting parameters in the FPTNN are solved through deep learning (DL) on a set of transformed matrix slices.

Description

Multi-dimensional seismic data denoising method based on p-order tensor deep learning of frame set
Technical Field
The invention belongs to the field of seismic data processing, and particularly relates to a seismic data denoising technology.
Background
Seismic data denoising is a classical but still active topic in seismic data processing, as it is an indispensable step for subsequent interpretation work (e.g., delineating real subsurface formations). From a machine learning perspective, seismic data denoising is in principle an underdetermined inverse problem whose solution relies on effective image priors or models. Unlike conventional image denoising tasks, denoising of seismic data is more complex because it must fully process multidimensional (M-D) seismic data (e.g., three-dimensional post-stack data, 4-D pre-stack data, and 5-D wide-azimuth pre-stack data). Therefore, the most important problem faced by seismic data denoising is to reasonably extract prior knowledge of the underlying M-D structure from noisy seismic data, and to make full use of this prior information to suppress noise and recover effective reflections. However, seismic image prior modeling is challenging precisely because of the M-D nature of the seismic image.
The most promising direction in the image prior field is to learn prior knowledge of the underlying structure through various two-dimensional or three-dimensional deep learning (DL) models. However, for high-dimensional seismic data, such as 4-D pre-stack data, these DL denoising schemes cannot capture the complete image structure in the absence of flattening operations. Existing seismic image prior modeling and learning techniques mainly comprise sparse representation models, low-rank models, and deep learning models, each of which has its own problems.
In the seismic literature, there are many seismic image prior modeling and learning techniques, including sparse representation models, low-rank models, and deep learning models. So far, sparse representation has become important prior knowledge for denoising seismic images, and can be further divided into analytic sparse transforms and data-driven sparse coding. In the case of analytic sparse transforms, it is assumed that a clean seismic image can be sparsely represented well under a given analytic transform, while noise cannot. Typical transforms include the Hilbert-Huang, wavelet, curvelet, shearlet, seislet, and Radon transforms, but a single analytic sparse transform cannot sparsely represent all types of image features, which limits denoising quality. In contrast, sparse coding learns a useful sparse representation of the input seismic data as a linear combination of basic elements, i.e., a dictionary, which has the advantage of better fitting the seismic image; examples of such methods include fast, statistical, constrained, multi-task, and dual sparse dictionary learning.
In a narrow sense, the sparse representation model is essentially an efficient and straightforward local image prior, while the low-rank image prior is able to explore global structures hidden in the M-D seismic data (hence the term global image prior). Two types of global image priors are known: matrix-based and tensor-based low-rank approximations (LRAs). The former is based on the idea that M-D seismic data can be stacked into a low-rank matrix. If the seismic image is contaminated with noise, the rank of the matrix will increase; thus, image denoising can be realized by rank-reduction techniques, including general unstructured LRA methods and LRA methods based on structured Hankel matrices. The latter spontaneously concentrates on tensor rather than matrix representations; typical examples include tensor singular value decomposition (t-SVD) and CANDECOMP/PARAFAC (CP) decomposition. However, these methods, including sparse representation and low-rank models, have limited performance because they rely only on hand-crafted image priors, which are sometimes unsatisfactory for data involving complex geologic structures.
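The matrix-based LRA idea above can be sketched in a few lines; the rank-3 synthetic section, the noise level, and the truncated-SVD rank below are illustrative assumptions, not the method of the invention:

```python
import numpy as np

def lra_denoise(Y, rank):
    # Truncated SVD: keep only the leading `rank` singular components.
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 3)) @ rng.standard_normal((3, 64))  # rank-3 "clean" section
Y = X + 0.1 * rng.standard_normal(X.shape)                       # noisy observation
X_hat = lra_denoise(Y, rank=3)
```

Noise spreads its energy over all 64 singular components while the signal lives in the first three, so truncation removes most of the noise energy, which is exactly the rank-reduction argument above.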
As an alternative to hand-crafted image priors, deep prior learning explicitly learns a denoising mapping function from corrupted observations to the latent seismic image, and makes no assumptions during the learning process. Depending on the form of the processing unit, current deep prior learning can generally be divided into two categories: denoising of two-dimensional or of three-dimensional seismic images. The former, inspired by conventional deep image denoising, slices M-D seismic data into matrices or images and then performs image denoising with various deep learning models, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), autoencoders, and U-Net. Clearly, flattening damages the M-D structure and inevitably affects denoising performance. To remedy this deficiency, the latter category uses the original three-dimensional cube as input to better preserve the three-dimensional spatial structure of post-stack seismic data. In this regard, there are two types of 3-D DL denoising methods for finding such deep image priors: supervised 3-D CNNs and unsupervised 3-D autoencoders. However, current three-dimensional DL denoising schemes cannot automatically describe higher-dimensional structures (e.g., 4-D pre-stack data), which may make them ineffective for the M-D seismic data denoising task.
Disclosure of Invention
To solve this problem, the present invention proposes a frame-based p-order tensor neural network (FPTNN for short) model to learn data-driven high-dimensional prior knowledge reflecting the typical behavior of clean M-D seismic images.
The invention adopts the following technical scheme: a multidimensional seismic data denoising method based on p-order tensor deep learning of a frame set, which denoises the M-D seismic data to be processed with a p-order tensor neural network; the network operates on p-order tensors through the frame-based p-order tensor product, and its output is the denoised seismic data.
The frame representation of the p-order tensor is expressed as:
$$\mathcal{X}_W = \mathcal{X} \times_p W$$
wherein $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_p}$ is the M-D seismic tensor, $\times_p$ denotes multiplication along the p-th mode, and $W \in \mathbb{R}^{\omega_p \times n_p}$ is the frame transformation matrix. W comprises $n_p$ filters and $l$ intermediate layers, $\mathbb{R}$ represents the set of real numbers, and $\omega_p = (n_p - 1)l + 1$.
The loss function during training of the neural network is expressed as:
$$\mathcal{L}(\Theta) = \frac{1}{N} \sum_{i=1}^{N} \rho\left(f(\mathcal{Y}_i; \Theta), \mathcal{Y}_i\right)$$
wherein $\rho(\cdot)$ represents an unsupervised objective function, $f(\mathcal{Y}_i; \Theta)$ represents the denoised M-D image for the i-th training sample, $\mathcal{Y}_i$ represents the noisy M-D seismic image in the training samples, $\Theta$ represents the weights, and N represents the number of training samples.
The expression of $\rho(\cdot)$ is:
$$\rho\left(f(\mathcal{Y}_i; \Theta), \mathcal{Y}_i\right) = \frac{1}{K}\left\|\mathcal{Y}_i - f(\mathcal{Y}_i; \Theta)\right\|_2^2 - \sigma^2 + \frac{2\sigma^2}{K}\,\mathrm{tr}\!\left(\frac{\partial f(\mathcal{Y}_i; \Theta)}{\partial \mathcal{Y}_i}\right)$$
wherein K represents a positive integer (the number of data samples), $\mathcal{Y}_i$ represents a noisy M-D seismic image, and $\sigma^2$ represents the noise variance.
The invention has the following beneficial effects. First, exploiting the advantages of the frame transform over the Fourier transform, the method replaces the Fourier transform with a frame transform and gives a new definition of the p-order tensor-tensor product (t-product). Then, the p-order tNN framework extends the tNN of the standard tensor product directly to M-D seismic denoising by redefining the p-order tensor product. Using the property that the p-order tensor product can be computed by matrix multiplication in the frame domain, the optimal weight parameters $\Theta$ of the FPTNN are solved through deep learning on a set of transformed matrix slices. The method can thus learn data-driven high-dimensional prior knowledge reflecting the typical behavior of clean M-D seismic images, and can make full use of this prior information to suppress noise and recover effective reflections.
Drawings
FIG. 1 is a flow chart of a method according to an embodiment of the present invention;
FIG. 2 is a graph showing a comparison of denoising effects of different slices in an example of a synthetic three-dimensional physical model according to an embodiment of the present invention;
wherein, (a) is real data, (b) is noisy data, and (c) is denoising data obtained by using the SURE-TCNN algorithm provided by the invention;
FIG. 3 is a graph showing a comparison of denoising effects of different slices in a synthesized five-dimensional data example provided by an embodiment of the present invention;
wherein (a) is the noisy data; (b), (c), (d), and (e) are the denoising results of the SGK algorithm, the TNN algorithm, the PMF algorithm, and the FPTNN method of the present invention, respectively; (f) is the initially added noise; and (g), (h), (i), and (j) are the noise removed by the SGK algorithm, the TNN algorithm, the PMF algorithm, and the FPTNN method of the present invention, respectively;
FIG. 4 is a graph showing the SNR comparison versus the percentage of lost data after denoising using four different methods, as provided by an embodiment of the present invention;
FIG. 5 shows the four-dimensional synthetic VSP data of the second example provided by an embodiment of the invention;
wherein, (a) is clean xline-yline-timeline dimension data, (b) is noisy xline-yline-timeline dimension data, (c) is clean xline-offset-timeline dimension data, and (d) is noisy xline-offset-timeline dimension data;
FIG. 6 is a denoising comparison of xline-yline-timeline data (offset = 1) in four-dimensional synthetic VSP seismic data provided by an embodiment of the present invention;
wherein (a), (b), (c), and (d) are the denoising results of the SGK algorithm, the TNN algorithm, the PMF algorithm, and the FPTNN algorithm of the present invention, respectively; (e), (f), (g), and (h) are the noise removed by the SGK, TNN, PMF, and FPTNN algorithms, respectively; and (i), (j), (k), and (l) are local similarity maps between the denoised data and the removed noise for the SGK, TNN, PMF, and FPTNN algorithms, respectively;
FIG. 7 is a denoising comparison of xline-offset-timeline data (yline = 1) in four-dimensional synthetic VSP seismic data provided by an embodiment of the present invention;
wherein (a), (b), (c), and (d) are the denoising results of the SGK algorithm, the TNN algorithm, the PMF algorithm, and the FPTNN algorithm of the present invention, respectively; (e), (f), (g), and (h) are the noise removed by the SGK, TNN, PMF, and FPTNN algorithms, respectively; and (i), (j), (k), and (l) are local similarity maps between the denoised data and the removed noise for the SGK, TNN, PMF, and FPTNN algorithms, respectively;
FIG. 8 shows the four-dimensional real WY data of the third example provided by an embodiment of the present invention;
wherein, (a) is noisy xline-yline-timeline data and (b) is noisy xline-offset-timeline data;
FIG. 9 is a denoising comparison of xline-yline-timeline data (offset = 1) in four-dimensional real WY seismic data provided by an embodiment of the present invention;
wherein (a), (b), (c), and (d) are the denoising results of the SGK algorithm, the TNN algorithm, the PMF algorithm, and the FPTNN algorithm of the present invention, respectively; (e), (f), (g), and (h) are the noise removed by the SGK, TNN, PMF, and FPTNN algorithms, respectively; and (i), (j), (k), and (l) are local similarity maps between the denoised data and the removed noise for the SGK, TNN, PMF, and FPTNN algorithms, respectively;
FIG. 10 is a denoising comparison of xline-offset-timeline data (yline = 1) in four-dimensional real WY seismic data provided by an embodiment of the present invention;
wherein (a), (b), (c), and (d) are the denoising results of the SGK algorithm, the TNN algorithm, the PMF algorithm, and the FPTNN algorithm of the present invention, respectively; (e), (f), (g), and (h) are the noise removed by the SGK, TNN, PMF, and FPTNN algorithms, respectively; and (i), (j), (k), and (l) are local similarity maps between the denoised data and the removed noise for the SGK, TNN, PMF, and FPTNN algorithms, respectively.
Detailed Description
The present invention will be further explained below with reference to the drawings, in order to facilitate the understanding of the technical content of the present invention by those skilled in the art.
P-order tensor product and frame transformation definition
Three block-based operators are introduced, namely circ(·), unfold(·), and fold(·). For $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_p}$ with mode-p slices $\mathcal{X}^{(1)}, \mathcal{X}^{(2)}, \ldots, \mathcal{X}^{(n_p)}$, circ(·) forms the block-circulant pattern:
$$\mathrm{circ}(\mathcal{X}) = \begin{bmatrix} \mathcal{X}^{(1)} & \mathcal{X}^{(n_p)} & \cdots & \mathcal{X}^{(2)} \\ \mathcal{X}^{(2)} & \mathcal{X}^{(1)} & \cdots & \mathcal{X}^{(3)} \\ \vdots & \vdots & \ddots & \vdots \\ \mathcal{X}^{(n_p)} & \mathcal{X}^{(n_p-1)} & \cdots & \mathcal{X}^{(1)} \end{bmatrix}$$
wherein $\mathcal{X}$ represents the input, $\mathbb{R}$ represents the set of real numbers, and $n_1, n_2, \ldots, n_p$, arbitrary positive integers, are the sizes of each dimension.
In general, the folding and unfolding operations are defined as follows:
$$\mathrm{unfold}(\mathcal{X}) = \begin{bmatrix} \mathcal{X}^{(1)} \\ \mathcal{X}^{(2)} \\ \vdots \\ \mathcal{X}^{(n_p)} \end{bmatrix}, \qquad \mathrm{fold}(\mathrm{unfold}(\mathcal{X})) = \mathcal{X}$$
wherein $n_1, n_2, \ldots, n_{p-1}$ represent the sizes of each dimension. Using the block-based operators, the following definitions may be expressed:
Definition 1 (time-domain p-order tensor product): Suppose $\mathcal{A}$ is an $n_1 \times r \times n_3 \times \cdots \times n_p$ tensor and $\mathcal{B}$ is an $r \times n_2 \times n_3 \times \cdots \times n_p$ tensor; the tensor product $\mathcal{C} = \mathcal{A} * \mathcal{B}$ is a p-order tensor of size $n_1 \times n_2 \times n_3 \times \cdots \times n_p$, defined as:
$$\mathcal{A} * \mathcal{B} = \mathrm{fold}\left(\mathrm{circ}(\mathcal{A}) \cdot \mathrm{unfold}(\mathcal{B})\right)$$
wherein r represents an arbitrary positive integer.
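For the third-order case (p = 3), Definition 1 can be sketched directly from the block operators above; NumPy and the concrete sizes are illustrative assumptions:

```python
import numpy as np

def unfold(A):
    # Stack the n3 frontal slices of A (n1 x n2 x n3) into an (n1*n3) x n2 block column.
    n1, n2, n3 = A.shape
    return A.transpose(2, 0, 1).reshape(n1 * n3, n2)

def fold(M, shape):
    # Inverse of unfold.
    n1, n2, n3 = shape
    return M.reshape(n3, n1, n2).transpose(1, 2, 0)

def circ(A):
    # Block-circulant matrix whose (i, j) block is the ((i - j) mod n3)-th frontal slice.
    n1, n2, n3 = A.shape
    C = np.zeros((n1 * n3, n2 * n3))
    for i in range(n3):
        for j in range(n3):
            C[i * n1:(i + 1) * n1, j * n2:(j + 1) * n2] = A[:, :, (i - j) % n3]
    return C

def t_product(A, B):
    # Definition 1: A * B = fold(circ(A) . unfold(B)).
    n1, r, n3 = A.shape
    _, n2, _ = B.shape
    return fold(circ(A) @ unfold(B), (n1, n2, n3))

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3, 5))
B = rng.standard_normal((3, 2, 5))
C = t_product(A, B)  # 4 x 2 x 5 tensor
```

Because a block-circulant matrix is diagonalized by the DFT, this time-domain product agrees with the slice-wise product in the Fourier domain, which is the content of Definition 2 below.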
Definition 2 (frequency-domain p-order tensor product): According to the convolution theorem, the tensor product can be converted into slice-wise matrix multiplication in the frequency domain:
$$\hat{\mathcal{C}}^{(i)} = \hat{\mathcal{A}}^{(i)}\,\hat{\mathcal{B}}^{(i)}, \qquad i = 1, \ldots, n_p$$
wherein $\hat{\mathcal{A}}$ and $\hat{\mathcal{B}}$ denote the Fourier transforms of $\mathcal{A}$ and $\mathcal{B}$ along the p-th mode.
Definition 3: (tensor SVD) tensor SVD
Figure SMS_21
By->
Figure SMS_22
Give out (I)>
Wherein,
Figure SMS_23
and->
Figure SMS_24
Is the orthogonal tensor, < >>
Figure SMS_25
Is a rectangular f-diagonal tensor, and his slice is a diagonal matrix.
In the present invention, M-D seismic data is naturally represented as a p-order tensor, taking into account the complex relationships between the dimensions. Thus, to process a p-order seismic tensor, its p-order tensor product must be relied upon, which plays a critical role in current FPTNN models.
One of the starting points of the invention is frame transformation theory, which allows better time-frequency localization than many global transforms. A tight frame is defined as a countable set $X \subset L_2(\mathbb{R})$ with the property that
$$f = \sum_{g \in X} \langle f, g \rangle g, \qquad \forall f \in L_2(\mathbb{R}) \qquad (5)$$
wherein $\langle \cdot, \cdot \rangle$ is the inner product of $L_2(\mathbb{R})$, and $L_2(\cdot)$ represents the space of square-integrable functions. This is equivalent to:
$$\|f\|_2^2 = \sum_{g \in X} \left|\langle f, g \rangle\right|^2, \qquad \forall f \in L_2(\mathbb{R})$$
For a given set $\Psi = \{\psi_1, \psi_2, \ldots, \psi_r\} \subset L_2(\mathbb{R})$, an affine (or wavelet) system is defined by the set of scales and time shifts of $\Psi$:
$$X(\Psi) = \left\{\psi_{l,j,k} : 1 \le l \le r;\; j, k \in \mathbb{Z}\right\}$$
wherein $\psi_{l,j,k} := 2^{j/2}\,\psi_l(2^j \cdot - k)$. When $X(\Psi)$ forms a tight frame of $L_2(\mathbb{R})$, it is called a tight wavelet frame, and $\psi_l$, $l = 1, 2, \ldots, r$, are called (tight) framelets.
In numerical schemes for image processing, the frame transformation of a vector $v \in \mathbb{R}^{n}$ can be performed using a matrix $W \in \mathbb{R}^{\omega \times n}$. This matrix comprises n filters and l intermediate layers, where $\omega = (n - 1)l + 1$. After W is obtained, the frame transformation of a discrete signal $v$ can be written as:
$$\alpha = W v$$
Furthermore, the Unitary Extension Principle (UEP) states that $W^{T} W v = v$, where $W^{T}$ represents the inverse frame transformation.
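A tiny numerical check of the perfect-reconstruction property $W^{T}Wv = v$ can be made with the classical Mercedes-Benz tight frame for $\mathbb{R}^2$; this toy frame and NumPy are illustrative assumptions, not the framelet filter bank used by the invention:

```python
import numpy as np

# Mercedes-Benz tight frame: three unit vectors at 120-degree angles,
# scaled by sqrt(2/3) so that W^T W = I (the UEP condition).
W = np.sqrt(2.0 / 3.0) * np.array([
    [1.0, 0.0],
    [-0.5, np.sqrt(3.0) / 2.0],
    [-0.5, -np.sqrt(3.0) / 2.0],
])

v = np.array([0.7, -1.3])
alpha = W @ v        # redundant frame coefficients (3 numbers for a 2-vector)
v_rec = W.T @ alpha  # inverse frame transform W^T recovers v exactly
```

Like the framelet matrix W of the text, this W is tall (ω > n): the representation is redundant, yet the transpose inverts it exactly.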
Problem modeling
The goal of seismic image denoising, a classical problem in seismic data processing, can be expressed as:
$$\mathcal{Y} = \mathcal{X} + \mathcal{N} \qquad (8)$$
wherein $\mathcal{Y} \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_p}$ is the noisy M-D seismic image formed by mixing the original M-D image $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_p}$ with some noise $\mathcal{N} \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_p}$.
Model-based noise mitigation methods attempt to make the estimate $\hat{\mathcal{X}}$ in equation (8) as close as possible to $\mathcal{X}$. In general, it is assumed that $\mathcal{X}$ results from a well-defined process, or that $\mathcal{X}$ typically has a compact structure called an image prior. Considering that the seismic image denoising task always treats $\mathcal{N}$ as random noise, modeling $\mathcal{X}$ with a particular structure leads to the optimization goal:
$$\hat{\mathcal{X}} = \arg\min_{\mathcal{X}} \frac{1}{2}\left\|\mathcal{Y} - \mathcal{X}\right\|_F^2 + \lambda\,\Phi(\mathcal{X}) \qquad (9)$$
wherein $\Phi(\cdot)$ represents a regularization function on the potential M-D image $\mathcal{X}$ that is used to guide the solution $\hat{\mathcal{X}}$; for example, if equation (9) is a low-rank model, then $\Phi(\mathcal{X}) = \mathrm{rank}(\mathcal{X})$. It is evident from equation (9) that the denoising performance largely depends on the regularization function $\Phi(\cdot)$, i.e., the image prior.
With the recent development of deep learning, denoising methods based on deep architectures are promising. They readily outperform conventional denoising methods and place fewer restrictions on the specification of the noise-generation process or the M-D clean image structure. In this case, the denoiser can generally be modeled as a parameterized function:
$$\hat{\mathcal{X}} = f(\mathcal{Y}; \Theta) \qquad (10)$$
wherein $f(\cdot; \Theta)$ is a DL (Deep Learning)-based denoiser parameterized by a large-scale weight set $\Theta$, and the weights $\Theta$ are determined by training to minimize the associated loss function (equation 13). The training here corresponds to step 2 in FIG. 1.
Then, substituting equation (10) into (9) while omitting the regularization function $\Phi(\cdot)$, a cost function for the erroneous denoising result can be obtained, which quantifies the error between the output of the seismic M-D image denoiser $f(\mathcal{Y}_i; \Theta)$ and the ground-truth seismic image $\mathcal{X}_i$:
$$\mathcal{L}(\Theta) = \frac{1}{N} \sum_{i=1}^{N}\left\|f(\mathcal{Y}_i; \Theta) - \mathcal{X}_i\right\|_F^2 \qquad (11)$$
given training data set samples
Figure SMS_65
Gradient descent is used to find the cost function +.>
Figure SMS_66
The most common optimization algorithm for the smallest parameter Θ; typical gradient descent algorithms include random gradient descent (SGD), nesterov momentum, or adaptive moment estimation (Adam) optimization algorithms. Thus, due to the elimination of the regularization function +.>
Figure SMS_67
The performance of the DL-based denoising method is determined only by the above-described optimization procedure.
However, the performance of supervised denoising largely depends on ground-truth data, which is either unreliable or unavailable in real-world seismic denoising. To better address the lack of ground-truth data that occurs in practice, the present invention rewrites equation (11) with a slightly different notation, resulting in equation (12); specifically, an unsupervised objective function $\rho(\cdot)$ is used in place of the supervised residual:
$$\mathcal{L}(\Theta) = \frac{1}{N} \sum_{i=1}^{N} \rho\left(f(\mathcal{Y}_i; \Theta), \mathcal{Y}_i\right) \qquad (12)$$
where $\rho(\cdot)$ represents an unsupervised objective function such as Stein's Unbiased Risk Estimator (SURE) or a partially linear denoiser. The invention introduces a SURE-based objective function and uses it to accurately describe the criterion for evaluating solutions under unsupervised learning, defined as follows:
$$\rho\left(f(\mathcal{Y}_i; \Theta), \mathcal{Y}_i\right) = \frac{1}{K}\left(\mathcal{Y}_i - f(\mathcal{Y}_i; \Theta)\right)^{T}\left(\mathcal{Y}_i - f(\mathcal{Y}_i; \Theta)\right) - \sigma^2 + \frac{2\sigma^2}{K}\,\mathrm{tr}\!\left(\frac{\partial f(\mathcal{Y}_i; \Theta)}{\partial \mathcal{Y}_i}\right) \qquad (13)$$
where T is the transpose operator, K is the number of data samples, $\sigma^2$ is the noise variance, and tr(·) represents the trace of the matrix. Because SURE provides an accurate unbiased estimate of the MSE, training the SURE-TCNN using the SURE loss function can provide results similar to those obtained using the ground-truth image MSE in a supervised manner.
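The SURE objective of equation (13) can be sketched numerically; the toy linear shrinkage denoiser, the noise level, and the single-probe Monte Carlo estimate of the trace term are illustrative assumptions (the invention trains a network rather than a fixed shrinkage):

```python
import numpy as np

def sure_loss(y, f, sigma, eps=1e-3, rng=None):
    # SURE = (1/K)||y - f(y)||^2 - sigma^2 + (2*sigma^2/K) * tr(df/dy),
    # with the trace (divergence) term estimated by one Monte Carlo probe u.
    rng = rng or np.random.default_rng(0)
    K = y.size
    fy = f(y)
    u = rng.standard_normal(y.shape)
    div = u.ravel() @ (f(y + eps * u) - fy).ravel() / eps
    return np.sum((y - fy) ** 2) / K - sigma**2 + 2.0 * sigma**2 * div / K

# SURE tracks the true MSE without ever touching the clean image x.
rng = np.random.default_rng(1)
sigma = 0.5
x = rng.standard_normal(10000)               # "clean" signal (used only to check)
y = x + sigma * rng.standard_normal(10000)   # noisy observation
shrink = lambda v: 0.8 * v                   # toy linear denoiser
sure = sure_loss(y, shrink, sigma)
mse = np.mean((shrink(y) - x) ** 2)
```

Since SURE is an unbiased estimate of the MSE, `sure` and `mse` agree closely here even though `sure` never sees `x`, which is exactly why it can replace the supervised loss (11).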
The algorithm process comprises the following steps: initialize the network weights $\Theta$; in each iteration, compute the network output $f(\mathcal{Y}; \Theta)$ for the noisy M-D seismic tensor $\mathcal{Y}$, evaluate the SURE loss (13), and update $\Theta$ by gradient descent until convergence; finally, output the denoised data $\hat{\mathcal{X}} = f(\mathcal{Y}; \Theta)$. In the algorithm, $f(\cdot; \Theta)$ represents the neural network.
FFT to frame conversion
For an M-D seismic tensor $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_p}$, its p-order tensor product can be computed by the FFT, as shown in Definition 2. The Fourier-transformed tensor of $\mathcal{X}$ can be obtained as:
$$\hat{\mathcal{X}} = \mathcal{X} \times_p F_{n_p} \qquad (14)$$
wherein $\mathcal{X} \times_p F_{n_p}$ denotes multiplying the mode-p expansion of $\mathcal{X}$ by $F_{n_p}$, and $F_{n_p}$ is the FFT matrix of size $n_p \times n_p$.
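The mode-p FFT of equation (14) amounts to multiplying along the last tensor mode by the DFT matrix $F_{n_p}$, which NumPy computes directly with np.fft.fft; the mode-product helper below is an illustrative assumption:

```python
import numpy as np

def mode_p_product(X, M):
    # Multiply tensor X along its last (p-th) mode by the matrix M:
    # result[..., j] = sum_k M[j, k] * X[..., k].
    return np.tensordot(X, M, axes=([X.ndim - 1], [1]))

n_p = 8
F = np.fft.fft(np.eye(n_p))               # the n_p x n_p DFT matrix F_{n_p}
rng = np.random.default_rng(2)
X = rng.standard_normal((3, 4, 5, n_p))   # a 4th-order seismic-like tensor
X_hat = mode_p_product(X, F)              # X x_p F_{n_p}
```

The same helper works for any transform matrix along the last mode, which is what makes the FFT-to-frame substitution below a one-line change.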
The invention can then replace the FFT with a frame transform, resulting in the definition of a frame representation of the p-order tensor product. After substitution, the invention expresses the M-D tensor as:
$$\mathcal{X}_W = \mathcal{X} \times_p W \qquad (15)$$
wherein $W \in \mathbb{R}^{\omega_p \times n_p}$ comprises $n_p$ filters and $l$ intermediate layers, with $\omega_p = (n_p - 1)l + 1$. Considering the UEP (Unitary Extension Principle) property of the frame transformation, one obtains:
$$\mathcal{X}_W \times_p W^{T} = \mathcal{X}$$
wherein $\mathcal{X}_W$ represents the M-D tensor after the FFT has been replaced by the frame transform.
According to Definitions 2 and 3, the p-order tensor product is defined as the matrix product of slices in the Fourier transform domain. Thus, by replacing the FFT with a frame, the frame-based p-order tensor product is defined as follows:
Definition 4 (frame-based p-order tensor product in the time domain): Suppose $\mathcal{A}$ is an $n_1 \times r \times n_3 \times \cdots \times n_p$ tensor and $\mathcal{B}$ is an $r \times n_2 \times n_3 \times \cdots \times n_p$ tensor; the tensor product $\mathcal{C} = \mathcal{A} *_W \mathcal{B}$ is a p-order tensor of size $n_1 \times n_2 \times n_3 \times \cdots \times n_p$, defined as:
$$\mathcal{A} *_W \mathcal{B} = \left((\mathcal{A} \times_p W) \,\Delta\, (\mathcal{B} \times_p W)\right) \times_p W^{T} \qquad (16)$$
Here, $*_W$ is a custom multiplication in the present invention representing the tensor product in the frame domain, $\Delta$ denotes slice-wise matrix multiplication, and the result of the multiplication is the right-hand side of the equation. The frame transformation matrix W is used in the computation of $*_W$; as those skilled in the art will recognize, the frame transformation matrix W is a constant matrix.
Combining Definition 2.1 and Definition 3 in Section II-A of C. Lu, X. Peng, and Y. Wei, "Low-rank tensor completion with a new tensor nuclear norm induced by invertible linear transforms," in Proc. IEEE/CVF Conf. Comput. Vision Pattern Recognit. (CVPR), 2019, pp. 5996-6004, the frame-based p-order tensor product can be expressed as a matrix product of slices in the frame domain rather than the Fourier domain, defined as:
Definition 5 (p-order tensor product in the frame domain): According to the convolution theorem, the tensor product (16) can be converted into matrix multiplication in the frame domain:
$$\tilde{\mathcal{C}}^{(i)} = \tilde{\mathcal{A}}^{(i)}\,\tilde{\mathcal{B}}^{(i)}, \qquad i = 1, \ldots, \omega_p \qquad (17)$$
wherein $\tilde{\mathcal{A}} = \mathcal{A} \times_p W$ represents the frame-domain representation of $\mathcal{A}$, and $\tilde{\mathcal{B}} = \mathcal{B} \times_p W$ represents the frame-domain representation of $\mathcal{B}$.
Using the frame-based p-order tensor product in definition 4 and definition 5, the FPTNN, which will be described later, can process the M-D seismic tensor in the frame domain.
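Definitions 4 and 5 can be sketched for the third-order case (p = 3), with a square orthogonal matrix standing in for the frame transform W (a square orthogonal W satisfies $W^{T}W = I$, the UEP condition, exactly); the sizes and the orthogonal stand-in are illustrative assumptions:

```python
import numpy as np

def t_product_W(A, B, W):
    # Frame-domain p-order t-product (p = 3): transform the third mode by W,
    # multiply matching slices (Definition 5), then invert with W^T.
    Aw = np.tensordot(A, W, axes=([2], [1]))       # A x_3 W
    Bw = np.tensordot(B, W, axes=([2], [1]))       # B x_3 W
    Cw = np.einsum('ijk,jlk->ilk', Aw, Bw)         # slice-wise matrix products
    return np.tensordot(Cw, W.T, axes=([2], [1]))  # back-transform with W^T

rng = np.random.default_rng(3)
W, _ = np.linalg.qr(rng.standard_normal((5, 5)))  # orthogonal stand-in for the frame
A = rng.standard_normal((4, 3, 5))
# Identity tensor for *_W: every frame-domain slice is the 3 x 3 identity.
I_w = np.repeat(np.eye(3)[:, :, None], 5, axis=2)
I = np.tensordot(I_w, W.T, axes=([2], [1]))
C = t_product_W(A, I, W)  # multiplying by the identity tensor reproduces A
```

The check with the identity tensor confirms that the transform-multiply-invert pipeline is consistent: $\mathcal{A} *_W \mathcal{I} = \mathcal{A}$ whenever W satisfies the UEP condition.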
Multi-dimensional tensor neural network based on framework
In the present invention, the frame-based p-order tensor product is fitted explicitly into a tensor neural network structure, and the final model is called FPTNN. In this model, note first that for a conventional tNN the input data is always a third-order tensor; the present invention replaces the third-order tensor product with the p-order tensor product, which is the most significant difference between FPTNN and a common tNN. In addition, the frame is a better choice than the FFT when handling denoising tasks in the transform domain.
Definition 6 (tensor neural network based on the frame p-order tensor product): Suppose the tensors $\mathcal{X}^{(k)}$, $\mathcal{W}^{(k)}$, and $\mathcal{B}^{(k)}$ are real p-order tensors of compatible sizes denoting the layer input, weights, and biases, respectively. Accordingly, the present invention defines tensor forward propagation as:
$$\mathcal{X}^{(k+1)} = \sigma\left(\mathcal{W}^{(k)} *_W \mathcal{X}^{(k)} + \mathcal{B}^{(k)}\right) \qquad (18)$$
wherein $\mathcal{W}^{(k)}$, a real tensor, represents the weight set; $\mathcal{B}^{(k)}$, a real tensor, represents the bias set; $\mathcal{X}^{(k)}$ denotes the k-th layer input; and $\sigma(\cdot)$ is the activation function.
Note that if the tNN has multiple layers, the middle layers beyond the input and output layers are considered hidden layers; their computation follows equation (18). The output of the network can be expressed as:
$$\hat{\mathcal{X}} = \mathcal{X}^{(L)} \qquad (19)$$
wherein $\hat{\mathcal{X}}$ represents the result after denoising through the neural network, and $\mathcal{X}^{(L)}$ is the last layer of the FPTNN network. The main challenge in solving the parameters $\Theta$ (i.e., $\{\mathcal{W}^{(k)}, \mathcal{B}^{(k)}\}$ in equation (19)) is how to handle the p-order tensor product operations involved in an optimal Tensor Back Propagation (TBP) algorithm; after all, the existing TBP algorithm is based on third-order tensor operations.
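One forward step of equation (18) can be sketched for p = 3; the ReLU activation, the orthogonal stand-in for the frame matrix, and all tensor sizes are illustrative assumptions:

```python
import numpy as np

def t_product_W(A, B, W):
    # Frame-domain t-product for third-order tensors (see Definitions 4 and 5).
    Aw = np.tensordot(A, W, axes=([2], [1]))
    Bw = np.tensordot(B, W, axes=([2], [1]))
    Cw = np.einsum('ijk,jlk->ilk', Aw, Bw)
    return np.tensordot(Cw, W.T, axes=([2], [1]))

def fptnn_layer(X, Wk, Bk, W):
    # Equation (18): X^{k+1} = sigma(W^{k} *_W X^{k} + B^{k}), with ReLU as sigma.
    return np.maximum(t_product_W(Wk, X, W) + Bk, 0.0)

rng = np.random.default_rng(1)
W, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # orthogonal stand-in for the frame
X = rng.standard_normal((5, 6, 4))                # layer input (n1 x n2 x n3)
Wk = 0.1 * rng.standard_normal((3, 5, 4))         # weight tensor W^{k}
Bk = np.zeros((3, 6, 4))                          # bias tensor B^{k}
X_next = fptnn_layer(X, Wk, Bk, W)                # layer output (3 x 6 x 4)
```

Note that, exactly as in an ordinary dense layer, the weight tensor maps the first mode from size 5 to size 3 while the remaining modes pass through the frame-domain slice products unchanged.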
Frame-based p-th order TBP
To optimize the p-order tNN model (19), the invention proposes a TBP algorithm, referred to as p-order TBP, that minimizes the function of Θ
Figure SMS_106
. That is, the invention must derive a p-order tensor back-propagation rule to adjust the parameters Θ of the tNN network. First, it is necessary to determine how gradient descent updates
Figure SMS_107
and
Figure SMS_108
:
Figure SMS_109
Figure SMS_110
Wherein,
Figure SMS_111
denotes the weights of the
Figure SMS_112
-th intermediate layer,
Figure SMS_113
denotes the bias of the
Figure SMS_114
-th intermediate layer, and α is the learning rate, an adjustable parameter of the optimization algorithm that determines the step size of each iteration. These two formulas provide two key insights into the effect of gradient descent.
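The two update formulas amount to plain gradient descent on the weights and biases with step size α; a minimal sketch, with the parameter names purely illustrative:

```python
import numpy as np

def sgd_step(params, grads, alpha):
    """One gradient-descent update: theta <- theta - alpha * grad(theta),
    applied to every weight and bias tensor in the parameter set."""
    return {k: params[k] - alpha * grads[k] for k in params}

params = {'W1': np.ones((2, 2)), 'b1': np.zeros(2)}
grads  = {'W1': 0.5 * np.ones((2, 2)), 'b1': np.ones(2)}
new = sgd_step(params, grads, alpha=0.1)
print(new['W1'][0, 0], new['b1'][0])
```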
The first key step is to calculate the partial derivative of
Figure SMS_115
, whose specific mathematical form is as follows:
Figure SMS_116
wherein,
Figure SMS_117
represents the input, and
Figure SMS_118
represents the reconstruction of
Figure SMS_119
;
Consider first
Figure SMS_120
; it is easy to calculate, requiring only the differentiation of (12):
Figure SMS_121
where ρ'(·) represents the derivative of the SURE-based objective function ρ(·) with respect to
Figure SMS_122
.
Figure SMS_123
The second key step concerns the calculation of
Figure SMS_124
. This term can be rewritten as:
Figure SMS_125
This further demonstrates the elegance of the t-NN framework of the present invention: existing neural network layers and loss functions extend easily to higher dimensions. It remains to calculate
Figure SMS_126
and
Figure SMS_127
:
Figure SMS_128
wherein
Figure SMS_129
and ρ'(·) is the derivative of the activation function; the bias
Figure SMS_130
in (18) is then differentiated as
Figure SMS_131
With the derivatives above, the key back-propagation results, i.e., those of equation (20), are obtained; Θ is then adjusted according to these back-propagation results so as to minimize the function of Θ.
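The chain of derivatives above mirrors ordinary matrix back-propagation. The sketch below hand-derives the gradients of a toy two-layer network with a ReLU activation and checks one entry against a finite difference; the squared-error loss is a stand-in for the SURE objective, and all shapes are illustrative:

```python
import numpy as np

def forward(x, W1, W2):
    z1 = W1 @ x
    a1 = np.maximum(z1, 0.0)   # activation rho
    y = W2 @ a1
    return z1, a1, y

def loss_and_grads(x, t, W1, W2):
    """Hand-derived backprop: the output delta comes from the loss
    derivative, then delta_l = rho'(z_l) * (W_{l+1}^T delta_{l+1})."""
    z1, a1, y = forward(x, W1, W2)
    loss = 0.5 * np.sum((y - t) ** 2)
    dL_dy = y - t
    dW2 = np.outer(dL_dy, a1)
    delta1 = (W2.T @ dL_dy) * (z1 > 0)   # rho'(z1) for ReLU
    dW1 = np.outer(delta1, x)
    return loss, dW1, dW2

rng = np.random.default_rng(1)
x, t = rng.standard_normal(3), rng.standard_normal(2)
W1, W2 = rng.standard_normal((4, 3)), rng.standard_normal((2, 4))
loss, dW1, dW2 = loss_and_grads(x, t, W1, W2)

# finite-difference probe of one entry of dW1
eps = 1e-6
W1p = W1.copy(); W1p[0, 0] += eps
lp = 0.5 * np.sum((forward(x, W1p, W2)[2] - t) ** 2)
print(dW1.shape)   # (4, 3)
```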
Construction of data sets
The present invention uses two synthetic data sets to evaluate the performance of the method on synthetic data: a simple 5-D synthetic data set and a 4-D synthetic VSP data set. First, the noise-reduction results were evaluated on 5-D synthetic data (H. Wang, W. Chen, Q. Zhang, X. Liu, S. Zu, and Y. Chen, "Fast dictionary learning for high-dimensional seismic reconstruction," IEEE Trans. Geosci. Remote Sens., 2020), although this data set was originally used for 5-D data reconstruction. Here, clean 5-D synthetic data are generated from a Ricker wavelet at a sampling rate of 4 ms and contain 76 seismic traces. To test the performance of the proposed FPTNN, the present invention intentionally adds Gaussian noise of fixed variance to the clean data, giving noisy data with a signal-to-noise ratio of -0.3 dB, as shown in fig. 3 (a) and (f). Notably, the clean data do not participate in the training of the FPTNN network; they are used only to evaluate denoising performance by computing the signal-to-noise ratio. The second example is a 4-D synthetic data set of size 100 x 10. Because a four-dimensional data set is inconvenient to draw, the present invention extracts two three-dimensional cubes (with a common center and a common offset) from the four-dimensional data set for comparison, as shown in FIG. 5.
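A hedged sketch of how such synthetic data can be built: a Ricker wavelet sampled at 4 ms, tiled over 76 traces, with Gaussian noise scaled to a -0.3 dB target SNR. The 30 Hz peak frequency and the flat section layout are assumptions, not values from the text:

```python
import numpy as np

def ricker(f, dt=0.004, n=50):
    """Ricker wavelet with peak frequency f (Hz), sampled at dt (4 ms here)."""
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def add_noise_snr(clean, snr_db, rng):
    """Add white Gaussian noise so that 10*log10(Ps/Pn) equals snr_db."""
    ps = np.mean(clean ** 2)
    pn = ps / (10.0 ** (snr_db / 10.0))
    return clean + rng.standard_normal(clean.shape) * np.sqrt(pn)

rng = np.random.default_rng(0)
w = ricker(30.0)               # 30 Hz peak frequency is an assumption
section = np.tile(w, (76, 1))  # 76 traces, matching the synthetic example
noisy = add_noise_snr(section, -0.3, rng)
print(noisy.shape)             # (76, 50)
```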
The actual work-area data follow. Specifically, the real example is a 4-D data set from a WY survey, with dimensions time, xline, yline, and offset. This data set is corrupted by severe noise, as shown in fig. 8 (a) and (b), which presents a serious challenge to the SURE-TCNN of the present invention and to the SOTA algorithms. Unfortunately, clean reference data are generally not available for the WY data. Thus, in contrast to the synthetic-data experiments, where signal-to-noise values can be calculated, the denoising conclusions here are based solely on visual comparison of the restoration results, the residuals, and the local-similarity values.
Comparison algorithm, evaluation index and parameter setting
All experiments with the SOTA algorithms were run in MATLAB R2016a on a 64-bit PC with an E7-4820 2.00 GHz CPU and 64 GB of memory, while SURE-TCNN was implemented in TensorFlow on the Google Colab platform.
A. Basic algorithm
The low-rank tensor approach offers one solution to the M-D data-denoising problem: clean seismic data form a low-rank p-order tensor, and noise increases the rank of the data tensor. Dictionary learning, in contrast, tackles the problem through data-driven sparse learning. Accordingly, after a comprehensive survey of seismic-data denoising, the following three baseline algorithms are used:
1) TNN: TNN models M-D seismic-data denoising as a low-rank tensor recovery problem and solves it by transform-based tensor nuclear-norm minimization, which is convex but computationally expensive.
2) PMF: PMF and TNN both belong to the low-rank tensor-decomposition family and are sometimes applied in similar settings. However, TNN is based on the recently proposed tSVD, whereas PMF derives from the CP model, so the distinction between the two is evident.
3) SGK: unlike PMF and TNN, SGK performs M-D seismic-image denoising via a sparse representation of the image. The underlying principle is that, under a suitable overcomplete dictionary, clean data admit a sparse representation while noisy data do not.
In the experiments comparing FPTNN with the benchmark methods, parameter tuning has a significant influence on denoising performance. Therefore, to ensure a fair comparison, a careful parameter-tuning process is applied to each benchmark method. For example, to find the tensor rank that maximizes the recovery quality of each method, the low-rank value is swept from 2 to 25 in the embodiment of the present invention.
B. Performance index
Since complete ground-truth data are available in the synthetic-data experiments, the signal-to-noise ratio measures the strength of the denoised image relative to the ground truth. In general, the higher the signal-to-noise ratio, the better the denoising effect. The signal-to-noise ratio is defined as follows:
Figure SMS_132
wherein,
Figure SMS_133
is the ground-truth image and
Figure SMS_134
is the denoised data. In the real work-area experiments, no clean reference data are available for either the SOTA methods or FPTNN. Since local similarity can serve as a criterion for judging whether the removed noise contains useful signal, the present invention uses local similarity to measure the denoising performance of the models, based on the similarity between the denoised data and the removed noise. Local similarity lies between 0 and 1, and smaller values indicate better denoising.
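A crude stand-in for the local-similarity measure is a windowed normalized correlation between a signal and the noise removed from it; the published measure uses shaping regularization, so treat this only as an illustration of the idea that signal leakage drives the value up:

```python
import numpy as np

def local_similarity(a, b, win=5):
    """Windowed |correlation| between traces a and b, one value per window;
    near 0 means the removed noise shares little with the denoised data."""
    n = len(a) - win + 1
    out = np.empty(n)
    for i in range(n):
        wa, wb = a[i:i + win], b[i:i + win]
        denom = np.linalg.norm(wa) * np.linalg.norm(wb)
        out[i] = 0.0 if denom == 0 else abs(float(np.dot(wa, wb))) / denom
    return out

rng = np.random.default_rng(0)
sig = np.sin(np.linspace(0, 8 * np.pi, 200))
noise = 0.3 * rng.standard_normal(200)
s_good = local_similarity(sig, noise)               # removed noise carries no signal
s_bad = local_similarity(sig, noise + 0.5 * sig)    # removed "noise" leaks signal
print(s_good.mean() < s_bad.mean())
```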
C. Parameter setting
Once the network structure of FPTNN is determined, the next task is to set the parameters of the objective function and the training algorithm. According to equation (14), the noise variance σ is the only parameter of the objective function; it is adaptively estimated according to the following definition:
Figure SMS_135
wherein median(·) performs a median operation on the frame-transform coefficients
Figure SMS_136
. For training, the present invention uses the Adam optimizer to optimize the SURE-based FPTNN network. The learning rate, denoted by the symbol α in (22), is a hyperparameter that controls the rate of Adam updates. In this embodiment, α = 0.27 is set for the synthetic data set, the synthetic VSP data set, and the actual WY work-area data set alike. Weight updates in FPTNN are performed by standard BP in batch mode, with the batch size set to 5 for the synthetic-1 and actual WY work-area data sets and 20 for the synthetic VSP data set.
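If the median operation follows the usual robust MAD-style noise estimate on transform coefficients (an assumption about the exact form of the formula, including the 0.6745 Gaussian-quartile constant), it can be sketched as:

```python
import numpy as np

def estimate_sigma(coeffs):
    """Robust noise-level estimate from transform-domain coefficients:
    sigma ~ median(|c|) / 0.6745 (MAD rule; the constant is the standard
    Gaussian quartile and is an assumption about the text's formula)."""
    return np.median(np.abs(coeffs)) / 0.6745

rng = np.random.default_rng(0)
c = 2.0 * rng.standard_normal(100_000)  # pure Gaussian noise, true sigma = 2
print(round(estimate_sigma(c), 1))      # ~2.0
```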
D. Implementation details
During training, the computational efficiency and clean implementation of the TBP algorithm depend on the derivative calculation in (13). In implementation, the present invention can exploit the properties of the tensor product to rewrite (13) as:
Figure SMS_137
Obviously, equation (29) provides a mechanism that simplifies the implementation of the p-order TBP algorithm in the frame domain. On the other hand, using Definition 6 again, FPTNN can be decomposed into a frame-transform slice-wise DL model, as described below.
Figure SMS_138
Using Definition 6 thus provides a link between FPTNN and slice-wise DL in the frame domain, which is confirmed by the two analyses above in terms of both the model and the solving algorithm. To this end, the invention writes FPTNN in slice-wise DL form
Figure SMS_139
The complete solving process of FPTNN is summarized in Algorithm 1 and, as shown in FIG. 1, comprises three independent computation stages: the first and last stages respectively apply the forward and inverse operations to
Figure SMS_140
; the second stage finds the solution of equation (31) by means of an unsupervised slice-wise denoising convolutional neural network (DnCNN), MDL(·), in the frame domain, where the algorithm trains the DnCNN with SURE instead of MSE. FIG. 2 illustrates a comparison of denoising results for different xline-yline slices in the frame domain on a synthetic four-dimensional vertical seismic profile (VSP) data example: the noisy data in (b) serve as the input to the above MDL function in the frame domain, and the denoised result in (c) is its output; the clean data in (a) are shown for comparison only and do not participate in the MDL computation.
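The three-stage structure of Algorithm 1 (forward transform, slice-wise denoising in the transform domain, inverse transform) can be sketched as follows; the FFT replaces the frame transform and hard-thresholding replaces the SURE-trained DnCNN, both being assumptions for illustration:

```python
import numpy as np

def denoise_pipeline(noisy, denoise_slice, transform=np.fft.fft, inverse=np.fft.ifft):
    """Stage 1: transform along all trailing modes; stage 2: denoise each
    frontal slice independently; stage 3: inverse transform."""
    X = noisy.astype(complex)
    for ax in range(2, noisy.ndim):
        X = transform(X, axis=ax)
    for idx in np.ndindex(X.shape[2:]):          # every frontal slice
        sl = (slice(None), slice(None)) + idx
        X[sl] = denoise_slice(X[sl])
    for ax in range(2, noisy.ndim):
        X = inverse(X, axis=ax)
    return X.real

def hard_threshold(s, tau=0.5):
    out = s.copy()
    out[np.abs(out) < tau] = 0.0
    return out

rng = np.random.default_rng(0)
data = rng.standard_normal((16, 16, 4, 3))
rec = denoise_pipeline(data, hard_threshold)
print(rec.shape)   # (16, 16, 4, 3)
```

With an identity denoiser the pipeline reduces to a transform round-trip, which is a useful correctness check.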
Result evaluation
In the first experiment, figs. 3 (b)-(e) show the results of denoising the noisy image of fig. 3 (a) with FPTNN and the SOTA denoising methods, respectively. For a more intuitive visual comparison, figs. 3 (b)-(e) overlay the denoised data (red) on the clean data (black) using the seismic toolbox SeisLab. TNN deviates significantly from the true data on almost all traces, and the same problem applies to SGK and PMF. As can be seen from figs. 3 (b)-(e), the FPTNN method removes more noise than the SOTA methods while not damaging the useful signal during denoising. In terms of the denoising index, the output signal-to-noise ratios of the different methods are listed in table 1: the signal-to-noise ratios of PMF, TNN, and SGK are 14.83, 15.41, and 18.99 dB, respectively. It is therefore evident that the proposed FPTNN has the best denoising performance and the strongest signal-retention capability on the 5-D synthetic data.
TABLE 1. Comparison of the denoising performance of the proposed method (Proposed FPTNN) and the existing methods on the two synthetic data sets
Method Synthetic 5-D Dataset Synthetic VSP Dataset
PMF 14.83dB 7.76dB
TNN 15.41dB 9.23dB
SGK 18.99dB 9.10dB
Proposed FPTNN 26.67dB 13.31dB
The denoising results of FPTNN and the SOTA methods are shown in figs. 6 and 7. The SOTA methods suppress random noise effectively, but PMF clearly leaves more residual noise. Figs. 6 (e)-(h) and 7 (e)-(h) plot the noise volumes removed by the SOTA methods and FPTNN, from which it can be seen that the SOTA methods remove some useful signal. Figs. 6 (i)-(l) and 7 (i)-(l) depict the local-similarity maps of the methods. It is apparent that the noise removed by the SOTA methods contains significant signal leakage. In addition, the invention quantitatively compares denoising performance in terms of signal-to-noise ratio, as shown in table 1: the signal-to-noise ratios of PMF, TNN, and SGK after denoising on the VSP data set are 7.76, 9.23, and 9.10 dB, respectively, while the proposed method reaches 13.31 dB, clearly higher than the three baseline methods. Consistent with the previous conclusion, the useful-signal energy in the noise removed by FPTNN is the weakest, indicating that FPTNN has the best denoising performance.
Thereafter, to further illustrate the differences between these methods, the present invention applies the four methods to noisy 4-D data generated by adding random noise of varying strength, at 5 dB intervals over the range 5 dB to 20 dB. Fig. 4 summarizes the denoising performance of the SOTA methods and the proposed FPTNN at the different noise levels. The FPTNN method is clearly superior to the other SOTA methods. The results show a distinct improvement on the noisy data across the tested range, demonstrating the effectiveness of the method, and FPTNN is expected to retain the best denoising capability under different noise environments.
As before, to compare the differences between the FPTNN and SOTA methods more clearly, the present invention presents two volumes of the 4-D WY work-area data set in FIGS. 9 and 10. Notably, as can be seen from the denoising results in figs. 9 (a)-(d) and 10 (a)-(d), the SOTA methods more or less retain unremoved random noise, whereas the FPTNN method of the present invention successfully removes such noise, particularly on the prestack gather volume. To further evaluate the differences between the four methods, the denoising results and the corresponding local similarities of the SOTA methods and FPTNN are shown in FIGS. 9 (e)-(l) and 10 (e)-(l). As can be seen from the eight local-similarity volumes in figs. 9 (i)-(l) and 10 (i)-(l), the FPTNN of the invention removed the least signal energy with the noise, i.e., FPTNN has the strongest retention of the useful signal. Combining the restoration results, residuals, and local similarity, the FPTNN method clearly outperforms the SOTA methods in denoising the real data set, consistent with the synthetic-data experiments.
In FIG. 4 of the present invention, Input SNR (dB) denotes the input signal-to-noise ratio (dB) and Output SNR (dB) denotes the output signal-to-noise ratio (dB); the markers "*", "×",
Figure SMS_141
and "O" correspond to the model of the present invention (DTAE), PMF, TNN, and MSSA, respectively. In FIGS. 5-9, XLine denotes the crossline direction, Inline the inline direction, Time the time axis, and Offset the offset.
Those of ordinary skill in the art will recognize that the embodiments described herein are intended to help the reader understand the principles of the present invention, and that the scope of the invention is not limited to these specific statements and embodiments. Various modifications and variations of the present invention will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included in the scope of the claims of the present invention.

Claims (4)

1. The multidimensional seismic data denoising method based on the p-order tensor deep learning of the frame set is characterized by comprising the following steps of:
s1, converting the noisy seismic data to be processed into noisy frame domain seismic data through frame transformation; replacing FFT with frame transformation to obtain definition of frame representation of p-order tensor product; as a result after the substitution, the M-D tensor is expressed as:
Figure FDA0004100758110000011
wherein
Figure FDA0004100758110000012
is a matrix comprising n_p filters and an intermediate layer of layer l, with ω_p = (n_p - 1)l + 1; taking into account the unitary-extension property of the frame transform yields
Figure FDA0004100758110000013
wherein
Figure FDA0004100758110000014
and X_W denotes the representation of the input M-D tensor after replacement with the frame transform;
the frame-based p-order tensor product in the time domain: assume that
Figure FDA0004100758110000015
is an n_1 × r × n_3 × … × n_p tensor and
Figure FDA0004100758110000016
is an r × n_2 × n_3 × … × n_p tensor; the tensor product
Figure FDA0004100758110000017
is a p-order tensor of size n_1 × n_2 × n_3 × … × n_p, defined as follows:
Figure FDA0004100758110000018
*_W is a custom multiplication denoting the tensor product in the frame domain, the result of the multiplication being the right-hand side of the equation, with the two characters read as one symbol; in *_W the frame transform is used, applied through the frame transform matrix W; the frame-based p-order tensor product is expressed as a matrix product of slices in the frame domain instead of the Fourier domain, defined as:
according to the convolution theorem, the tensor product (16) is converted in the frequency domain into a matrix multiplication:
Figure FDA0004100758110000019
wherein,
Figure FDA00041007581100000110
denotes the reconstruction of
Figure FDA00041007581100000111
, and
Figure FDA00041007581100000112
denotes the reconstruction of
Figure FDA00041007581100000113
;
s2, inputting the frame domain seismic data with noise into a tNN neural network to obtain frame domain seismic data after noise removal;
s3, performing frame inverse transformation on the frame domain seismic data to obtain denoised seismic data.
2. The multi-dimensional seismic data denoising method based on p-order tensor deep learning of a frame set according to claim 1, wherein a loss function in a tNN neural network training process is expressed as:
Figure FDA00041007581100000114
wherein ρ(·) represents an unsupervised objective function,
Figure FDA00041007581100000115
represents an M-D image in the training samples, h_Θ(·) represents the denoising operation, y_n represents noisy seismic data in the training samples, Θ represents the weights, and N represents the number of training samples.
3. The method for denoising multidimensional seismic data based on p-order tensor deep learning of a frame set according to claim 2, wherein the expression of ρ (·) is:
Figure FDA0004100758110000021
wherein K represents a positive integer, y_i represents a noisy M-D seismic image, and
Figure FDA0004100758110000022
represents the noise variance.
4. The multi-dimensional seismic data denoising method based on p-order tensor deep learning of a frame set according to claim 3, wherein the expression for h_Θ(y_n) is:
Figure FDA0004100758110000023
wherein σ represents the noise variance, *_W represents the tensor product in the frame domain,
Figure FDA0004100758110000024
represents the weight set of layer l,
Figure FDA0004100758110000025
represents the bias set of layer l,
Figure FDA0004100758110000026
represents the weight set of layer 1,
Figure FDA0004100758110000027
represents the bias set of layer 1,
Figure FDA0004100758110000028
represents the weight set of layer 0, and
Figure FDA0004100758110000029
represents the bias set of layer 0.
CN202210292980.1A 2022-03-24 2022-03-24 Multi-dimensional seismic data denoising method based on p-order tensor deep learning of frame set Active CN114662045B (en)


Publications (2)

Publication Number Publication Date
CN114662045A CN114662045A (en) 2022-06-24
CN114662045B true CN114662045B (en) 2023-05-09




