CN116610937A - Method and device for carrying out low-frequency information continuation in implicit space and electronic equipment - Google Patents

Method and device for carrying out low-frequency information continuation in implicit space and electronic equipment

Info

Publication number
CN116610937A
CN116610937A (application number CN202310876134.9A)
Authority
CN
China
Prior art keywords
frequency information
data
low
seismic data
input
Prior art date
Legal status
Granted
Application number
CN202310876134.9A
Other languages
Chinese (zh)
Other versions
CN116610937B (en)
Inventor
孙剑 (Sun Jian)
贾安琪 (Jia Anqi)
Current Assignee
Ocean University of China
Original Assignee
Ocean University of China
Priority date
Filing date
Publication date
Application filed by Ocean University of China filed Critical Ocean University of China
Priority to CN202310876134.9A
Publication of CN116610937A
Application granted
Publication of CN116610937B
Legal status: Active (current)
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/0455 Auto-encoder networks; Encoder-decoder networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Geophysics And Detection Of Objects (AREA)

Abstract

The application relates to the technical field of geophysical exploration and provides a method, an apparatus and an electronic device for low-frequency information continuation in implicit space. The method comprises the following steps: acquiring training data; constructing a deep learning model comprising a self-encoder neural network and a lightweight fully-connected neural network that characterizes the hidden-code mapping relation; inputting the input data into the deep learning model for training to obtain a trained deep learning model; and inputting the seismic data to be extended, which contain no low-frequency information, into the trained deep learning model to obtain extended seismic data containing low-frequency information. The method uses the self-encoder to compress the seismic data into hidden-code feature vectors, which removes redundant information from the seismic data, greatly simplifies the low-frequency continuation task and improves the generalization of the deep learning model in that task.

Description

Method and device for carrying out low-frequency information continuation in implicit space and electronic equipment
Technical Field
The present application relates to the field of geophysical exploration, and in particular, to a method and apparatus for implementing low frequency information continuation in implicit space, and an electronic device.
Background
Oil and gas exploration is currently shifting from shallow, structurally simple reservoirs toward deep, complex ones, which places higher demands on high-precision imaging and inversion in seismic exploration. Full waveform inversion can accurately image and invert deep complex geological structures by fully exploiting full-wavefield information, and is a hot-spot technology in current seismic imaging and inversion. However, limited by the strong nonlinearity of the wave equation and by local optimization algorithms, full waveform inversion can only succeed if the seismic data contain low-frequency information. Because low-frequency information is insensitive to detailed structure, low-frequency seismic data can provide a low-wavenumber initial model for full waveform inversion, thereby reducing the non-uniqueness of the inversion, avoiding the trap of local extrema and improving the precision of seismic imaging and inversion. Meanwhile, in contrast to high-frequency information, which attenuates rapidly while propagating through the subsurface medium, low-frequency information attenuates slowly and is therefore of great significance for imaging oil and gas reservoirs with deep, complex structures.
Chinese patent CN103336303A discloses a method for seismic frequency extension using acoustic logging: a reflection-coefficient sequence derived from acoustic and density logging data is combined with a selected high-frequency seismic wavelet into a broadband synthetic seismic record, which is compared with the seismic trace through the well to obtain the vertical high-frequency loss law caused by the low-pass stratum filter; the filtering effect of lateral stratum variation is obtained at the same time; a frequency-extension inverse-filter series accounting for both vertical frequency loss and lateral frequency variation is then designed and applied to the stacked seismic section crossing the well, so as to broaden the frequency band and improve the resolution of the post-stack reflection seismic profile. However, influenced by the low signal-to-noise ratio of the seismic data, this method damages the amplitude characteristics of the original seismic data while recovering the low-frequency information, so the seismic data are distorted after frequency extension; and if the original seismic data are left undamaged, the recovered low-frequency information is unsatisfactory. Therefore, there is a need in the art for a method that can effectively extend the low-frequency band of seismic data while preserving the amplitude characteristics and improving the signal-to-noise ratio.
Disclosure of Invention
The embodiments of the application provide a method, an apparatus and an electronic device for low-frequency information continuation in implicit space, which address the technical problem that prior-art frequency-extension methods are ill-suited to processing low signal-to-noise-ratio seismic data under complex geological conditions.
In a first aspect, the present application provides a method for implementing low frequency information continuation in implicit space, the method comprising:
acquiring training data, wherein the training data comprises input data and label data, the input data comprises input seismic data containing low-frequency information and input seismic data without low-frequency information, and the label data comprises label seismic data containing low-frequency information and label seismic data without low-frequency information;
constructing a deep learning model, the deep learning model comprising: a self-encoder neural network for encoding and reconstructing data and a lightweight fully-connected neural network for characterizing the hidden-code mapping relation, wherein the self-encoder comprises an encoder and a decoder, the encoder encodes the input data into hidden code feature vectors in an implicit space, the hidden code feature vectors comprise the main features of the input data, and the dimension of the hidden code feature vectors is lower than that of the input data; the decoder decodes the hidden code feature vectors into output data;
Inputting the input data into the deep learning model for training to obtain a trained deep learning model;
and inputting the seismic data which are to be extended and have no low-frequency information into the trained deep learning model, and obtaining the extended seismic data containing the low-frequency information.
Further, the encoder encodes the input seismic data containing low frequency information into hidden code feature vectors containing low frequency information in implicit space, and encodes the input seismic data without low frequency information into hidden code feature vectors without low frequency information in implicit space; the light weight fully connected neural network converts the hidden code feature vector without low frequency information into a hidden code feature vector containing low frequency information; the decoder decodes the hidden code feature vector containing low frequency information into output seismic data containing low frequency information, and decodes the hidden code feature vector without low frequency information into output seismic data without low frequency information, wherein the output seismic data comprises the output seismic data containing low frequency information and the output seismic data without low frequency information.
Further, inputting the input data into the deep learning model for training, comprising:
Step 1, inputting the input data into the encoder, encoding the input data into hidden code feature vectors in an implicit space by the encoder, inputting the hidden code feature vectors into the decoder, and decoding the hidden code feature vectors into output data by the decoder;
step 2, constructing an objective function to calculate the data errors of the tag data and the output data;
step 3, judging whether the objective function meets the preset condition, if not, executing step 4, and if so, completing training of the self-encoder to obtain the trained self-encoder;
and 4, reversely transmitting the data error to the self-encoder, updating the weight value of the self-encoder neural network, and continuously executing the steps 1 to 3.
The input data is input seismic data containing low-frequency information or input seismic data without low-frequency information;
when the input data is input seismic data containing low-frequency information, the tag data is tag seismic data containing low-frequency information, and the output data is output seismic data containing low-frequency information;
when the input data is the input seismic data without low-frequency information, the tag data is the tag seismic data without low-frequency information, and the output data is the output seismic data without low-frequency information.
After obtaining the trained self-encoder, executing the steps 5 to 8;
step 5, inputting the input seismic data without low-frequency information into the encoder, encoding the input seismic data without low-frequency information into hidden code feature vectors without low-frequency information in an implicit space by the encoder, inputting the hidden code feature vectors without low-frequency information into the light-weight fully-connected neural network, converting the hidden code feature vectors without low-frequency information into hidden code feature vectors containing low-frequency information by the light-weight fully-connected neural network, inputting the hidden code feature vectors containing low-frequency information into the decoder, and decoding the hidden code feature vectors containing low-frequency information into output data by the decoder, wherein the output data is the output seismic data containing low-frequency information;
step 6, constructing an objective function to calculate the data error of the tag data and the output data, wherein the tag data is tag seismic data containing low-frequency information;
step 7, judging whether the objective function meets the preset condition, if not, executing step 8, and if so, completing training of the deep learning model to obtain a trained deep learning model;
Step 8, back-propagating the data error to the light-weight fully-connected neural network, updating the weight value of the light-weight fully-connected neural network, and continuously executing the steps 5 to 7;
further, inputting the input data into the deep learning model for training, comprising:
step 1, judging whether the input data is the input seismic data containing low-frequency information or the input seismic data without low-frequency information; if the input data is the seismic data containing low-frequency information, executing steps 2 to 5; if the input data is the seismic data without low-frequency information, executing steps 6 to 12;
step 2, inputting the input seismic data containing low-frequency information into the encoder, encoding the input seismic data containing low-frequency information into hidden code feature vectors containing low-frequency information in an implicit space by the encoder, inputting the hidden code feature vectors containing low-frequency information into the decoder, and decoding the hidden code feature vectors containing low-frequency information into output data by the decoder, wherein the output data is the output seismic data containing low-frequency information;
step 3, constructing an objective function to calculate the data errors of the tag data and the output data, wherein the tag data is tag seismic data containing low-frequency information;
Step 4, judging whether the objective function meets the preset condition, if not, executing step 5, and if so, completing training of the deep learning model to obtain a trained deep learning model;
step 5, the data error is reversely transmitted to the self-encoder, the weight value of the self-encoder neural network is updated, and the step 1 is continuously executed;
step 6, inputting the input seismic data without low frequency information into the encoder, encoding the input seismic data without low frequency information into hidden code feature vectors without low frequency information in an implicit space by the encoder, inputting the hidden code feature vectors without low frequency information into the decoder, and decoding the hidden code feature vectors without low frequency information into output data by the decoder, wherein the output data is the output seismic data without low frequency information;
step 7, constructing an objective function to calculate the data errors of the tag data and the output data, wherein the tag data is tag seismic data without low-frequency information;
step 8, judging whether the objective function meets a preset condition, if not, reversely transmitting the data error to the self-encoder, updating the weight value of the self-encoder neural network, and continuously executing the steps 9 to 12, and if so, completing training of a deep learning model to obtain a trained deep learning model;
Step 9, inputting the hidden code feature vector without low-frequency information in the step 6 into the lightweight fully-connected neural network, converting the hidden code feature vector without low-frequency information into a hidden code feature vector containing low-frequency information by the lightweight fully-connected neural network, inputting the hidden code feature vector containing low-frequency information into the decoder, and decoding the hidden code feature vector containing low-frequency information into output data by the decoder, wherein the output data is output seismic data containing low-frequency information;
step 10, constructing an objective function to calculate the data errors of the tag data and the output data, wherein the tag data is tag seismic data containing low-frequency information;
step 11, judging whether the objective function meets the preset condition, if not, executing step 12, and if so, completing training of the deep learning model to obtain a trained deep learning model;
and step 12, back-propagating the data error to the self-encoder and the light-weight fully-connected neural network, updating the weight values of the self-encoder neural network and the light-weight fully-connected neural network, and continuing to execute the step 1.
Further, inputting the input data into the deep learning model for training, comprising:
Step 1, judging whether the input data is the input seismic data containing low-frequency information or the input seismic data without low-frequency information; if the input data is the seismic data containing low-frequency information, executing steps 2 to 5; if the input data is the seismic data without low-frequency information, executing steps 6 to 12;
step 2, inputting the input seismic data containing low-frequency information into the encoder, encoding the input seismic data containing low-frequency information into hidden code feature vectors containing low-frequency information in an implicit space by the encoder, inputting the hidden code feature vectors containing low-frequency information into the decoder, and decoding the hidden code feature vectors containing low-frequency information into output data by the decoder, wherein the output data is the output seismic data containing low-frequency information;
step 3, constructing an objective function to calculate the data errors of the tag data and the output data, wherein the tag data is tag seismic data containing low-frequency information;
step 4, judging whether the objective function meets the preset condition, if not, executing step 5, and if so, completing training of the deep learning model to obtain a trained deep learning model;
step 5, the data error is reversely transmitted to the self-encoder, the weight value of the self-encoder neural network is updated, and the step 1 is continuously executed;
Step 6, inputting the input seismic data without low-frequency information into the encoder, encoding the input seismic data without low-frequency information into hidden code feature vectors without low-frequency information in implicit space by the encoder, inputting the hidden code feature vectors without low-frequency information into the light-weight fully-connected neural network, converting the hidden code feature vectors without low-frequency information into hidden code feature vectors containing low-frequency information by the light-weight fully-connected neural network, inputting the hidden code feature vectors containing low-frequency information into the decoder, and decoding the hidden code feature vectors containing low-frequency information into output data, wherein the output data is output seismic data containing low-frequency information;
step 7, constructing an objective function to calculate the data errors of the tag data and the output data, wherein the tag data is tag seismic data containing low-frequency information;
step 8, judging whether the objective function meets a preset condition, if not, reversely transmitting the data error to the self-encoder and the light-weight fully-connected neural network, updating weight values of the self-encoder neural network and the light-weight fully-connected neural network, and continuously executing the steps 9-12, and if yes, completing training of a deep learning model to obtain a trained deep learning model;
Step 9, inputting the hidden code feature vector without low-frequency information in the step 6 into the decoder, and decoding the hidden code feature vector without low-frequency information into output data by the decoder, wherein the output data is output seismic data without low-frequency information;
step 10, constructing an objective function to calculate the data errors of the tag data and the output data, wherein the tag data is tag seismic data without low-frequency information;
step 11, judging whether the objective function meets the preset condition, if not, executing step 12, and if so, completing training of the deep learning model to obtain a trained deep learning model;
and step 12, back-propagating the data error to the self-encoder, updating the weight value of the self-encoder neural network, and continuously executing the step 1.
Further, the acquiring training data includes:
performing one-dimensional Fourier transform on a plurality of single shot seismic data acquired by seismic exploration to obtain the seismic data containing low-frequency information, wherein the seismic data containing low-frequency information are respectively used as the input seismic data containing low-frequency information and the tag seismic data containing low-frequency information;
Filtering the seismic data containing the low-frequency information to obtain the seismic data without the low-frequency information, and taking the seismic data without the low-frequency information as the input seismic data without the low-frequency information and the tag seismic data without the low-frequency information respectively.
Further, respectively carrying out normalization processing on the seismic data containing low-frequency information and the seismic data without low-frequency information to obtain normalized seismic data containing low-frequency information and normalized seismic data without low-frequency information, and respectively taking the normalized seismic data containing low-frequency information as the input seismic data containing low-frequency information and the tag seismic data containing low-frequency information; and respectively taking the normalized seismic data without low-frequency information as the input seismic data without low-frequency information and the tag seismic data without low-frequency information.
Further, Gaussian noise is added to the tag data to obtain data with added Gaussian noise, and the data with added Gaussian noise are used as the input data.
In a second aspect, the present application provides an apparatus for performing low frequency information continuation in implicit space, the apparatus comprising:
the training data acquisition module is used for acquiring training data; the training data comprises input data and label data, wherein the input data comprises input seismic data containing low-frequency information and input seismic data without low-frequency information, and the label data comprises label seismic data containing low-frequency information and label seismic data without low-frequency information;
The deep learning model construction module is used for constructing a deep learning model; the deep learning model includes: a self-encoder neural network for encoding and reconstructing data and a lightweight fully-connected neural network for characterizing the hidden-code mapping relation, wherein the self-encoder comprises an encoder and a decoder, the encoder encodes the input data into hidden code feature vectors in an implicit space, the hidden code feature vectors comprise the main features of the input data, and the dimension of the hidden code feature vectors is lower than that of the input data; the decoder decodes the hidden code feature vectors into output data;
the training module is used for inputting the input data into the deep learning model for training to obtain a trained deep learning model;
the low-frequency information extension module is used for inputting the seismic data without low-frequency information to be extended into the trained deep learning model to obtain the extended seismic data containing the low-frequency information.
In a third aspect, the present application provides an electronic device, comprising:
a processor;
a memory;
and a computer program, wherein the computer program is stored in the memory, the computer program comprising instructions that, when executed by the processor, cause the electronic device to perform the method of any of the first aspects.
The application uses a method for carrying out low-frequency continuation in implicit space to construct a deep learning model comprising a self-encoder neural network for encoding and reconstructing data and a light-weight fully-connected neural network for representing the hidden code mapping relation. The method allows the correlation between the seismic data containing low frequency information and the seismic data without low frequency information to be learned in an implicit space, thereby widening the frequency band of the seismic data and compensating the low frequency information of the seismic data. Compared with direct learning of original data, the method utilizes the self-encoder to compress the seismic data into the hidden code feature vector, greatly simplifies the difficulty of low-frequency extension in the hidden space, so that the required low-frequency extension network is lighter and simpler, and the calculation cost is lower; in addition, by adding Gaussian noise into the input data, the encoder not only can compress and reduce the dimension of the input data, but also can remove redundant information of the seismic data, so that a low-frequency continuation model of an implicit space is not easy to cause an overfitting problem and has stronger generalization capability.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments described in the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart of a method for implementing low frequency information continuation in implicit space according to an embodiment of the present application;
FIG. 2 is a schematic view of single shot seismic data in the time-space domain provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of single shot seismic data in the frequency space domain provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of single shot seismic data without low frequency information obtained by filtering single shot seismic data with low frequency information in a frequency space domain according to an embodiment of the present application;
FIG. 5 is a training diagram of inputting low frequency information-free input seismic data to outputting low frequency information-containing output seismic data provided by an embodiment of the application;
FIG. 6 is a graph showing a comparison of low frequency continuation results of a deep learning model according to an embodiment of the present application;
FIG. 7 is a block diagram of an apparatus for implementing low frequency information continuation in implicit space according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For a better understanding of the technical solution of the present application, the following detailed description of the embodiments of the present application refers to the accompanying drawings.
It should be understood that the described embodiments are merely some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Referring to fig. 1, a flow chart of a method for implementing low frequency information continuation in implicit space according to an embodiment of the present application mainly includes the following steps.
Step S1: training data is obtained, wherein the training data comprises input data and label data, the input data comprises input seismic data containing low-frequency information and input seismic data without low-frequency information, and the label data comprises label seismic data containing low-frequency information and label seismic data without low-frequency information.
Specifically, the training data are generated from actual seismic data acquired in a seismic survey. A plurality of single-shot seismic data in the time-space domain, obtained by seismic exploration of a certain region, are available; Fig. 2 shows one single-shot gather in the time-space domain, in which the abscissa is the offset and the ordinate is the two-way travel time of the seismic waves. A one-dimensional Fourier transform is applied to the plurality of single-shot seismic data acquired by the seismic survey, transforming the seismic data from the time-space domain into the frequency-space domain; the seismic data in the frequency-space domain are the seismic data containing low-frequency information, and these data are used respectively as the input seismic data containing low-frequency information and the tag seismic data containing low-frequency information.
The one-dimensional Fourier transform is expressed as:

D(x, f) = FFT[ d(x, t) ]

where D(x, f) is the single-shot seismic data in the frequency-space domain, FFT is the one-dimensional Fourier transform operator, d(x, t) is the single-shot seismic data in the time-space domain, x is the position of the wave detection point (receiver), t is the two-way travel time of the seismic wave, and f is the frequency. Fig. 3 shows single-shot seismic data in the frequency-space domain.
In one embodiment, for the seismic data in the frequency-space domain, the effective information is concentrated primarily in the frequency range below 64 Hz. In order to improve the deep learning model's ability to extract effective information from the data and to reduce the computational cost, only the 0-64 Hz band of the seismic data is retained to participate in training.
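As a sketch of this preprocessing step (function and variable names are illustrative, and the gather is assumed to be stored as a NumPy array of shape receivers x time samples), the transform and the 0-64 Hz band selection could look as follows:

```python
import numpy as np

def to_frequency_space(gather_tx, dt, f_max=64.0):
    """One-dimensional Fourier transform along the time axis, keeping only 0..f_max Hz.

    gather_tx: single-shot data d(x, t), shape (n_receivers, n_time_samples).
    dt: time sampling interval in seconds.
    Returns the complex frequency-space data D(x, f) and the retained frequency axis.
    """
    spectrum = np.fft.rfft(gather_tx, axis=-1)           # D(x, f), non-negative frequencies
    freqs = np.fft.rfftfreq(gather_tx.shape[-1], d=dt)   # frequency axis in Hz
    keep = freqs <= f_max                                 # retain only the 0..64 Hz band
    return spectrum[:, keep], freqs[keep]
```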
Filtering the seismic data containing the low-frequency information to obtain the seismic data without the low-frequency information, and taking the seismic data without the low-frequency information as the input seismic data without the low-frequency information and the tag seismic data without the low-frequency information respectively.
And filtering the seismic data in the frequency space domain by using a high-pass filter to filter out low-frequency information in the seismic data, wherein the filtered seismic data is the seismic data without the low-frequency information. The expression of the filtering is as follows:
D_h(x, f) = HPF[ D(x, f); f_c ]

where D_h(x, f) is the seismic data without low-frequency information, HPF is the high-pass filter operator, D(x, f) is the seismic data containing low-frequency information, x is the position of the wave detection point (receiver), f is the frequency, and f_c is the cut-off frequency of the filter. Fig. 4 shows the single-shot seismic data without low-frequency information obtained by filtering the single-shot seismic data containing low-frequency information; as can be seen from Fig. 4, the single-shot seismic data without low-frequency information lack information in the frequency range below 10 Hz.
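A minimal sketch of the high-pass filtering step, under the assumption that a hard cut-off (zeroing every component below f_c) is acceptable for building the training pairs; the names are illustrative:

```python
import numpy as np

def high_pass(spectrum_xf, freqs, f_cut=10.0):
    """Hard high-pass in the frequency-space domain: zero all components with f < f_cut."""
    filtered = spectrum_xf.copy()
    filtered[:, freqs < f_cut] = 0.0   # result: seismic data without low-frequency information
    return filtered
```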
In one embodiment, in order to better perform model training, to avoid the situation that the training time of the deep learning model is prolonged or even not converged due to the existence of singular sample data, normalization processing is required to be performed on low-frequency information-containing seismic data and low-frequency information-free seismic data respectively, so as to obtain low-frequency information-containing normalized seismic data and low-frequency information-free normalized seismic data, and the low-frequency information-containing normalized seismic data is used as the low-frequency information-containing input seismic data and the low-frequency information-containing tag seismic data respectively; and respectively taking the normalized seismic data without low-frequency information as the input seismic data without low-frequency information and the tag seismic data without low-frequency information.
The normalization processing uses the average amplitude spectrum of the seismic data to perform linear transformation on the seismic data, and changes the seismic data into dimensionless data. The expression of the normalization process is:
D_norm = D / (A_avg + ε)

where D is the seismic data before normalization, A_avg is the average amplitude spectrum of the seismic data before normalization, ε is a very small constant perturbation, and D_norm is the normalized seismic data; here the seismic data are either the seismic data containing low-frequency information or the seismic data without low-frequency information. The normalized seismic data enhance the comparability of amplitude values across different frequency bands and avoid prolonged or even non-convergent training of the deep learning model caused by singular sample data.
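The normalization can be sketched as below; averaging over the whole gather and the value of ε are assumptions made for illustration:

```python
import numpy as np

def normalize(spectrum_xf, eps=1e-8):
    """Divide by the average amplitude spectrum so the data become dimensionless."""
    mean_amplitude = np.mean(np.abs(spectrum_xf))   # average amplitude spectrum of the gather
    return spectrum_xf / (mean_amplitude + eps)     # eps is the small constant perturbation
```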
In one embodiment, in order to improve the encoder's ability to remove redundant information and to enhance the noise immunity of the deep learning model, after the tag data are obtained, Gaussian noise may first be added to the tag data to obtain data with added Gaussian noise, and the data with added Gaussian noise are used as the input data.
The expression of adding Gaussian noise is as follows:
d_input = d_label + R · noise

where noise is Gaussian noise, · denotes multiplication, and R is the noise coefficient, which can take any value larger than 0 according to the numerical distribution of the training data and the denoising capability required of the deep learning model; in the embodiment of the application, R is 0.4. d_input is the input data and d_label is the tag data.
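A sketch of this noise-augmentation step with the noise coefficient R = 0.4 used in the embodiment; adding real-valued noise directly to the (possibly complex) frequency-domain samples is a simplification for illustration:

```python
import numpy as np

def add_gaussian_noise(label_data, noise_coeff=0.4, rng=None):
    """Input data = tag data + R * Gaussian noise (R = 0.4 in this embodiment)."""
    rng = rng if rng is not None else np.random.default_rng()
    noise = rng.standard_normal(label_data.shape)
    return label_data + noise_coeff * noise
```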
Step S2: constructing a deep learning model, the deep learning model comprising: a self-encoder neural network and a lightweight fully-connected neural network for characterizing a hidden code mapping relationship, the self-encoder comprising an encoder and a decoder, the encoder encoding the input data as hidden code feature vectors in implicit space, the hidden code feature vectors comprising a dominant feature of the input data, the hidden code feature vectors having dimensions lower than dimensions of the input data; the decoder decodes the hidden code feature vector into output data.
The light-weight fully-connected neural network may be a multi-layer perceptron or a light-weight neural network similar to the multi-layer perceptron, and is not particularly limited herein.
Specifically, the encoder encodes the input seismic data containing low-frequency information into hidden code feature vectors containing low-frequency information in an implicit space, and encodes the input seismic data without low-frequency information into hidden code feature vectors without low-frequency information in the implicit space; the light weight fully connected neural network converts the hidden code feature vector without low frequency information into a hidden code feature vector containing low frequency information; the decoder decodes the hidden code feature vector containing low frequency information into output seismic data containing low frequency information, and decodes the hidden code feature vector without low frequency information into output seismic data without low frequency information, wherein the output seismic data comprises the output seismic data containing low frequency information and the output seismic data without low frequency information.
By constructing the deep learning model, the self-encoder used for encoding and reconstructing data encodes the complex data into a hidden code feature vector in an implicit space; the dimension of the hidden code feature vector is greatly reduced compared with that of the original data, realizing a compressed representation of the data. Meanwhile, the hidden code contains the main features of the original data and can be decoded by the decoder into data similar to the original data. The lightweight fully-connected neural network used for characterizing the hidden-code mapping relation realizes the transformation from the hidden code feature vector without low-frequency information to the hidden code feature vector containing low-frequency information.
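To make the architecture concrete, the following PyTorch sketch shows one possible realization of the self-encoder and the lightweight fully-connected mapping network; the layer sizes, the use of plain linear layers and the latent dimension are illustrative assumptions, not the patented configuration:

```python
import torch
import torch.nn as nn

LATENT_DIM = 128        # dimension of the hidden-code feature vector (assumed)
DATA_DIM = 64 * 256     # flattened size of one frequency-space gather (assumed)

class AutoEncoder(nn.Module):
    """Self-encoder: the encoder compresses data to a hidden code, the decoder reconstructs it."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(DATA_DIM, 1024), nn.ReLU(),
            nn.Linear(1024, LATENT_DIM),
        )
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 1024), nn.ReLU(),
            nn.Linear(1024, DATA_DIM),
        )

    def forward(self, x):
        z = self.encoder(x)             # hidden-code feature vector in the implicit space
        return self.decoder(z), z


class LatentMapper(nn.Module):
    """Lightweight fully-connected network mapping hidden codes without low-frequency
    information to hidden codes containing low-frequency information."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(LATENT_DIM, LATENT_DIM), nn.ReLU(),
            nn.Linear(LATENT_DIM, LATENT_DIM),
        )

    def forward(self, z_without_low):
        return self.mlp(z_without_low)
```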
Step S3: and inputting the input data into the deep learning model for training to obtain a trained deep learning model.
In one embodiment, the self-encoder neural network is trained first, and then the lightweight fully-connected neural network is trained, specifically comprising:
step 1, inputting the input data into the encoder, encoding the input data into hidden code feature vectors in an implicit space by the encoder, inputting the hidden code feature vectors into the decoder, and decoding the hidden code feature vectors into output data by the decoder;
step 2, constructing an objective function to calculate the data errors of the tag data and the output data;
Step 3, judging whether the objective function meets the preset condition, if not, executing step 4, and if so, completing training of the self-encoder to obtain the trained self-encoder;
and 4, reversely transmitting the data error to the self-encoder, updating the weight value of the self-encoder neural network, and continuously executing the steps 1 to 3.
The input data is input seismic data containing low-frequency information or input seismic data without low-frequency information;
when the input data is input seismic data containing low-frequency information, the tag data is tag seismic data containing low-frequency information, and the output data is output seismic data containing low-frequency information;
when the input data is the input seismic data without low-frequency information, the tag data is the tag seismic data without low-frequency information, and the output data is the output seismic data without low-frequency information.
After obtaining the trained self-encoder, executing the steps 5 to 8;
step 5, inputting the input seismic data without low-frequency information into the encoder, encoding the input seismic data without low-frequency information into hidden code feature vectors without low-frequency information in an implicit space by the encoder, inputting the hidden code feature vectors without low-frequency information into the light-weight fully-connected neural network, converting the hidden code feature vectors without low-frequency information into hidden code feature vectors containing low-frequency information by the light-weight fully-connected neural network, inputting the hidden code feature vectors containing low-frequency information into the decoder, and decoding the hidden code feature vectors containing low-frequency information into output data by the decoder, wherein the output data is the output seismic data containing low-frequency information;
Step 6, constructing an objective function to calculate the data error of the tag data and the output data, wherein the tag data is tag seismic data containing low-frequency information;
step 7, judging whether the objective function meets the preset condition, if not, executing step 8, and if so, completing training of the deep learning model to obtain a trained deep learning model;
and 8, reversely transmitting the data error to the light-weight fully-connected neural network, updating the weight value of the light-weight fully-connected neural network, and continuously executing the steps 5 to 7.
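A minimal sketch of the two-stage schedule described in steps 1 to 8 above, assuming real-valued tensors, an MSE objective and the illustrative AutoEncoder / LatentMapper classes sketched earlier; the optimizer and hyper-parameters are placeholders:

```python
import torch
import torch.nn as nn

def train_two_stage(ae, mapper, recon_loader, pair_loader, epochs=50, lr=1e-3):
    """Stage 1 (steps 1-4): train the self-encoder on reconstruction.
    Stage 2 (steps 5-8): freeze the self-encoder, train only the mapping network.

    recon_loader yields (input_data, tag_data) pairs with and without low frequencies;
    pair_loader yields (input_without_low, tag_with_low) pairs.
    """
    mse = nn.MSELoss()

    # Stage 1: reconstruction training of the self-encoder.
    opt_ae = torch.optim.Adam(ae.parameters(), lr=lr)
    for _ in range(epochs):
        for x, tag in recon_loader:
            out, _ = ae(x)
            loss = mse(out, tag)                # data error between output data and tag data
            opt_ae.zero_grad()
            loss.backward()
            opt_ae.step()

    # Stage 2: latent-code mapping with the self-encoder weights kept fixed.
    for p in ae.parameters():
        p.requires_grad_(False)
    opt_map = torch.optim.Adam(mapper.parameters(), lr=lr)
    for _ in range(epochs):
        for x_without_low, tag_with_low in pair_loader:
            z_without_low = ae.encoder(x_without_low)   # hidden code without low-frequency information
            z_with_low = mapper(z_without_low)          # converted hidden code containing low frequencies
            out = ae.decoder(z_with_low)                # output seismic data containing low-frequency information
            loss = mse(out, tag_with_low)
            opt_map.zero_grad()
            loss.backward()
            opt_map.step()
    return ae, mapper
```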
In one embodiment, the self-encoder neural network and the lightweight fully-connected neural network can be trained synchronously, specifically comprising:
step 1, judging whether the input data is the input seismic data containing low-frequency information or the input seismic data without low-frequency information; if the input data is the seismic data containing low-frequency information, executing steps 2 to 5; if the input data is the seismic data without low-frequency information, executing steps 6 to 12;
step 2, inputting the input seismic data containing low-frequency information into the encoder, encoding the input seismic data containing low-frequency information into hidden code feature vectors containing low-frequency information in an implicit space by the encoder, inputting the hidden code feature vectors containing low-frequency information into the decoder, and decoding the hidden code feature vectors containing low-frequency information into output data by the decoder, wherein the output data is the output seismic data containing low-frequency information;
Step 3, constructing an objective function to calculate the data errors of the tag data and the output data, wherein the tag data is tag seismic data containing low-frequency information;
step 4, judging whether the objective function meets the preset condition, if not, executing step 5, and if so, completing training of the deep learning model to obtain a trained deep learning model;
and 5, reversely transmitting the data error to the self-encoder, updating the weight value of the self-encoder neural network, and continuously executing the step 1.
Step 6, inputting the input seismic data without low frequency information into the encoder, encoding the input seismic data without low frequency information into hidden code feature vectors without low frequency information in an implicit space by the encoder, inputting the hidden code feature vectors without low frequency information into the decoder, and decoding the hidden code feature vectors without low frequency information into output data by the decoder, wherein the output data is the output seismic data without low frequency information;
step 7, constructing an objective function to calculate the data errors of the tag data and the output data, wherein the tag data is tag seismic data without low-frequency information;
Step 8, judging whether the objective function meets a preset condition, if not, reversely transmitting the data error to the self-encoder, updating the weight value of the self-encoder neural network, and continuously executing the steps 9 to 12, and if so, completing training of a deep learning model to obtain a trained deep learning model;
step 9, inputting the hidden code feature vector without low-frequency information in the step 6 into the lightweight fully-connected neural network, converting the hidden code feature vector without low-frequency information into a hidden code feature vector containing low-frequency information by the lightweight fully-connected neural network, inputting the hidden code feature vector containing low-frequency information into the decoder, and decoding the hidden code feature vector containing low-frequency information into output data by the decoder, wherein the output data is output seismic data containing low-frequency information;
step 10, constructing an objective function to calculate the data errors of the tag data and the output data, wherein the tag data is tag seismic data containing low-frequency information;
step 11, judging whether the objective function meets the preset condition, if not, executing step 12, and if so, completing training of the deep learning model to obtain a trained deep learning model;
And step 12, back-propagating the data error to the self-encoder and the light-weight fully-connected neural network, updating the weight values of the self-encoder neural network and the light-weight fully-connected neural network, and continuing to execute the step 1.
After the hidden code feature vector without low-frequency information is obtained in step 6, the hidden code feature vector may be input into the lightweight fully-connected neural network for training, which specifically includes:
step 1, judging whether the input data is the input seismic data containing low-frequency information or the input seismic data without low-frequency information; if the input data is the seismic data containing low-frequency information, executing steps 2 to 5; if the input data is the seismic data without low-frequency information, executing steps 6 to 12;
step 2, inputting the input seismic data containing low-frequency information into the encoder, encoding the input seismic data containing low-frequency information into hidden code feature vectors containing low-frequency information in an implicit space by the encoder, inputting the hidden code feature vectors containing low-frequency information into the decoder, and decoding the hidden code feature vectors containing low-frequency information into output data by the decoder, wherein the output data is the output seismic data containing low-frequency information;
Step 3, constructing an objective function to calculate the data errors of the tag data and the output data, wherein the tag data is tag seismic data containing low-frequency information;
step 4, judging whether the objective function meets the preset condition, if not, executing step 5, and if so, completing training of the deep learning model to obtain a trained deep learning model;
and 5, reversely transmitting the data error to the self-encoder, updating the weight value of the self-encoder neural network, and continuously executing the step 1.
Step 6, inputting the input seismic data without low-frequency information into the encoder, encoding the input seismic data without low-frequency information into hidden code feature vectors without low-frequency information in implicit space by the encoder, inputting the hidden code feature vectors without low-frequency information into the light-weight fully-connected neural network, converting the hidden code feature vectors without low-frequency information into hidden code feature vectors containing low-frequency information by the light-weight fully-connected neural network, inputting the hidden code feature vectors containing low-frequency information into the decoder, and decoding the hidden code feature vectors containing low-frequency information into output data, wherein the output data is output seismic data containing low-frequency information;
Step 7, constructing an objective function to calculate the data errors of the tag data and the output data, wherein the tag data is tag seismic data containing low-frequency information;
step 8, judging whether the objective function meets a preset condition, if not, reversely transmitting the data error to the self-encoder and the light-weight fully-connected neural network, updating weight values of the self-encoder neural network and the light-weight fully-connected neural network, and continuously executing the steps 9-12, and if yes, completing training of a deep learning model to obtain a trained deep learning model;
step 9, inputting the hidden code feature vector without low-frequency information in the step 6 into the decoder, and decoding the hidden code feature vector without low-frequency information into output data by the decoder, wherein the output data is output seismic data without low-frequency information;
step 10, constructing an objective function to calculate the data errors of the tag data and the output data, wherein the tag data is tag seismic data without low-frequency information;
step 11, judging whether the objective function meets the preset condition, if not, executing step 12, and if so, completing training of the deep learning model to obtain a trained deep learning model;
And step 12, back-propagating the data error to the self-encoder, updating the weight value of the self-encoder neural network, and continuously executing the step 1.
By inputting two kinds of seismic data, namely input seismic data containing low-frequency information and input seismic data without low-frequency information, and setting a proper program to train the two kinds of seismic data, the self-encoder and the lightweight fully-connected neural network can be trained more fully, and a trained model is more beneficial to the prolongation of the low-frequency information.
Fig. 5 is a training diagram of inputting low frequency information-free input seismic data to outputting low frequency information-containing output seismic data, an encoder encodes the low frequency information-free input seismic data to an implicit space to form a hidden code feature vector Z1, a multi-layer perceptron converts the low frequency information-free hidden code feature vector Z1 into a low frequency information-containing hidden code feature vector Z2, and a decoder decodes the low frequency information-containing hidden code feature vector Z2 into low frequency information-containing output seismic data.
In one embodiment, the objective function J measures the data error between the output data and the tag data, for example as the squared L2 norm of their difference:

J = || d_output - d_label ||²

where d_output is the output data obtained by the deep learning model from the input data d_input, and d_label is the tag data.
The preset condition comprises that the data error is smaller than a preset error value, the error drop rate is smaller than a preset error drop rate or the iteration number exceeds a preset value.
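For illustration, the preset condition could be checked with a small helper such as the following; the threshold values are placeholders:

```python
def training_finished(errors, max_iters=10000, err_tol=1e-4, drop_tol=1e-6):
    """Check the preset condition: small error, small error drop rate, or too many iterations."""
    if len(errors) >= max_iters:
        return True                                   # iteration number exceeds the preset value
    if errors and errors[-1] < err_tol:
        return True                                   # data error smaller than the preset error value
    if len(errors) >= 2 and (errors[-2] - errors[-1]) < drop_tol:
        return True                                   # error drop rate smaller than the preset rate
    return False
```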
FIG. 6 is a comparison diagram of the low-frequency continuation result of the deep learning model according to an embodiment of the present application, wherein a is input seismic data containing Gaussian noise and no low-frequency information; b is tag seismic data containing low-frequency information; c is output seismic data containing low-frequency information; and d is the error between the output seismic data containing low-frequency information and the tag seismic data containing low-frequency information. The error data show that, even when seismic data containing Gaussian noise are input, the deep learning model can still perform low-frequency information continuation, and the error between the continuation result and the target data is small, meeting the precision requirement.
Step S4: and inputting the seismic data which are to be extended and have no low-frequency information into the trained deep learning model, and obtaining the extended seismic data containing the low-frequency information.
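A sketch of how the trained model would be applied in step S4 (names refer to the illustrative classes above; the preprocessing of the field data is assumed to mirror the training pipeline):

```python
import torch

@torch.no_grad()
def extend_low_frequency(ae, mapper, gather_without_low):
    """Encode, map the hidden code, then decode to obtain data containing low-frequency information."""
    z_without_low = ae.encoder(gather_without_low)   # hidden code without low-frequency information
    z_with_low = mapper(z_without_low)               # hidden code containing low-frequency information
    return ae.decoder(z_with_low)                    # extended seismic data containing low-frequency information
```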
The application constructs, by a method of low-frequency continuation in implicit space, a deep learning model comprising a self-encoder neural network for encoding and reconstructing data and a lightweight fully-connected neural network for characterizing the hidden-code mapping relation. The encoder encodes the input data into hidden code feature vectors in an implicit space, realizing feature extraction and dimension reduction of the data; the lightweight fully-connected neural network learns the mapping relation between hidden code feature vectors in the implicit space and realizes the conversion from hidden code feature vectors without low-frequency information to hidden code feature vectors containing low-frequency information. The method allows the correlation between seismic data containing low-frequency information and seismic data without low-frequency information to be learned in the implicit space, thereby widening the frequency band of the seismic data and compensating for its missing low-frequency information. Compared with learning directly from the original data, the method uses the self-encoder to compress the seismic data into hidden code feature vectors, realizing a compressed representation of the data and greatly simplifying the difficulty of low-frequency continuation in the implicit space, so that the required low-frequency continuation network is lighter and simpler and the computational cost is lower. By adding Gaussian noise to the input data, the encoder not only compresses and reduces the dimension of the input data but also removes redundant information from the seismic data, so that the low-frequency continuation model in the implicit space is not prone to overfitting and has stronger generalization capability. The seismic data used for training are normalized; normalization enhances the comparability of amplitude values across different frequency bands and avoids prolonged or even non-convergent training of the deep learning model caused by singular sample data. Moreover, by inputting two kinds of seismic data, namely input seismic data containing low-frequency information and input seismic data without low-frequency information, and training them with a suitable schedule, the self-encoder and the lightweight fully-connected neural network can be trained more fully, and the trained model is more favorable for the continuation of low-frequency information.
Corresponding to the embodiment, the application also provides a device for carrying out low-frequency information continuation in implicit space.
Fig. 7 is a block diagram of an apparatus for implementing low frequency information continuation in implicit space according to an embodiment of the present application, which mainly includes the following modules.
A training data acquisition module 701, configured to acquire training data, where the training data includes input data and tag data, the input data includes input seismic data including low frequency information and input seismic data without low frequency information, and the tag data includes tag seismic data including low frequency information and tag seismic data without low frequency information;
a deep learning model construction module 702 for constructing a deep learning model, the deep learning model comprising: a self-encoder neural network for encoding and reconstructing data and a lightweight fully-connected neural network for characterizing the hidden-code mapping relation, wherein the self-encoder comprises an encoder and a decoder, the encoder encodes the input data into hidden code feature vectors in an implicit space, the hidden code feature vectors comprise the main features of the input data, and the dimensions of the hidden code feature vectors are lower than those of the input data; the decoder decodes the hidden code feature vector into output data;
The training module 703 is configured to input the input data into the deep learning model for training, so as to obtain a trained deep learning model;
the low-frequency information continuation module 704 is configured to input the seismic data to be extended without low-frequency information into the trained deep learning model, and obtain the extended seismic data containing low-frequency information.
It should be noted that, for brevity, details of the embodiments of the present application may be referred to the description of the embodiments of the method, and are not described herein again.
Corresponding to the embodiment, the embodiment of the application also provides electronic equipment.
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application, where the electronic device 800 may include: a processor 801, a memory 802, and a communication unit 803. The components may communicate via one or more buses, and it will be appreciated by those skilled in the art that the electronic device structure shown in the drawings is not limiting of the embodiments of the application, as it may be a bus-like structure, a star-like structure, or include more or fewer components than shown, or may be a combination of certain components or a different arrangement of components.
The communication unit 803 is configured to establish a communication channel so that the electronic device can communicate with other devices.
The processor 801 is the control center of the electronic device. It connects the various parts of the entire device through various interfaces and lines, and performs the functions of the electronic device and/or processes data by running or executing the software programs and/or modules stored in the memory 802 and invoking the data stored in the memory. The processor may consist of integrated circuits (ICs), for example a single packaged IC, or of several packaged ICs with the same or different functions connected together. For example, the processor 801 may include only a central processing unit (CPU). In the embodiments of the application, the CPU may have a single computing core or multiple computing cores.
The memory 802 is configured to store instructions for execution by the processor 801. The memory 802 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
When the instructions stored in the memory 802 are executed by the processor 801, the electronic device 800 is enabled to perform some or all of the steps of the method embodiments described above.
Corresponding to the above embodiments, an embodiment of the present application further provides a computer readable storage medium. The computer readable storage medium may store a program, and when the program runs, the device in which the computer readable storage medium is located may be controlled to execute some or all of the steps in the above method embodiments. In particular, the computer readable storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
Corresponding to the above embodiments, the present application also provides a computer program product comprising executable instructions which, when executed on a computer, cause the computer to perform some or all of the steps of the above method embodiments.
Those of ordinary skill in the art will appreciate that the various elements and algorithm steps described in the embodiments disclosed herein can be implemented in electronic hardware, in computer software, or in a combination of the two. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The foregoing is merely exemplary embodiments of the present application. Any changes or substitutions that a person skilled in the art could readily conceive of within the technical scope disclosed by the present application shall fall within the protection scope of the present application. The protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method for carrying out low-frequency information continuation in implicit space, comprising:
acquiring training data, wherein the training data comprises input data and label data, the input data comprises input seismic data containing low-frequency information and input seismic data without low-frequency information, and the label data comprises label seismic data containing low-frequency information and label seismic data without low-frequency information;
constructing a deep learning model, the deep learning model comprising a self-encoder and a lightweight fully-connected neural network, wherein the self-encoder comprises an encoder and a decoder; the encoder encodes the input data into hidden code feature vectors in an implicit space, the hidden code feature vectors comprise the main features of the input data, and the dimensions of the hidden code feature vectors are lower than those of the input data; the decoder decodes the hidden code feature vectors into output data;
inputting the input data into the deep learning model for training to obtain a trained deep learning model;
and inputting the seismic data to be extended without low-frequency information into the trained deep learning model to obtain the extended seismic data containing low-frequency information.
2. The method of claim 1, wherein the encoder encodes the input seismic data containing low-frequency information into hidden code feature vectors containing low-frequency information in the implicit space, and encodes the input seismic data without low-frequency information into hidden code feature vectors without low-frequency information; the lightweight fully-connected neural network converts the hidden code feature vectors without low-frequency information into hidden code feature vectors containing low-frequency information; the decoder decodes the hidden code feature vectors containing low-frequency information into output seismic data containing low-frequency information, and decodes the hidden code feature vectors without low-frequency information into output seismic data without low-frequency information; and the output data comprises the output seismic data containing low-frequency information and the output seismic data without low-frequency information.
3. The method of claim 1, wherein inputting the input data into the deep learning model for training comprises:
step 1, inputting the input data into the encoder, encoding the input data into hidden code feature vectors in an implicit space by the encoder, inputting the hidden code feature vectors into the decoder, and decoding the hidden code feature vectors into output data by the decoder;
step 2, constructing an objective function to calculate the data error between the tag data and the output data;
step 3, judging whether the objective function meets a preset condition; if not, executing step 4, and if so, completing the training of the self-encoder to obtain the trained self-encoder;
step 4, back-propagating the data error to the self-encoder, updating the weight values of the self-encoder neural network, and continuing to execute steps 1 to 3;
the input data is input seismic data containing low-frequency information or input seismic data without low-frequency information;
when the input data is input seismic data containing low-frequency information, the tag data is tag seismic data containing low-frequency information, and the output data is output seismic data containing low-frequency information;
when the input data is input seismic data without low-frequency information, the tag data is tag seismic data without low-frequency information, and the output data is output seismic data without low-frequency information;
after obtaining the trained self-encoder, executing steps 5 to 8;
step 5, inputting the input seismic data without low-frequency information into the encoder, encoding the input seismic data without low-frequency information into hidden code feature vectors without low-frequency information in the implicit space by the encoder, inputting the hidden code feature vectors without low-frequency information into the lightweight fully-connected neural network, converting the hidden code feature vectors without low-frequency information into hidden code feature vectors containing low-frequency information by the lightweight fully-connected neural network, inputting the hidden code feature vectors containing low-frequency information into the decoder, and decoding the hidden code feature vectors containing low-frequency information into output data by the decoder, wherein the output data is the output seismic data containing low-frequency information;
step 6, constructing an objective function to calculate the data error between the tag data and the output data, wherein the tag data is the tag seismic data containing low-frequency information;
step 7, judging whether the objective function meets a preset condition; if not, executing step 8, and if so, completing the training of the deep learning model to obtain the trained deep learning model;
step 8, back-propagating the data error to the lightweight fully-connected neural network, updating the weight values of the lightweight fully-connected neural network, and continuing to execute steps 5 to 7.
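A rough sketch of the two-stage procedure of claim 3 above (steps 1 to 4 train the self-encoder; steps 5 to 8 then train only the lightweight mapping network) is given below. The optimizer, loss function, data-loader interface and epoch count are assumptions, not details from the claim.

    # Rough sketch of the two-stage training of claim 3; optimizer, loss and
    # loader interfaces are assumptions.
    import torch
    import torch.nn as nn

    mse = nn.MSELoss()

    def train_two_stage(encoder, decoder, mapper, loader_ae, loader_lf, epochs=10):
        # Stage 1 (steps 1-4): train the self-encoder to reconstruct its tag data;
        # each batch is a (noisy input trace, tag trace) pair of either kind.
        opt_ae = torch.optim.Adam(
            list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
        for _ in range(epochs):
            for x, y in loader_ae:
                loss = mse(decoder(encoder(x)), y)
                opt_ae.zero_grad()
                loss.backward()
                opt_ae.step()

        # Stage 2 (steps 5-8): freeze the self-encoder and train only the mapper
        # on (input without low frequencies, tag containing low frequencies) pairs.
        for p in list(encoder.parameters()) + list(decoder.parameters()):
            p.requires_grad_(False)
        opt_map = torch.optim.Adam(mapper.parameters(), lr=1e-3)
        for _ in range(epochs):
            for x_no_lf, y_lf in loader_lf:
                loss = mse(decoder(mapper(encoder(x_no_lf))), y_lf)
                opt_map.zero_grad()
                loss.backward()
                opt_map.step()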
4. The method of claim 1, wherein inputting the input data into the deep learning model for training comprises:
step 1, judging whether the input data is the input seismic data containing low-frequency information or the input seismic data without low-frequency information; if the input data is the seismic data containing low-frequency information, executing steps 2 to 5; if the input data is the seismic data without low-frequency information, executing steps 6 to 12;
step 2, inputting the input seismic data containing low-frequency information into the encoder, encoding the input seismic data containing low-frequency information into hidden code feature vectors containing low-frequency information in an implicit space by the encoder, inputting the hidden code feature vectors containing low-frequency information into the decoder, and decoding the hidden code feature vectors containing low-frequency information into output data by the decoder, wherein the output data is the output seismic data containing low-frequency information;
step 3, constructing an objective function to calculate the data errors of the tag data and the output data, wherein the tag data is tag seismic data containing low-frequency information;
step 4, judging whether the objective function meets a preset condition; if not, executing step 5, and if so, completing the training of the deep learning model to obtain the trained deep learning model;
step 5, back-propagating the data error to the self-encoder, updating the weight values of the self-encoder neural network, and continuing to execute step 1;
step 6, inputting the input seismic data without low frequency information into the encoder, encoding the input seismic data without low frequency information into hidden code feature vectors without low frequency information in an implicit space by the encoder, inputting the hidden code feature vectors without low frequency information into the decoder, and decoding the hidden code feature vectors without low frequency information into output data by the decoder, wherein the output data is the output seismic data without low frequency information;
step 7, constructing an objective function to calculate the data errors of the tag data and the output data, wherein the tag data is tag seismic data without low-frequency information;
step 8, judging whether the objective function meets a preset condition; if not, back-propagating the data error to the self-encoder, updating the weight values of the self-encoder neural network, and continuing to execute steps 9 to 12; if so, completing the training of the deep learning model to obtain the trained deep learning model;
step 9, inputting the hidden code feature vector without low-frequency information obtained in step 6 into the lightweight fully-connected neural network, converting the hidden code feature vector without low-frequency information into a hidden code feature vector containing low-frequency information by the lightweight fully-connected neural network, inputting the hidden code feature vector containing low-frequency information into the decoder, and decoding the hidden code feature vector containing low-frequency information into output data by the decoder, wherein the output data is output seismic data containing low-frequency information;
step 10, constructing an objective function to calculate the data errors of the tag data and the output data, wherein the tag data is tag seismic data containing low-frequency information;
step 11, judging whether the objective function meets the preset condition, if not, executing step 12, and if so, completing training of the deep learning model to obtain a trained deep learning model;
and step 12, back-propagating the data error to the self-encoder and the lightweight fully-connected neural network, updating the weight values of the self-encoder neural network and the lightweight fully-connected neural network, and continuing to execute step 1.
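For orientation only, the interleaved scheme of claim 4 above can be condensed into the following sketch, which collapses the separate convergence checks of the individual steps into per-batch updates; the data layout, loss and optimizer are assumptions.

    # Condensed sketch of the interleaved updates of claim 4 (stopping criteria omitted);
    # data layout, loss and optimizer are assumptions.
    import torch
    import torch.nn as nn

    mse = nn.MSELoss()

    def train_interleaved(encoder, decoder, mapper, loader, epochs=10):
        params = (list(encoder.parameters()) + list(decoder.parameters())
                  + list(mapper.parameters()))
        opt = torch.optim.Adam(params, lr=1e-3)
        for _ in range(epochs):
            # each batch is assumed to contain only one kind of input, flagged by has_lf
            for x, y_lf, y_no_lf, has_lf in loader:
                if has_lf:
                    # steps 2-5: reconstruct input that already contains low frequencies
                    loss = mse(decoder(encoder(x)), y_lf)
                else:
                    # steps 6-8: reconstruct the input without low frequencies ...
                    z = encoder(x)
                    loss = mse(decoder(z), y_no_lf)
                    # steps 9-12: ... then map the same hidden code to a low-frequency tag
                    loss = loss + mse(decoder(mapper(z)), y_lf)
                opt.zero_grad()
                loss.backward()
                opt.step()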
5. The method of claim 1, wherein inputting the input data into the deep learning model for training comprises:
step 1, judging whether the input data is the input seismic data containing low-frequency information or the input seismic data without low-frequency information; if the input data is the seismic data containing low-frequency information, executing steps 2 to 5; if the input data is the seismic data without low-frequency information, executing steps 6 to 12;
step 2, inputting the input seismic data containing low-frequency information into the encoder, encoding the input seismic data containing low-frequency information into hidden code feature vectors containing low-frequency information in an implicit space by the encoder, inputting the hidden code feature vectors containing low-frequency information into the decoder, and decoding the hidden code feature vectors containing low-frequency information into output data by the decoder, wherein the output data is the output seismic data containing low-frequency information;
step 3, constructing an objective function to calculate the data errors of the tag data and the output data, wherein the tag data is tag seismic data containing low-frequency information;
step 4, judging whether the objective function meets the preset condition, if not, executing step 5, and if so, completing training of the deep learning model to obtain a trained deep learning model;
step 5, back-propagating the data error to the self-encoder, updating the weight values of the self-encoder neural network, and continuing to execute step 1;
step 6, inputting the input seismic data without low-frequency information into the encoder, encoding the input seismic data without low-frequency information into hidden code feature vectors without low-frequency information in the implicit space by the encoder, inputting the hidden code feature vectors without low-frequency information into the lightweight fully-connected neural network, converting the hidden code feature vectors without low-frequency information into hidden code feature vectors containing low-frequency information by the lightweight fully-connected neural network, inputting the hidden code feature vectors containing low-frequency information into the decoder, and decoding the hidden code feature vectors containing low-frequency information into output data by the decoder, wherein the output data is output seismic data containing low-frequency information;
step 7, constructing an objective function to calculate the data errors of the tag data and the output data, wherein the tag data is tag seismic data containing low-frequency information;
step 8, judging whether the objective function meets a preset condition; if not, back-propagating the data error to the self-encoder and the lightweight fully-connected neural network, updating the weight values of the self-encoder neural network and the lightweight fully-connected neural network, and continuing to execute steps 9 to 12; if so, completing the training of the deep learning model to obtain the trained deep learning model;
step 9, inputting the hidden code feature vector without low-frequency information obtained in step 6 into the decoder, and decoding the hidden code feature vector without low-frequency information into output data by the decoder, wherein the output data is output seismic data without low-frequency information;
step 10, constructing an objective function to calculate the data errors of the tag data and the output data, wherein the tag data is tag seismic data without low-frequency information;
step 11, judging whether the objective function meets the preset condition, if not, executing step 12, and if so, completing training of the deep learning model to obtain a trained deep learning model;
and step 12, back-propagating the data error to the self-encoder, updating the weight values of the self-encoder neural network, and continuing to execute step 1.
6. The method of claim 1, wherein the acquiring training data comprises:
performing one-dimensional Fourier transform on a plurality of single shot seismic data acquired by seismic exploration to obtain the seismic data containing low-frequency information, wherein the seismic data containing low-frequency information are respectively used as the input seismic data containing low-frequency information and the tag seismic data containing low-frequency information;
filtering the seismic data containing low-frequency information to obtain the seismic data without low-frequency information, and taking the seismic data without low-frequency information as the input seismic data without low-frequency information and the tag seismic data without low-frequency information, respectively.
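A minimal sketch of the training-pair preparation described in claim 6 above is given below, using NumPy; the sampling rate and the cutoff frequency below which energy is removed are assumptions.

    # Illustrative sketch of building a training pair per claim 6; fs and cutoff_hz
    # are assumptions.
    import numpy as np

    def prepare_pair(trace, fs=500.0, cutoff_hz=5.0):
        """Return (trace containing low-frequency information,
                   the same trace with the band below cutoff_hz removed)."""
        spec = np.fft.rfft(trace)                             # one-dimensional Fourier transform
        freqs = np.fft.rfftfreq(trace.size, d=1.0 / fs)
        spec_no_lf = np.where(freqs < cutoff_hz, 0.0, spec)   # filter out the low frequencies
        trace_no_lf = np.fft.irfft(spec_no_lf, n=trace.size)
        return trace, trace_no_lf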
7. The method according to claim 6, wherein the seismic data containing low-frequency information and the seismic data without low-frequency information are normalized respectively to obtain normalized seismic data containing low-frequency information and normalized seismic data without low-frequency information; the normalized seismic data containing low-frequency information are used as the input seismic data containing low-frequency information and the tag seismic data containing low-frequency information, respectively; and the normalized seismic data without low-frequency information are used as the input seismic data without low-frequency information and the tag seismic data without low-frequency information, respectively.
8. The method according to claim 1, wherein Gaussian noise is added to the tag data to obtain Gaussian-noise-added data, and the Gaussian-noise-added data is used as the input data.
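The normalization of claim 7 and the Gaussian-noise injection of claim 8 could look roughly like the following sketch; the peak-normalization scheme and the noise level are assumptions.

    # Rough sketch for claims 7-8; the normalization scheme and noise_std are assumptions.
    import numpy as np

    def normalize(trace):
        """Scale a trace so its maximum absolute amplitude is 1."""
        peak = np.max(np.abs(trace))
        return trace / peak if peak > 0 else trace

    def make_input_from_tag(tag_trace, noise_std=0.01, seed=0):
        """Input data = tag data + Gaussian noise (denoising-autoencoder style, claim 8)."""
        rng = np.random.default_rng(seed)
        return tag_trace + rng.normal(0.0, noise_std, size=tag_trace.shape)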
9. An apparatus for carrying out low-frequency information continuation in implicit space, comprising:
The training data acquisition module is used for acquiring training data; the training data comprises input data and label data, wherein the input data comprises input seismic data containing low-frequency information and input seismic data without low-frequency information, and the label data comprises label seismic data containing low-frequency information and label seismic data without low-frequency information;
the deep learning model construction module is used for constructing a deep learning model; the deep learning model comprises a self-encoder and a lightweight fully-connected neural network, wherein the self-encoder comprises an encoder and a decoder; the encoder encodes the input data into hidden code feature vectors in an implicit space, the hidden code feature vectors comprise the main features of the input data, and the dimensions of the hidden code feature vectors are lower than those of the input data; and the decoder decodes the hidden code feature vectors into output data;
the training module is used for inputting the input data into the deep learning model for training to obtain a trained deep learning model;
the low-frequency information continuation module is used for inputting the seismic data without low-frequency information to be extended into the trained deep learning model to obtain the extended seismic data containing low-frequency information.
10. An electronic device for carrying out low-frequency information continuation in implicit space, comprising:
a processor;
a memory;
and a computer program, wherein the computer program is stored in the memory, the computer program comprising instructions which, when executed by the processor, cause the electronic device to perform the steps of the method of any of claims 1 to 8.
CN202310876134.9A 2023-07-18 2023-07-18 Method and device for carrying out low-frequency information continuation in implicit space and electronic equipment Active CN116610937B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310876134.9A CN116610937B (en) 2023-07-18 2023-07-18 Method and device for carrying out low-frequency information continuation in implicit space and electronic equipment

Publications (2)

Publication Number Publication Date
CN116610937A true CN116610937A (en) 2023-08-18
CN116610937B CN116610937B (en) 2023-09-22

Family

ID=87678549

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310876134.9A Active CN116610937B (en) 2023-07-18 2023-07-18 Method and device for carrying out low-frequency information continuation in implicit space and electronic equipment

Country Status (1)

Country Link
CN (1) CN116610937B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210215841A1 (en) * 2020-01-10 2021-07-15 Exxonmobil Upstream Research Company Bandwith Extension of Geophysical Data
CN112415583A (en) * 2020-11-06 2021-02-26 中国科学院精密测量科学与技术创新研究院 Seismic data reconstruction method and device, electronic equipment and readable storage medium
WO2022140717A1 (en) * 2020-12-21 2022-06-30 Exxonmobil Upstream Research Company Seismic embeddings for detecting subsurface hydrocarbon presence and geological features
CN116047583A (en) * 2021-10-27 2023-05-02 中国石油化工股份有限公司 Adaptive wave impedance inversion method and system based on depth convolution neural network
CN114117906A (en) * 2021-11-19 2022-03-01 山东大学 Multi-scale unsupervised seismic wave velocity inversion method based on observation data self-encoding
WO2023087451A1 (en) * 2021-11-19 2023-05-25 山东大学 Observation data self-encoding-based multi-scale unsupervised seismic wave velocity inversion method
CN114545494A (en) * 2022-01-21 2022-05-27 中国地质大学(武汉) Non-supervision seismic data reconstruction method and device based on sparse constraint
CN115587614A (en) * 2022-10-18 2023-01-10 中国海洋大学 Implicit full-waveform inversion method and device and electronic equipment
CN116011338A (en) * 2023-02-01 2023-04-25 北京华电力拓能源科技有限公司 Full waveform inversion method based on self-encoder and deep neural network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
张攀龙; 李尧; 张田涛; 岳景杭; 董锐; 曹帅; 张庆松: "Research on seismic data denoising based on U-Net deep neural network", Metal Mine (金属矿山), no. 01 *
李媛媛; 李振春: "Multi-scale elastic wave full waveform inversion in the frequency domain", Geophysical Prospecting for Petroleum (石油物探), no. 03 *
李录明; 罗省贤: "Near-surface model correction by wavefield continuation", Oil Geophysical Prospecting (石油地球物理勘探), no. 05 *

Also Published As

Publication number Publication date
CN116610937B (en) 2023-09-22

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant