CN113625336A - Seismic wave impedance thin layer inversion method based on full convolution neural network - Google Patents

Seismic wave impedance thin layer inversion method based on full convolution neural network

Info

Publication number
CN113625336A
Authority
CN
China
Prior art keywords
wave impedance
seismic
neural network
inversion
convolution neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110809060.8A
Other languages
Chinese (zh)
Other versions
CN113625336B (en)
Inventor
许辉群
王泽峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yangtze University
Original Assignee
Yangtze University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yangtze University filed Critical Yangtze University
Priority to CN202110809060.8A
Publication of CN113625336A
Application granted
Publication of CN113625336B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01V GEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V1/00 Seismology; Seismic or acoustic prospecting or detecting
    • G01V1/28 Processing seismic data, e.g. analysis, for interpretation, for correction
    • G01V1/282 Application of seismic models, synthetic seismograms
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01V GEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V1/00 Seismology; Seismic or acoustic prospecting or detecting
    • G01V1/28 Processing seismic data, e.g. analysis, for interpretation, for correction
    • G01V1/30 Analysis
    • G01V1/306 Analysis for determining physical properties of the subsurface, e.g. impedance, porosity or attenuation profiles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Remote Sensing (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Acoustics & Sound (AREA)
  • Environmental & Geological Engineering (AREA)
  • Geology (AREA)
  • General Life Sciences & Earth Sciences (AREA)
  • Geophysics (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Geophysics And Detection Of Objects (AREA)

Abstract

The invention relates to the technical field of reservoir prediction in seismic exploration, and in particular to a seismic wave impedance thin-layer inversion method based on a full convolution neural network. The method trains a model on seismic sample pairs, inputs the seismic data to be predicted into the model, and outputs the predicted wave impedance inversion result. The full convolution neural network performs nonlinear modeling from the known seismic data, and zero padding keeps every output layer the same size as the input layer as it is passed through the network, which improves resolution and provides a new, intelligent method for seismic wave impedance thin-layer prediction.

Description

Seismic wave impedance thin layer inversion method based on full convolution neural network
Technical Field
The invention relates to the technical field of reservoir prediction in seismic exploration, in particular to a seismic wave impedance thin-layer inversion method based on a full convolution neural network.
Background
The statement that 'wave impedance inversion is the ultimate expression of high-resolution seismic data processing' shows that seismic wave impedance inversion holds a special and very important position in seismic exploration technology. Wave impedance is a composite parameter closely related to formation velocity and density; it is also closely related to formation lithology and corresponds well to oil-bearing reservoirs. Wave impedance inversion is one kind of seismic lithological inversion, and an inversion method is a process of using existing knowledge to build a seismic inversion model, so seismic wave impedance inversion is one of the most important and effective means of reservoir prediction.
Seismic wave impedance inversion falls into two broad categories: narrow-band inversion and broadband inversion. The former is the early, traditional approach, in which the wave impedance is inverted directly from the seismic record: a reflection coefficient series is obtained by inverse filtering and the wave impedance is then calculated from it. Because the seismic record is band-limited (i.e. narrow-band), the wave impedance inversion result is limited by the frequency bandwidth of the seismic record, so this class of methods is called narrow-band inversion. Model-based seismic wave impedance inversion methods have been developed in recent years. Because the model can be set freely, its bandwidth is in principle unlimited and is not restricted by the frequency bandwidth of the seismic record; the seismic record serves only as the target to which the forward-modeled theoretical record of the model is fitted, so this type of inversion is called broadband inversion. Model-based inversion is, however, non-unique: the forward-modeled theoretical records of different models can all fit the actual seismic record well. To eliminate this ambiguity, various constraint conditions are usually imposed, so broadband inversion is in practice broadband constrained inversion, which is also the most important inversion technique in industry today.
As formations become increasingly complex and thin layers appear more often, conventional wave impedance inversion can no longer fully meet production requirements for thin-layer prediction, owing to its low processing efficiency, high processing cost and limited precision. Researchers in the field have therefore studied seismic wave impedance inversion in greater depth, and new methods and technologies are continually being applied to it, promoting the further development of seismic inversion and producing a succession of results. A full convolution neural network carries out nonlinear modeling and optimization using known knowledge, a process very similar to that of an inversion method, and it shows many advantages in seismic wave impedance thin-layer inversion. On this basis, a seismic wave impedance thin-layer inversion method based on a full convolution neural network is proposed and tested on theoretical model seismic data obtained by forward modeling; a high-precision inversion result is finally obtained, improving thin-layer prediction and reservoir prediction, demonstrating the feasibility and effectiveness of the method, and being of great importance for thin-layer prediction in seismic exploration.
Disclosure of Invention
In view of the above problems, the invention provides a seismic wave impedance thin-layer inversion method based on a full convolution neural network, which is used to predict thin layers and improve the reservoir prediction effect.
In order to achieve this purpose, the invention adopts the following technical solution:
a seismic wave impedance thin layer inversion method based on a full convolution neural network is characterized by comprising the following steps:
s1: extracting seismic model data and corresponding wave impedance from forward data, and taking out a part as a sample pair;
s2: constructing a full convolution neural network, and giving a group of hyper-parameters;
the full convolution neural network comprises three cycles of a convolution layer, a dropout layer, an activation layer, a pooling layer and a batch normalization layer, followed by one deconvolution layer and an activation layer;
s3: carrying out normalization and random sampling preprocessing on the sample pairs and then inputting the preprocessed sample pairs into a full convolution neural network;
s4: the full convolution neural network is trained for a set number of epochs to obtain the training time, the loss value and a trained inversion model;
s5: inputting the seismic data on which wave impedance inversion is to be performed into the trained inversion model, and outputting the predicted wave impedance inversion result; comparing the predicted wave impedance with the original wave impedance while examining the loss value and training time of the trained inversion model, which together reflect the quality of the obtained inversion model; if the model is good, the inversion model is saved; otherwise, the method returns to step S2 to adjust the parameters and retrain.
Further, the hyper-parameters include the number of training epochs, the batch size, the learning rate, the dropout rate, the weight decay coefficient, the optimizer momentum and the convolution kernel size.
Further, among the above hyper-parameters, the learning rate range is [1e-3, 1e-2]; the adjustment range of the dropout rate is [0.1, 0.5]; and the adjustment range of the weight decay coefficient is [0, 1e-4].
Further, in step S3, the normalization formula is:
x* = (x - min) / (max - min)
where x* is the normalized value, x is the original data, max is the maximum of the original data and min is the minimum of the original data.
Further, in step S4, the loss function used is the MSE loss function, with the specific formula:
L = (1/n) Σᵢ (yᵢ - ŷᵢ)²
where L is the loss value, n is the number of samples, yᵢ is the predicted value and ŷᵢ is the label value.
Further, in step S5, the specific prediction process is as follows:
firstly, the trained inversion model is loaded, the seismic data of one line is converted into a three-dimensional tensor and input into the trained inversion model, the corresponding wave impedance data is predicted, and the result is plotted to obtain a section of predicted wave impedance;
then, the predicted wave impedance is compared with the original wave impedance; if the variation of formation thickness is well reflected and the precision is high, the model and the loss value are saved; if the variation of formation thickness is not well reflected or the precision is low, the parameters are adjusted and the network is retrained to obtain a new model.
Furthermore, the depth of the full convolution neural network is related to the size of the input seismic sample, and the network structure can be changed according to that size; the sample size is determined by the number of sampling points, the sampling interval and the number of traces, and when the data volume is large, more network layers such as convolution layers and deconvolution layers can be added.
The invention has the beneficial effects that:
the known seismic data nonlinear modeling is utilized through the full convolution neural network, and the full convolution neural network is utilized to use zero filling to enable each output layer to be continuously transmitted in the same size as the input layer, so that the effect of improving the resolution ratio is achieved, and an intelligent new method is provided for seismic wave impedance thin layer prediction.
Drawings
FIG. 1 is a schematic diagram of a seismic wave impedance thin layer inversion process for a full convolution neural network.
FIG. 2 is a schematic diagram of a full convolution neural network structure.
FIG. 3a shows the input 10th-30th sample pair in the embodiment of the present invention.
FIG. 3b shows the input 50th-70th sample pair in the embodiment of the present invention.
FIG. 3c shows the input 90th-110th sample pair in the embodiment of the present invention.
FIG. 3d shows the input 130th-150th sample pair in the embodiment of the present invention.
Fig. 4 is a schematic diagram of an actual network training structure in the embodiment of the present invention.
FIG. 5 is the loss value variation curve in an embodiment of the present invention.
FIG. 6 shows the results of preliminary tests in an embodiment of the present invention.
Fig. 7 is a tag wave impedance profile in an embodiment of the invention.
FIG. 8 is a profile of the seismic data to be predicted in an embodiment of the invention.
FIG. 9 shows the result of high-precision wave impedance inversion in an embodiment of the present invention.
Detailed Description
The present invention will be described in detail below with reference to model data obtained by forward modeling; however, the invention is not limited to the described embodiment, which is given only by way of example. The advantages of the invention will be apparent and readily appreciated from the description.
As shown in fig. 1, a method for seismic wave impedance thin layer inversion based on a full convolution neural network of this embodiment includes the following steps:
s1: extracting seismic model data and corresponding wave impedance from forward data, and taking out a part as a sample pair;
Since the forward-modeled profiles are stored as arrays, part of the forward-modeled seismic model data and the corresponding wave impedance is taken out of the section by a slicing operation in python and used as sample pairs.
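A minimal sketch of this slicing step is given below, assuming the forward-modeled section and its wave impedance are stored as NumPy arrays of shape (traces, samples); the file names and the trace range chosen for the sample pair are illustrative, not taken from the patent.

```python
# Sketch of step S1: take part of the forward-modeled section and the
# corresponding wave impedance out as a sample pair by array slicing.
import numpy as np

seismic_section = np.load("forward_seismic.npy")       # hypothetical forward-modeled seismic data
impedance_section = np.load("forward_impedance.npy")   # hypothetical corresponding wave impedance

# take a block of traces out of the section, e.g. traces 10-30, as one sample pair
sample_seismic = seismic_section[10:30, :]
sample_impedance = impedance_section[10:30, :]
```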
S2: constructing a full convolution neural network, and giving a group of hyper-parameters;
As shown in fig. 2, the full convolution neural network is constructed according to the network structure diagram: it is built up in sequence from three cycles of a convolution layer, a dropout layer, an activation layer, a pooling layer and a BN (batch normalization) layer, followed finally by one deconvolution layer and an activation layer.
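A minimal PyTorch sketch of such a network is given below. The patent specifies only the layer order (three cycles of convolution, dropout, activation, pooling and batch normalization, followed by one deconvolution layer and an activation layer); the channel counts, kernel sizes, the 2-D treatment of the (trace, sample) section and the final interpolation back to the input size are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImpedanceFCN(nn.Module):
    """Sketch of the full convolution network: three conv blocks, then one deconvolution."""
    def __init__(self, channels=(16, 32, 64), dropout=0.3):
        super().__init__()
        blocks, in_ch = [], 1
        for out_ch in channels:                                      # three cycles
            blocks += [
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),  # zero padding keeps the size
                nn.Dropout2d(dropout),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2, ceil_mode=True),
                nn.BatchNorm2d(out_ch),
            ]
            in_ch = out_ch
        self.encoder = nn.Sequential(*blocks)
        self.deconv = nn.ConvTranspose2d(in_ch, 1, kernel_size=2, stride=2)  # up-sampling deconvolution
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):                      # x: (batch, 1, traces, samples)
        size = x.shape[-2:]
        y = self.act(self.deconv(self.encoder(x)))
        # restore the output to the same size as the input seismic section
        return F.interpolate(y, size=size, mode="bilinear", align_corners=False)

# e.g. a section of 201 traces with 66 samples per trace
net = ImpedanceFCN()
out = net(torch.randn(1, 1, 201, 66))          # out.shape == (1, 1, 201, 66)
```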
The hyper-parameters comprise the number of training epochs, the batch size, the learning rate, the dropout rate, the weight decay coefficient, the optimizer momentum and the convolution kernel size.
As the number of training epochs increases, the number of weight updates in the neural network also increases and the fit moves from under-fitting towards fitting; the number should not be too large, however, or the model moves from fitting to over-fitting. A batch size that is too large or too small easily reduces the effective capacity, and the batch size must also be chosen according to the hardware capacity. A learning rate that is too high or too low prevents effective optimization and reduces the effective capacity of the model; a recommended range, used here, is [1e-3, 1e-2]. The dropout rate controls how many parameters are discarded: discarding fewer parameters increases the number of model parameters, the co-adaptation between parameters and the capacity of the model, but does not necessarily improve its effective capacity; the adjustment range is generally [0.1, 0.5]. The weight decay coefficient effectively limits the range over which the parameters vary and plays a certain regularizing role; the adjustment range is generally [0, 1e-4]. The optimizer momentum is used to speed up training and to avoid falling into a locally optimal solution. For the convolution kernel size, a larger kernel increases the model capacity; commonly used convolution kernel sizes are 7×7, 5×5, 3×3, 1×1, 7×1 and 1×7.
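One illustrative combination within the ranges just described is shown below; the concrete values are assumptions used for the sketches in this description, not values prescribed by the patent.

```python
# Hypothetical hyper-parameter set, chosen inside the ranges given above.
hyperparams = {
    "epochs": 200,            # number of training epochs
    "batch_size": 4,          # chosen according to hardware capacity
    "learning_rate": 1e-3,    # within [1e-3, 1e-2]
    "dropout": 0.3,           # within [0.1, 0.5]
    "weight_decay": 1e-4,     # within [0, 1e-4]
    "momentum": 0.9,          # optimizer momentum
    "kernel_size": 3,         # common choices: 7x7, 5x5, 3x3, 1x1, 7x1, 1x7
}
```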
S3: carrying out normalization and random sampling preprocessing on sample data, and inputting the sample data into a full convolution neural network;
the samples are normalized, so that the network can be fitted better and faster; by scrambling the random sampling, the random error can be reduced, and each sample can be obtained with the same probability as possible.
The specific formula of normalization is:
x* = (x - min) / (max - min)
where x* is the normalized value, x is the original data, max is the maximum of the original data and min is the minimum of the original data.
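A small sketch of this preprocessing is given below, assuming the sample pairs are held in NumPy arrays; the function names are illustrative.

```python
import numpy as np

def minmax_normalize(x):
    """Scale an array to [0, 1] using the min-max formula above."""
    return (x - x.min()) / (x.max() - x.min())

def shuffle_pairs(seismic, impedance, seed=0):
    """Shuffle seismic/impedance sample pairs jointly so each pair is drawn with equal probability."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(seismic))
    return seismic[idx], impedance[idx]
```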
S4: the full convolution neural network is trained for a set number of epochs to obtain the training time, the loss value and a trained inversion model.
As shown in fig. 4, the samples pass through the built network layers in sequence, the output of each layer serving as the input of the next, until all layers have been executed. The loss value is calculated from the error (the difference between the true and predicted wave impedance values) produced during training and validation, and the time required for each run is obtained as the difference between consecutive time stamps. The loss function used here is the MSE loss function, with the specific formula:
L = (1/n) Σᵢ (yᵢ - ŷᵢ)²
where L is the loss value, n is the number of samples, yᵢ is the predicted value and ŷᵢ is the label value. As the gradient descends, the loss value decreases, so the inversion mapping result gradually stabilizes and finally converges.
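A minimal training-loop sketch for step S4 is given below, using the MSE loss above; the use of SGD with momentum and the specific settings are assumptions (the patent only names a learning rate, weight decay and optimizer momentum among its hyper-parameters).

```python
import time
import torch

def train(model, loader, epochs=200, lr=1e-3, weight_decay=1e-4, momentum=0.9):
    """Train for a number of epochs and record (mean loss, elapsed time) per epoch."""
    opt = torch.optim.SGD(model.parameters(), lr=lr,
                          momentum=momentum, weight_decay=weight_decay)
    loss_fn = torch.nn.MSELoss()
    history = []
    for _ in range(epochs):
        t0, epoch_loss = time.time(), 0.0
        for seismic, impedance in loader:      # preprocessed sample pairs as tensors
            opt.zero_grad()
            loss = loss_fn(model(seismic), impedance)
            loss.backward()
            opt.step()
            epoch_loss += loss.item()
        history.append((epoch_loss / len(loader), time.time() - t0))
    return history
```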
S5: inputting the seismic data on which wave impedance inversion is to be performed into the trained inversion model, and outputting the predicted wave impedance inversion result. The predicted wave impedance is compared with the original wave impedance while the loss value and training time of the inversion model are examined, and together these reflect the quality of the obtained inversion model. If the model is good, it is saved; otherwise, the method returns to step S2 to adjust the parameters and retrain. The parameters are adjusted according to whether the loss value fluctuates smoothly and whether the inversion result accurately predicts differences in formation thickness, so as to meet the expected requirements; the specific values are adjusted within the hyper-parameter ranges described in step S2.
In the above step S5, the specific prediction process is as follows:
Let x be the seismic data of one line, with 201 traces, 66 sampling points per trace and a 2 ms sampling interval. First, the trained model is loaded, x is converted into a three-dimensional tensor of shape (1, 201, 66) and input into the model, the corresponding wave impedance data of shape (1, 201, 66) is predicted, and the result is plotted to obtain a section of predicted wave impedance. The predicted wave impedance is then compared with the original wave impedance: if the variation of formation thickness is well reflected and the precision is high, the model and the loss value are saved; if not, the parameters are adjusted and the network is retrained to obtain a new model.
In the above step S5, quality is controlled by satisfying the following two conditions simultaneously: first, proper setting of the hyper-parameters (values given for each hyper-parameter, such as the learning rate, as described above); second, fidelity of the input data (the data source must be true and reliable).
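As an illustration of the prediction step just described, a sketch is given below; it assumes the ImpedanceFCN sketch shown earlier (with an extra channel dimension added to suit its 2-D convolutions), and the file names, checkpoint name and colour map are hypothetical.

```python
import numpy as np
import torch
import matplotlib.pyplot as plt

line = np.load("line_seismic.npy")                      # hypothetical line: 201 traces x 66 samples
x = torch.from_numpy(line).float().view(1, 1, 201, 66)  # patent uses a (1, 201, 66) tensor; a channel dim is added here

model = ImpedanceFCN()
model.load_state_dict(torch.load("trained_inversion_model.pt"))  # hypothetical saved checkpoint
model.eval()
with torch.no_grad():
    impedance = model(x).squeeze().numpy()              # predicted wave impedance section (201, 66)

plt.imshow(impedance.T, aspect="auto", cmap="jet")      # draw the predicted wave-impedance section
plt.savefig("predicted_impedance.png")
```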
The depth of the network used in this example is also related to the size of the input seismic sample, and the network structure can be modified according to that size: if the input seismic sample data are large, the network structure can be deepened; if the input sample data are not large, an overly deep network structure is not required.
The network structure is changed according to the size of the input seismic sample; the sample size is determined by the number of sampling points, the sampling interval and the number of traces, and when the data volume is large more network layers such as convolution layers and deconvolution layers can be added. In this case only 201 traces of data with 66 sampling points, a part of the theoretical model data sampled at 2 ms, are input, so the network training model built according to the network training structure diagram of fig. 4 is sufficient.
The full convolution neural network (FCN) classifies the input sample at the pixel level and thereby solves image segmentation at the semantic level. It differs from the classic convolutional neural network, in which the last layers use fully connected layers to obtain a fixed-length feature vector for classification: the full convolution neural network can accept input images of any size and uses a deconvolution layer to up-sample the feature map of the last convolution layer, restoring it to the same size as the input image and improving the resolution, so that a prediction can be made for every pixel while the spatial information of the original input image is preserved and the precision is improved; pixel-by-pixel classification is finally performed on the up-sampled feature map. In general, the full convolution neural network directly gives the classification of the region of the input sample corresponding to each output through the correspondence between the output result and the input sample, and dispenses with the sliding-window selection of candidate boxes used in traditional object detection.
The structure diagram of the full convolution neural network for seismic wave impedance thin-layer inversion, drawn with reference to the network structure of the full convolution neural network used for semantic segmentation, is shown in fig. 2. The diagram shows the input sample passing through each network layer in turn; the last layer is a deconvolution layer that performs up-sampling, so that the output wave impedance is restored to the same size as the input seismic data and the resolution is improved. This is the process of forward inference and backward learning.
In existing deep learning work, data sets are often too narrow, and many methods are developed only for specific data sets such as faces or handwriting font libraries. More and more researchers are applying deep learning to seismic wave impedance inversion, but in seismic exploration the quality of seismic data sources is uneven, and raw seismic data generally have defects and are easily contaminated by various factors, so preprocessing the seismic data is very important before the seismic data and the corresponding wave impedance labels are input into the neural network. On the one hand, preprocessing ensures the validity of the seismic data; on the other hand, adjusting the data format makes the seismic data conform better to the input format of the full convolution neural network. Preprocessed seismic data allow the network to converge faster and better during training.
The input samples are preprocessed by normalization and random sampling; under the same hyper-parameters, an inversion model trained on preprocessed seismic data is better, is less limited in practical scenarios, and is better suited to seismic data inversion under different observation conditions.
The full convolution neural network can accept sample input of any size and performs classification at the pixel level, which improves the resolution; it is therefore well suited to seismic wave impedance thin-layer inversion and can be applied to it effectively. The size of the input seismic sample is not fixed; actual seismic data are generally very large, so the required sample input can be constructed from the original seismic data, and the network structure can be changed according to the size of the input seismic sample. When the input data are large the network depth can be increased accordingly, and residual blocks and the like can be added to reduce the adverse effects of a deep network. Because the forward-modeled data used here are not very large, a particularly deep network is not required. The sample pairs used as input were constructed manually: four pairs in total, each pair consisting of 20 traces with 66 sampling points and a 2 ms sampling interval. The input sample pairs are shown in figs. 3a to 3d.
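Under the assumption that the section arrays from the step-S1 sketch are available, the four 20-trace sample pairs shown in figs. 3a to 3d could be taken out as follows; the exact starting trace indices are illustrative.

```python
# Build four input sample pairs of 20 traces x 66 samples each,
# following the trace ranges shown in figs. 3a-3d.
start_traces = (10, 50, 90, 130)
sample_pairs = [(seismic_section[i:i + 20, :], impedance_section[i:i + 20, :])
                for i in start_traces]
```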
From the network structure diagram and the sample pairs, a network training flow for training the data to obtain an inversion model is established. In this flow the input seismic sample pairs are trained through three cycles of a convolution layer, a dropout layer, a relu activation layer, a pooling layer and a Batch Normalization layer, followed by the final deconvolution layer, and a trained inversion model is finally obtained. The actual network training flow is shown in fig. 4.
Each neural network that is built has a best-matching combination of hyper-parameters, such as the values of the convolution kernel size, the learning rate, the batch_size, the loss-function hyper-parameters and the dropout rate. The optimal combination of hyper-parameters generally yields a relatively small loss value, and the inversion model obtained by training is correspondingly better. Of course, for any neural network there is no direct method of determining the optimal hyper-parameter combination; it is found by repeated trials. As described above, the seismic wave impedance thin-layer inversion method based on the full convolution neural network is realized in three main steps: first, the data are preprocessed; second, the network is repeatedly adjusted and retrained according to the loss value curve obtained from training and validation (as shown in fig. 5) and the comparison of the preliminary test inversion result (as shown in fig. 6) with the label wave impedance (as shown in fig. 7); third, the seismic data whose wave impedance is required (as shown in fig. 8) are input into the saved inversion model for prediction, finally giving a high-precision wave impedance inversion result (as shown in fig. 9).
Details not described in this specification are well known to those skilled in the art.

Claims (7)

1. A seismic wave impedance thin layer inversion method based on a full convolution neural network is characterized by comprising the following steps:
s1: extracting seismic model data and corresponding wave impedance from forward data, and taking out a part as a sample pair;
s2: constructing a full convolution neural network, and giving a group of hyper-parameters;
the full convolution neural network comprises three cycles of a convolution layer, a dropout layer, an activation layer, a pooling layer and a batch normalization layer, followed by one deconvolution layer and an activation layer;
s3: carrying out normalization and random sampling preprocessing on the sample pairs and then inputting the preprocessed sample pairs into a full convolution neural network;
s4: the full convolution neural network is trained for a set number of epochs to obtain the training time, the loss value and a trained inversion model;
s5: inputting the seismic data on which wave impedance inversion is to be performed into the trained inversion model, and outputting the predicted wave impedance inversion result; comparing the predicted wave impedance with the original wave impedance while examining the loss value and training time of the trained inversion model, which together reflect the quality of the obtained inversion model; if the model is good, the inversion model is saved; otherwise, the method returns to step S2 to adjust the parameters and retrain.
2. The method of claim 1, wherein the hyper-parameters comprise the number of training epochs, the batch size, the learning rate, the dropout rate, the weight decay coefficient, the optimizer momentum and the convolution kernel size.
3. The seismic wave impedance thin-layer inversion method based on the full convolution neural network as claimed in claim 2, wherein among the hyper-parameters, the learning rate range is [1e-3, 1e-2]; the adjustment range of the dropout rate is [0.1, 0.5]; and the adjustment range of the weight decay coefficient is [0, 1e-4].
4. The seismic wave impedance thin-layer inversion method based on the full convolution neural network as claimed in claim 1, wherein in step S3 the normalization formula is:
x* = (x - min) / (max - min)
where x* is the normalized value, x is the original data, max is the maximum of the original data and min is the minimum of the original data.
5. The seismic wave impedance thin-layer inversion method based on the full convolution neural network as claimed in claim 1, wherein in step S4 the loss function used is the MSE loss function, with the specific formula:
L = (1/n) Σᵢ (yᵢ - ŷᵢ)²
where L is the loss value, n is the number of samples, yᵢ is the predicted value and ŷᵢ is the label value.
6. The seismic wave impedance thin-layer inversion method based on the full convolution neural network as claimed in claim 1, wherein in step S5 the specific prediction process is as follows:
firstly, the trained inversion model is loaded, the seismic data of one line is converted into a three-dimensional tensor and input into the trained inversion model, the corresponding wave impedance data is predicted, and the result is plotted to obtain a section of predicted wave impedance;
then, the predicted wave impedance is compared with the original wave impedance; if the variation of formation thickness is well reflected and the precision is high, the model and the loss value are saved; if the variation of formation thickness is not well reflected or the precision is low, the parameters are adjusted and the network is retrained to obtain a new model.
7. The seismic wave impedance thin-layer inversion method based on the full convolution neural network as claimed in claim 1, wherein the depth of the full convolution neural network is related to the size of the input seismic sample, and the network structure can be changed according to that size; the sample size is determined by the number of sampling points, the sampling interval and the number of traces, and when the data volume is large, more network layers such as convolution layers and deconvolution layers can be added.
CN202110809060.8A 2021-07-16 2021-07-16 Seismic wave impedance thin layer inversion method based on full convolution neural network Active CN113625336B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110809060.8A CN113625336B (en) 2021-07-16 2021-07-16 Seismic wave impedance thin layer inversion method based on full convolution neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110809060.8A CN113625336B (en) 2021-07-16 2021-07-16 Seismic wave impedance thin layer inversion method based on full convolution neural network

Publications (2)

Publication Number Publication Date
CN113625336A true CN113625336A (en) 2021-11-09
CN113625336B CN113625336B (en) 2024-03-26

Family

ID=78380015

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110809060.8A Active CN113625336B (en) 2021-07-16 2021-07-16 Seismic wave impedance thin layer inversion method based on full convolution neural network

Country Status (1)

Country Link
CN (1) CN113625336B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114063169A (en) * 2021-11-10 2022-02-18 中国石油大学(北京) Wave impedance inversion method, system, equipment and storage medium
CN115795994A (en) * 2022-09-29 2023-03-14 西安石油大学 Orientation electromagnetic wave logging while drilling data inversion method based on Unet convolution neural network

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190064389A1 (en) * 2017-08-25 2019-02-28 Huseyin Denli Geophysical Inversion with Convolutional Neural Networks
US20190064378A1 (en) * 2017-08-25 2019-02-28 Wei D. LIU Automated Seismic Interpretation Using Fully Convolutional Neural Networks
CN111580162A (en) * 2020-05-21 2020-08-25 长江大学 Seismic data random noise suppression method based on residual convolutional neural network
US20200301036A1 (en) * 2017-09-12 2020-09-24 Schlumberger Technology Corporation Seismic image data interpretation system
CN111723329A (en) * 2020-06-19 2020-09-29 南京大学 Seismic phase feature recognition waveform inversion method based on full convolution neural network
CN112925012A (en) * 2021-01-26 2021-06-08 中国矿业大学(北京) Seismic full-waveform inversion method and device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190064389A1 (en) * 2017-08-25 2019-02-28 Huseyin Denli Geophysical Inversion with Convolutional Neural Networks
US20190064378A1 (en) * 2017-08-25 2019-02-28 Wei D. LIU Automated Seismic Interpretation Using Fully Convolutional Neural Networks
US20200301036A1 (en) * 2017-09-12 2020-09-24 Schlumberger Technology Corporation Seismic image data interpretation system
CN111580162A (en) * 2020-05-21 2020-08-25 长江大学 Seismic data random noise suppression method based on residual convolutional neural network
CN111723329A (en) * 2020-06-19 2020-09-29 南京大学 Seismic phase feature recognition waveform inversion method based on full convolution neural network
CN112925012A (en) * 2021-01-26 2021-06-08 中国矿业大学(北京) Seismic full-waveform inversion method and device

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
BANGYU WU ET AL.: "Seismic Impedance Inversion Using Fully Convolutional Residual Network and Transfer Learning", IEEE Geoscience and Remote Sensing Letters, vol. 17, no. 12, pages 2140-2142 *
V. DAS ET AL.: "Convolutional neural network for seismic impedance inversion", Geophysics, vol. 84, no. 6, page 869 *
LIU WENTAO ET AL.: "Automatic extraction of building roofs based on a fully convolutional neural network", Journal of Geo-information Science, vol. 20, no. 11, pages 1564-1567 *
WANG ZEFENG ET AL.: "Seismic wave impedance inversion method based on a time-domain convolutional neural network", Proceedings of the 4th Annual Conference on Oil and Gas Geophysics, pages 1-4 *
WANG ZEFENG ET AL.: "Seismic wave impedance prediction method based on a deep fully convolutional neural network", Chinese Journal of Engineering Geophysics, vol. 19, no. 3, pages 386-392 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114063169A (en) * 2021-11-10 2022-02-18 中国石油大学(北京) Wave impedance inversion method, system, equipment and storage medium
CN114063169B (en) * 2021-11-10 2023-03-14 中国石油大学(北京) Wave impedance inversion method, system, equipment and storage medium
CN115795994A (en) * 2022-09-29 2023-03-14 西安石油大学 Orientation electromagnetic wave logging while drilling data inversion method based on Unet convolution neural network
CN115795994B (en) * 2022-09-29 2023-10-20 西安石油大学 Method for inverting logging data of azimuth electromagnetic wave while drilling based on Unet convolutional neural network

Also Published As

Publication number Publication date
CN113625336B (en) 2024-03-26

Similar Documents

Publication Publication Date Title
US11320551B2 (en) Training machine learning systems for seismic interpretation
CN109611087B (en) Volcanic oil reservoir parameter intelligent prediction method and system
US20200183035A1 (en) Data Augmentation for Seismic Interpretation Systems and Methods
CN108648191B (en) Pest image recognition method based on Bayesian width residual error neural network
CN107688201B (en) RBM-based seismic prestack signal clustering method
CN113625336A (en) Seismic wave impedance thin layer inversion method based on full convolution neural network
CN111983681A (en) Seismic wave impedance inversion method based on countermeasure learning
CN115393656B (en) Automatic classification method for stratum classification of logging-while-drilling image
CN111381275A (en) First arrival picking method and device for seismic data
Dou et al. MD loss: Efficient training of 3-D seismic fault segmentation network under sparse labels by weakening anomaly annotation
CN116047583A (en) Adaptive wave impedance inversion method and system based on depth convolution neural network
Wang et al. Seismic stratum segmentation using an encoder–decoder convolutional neural network
CN113406695B (en) Seismic inversion method and system based on interval velocity seismic geological model
Liu et al. Lithology prediction of one-dimensional residual network based on regularization constraints
CN114357372A (en) Aircraft fault diagnosis model generation method based on multi-sensor data driving
Su et al. Seismic impedance inversion based on deep learning with geophysical constraints
CN113325480A (en) Seismic lithology identification method based on integrated deep learning
CN117292225A (en) Seismic data first arrival pickup method based on SimpleNet network
CN113642232B (en) Intelligent inversion exploration method for surface waves, storage medium and terminal equipment
CN116009080A (en) Seismic wave impedance inversion method and system, electronic equipment and storage medium
CN116934603A (en) Logging curve missing segment completion method and device, storage medium and electronic equipment
US20230140656A1 (en) Method and system for determining seismic processing parameters using machine learning
CN115983094A (en) Logging curve generation method based on S-CNN-Bi-GRU network, processing terminal and readable storage medium
CN114942473A (en) Pre-stack seismic velocity inversion method based on attention gate neural network
CN114299330A (en) Seismic facies classification method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant