CN116643310A - Earthquake frequency extension method and device, electronic equipment and computer readable storage medium - Google Patents


Info

Publication number
CN116643310A
CN116643310A (application CN202310901589.1A)
Authority
CN
China
Prior art keywords: data, frequency, full frequency data, seismic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310901589.1A
Other languages: Chinese (zh)
Other versions: CN116643310B (en)
Inventors: 石颖, 王维红, 王宁, 倪京阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sanya Offshore Oil And Gas Research Institute Of Northeast Petroleum University
Original Assignee
Sanya Offshore Oil And Gas Research Institute Of Northeast Petroleum University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sanya Offshore Oil And Gas Research Institute Of Northeast Petroleum University
Priority to CN202310901589.1A
Publication of CN116643310A
Application granted
Publication of CN116643310B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G01V1/28 Processing seismic data, e.g. for interpretation or for event detection
    • G01V1/282 Application of seismic models, synthetic seismograms
    • G01V1/325 Transforming one representation into another
    • G01V1/364 Seismic filtering
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G01V2210/21 Frequency-domain filtering, e.g. band pass
    • G01V2210/48 Other transforms


Abstract

The application provides a seismic frequency extension method and device, an electronic device, and a computer-readable storage medium. The method comprises: acquiring first intermediate frequency data and first full frequency data from seismic sample data, where the first intermediate frequency data represents data in a first frequency range of the seismic sample data and the first full frequency data represents data in a second frequency range; training a first neural network on the first intermediate frequency data and the first full frequency data; and performing seismic frequency extension on second intermediate frequency data with the trained first neural network to obtain predicted second full frequency data. The method can recover both the low-frequency and the high-frequency information of seismic data at relatively low cost, improving full-waveform inversion accuracy and the resolution of the seismic data.

Description

Seismic frequency extension method and device, electronic device, and computer-readable storage medium
Technical Field
The present application relates to the field of seismic frequency extension and, in particular, to a seismic frequency extension method and apparatus, an electronic device, and a computer-readable storage medium.
Background
Existing seismic-data frequency extension methods can only recover the missing low-frequency part of the seismic data; they can therefore only improve the accuracy of full-waveform inversion and cannot improve the resolution of the seismic data. How to recover both the low-frequency and the high-frequency information at low cost, thereby improving full-waveform inversion accuracy and the resolution of the seismic data, is the problem this application addresses.
Disclosure of Invention
The embodiments of the application provide a seismic frequency extension method and device, an electronic device, and a computer-readable storage medium, which can recover low-frequency and high-frequency information at relatively low cost, thereby improving full-waveform inversion accuracy and the resolution of seismic data.
The technical scheme of the embodiment of the application is realized as follows:
in a first aspect, an embodiment of the present application provides a method for seismic frequency extension, including:
acquiring first intermediate frequency data and first full frequency data in seismic sample data; the first intermediate frequency data is used for representing data in a first frequency range in the seismic sample data, and the first full frequency data is used for representing data in a second frequency range in the seismic sample data;
training a first neural network according to the first intermediate frequency data and the first full frequency data;
and performing seismic frequency extension on second intermediate frequency data with the trained first neural network to obtain predicted second full frequency data.
In the above aspect, the training the first neural network according to the first intermediate frequency data and the first full frequency data includes:
determining sample data from the first intermediate frequency data and the first full frequency data;
performing at least one of resampling, normalization, energy balancing, and data slicing on the sample data to obtain preprocessed sample data;
determining a training set according to the preprocessed sample data;
training the first neural network according to the training set;
each sample data comprises first intermediate frequency data and first full frequency data, and the first intermediate frequency data and the first full frequency data are in one-to-one correspondence.
In the above aspect, the resampling process includes:
determining a sampling interval size;
dividing the sample data according to the size of the sampling interval to obtain a divided window;
determining a resampled value corresponding to each window after mapping based on the values corresponding to all the data points in each window;
and determining the set of the mapped values of all windows of the sample data as the resampled data.
In the above aspect, the normalization processing includes:
determining a normalized data range;
determining, for a feature of each of the sample data, a minimum value and a maximum value of the feature in a dataset corresponding to the feature;
and determining the ratio of the difference between the actual value corresponding to the feature and the minimum value to the difference between the maximum value and the minimum value as the value corresponding to the feature after normalization.
In the above aspect, the energy balance processing includes:
determining a size of a time window in the sample data, and an average energy of each time window;
determining a normalization coefficient corresponding to the average energy;
and, for each data point in each time window, adjusting the amplitude of the data point according to the product of the amplitude corresponding to the data point and the normalization coefficient.
In the above scheme, the data slicing process includes:
determining a data slice size for each of the sample data;
determining the data region of the data slice size as a separate channel;
and three-dimensionally expanding the data in the sample data according to the channel size corresponding to the sample data.
In the above aspect, the training the first neural network according to the preprocessed sample data includes:
randomly initializing the weight and bias parameters of the first neural network;
inputting the first intermediate frequency data of the training set into the first neural network for forward propagation to obtain predicted first full frequency data;
determining a gradient calculation result according to the loss value between the predicted first full frequency data and the real first full frequency data;
and updating the weight and bias of the first neural network according to a back propagation algorithm and the gradient calculation result.
In the above scheme, inputting the first intermediate frequency data of the training set into the first neural network and performing forward propagation to obtain the predicted first full frequency data includes:
performing convolution calculation aiming at the mapping relation between the first intermediate frequency data and the first full frequency data in the training set;
after the convolution calculation is completed, determining a corresponding feature map based on a nonlinear activation function;
performing a pooling operation on the feature map, and using the pooled feature map as the input of a fully connected layer to obtain output features;
and mapping the output features to the output space of the full frequency data to obtain the predicted full frequency data.
In a second aspect, an embodiment of the present application provides an earthquake frequency extension apparatus, including:
the acquisition module is used for acquiring first intermediate frequency data and first full frequency data in the seismic sample data; the first intermediate frequency data is used for representing data in a first frequency range in the seismic sample data, and the first full frequency data is used for representing data in a second frequency range in the seismic sample data;
the training module is used for training a first neural network according to the first intermediate frequency data and the first full frequency data;
the prediction module is used for predicting second full-frequency data based on the second intermediate frequency data according to the trained first neural network;
and the modeling module is used for carrying out earthquake frequency expansion on the second intermediate frequency data according to the first neural network after training is completed, and obtaining predicted second full frequency data.
In a third aspect, an embodiment of the present application provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can execute the seismic frequency extension method provided by the embodiment of the application.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium comprising a set of computer-executable instructions, which when executed, are configured to perform the seismic frequency extension method provided by embodiments of the present application.
The seismic frequency extension method provided by the embodiments of the application acquires first intermediate frequency data and first full frequency data from seismic sample data; the first intermediate frequency data represents data in a first frequency range of the seismic sample data, and the first full frequency data represents data in a second frequency range; a first neural network is trained on the first intermediate frequency data and the first full frequency data; and seismic frequency extension is performed on second intermediate frequency data with the trained first neural network to obtain predicted second full frequency data. The method uses the first neural network to fit the relationship between the intermediate frequency data and the full frequency data in the seismic data. Because intermediate frequency data are easier to acquire than high-frequency components, the scheme is cheaper to implement; and because the full frequency data are predicted from the intermediate frequency data, the method recovers the high-frequency information of the seismic data in addition to the low-frequency information, improving full-waveform inversion accuracy and the resolution of the seismic data.
Drawings
The accompanying drawings are included to provide a better understanding of the application and are not to be construed as limiting it, wherein:
FIG. 1 is a schematic diagram of an alternative process flow of a seismic frequency extension method according to an embodiment of the application;
FIG. 2 is a schematic illustration of a full band seismic record and a corresponding mid-frequency seismic record provided by an embodiment of the application;
FIG. 3 is a graph comparing the full-band seismic records before and after energy balancing processing according to an embodiment of the application;
FIG. 4 is a schematic diagram of a training set of a first neural network according to an embodiment of the present application;
fig. 5 is a network architecture diagram of a first neural network according to an embodiment of the present application;
FIG. 6 is a comparison of the 16th shot seismic record before and after recovery, provided by an embodiment of the application;
FIG. 7 shows the spectra of the 16th shot seismic record before and after recovery, provided by an embodiment of the application;
FIG. 8 is a comparison of the 33rd shot seismic record before and after recovery, provided by an embodiment of the application;
FIG. 9 shows the spectra of the 33rd shot seismic record before and after recovery, provided by an embodiment of the application;
FIG. 10 is a schematic waveform diagram of trace 420 of the 33rd shot, provided by an embodiment of the application;
FIG. 11 compares an overthrust model with the inversion results, provided by an embodiment of the application;
FIG. 12 is a schematic diagram of an alternative structure of a seismic frequency extension apparatus according to an embodiment of the application;
FIG. 13 is a schematic block diagram of an alternative electronic device provided by an embodiment of the application.
Detailed Description
To make the objects, technical solutions, and advantages of the present application clearer, the application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the application; all other embodiments obtained by those skilled in the art without inventive effort fall within the scope of the application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
In the following description, the terms "first", "second", and the like merely distinguish similar objects and do not imply a particular ordering; where permitted, "first" and "second" may be interchanged so that the embodiments of the application described herein can be practiced in orders other than those illustrated or described.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the application only and is not intended to be limiting of the application.
Referring to FIG. 1, which is a schematic diagram of an optional processing flow of the seismic frequency extension method according to an embodiment of the application, steps S101 to S103 are described below.
Step S101: acquire first intermediate frequency data and first full frequency data from seismic sample data; the first intermediate frequency data characterizes data in a first frequency range of the seismic sample data, and the first full frequency data characterizes data in a second frequency range.
In some embodiments, the seismic sample data may be filtered to obtain the first intermediate frequency data and the first full frequency data. Appropriate cut-off frequencies are set so that components are retained or removed based on the first and second frequency ranges, and the filtered spectra are then transformed back to the time domain by an inverse Fourier transform or another inverse frequency-domain transform. The first frequency range may be the range corresponding to intermediate frequency data in the seismic data, for example 10 Hz to 40 Hz; the second frequency range may be the full frequency range, i.e. the union of the frequency ranges corresponding to low-frequency, intermediate-frequency, and high-frequency data.
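The band separation described above can be sketched with a simple FFT-based filter. The 10 Hz and 40 Hz cut-offs and the synthetic three-tone trace below are illustrative assumptions, not values taken from the application:

```python
import numpy as np

def bandpass(trace, dt, f_lo=10.0, f_hi=40.0):
    """Zero all spectral components outside [f_lo, f_hi] Hz, then go back to time."""
    spec = np.fft.rfft(trace)
    freqs = np.fft.rfftfreq(len(trace), d=dt)
    spec[(freqs < f_lo) | (freqs > f_hi)] = 0.0
    return np.fft.irfft(spec, n=len(trace))

# Synthetic 2 s trace at 1 ms sampling: 5 Hz (low), 25 Hz (mid), 60 Hz (high)
dt = 0.001
t = np.arange(0.0, 2.0, dt)
full = np.sin(2*np.pi*5*t) + np.sin(2*np.pi*25*t) + np.sin(2*np.pi*60*t)
mid = bandpass(full, dt)   # only the 25 Hz component survives
```

Applying the same filter with full-band cut-offs (or no filter at all) yields the paired full-frequency record, giving one intermediate/full training pair per trace.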
Step S102, training a first neural network according to the first intermediate frequency data and the first full frequency data.
In some embodiments, sample data may be determined from the first intermediate frequency data and the first full frequency data, and at least one of resampling, normalization, energy balancing, and data slicing may be applied to the sample data to obtain preprocessed sample data.
In some embodiments, resampling may proceed as follows. First, determine the sampling-interval size. Second, partition the first intermediate frequency data and first full frequency data in the sample data according to the interval size: if the interval size is N, a window boundary is placed every N data points, so each window contains N consecutive data points. Third, for each window, compute a single resampled value from the values of all data points in that window. Finally, the set of values computed from all windows of the current sample constitutes the resampled data. The mapped value of each window may be computed by different methods.
As examples, common methods include the following: mean sampling maps the average of all data points in the window to a single value; maximum or minimum sampling selects the maximum or minimum value within the window; median sampling takes the median of all data points in the window as the sampled value.
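The windowed resampling just described might look like the following sketch; the function name and the set of window methods offered are illustrative choices:

```python
import numpy as np

def resample(data, n, how="mean"):
    """Map each non-overlapping window of n samples to one value."""
    data = np.asarray(data, dtype=float)
    windows = data[: len(data) // n * n].reshape(-1, n)   # drop the tail remainder
    fn = {"mean": np.mean, "max": np.max, "min": np.min, "median": np.median}[how]
    return fn(windows, axis=1)

resample([1, 2, 3, 4, 5, 6], 2)           # -> [1.5, 3.5, 5.5]
resample([1, 2, 3, 4, 5, 6], 3, "max")    # -> [3., 6.]
```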
In some embodiments, normalization can unify the value ranges of different features so that they lie on the same order of magnitude, avoiding the situation where some features have an outsized influence on the first neural network while other features are insufficiently learned. Normalization may proceed as follows. First, determine the normalized data range, i.e. the range to which each value in the sample data is mapped; a common choice is 0 to 1, although other ranges can be chosen as needed. Second, for each feature of the data to be processed, determine the minimum and maximum of that feature over its data set; for example, for a feature of the first intermediate frequency data, determine the minimum and maximum over all the first intermediate frequency data. Finally, the normalized value of the feature is the ratio of the difference between the actual value and the minimum to the difference between the maximum and the minimum. Applying this to every value in the data set places all the data on the same order of magnitude.
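A minimal sketch of the min-max normalization described above, mapping each feature to [0, 1]; the column-per-feature layout is an assumption:

```python
import numpy as np

def min_max_normalize(x):
    """Map each feature (column) to [0, 1] via (value - min) / (max - min)."""
    x = np.asarray(x, dtype=float)
    lo = x.min(axis=0)
    hi = x.max(axis=0)
    return (x - lo) / (hi - lo)

data = np.array([[10.0, 200.0],
                 [20.0, 400.0],
                 [30.0, 600.0]])
scaled = min_max_normalize(data)   # every column now spans [0, 1]
```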
In some embodiments, the amplitudes of the sample data may be adjusted by energy balancing so that the signal energy in different time windows remains relatively balanced, strengthening weak parts of the signal. Energy balancing may proceed as follows. First, determine the time-window size for the energy computation; the window should retain enough temporal context while remaining efficient to compute. Second, for each time window, determine the average energy of the data in the window, e.g. by squaring, summing, and averaging: the sum of the squared amplitudes of the data points in the window divided by the window size. Finally, determine a normalization coefficient from the average energy and adjust the amplitudes in the window to balance the energy: each data point's amplitude is replaced by the product of its amplitude and the normalization coefficient.
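The energy-balancing steps above can be sketched as follows; normalizing each window to unit average energy is one possible choice of normalization coefficient, not necessarily the one used in the application:

```python
import numpy as np

def energy_balance(trace, win):
    """Rescale each length-`win` window so its average energy becomes 1."""
    out = np.asarray(trace, dtype=float).copy()
    for start in range(0, len(out), win):
        seg = out[start:start + win]
        avg_energy = np.sum(seg ** 2) / len(seg)    # square-sum / window size
        if avg_energy > 0:
            seg *= 1.0 / np.sqrt(avg_energy)        # normalization coefficient
    return out

# A weak first half followed by a strong second half ends up balanced
weak_then_strong = np.concatenate([0.1 * np.ones(100), 10.0 * np.ones(100)])
balanced = energy_balance(weak_then_strong, 100)
```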
In some embodiments, the two-dimensional sample data may be expanded into three-dimensional data by slicing, for better use in the subsequent training of the first neural network. Slicing may proceed as follows. First, determine the slice size, which can be any square or rectangular region. Second, treat each data region of that size as a separate channel. Finally, expand the data three-dimensionally according to the channel size. The original first intermediate frequency data and first full frequency data are two-dimensional; adding a channel dimension yields three-dimensional data with length, width, and channel dimensions, which is better suited to training the first neural network.
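A sketch of the slicing step, cutting a two-dimensional (time-by-trace) section into tiles stacked along a new channel axis; the tile size and non-overlapping layout are illustrative assumptions:

```python
import numpy as np

def slice_to_channels(section, h, w):
    """Cut a 2-D section into non-overlapping h x w tiles stacked along a new axis."""
    rows, cols = section.shape
    tiles = [section[r:r + h, c:c + w]
             for r in range(0, rows - h + 1, h)
             for c in range(0, cols - w + 1, w)]
    return np.stack(tiles)                  # shape: (n_tiles, h, w)

section = np.arange(16).reshape(4, 4)       # a small 4 x 4 "seismic section"
volume = slice_to_channels(section, 2, 2)   # 3-D: 4 tiles of shape 2 x 2
```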
In some embodiments, preprocessing may include one or more of resampling, normalization, energy balancing, and data slicing. For example, all the sample data may first be resampled, the resampled data normalized, the normalized data energy-balanced, and the balanced data sliced, yielding the preprocessed sample data; a training set is then determined from the preprocessed sample data, and the first neural network is trained on the training set. Each sample comprises first intermediate frequency data and the corresponding first full frequency data, in one-to-one correspondence.
Preprocessing the sample data strengthens the features of the training set and thereby improves the robustness of the first neural network.
In some embodiments, training the first neural network on the training set may proceed as follows. First, determine the training set from the preprocessed sample data. Second, randomly initialize the weights and bias parameters of the first neural network, which may be a convolutional neural network or another network. Third, input the first intermediate frequency data of the training set into the first neural network for forward propagation: the network performs convolution over the mapping between the first intermediate frequency data and the first full frequency data, determines the corresponding feature maps through a nonlinear activation function, pools the feature maps, feeds the pooled feature maps to a fully connected layer to obtain output features, and maps those features to the output space of the full frequency data, producing the predicted first full frequency data. Fourth, compute the gradient from the loss between the predicted first full frequency data and the real first full frequency data. Finally, update the weights and bias parameters of the first neural network by back-propagating the gradient.
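The training loop above can be illustrated with a toy stand-in: a single linear layer (rather than the convolutional network of the application) trained by forward propagation, a mean-squared-error gradient, and a parameter update. All sizes and learning-rate values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 64 "mid-frequency" samples mapped to
# "full-frequency" targets by an unknown linear map.
mid = rng.normal(size=(64, 8))
w_true = rng.normal(size=(8, 8))
full = mid @ w_true

weights = 0.1 * rng.normal(size=(8, 8))           # random initialization
lr = 0.1
for _ in range(500):
    pred = mid @ weights                          # forward propagation
    grad = 2 * mid.T @ (pred - full) / len(mid)   # gradient of the MSE loss
    weights -= lr * grad                          # back-propagation update

mse = np.mean((mid @ weights - full) ** 2)        # training error after fitting
```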
As an example, the mapping between the first intermediate frequency data and the first full frequency data may be written as formula (1), where f(x, t) is the value of the first full frequency data of trace x at time point t, m(x, t) is the value of the first intermediate frequency data of trace x at time point t, and G is the (implicit) mapping between them:

f(x, t) = G[m(x, t)]   (1)
A first neural network may be introduced to approximate the mapping in formula (1), as in formula (2), where f̂(x, t) is the value of the full frequency data predicted by the first neural network, Net denotes the network structure of the first neural network, and θ denotes its network parameters:

f̂(x, t) = Net(m(x, t); θ)   (2)
A series of training samples {(m_i, f_i)}, i = 1, …, N, is selected, where f_i and m_i are the first full frequency data and the first intermediate frequency data of the i-th sample and N is the total number of samples. The objective function is established as in formula (3); J(θ) is an optimization problem in the least-squares sense that measures the error between the first full frequency data predicted by the first neural network and the real first full frequency data:

J(θ) = (1/N) Σ_{i=1}^{N} ‖ f_i − Net(m_i; θ) ‖²   (3)

After the gradient of the objective function with respect to the model parameters is obtained, the parameters of the first neural network may be updated with an iterative algorithm. To accelerate convergence, faster optimization algorithms such as SGD, SGDM, NAG, AdaGrad, AdaDelta, Adam, and Nadam may be used.
The convolution calculation mainly comprises two steps. The first step is a linear operation: the raw input data or a lower-level feature map is convolved with the weight kernels $k_{ij}^{l}$ at a given stride, the results of the multiple convolutions are summed, and the bias $b_{j}^{l}$ is added. The second step is a nonlinear operation: an activation function $f(\cdot)$ is applied to obtain the output feature map. In practice, this corresponds to applying a weighted sum of a plurality of input signals to a neuron and then activating the output, and the calculation may be as shown in equation (4). In equation (4), $x_{j}^{l}$ denotes the $j$-th feature map of the $l$-th layer, $M_{j}$ denotes the set of input feature maps, $*$ denotes the convolution operation, $k_{ij}^{l}$ denotes the connection weight parameter between the $i$-th feature map of layer $l-1$ and the $j$-th feature map of layer $l$, and $b_{j}^{l}$ denotes the corresponding bias.
$x_{j}^{l} = f\left(\sum_{i \in M_{j}} x_{i}^{l-1} * k_{ij}^{l} + b_{j}^{l}\right)$ (4)
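The two-step convolution calculation described above, a sliding linear weighting plus bias followed by a nonlinear activation, can be sketched as follows. This is an illustrative single-channel example with ReLU as the activation, not the patent's implementation:

```python
import numpy as np

def conv2d_relu(x, k, b):
    """Slide kernel k over input x, sum the products, add bias b,
    then apply the ReLU activation (the nonlinear step)."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k) + b
    return np.maximum(out, 0.0)  # activation f

x = np.arange(16, dtype=float).reshape(4, 4)
k = np.ones((2, 2)) / 4.0        # illustrative averaging kernel
feat = conv2d_relu(x, k, b=-1.0)
```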
If the calculation result of the convolution layer were used directly as the network output, the final desired result could in principle be obtained; however, this process is computationally intensive, so the dimension of the feature map matrix needs to be reduced. Pooling is used for this dimension reduction: it reduces the number of network parameters and therefore the computational cost. In addition, pooling discards some unimportant features and retains only the main feature information useful for the task. The pooling operation may be as shown in equation (5). In equation (5), $down(\cdot)$ is the pooling function, $\beta_{j}^{l}$ is the coefficient corresponding to the $j$-th feature map of the $l$-th layer, and $b_{j}^{l}$ is the bias parameter. The pooling mode may be one of maximum pooling, average pooling, and random pooling.
$x_{j}^{l} = f\left(\beta_{j}^{l}\, down(x_{j}^{l-1}) + b_{j}^{l}\right)$ (5)
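The downsampling part of the pooling operation can be sketched as follows, using maximum pooling over non-overlapping windows (the coefficient and bias of equation (5) are omitted for brevity; an illustrative example only):

```python
import numpy as np

def max_pool(x, size=2):
    """Downsample each non-overlapping size x size window of x
    to its maximum value, reducing the feature-map dimension."""
    H, W = x.shape
    return x[:H - H % size, :W - W % size] \
        .reshape(H // size, size, W // size, size).max(axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4)
pooled = max_pool(x)   # 4x4 feature map reduced to 2x2
```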
The lack of sufficiently low frequency components in seismic data can cause the full waveform inversion to fall into local minima, while the lack of high frequency components reduces resolution. To address both problems, the present application uses a neural network to predict full-frequency data, so that the missing high and low frequency data can be recovered, the local-minimum problem in full waveform inversion can be avoided, and the resolution of the seismic data is improved.
And step S103, predicting second full-frequency data based on the second intermediate frequency data according to the trained first neural network.
In some embodiments, the second full frequency data may be predicted from the second intermediate frequency data based on the trained first neural network. Wherein at least one of a resampling process, a normalization process, an energy balance process, and a data slicing process may be performed on the second intermediate frequency data before the second intermediate frequency data is input to the first neural network.
In some embodiments, the predicted second full-frequency data obtained by the first neural network may be subjected to inverse data preprocessing to obtain a predicted complete seismic record, on which full waveform inversion is then performed: a synthetic seismic record is generated using an initial velocity model and a wave field simulation method and compared with the observed seismic record. By minimizing the difference between the two with optimization methods such as gradient descent, the velocity model is updated so that the synthetic record gradually approaches the observed record and the model approaches the real underground medium, yielding the velocity model corresponding to the frequency-extended second intermediate frequency data.
The application can effectively realize parameter modeling by combining the restored full-band data with the multi-scale full-waveform inversion algorithm.
The test results of the seismic frequency extension method according to the embodiment of the present application will be described below with reference to figs. 2 to 11.
In the test example, an inversion test may be carried out on the recovered full-frequency data, i.e. the second full-frequency data, to verify the effectiveness of the first convolutional neural network in broadening the frequency band of the seismic data. The test may use a finite difference method to generate 40 shot gathers as observation data. The sources are evenly distributed along the top of the model, and the center frequency of the observed data is 30 Hz. To prepare the training and test data, the intermediate frequency data may first be extracted from the observation data using a 10 Hz cut-off filter, with the components below 5 Hz in the high-frequency data set to zero. A comparison of the full-frequency shot records before and after filtering is shown in fig. 2, which shows each full-band seismic record and the corresponding mid-band seismic record after filtering. In fig. 2, (a) shows the 20th shot full-band seismic record, (b) shows the 20th shot mid-band seismic record, (c) shows the 35th shot full-band seismic record, and (d) shows the 35th shot mid-band seismic record.
Since the amplitude of the seismic data is unevenly distributed in the time domain, the amplitude can be equalized by an energy balancing process. The distribution of the seismic-record energy at each position during energy balancing is shown in fig. 3, which compares the full-band seismic record before and after the energy balance processing; higher brightness indicates a larger amplitude. In fig. 3, (a) shows the distribution of the seismic-record energy at each position before the energy balance processing, and (b) shows the distribution after the energy balance processing.
Comparing fig. 3 (a) with fig. 3 (b), it can be observed that energy balancing makes the effective waves in the shot record more pronounced. In addition, the sample and label sizes used in this test were 64×64 pixels.
20 of the 40 shot gathers were selected to train the network, with the remaining gathers used as a test set for subsequent testing. The data are split into paired data slices before being input into the first neural network. Model training is carried out on 23113 pairs of 64×64 seismic intermediate frequency data and corresponding full-frequency labels, with 40 iterations; 20000 pairs are used as training data, 2000 pairs as validation data, and 1113 intermediate frequency data as test data, and the training, validation, and test data sets are mutually disjoint. A training-set input and its corresponding label are shown in fig. 4, which is a schematic diagram of the training set of the first neural network; in fig. 4, (a) shows one sample input of the training set and (b) shows the label corresponding to that sample.
The first neural network used in the application may be a convolutional network model built with the TensorFlow 2.0 and Keras frameworks, configured with 12 layers: an input layer, an output layer, 5 convolution layers, and 5 deconvolution layers. The network is optimized with the Adadelta algorithm and uses the mean square error as the loss function. Fig. 5 shows the network architecture of the first neural network in the present application.
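The symmetric 5-down / 5-up structure can be illustrated by tracking feature-map sizes through the network for a 64×64 input patch. The kernel, stride, and padding values below are assumptions chosen for illustration, not taken from the patent:

```python
def conv_out(n, k=3, s=2, p=1):
    """Output size of a stride-s convolution (kernel k, padding p)."""
    return (n + 2 * p - k) // s + 1

def deconv_out(n, k=3, s=2, p=1, op=1):
    """Output size of the matching transposed (de)convolution."""
    return (n - 1) * s - 2 * p + k + op

# Shape flow for a 64x64 input patch through 5 downsampling convolution
# layers and 5 upsampling deconvolution layers (illustrative parameters).
size = 64
down = []
for _ in range(5):
    size = conv_out(size)
    down.append(size)
for _ in range(5):
    size = deconv_out(size)
```

With these assumed parameters the encoder halves the patch to 2×2 and the decoder restores the original 64×64 size, so the predicted full-frequency patch matches the input mid-frequency patch.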
After the first neural network is trained, it can be used to predict from the seismic intermediate frequency data; after the predicted full-frequency seismic data are obtained, the prediction result is inversely preprocessed. The inverse preprocessing comprises upsampling, inverse energy balancing, and an inverse slicing operation, finally yielding the seismic record with the recovered frequency band.
The 16th shot and the 33rd shot may be selected to display the prediction results; the predicted shot records and the corresponding spectrograms are shown in figs. 6-9. The three subgraphs in figs. 6 and 8 are, from left to right, the intermediate frequency seismic record, the full-frequency seismic record, and the full-frequency seismic record predicted by the neural network.
Comparing the gathers shown in fig. 6 or fig. 8, it can be determined that the inverse-energy-corrected predicted shot gather is similar to the true full-frequency shot gather, with little error between them, within an acceptable range. The error between the predicted full-frequency seismic data and the actual full-frequency seismic data can be determined by equation (6). In equation (6), $N$ is the total number of samples of the full-frequency seismic data, $q$ is the sample index, $d(q)$ denotes the value of the real full-frequency seismic data, $\hat{d}(q)$ denotes the value of the predicted full-frequency seismic data, and $\max_{q}|d(q)|$ denotes the maximum amplitude of the real full-frequency seismic data.
$e = \frac{1}{N}\sum_{q=1}^{N}\frac{\left| d(q) - \hat{d}(q) \right|}{\max_{q}\left| d(q) \right|} \times 100\%$ (6)
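A relative-error measure of the kind described, i.e. the mean absolute difference between predicted and true full-frequency data normalized by the peak true amplitude, can be sketched as follows (an illustrative reading of the garbled source, not necessarily the exact formula):

```python
import numpy as np

def relative_error(true_full, pred_full):
    """Mean absolute prediction error, normalized by the peak
    amplitude of the true full-frequency data."""
    true_full = np.asarray(true_full, dtype=float)
    pred_full = np.asarray(pred_full, dtype=float)
    peak = np.max(np.abs(true_full))
    return float(np.mean(np.abs(true_full - pred_full)) / peak)

# Illustrative values only.
err = relative_error([1.0, -2.0, 4.0], [1.1, -2.1, 3.8])
```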
Further, as obtained from equation (6), the average relative errors of the training data and the test data were 4.92% and 8.32%, respectively. As can be seen from figs. 7 and 9, the spectrum curve of each predicted full-frequency dataset is quite close to that of its actual full-frequency dataset. Fig. 10 shows the waveform of the 420th trace in the 33rd shot. In figs. 8-10, the predicted full-frequency signal is also well matched to the true full-frequency signal.
To verify the validity of the full-band prediction data, the Marmousi model may be selected for inversion simulation. Fig. 11 (a) is the real Marmousi velocity model and fig. 11 (b) is the initial velocity model; the number of model grid points is 234×663, 640 shots are excited at the surface, and the ratio of sources to receivers is 1:1. A Ricker wavelet with a dominant frequency of 25 Hz may be used as the seismic source, and a PML (Perfectly Matched Layer) is used as the absorbing boundary condition. Using the multi-scale FWI technique, 12 frequencies in 0-20 Hz are selected for inversion, the maximum number of iterations is 100, the convergence threshold of the objective function is set to 0.01, and the data can be fully iterated within the given maximum iteration range.
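A zero-phase Ricker wavelet with a 25 Hz dominant frequency, of the kind used as the source in this test, can be generated as follows; the discretization parameters (time step and trace length) are illustrative assumptions:

```python
import numpy as np

def ricker(f0, dt, nt):
    """Zero-phase Ricker wavelet with dominant frequency f0 (Hz),
    sampled at interval dt over nt samples, centered in the trace."""
    t = (np.arange(nt) - nt // 2) * dt
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

w = ricker(f0=25.0, dt=0.001, nt=201)  # 0.2 s trace, 1 ms sampling
```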
The final inversion results on the Marmousi model for the data missing high and low frequencies and for the predicted full-band data are shown in fig. 11 (c) and fig. 11 (e), respectively. Fig. 11 (c) illustrates that when the observed data lack sufficient low-frequency and high-frequency components, the objective function of conventional FWI easily falls into a local minimum, resulting in lower resolution of the inversion result. Fig. 11 (e) illustrates that when the predicted low-frequency data are added and inversion is performed with the completed full-band data, a high-precision inversion velocity model almost identical to the inversion result of the true full-frequency data can be obtained; the predicted low-frequency components thus significantly improve the inversion quality and effectively alleviate the cycle-skipping problem of FWI (Full Waveform Inversion). Fig. 11 (d) shows the inversion result of the real full-frequency seismic record; the comparison of figs. 11 (d) and 11 (e) shows that the seismic frequency extension method has high accuracy.
The application introduces the idea of using basic data blocks of the seismic data to establish a local data-driven mapping for low-frequency recovery, adopts preprocessing to produce high- and low-frequency data for training the first neural network, and establishes the relationship between the high- and low-frequency data pairs. The trained first neural network may be used to predict low-frequency data from high-frequency data. The first neural network provided by the application was trained and tested on the Overthrust model. Comprehensive experimental results show that the predicted low-frequency data match the real low-frequency data well in both the time domain and the frequency domain, indicating that the low-frequency spectrum extension is successful. In addition, two FWI tests using the predicted data indicate that the method can reliably recover high- and low-frequency data.
Fig. 12 is a schematic diagram of an alternative structure of a seismic frequency extension apparatus according to an embodiment of the present application, where a seismic frequency extension apparatus 1200 includes an acquisition module 1201, a training module 1202, a prediction module 1203, and a modeling module 1204. Wherein:
an acquisition module 1201, configured to acquire first intermediate frequency data and first full frequency data in the seismic sample data; the first intermediate frequency data is used for representing data in a first frequency range in the seismic sample data, and the first full frequency data is used for representing data in a second frequency range in the seismic sample data;
a training module 1202 for training a first neural network based on the first intermediate frequency data and the first full frequency data;
the prediction module 1203 is configured to perform seismic frequency extension on the second intermediate frequency data according to the trained first neural network, so as to obtain predicted second full frequency data.
In some embodiments, training module 1202 is further to: determining sample data according to the first intermediate frequency data and the first full frequency data, and carrying out at least one data preprocessing of resampling processing, normalization processing, energy balance processing and data slicing processing on the sample data to obtain preprocessed sample data; determining a training set according to the preprocessed sample data; training the first neural network according to the training set; each sample data comprises first intermediate frequency data and first full frequency data, and the first intermediate frequency data and the first full frequency data are in one-to-one correspondence.
In some embodiments, training module 1202 is further to: determining a sampling interval size; dividing the sample data according to the size of the sampling interval to obtain divided windows; determining a resampled value corresponding to each window after mapping based on the values corresponding to all the data points in each window; and determining the set of mapped values of the windows corresponding to the sample data as the resampled data.
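The resampling steps above can be sketched as follows, using the window mean as the mapping function (an assumption; the patent does not fix which mapping is applied to each window):

```python
import numpy as np

def resample(trace, interval):
    """Divide the trace into windows of the given sampling interval
    and map each window to one value (here: its mean)."""
    trace = np.asarray(trace, dtype=float)
    n = len(trace) // interval * interval   # drop a ragged tail, if any
    return trace[:n].reshape(-1, interval).mean(axis=1)

out = resample([1.0, 3.0, 2.0, 4.0, 10.0], interval=2)
```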
In some embodiments, training module 1202 is further to: determining a normalized data range; determining, for a feature of each of the sample data, a minimum value and a maximum value of the feature in a dataset corresponding to the feature; and determining the ratio of the difference between the actual value corresponding to the feature and the minimum value to the difference between the maximum value and the minimum value as the value corresponding to the feature after normalization.
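The min-max normalization described above, mapping each feature value to the ratio of its offset from the minimum to the full data range, can be sketched as:

```python
import numpy as np

def min_max_normalize(feature):
    """Map each value x to (x - min) / (max - min), i.e. into [0, 1]."""
    feature = np.asarray(feature, dtype=float)
    lo, hi = feature.min(), feature.max()
    return (feature - lo) / (hi - lo)

out = min_max_normalize([2.0, 4.0, 6.0])
```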
In some embodiments, training module 1202 is further to: determining a size of a time window in the sample data, and an average energy of each time window; determining a normalization coefficient corresponding to the average energy; and aiming at the data point in each time window, adjusting the amplitude of the data point according to the product value between the amplitude corresponding to the data point and the normalization coefficient.
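One plausible reading of the energy-balance step, scaling each time window by the inverse square root of its average energy so that all windows have comparable amplitude, can be sketched as follows (the exact normalization coefficient is an assumption, not given in the patent):

```python
import numpy as np

def energy_balance(trace, window):
    """Scale each time window of the trace by 1/sqrt(average energy)
    so every window ends up with unit average energy."""
    trace = np.asarray(trace, dtype=float).copy()
    for start in range(0, len(trace), window):
        seg = trace[start:start + window]
        energy = np.mean(seg ** 2)
        if energy > 0:
            trace[start:start + window] = seg / np.sqrt(energy)
    return trace

out = energy_balance([1.0, -1.0, 10.0, -10.0], window=2)
```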
In some embodiments, training module 1202 is further to: determining a data slice size for each of the sample data; determining the data region of the data slice size as a separate channel; and carrying out three-dimensional expansion on the data in the sample data according to the size of the channel corresponding to the sample data.
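The data-slicing step can be sketched as cutting a 2-D shot record into non-overlapping patches stacked along a leading axis, each patch acting as a separate channel; the patch-handling details are assumptions for illustration:

```python
import numpy as np

def slice_patches(record, size):
    """Cut a 2-D shot record into non-overlapping size x size patches,
    stacked along a leading axis (one patch per "channel")."""
    record = np.asarray(record, dtype=float)
    H, W = record.shape
    patches = [record[i:i + size, j:j + size]
               for i in range(0, H - size + 1, size)
               for j in range(0, W - size + 1, size)]
    return np.stack(patches)

patches = slice_patches(np.arange(16.0).reshape(4, 4), size=2)
```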
In some embodiments, training module 1202 is further to: randomly initializing the weight and bias of the first neural network; inputting the first intermediate frequency data of the training set into the first neural network for forward propagation to obtain predicted first full frequency data; determining a gradient calculation result according to the loss value between the predicted first full frequency data and the real first full frequency data; and updating the weight and bias of the first neural network according to a back propagation algorithm and the gradient calculation result.
In some embodiments, training module 1202 is further to: performing convolution calculation aiming at the mapping relation between the first intermediate frequency data and the first full frequency data in the training set; after the convolution calculation is completed, determining a corresponding feature map based on a nonlinear activation function; pooling operation is carried out on the feature map, and the feature map obtained after the pooling operation is used as input of a full-connection layer to obtain output features; mapping the output characteristics to the output space of the full-frequency data to obtain predicted first full-frequency data.
It should be noted that the description of the seismic frequency extension apparatus in the embodiment of the present application is similar to that of the foregoing seismic frequency extension method embodiment and has similar beneficial effects, so a detailed description is omitted. The technical details of the seismic frequency extension apparatus provided by the embodiment of the application can be understood from the description of any one of figs. 1 to 11.
Fig. 13 illustrates a schematic block diagram of an example electronic device 1300 that can be used to implement embodiments of the present disclosure. The electronic device 1300 is used to implement the seismic frequency extension method of the embodiments of the present disclosure. In some alternative embodiments, the electronic device 1300 may implement the seismic frequency extension method provided by the embodiments of the application by running a computer program, which may be, for example, a software module in an operating system; a native APP (Application), i.e. a program that must be installed in an operating system to run; an applet, i.e. a program that only needs to be downloaded into a browser environment to run; or an applet that can be embedded into any APP. In general, the computer program may be any form of application, module, or plug-in.
In practical applications, the electronic device 1300 may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, big data, and artificial intelligence platforms, where cloud technology refers to a hosting technology that unifies resources such as hardware, software, and networks in a wide area network or a local area network to implement the computing, storage, processing, and sharing of data. The electronic device 1300 may also be, but is not limited to, a smart phone, tablet, notebook, desktop computer, smart speaker, smart television, smart watch, etc.
Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, vehicle terminals, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the applications described and/or claimed herein.
As shown in fig. 13, the electronic device 1300 includes a computing unit 1301 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1302 or a computer program loaded from a storage unit 1308 into a Random Access Memory (RAM) 1303. In the RAM 1303, various programs and data required for the operation of the electronic device 1300 can also be stored. The computing unit 1301, the ROM 1302, and the RAM 1303 are connected to each other via the bus 1304. An input/output (I/O) interface 1305 is also connected to the bus 1304.
Various components in electronic device 1300 are connected to I/O interface 1305, including: an input unit 1306 such as a keyboard, a mouse, or the like; an output unit 1307 such as various types of displays, speakers, and the like; storage unit 1308, such as a magnetic disk, optical disk, etc.; and a communication unit 1309 such as a network card, a modem, a wireless communication transceiver, or the like. The communication unit 1309 allows the electronic device 1300 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 1301 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 1301 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 1301 performs the various methods and processes described above, such as the seismic frequency extension method. For example, in some alternative embodiments, the seismic frequency extension method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 1308. In some alternative embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 1300 via the ROM 1302 and/or the communication unit 1309. When the computer program is loaded into RAM1303 and executed by computing unit 1301, one or more steps of the seismic frequency extension method described above may be performed. Alternatively, in other embodiments, computing unit 1301 may be configured as a seismic frequency extension method by any other suitable means (e.g., by means of firmware).
Embodiments of the present application provide a computer readable storage medium having stored therein executable instructions that, when executed by a processor, cause the processor to perform the seismic frequency extension method provided by the embodiments of the present application.
In some embodiments, the computer-readable storage medium may be an FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disk, or CD-ROM, or may be a device including one of, or any combination of, the above memories.
In some embodiments, the executable instructions may be in the form of programs, software modules, scripts, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and they may be deployed in any form, including as stand-alone programs or as modules, components, subroutines, or other units suitable for use in a computing environment.
As an example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices located at one site or, alternatively, distributed across multiple sites and interconnected by a communication network.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
The above is merely an example of the present application and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement, etc. made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (11)

1. A method of seismic frequency development, the method comprising:
acquiring first intermediate frequency data and first full frequency data in seismic sample data; the first intermediate frequency data is used for representing data in a first frequency range in the seismic sample data, and the first full frequency data is used for representing data in a second frequency range in the seismic sample data;
training a first neural network according to the first intermediate frequency data and the first full frequency data;
and carrying out earthquake frequency expansion on the second intermediate frequency data according to the trained first neural network to obtain predicted second full frequency data.
2. The method of claim 1, wherein said training a first neural network based on said first intermediate frequency data and said first full frequency data comprises:
determining sample data from the first intermediate frequency data and the first full frequency data;
carrying out at least one data preprocessing of resampling processing, normalization processing, energy balance processing and data slicing processing on the sample data to obtain preprocessed sample data;
determining a training set according to the preprocessed sample data;
training the first neural network according to the training set;
each sample data comprises first intermediate frequency data and first full frequency data, and the first intermediate frequency data and the first full frequency data are in one-to-one correspondence.
3. The method of claim 2, wherein the resampling process comprises:
determining a sampling interval size;
dividing the sample data according to the size of the sampling interval to obtain a divided window;
determining a resampled value corresponding to each window after mapping based on the values corresponding to all the data points in each window;
and determining a set of mapped values of each window corresponding to the sample data as resampled data.
4. The method of claim 2, wherein the normalizing process comprises:
determining a normalized data range;
determining, for a feature of each of the sample data, a minimum value and a maximum value of the feature in a dataset corresponding to the feature;
and determining the ratio of the difference between the actual value corresponding to the feature and the minimum value to the difference between the maximum value and the minimum value as the value corresponding to the feature after normalization.
5. The method of claim 2, wherein the energy balancing process comprises:
determining a size of a time window in the sample data, and an average energy of each time window;
determining a normalization coefficient corresponding to the average energy;
and aiming at the data point in each time window, adjusting the amplitude of the data point according to the product value between the amplitude corresponding to the data point and the normalization coefficient.
6. The method of claim 2, wherein the data slicing process comprises:
determining a data slice size for each of the sample data;
determining the data region of the data slice size as a separate channel;
and carrying out three-dimensional expansion on the data of the sample data according to the size of the channel corresponding to the sample data.
7. The method of claim 2, wherein the training the first neural network according to the training set comprises:
randomly initializing the weight and bias parameters of the first neural network;
inputting the training set into the first neural network for forward propagation to obtain predicted first full-frequency data;
determining a gradient calculation result according to the loss value between the predicted first full frequency data and the real first full frequency data;
and updating the weight and bias of the first neural network according to a back propagation algorithm and the gradient calculation result.
8. The method of claim 7, wherein inputting the training set into the first neural network for forward propagation results in predicted first full frequency data, comprising:
performing convolution calculation aiming at the mapping relation between the first intermediate frequency data and the first full frequency data in the training set;
after the convolution calculation is completed, determining a corresponding feature map based on a nonlinear activation function;
pooling operation is carried out on the feature map, and the feature map obtained after the pooling operation is used as input of a full-connection layer to obtain output features;
mapping the output characteristics to the output space of the full-frequency data to obtain predicted first full-frequency data.
9. A seismic frequency extension apparatus, the apparatus comprising:
the acquisition module is used for acquiring first intermediate frequency data and first full frequency data in the seismic sample data; the first intermediate frequency data is used for representing data in a first frequency range in the seismic sample data, and the first full frequency data is used for representing data in a second frequency range in the seismic sample data;
the training module is used for training a first neural network according to the first intermediate frequency data and the first full frequency data;
and the prediction module is used for carrying out earthquake frequency expansion on the second intermediate frequency data according to the first neural network after training is completed, so as to obtain predicted second full frequency data.
10. An electronic device, the electronic device comprising:
at least one processor; and a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
11. A computer-readable storage medium comprising a set of computer-executable instructions for performing the seismic frequency extension method of any of claims 1-8 when the instructions are executed.
CN202310901589.1A 2023-07-20 2023-07-20 Earthquake frequency extension method and device, electronic equipment and computer readable storage medium Active CN116643310B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310901589.1A CN116643310B (en) 2023-07-20 2023-07-20 Earthquake frequency extension method and device, electronic equipment and computer readable storage medium


Publications (2)

Publication Number Publication Date
CN116643310A true CN116643310A (en) 2023-08-25
CN116643310B CN116643310B (en) 2023-11-10

Family

ID=87623273

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310901589.1A Active CN116643310B (en) 2023-07-20 2023-07-20 Earthquake frequency extension method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN116643310B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101206264A (en) * 2007-11-08 2008-06-25 符力耘 Method for inversion of high resolution non-linear earthquake wave impedance
KR20150035633A (en) * 2013-09-27 2015-04-07 한국전력공사 Apparatus for measuring earthquake intensity and method for the same
CN112926232A (en) * 2020-12-10 2021-06-08 中国石油大学(华东) Seismic low-frequency component recovery method based on layered fusion
US20230186201A1 (en) * 2016-05-09 2023-06-15 Strong Force Iot Portfolio 2016, Llc Industrial digital twin systems providing neural net-based adjustment recommendation with data relevant to role taxonomy
CN116299702A (en) * 2023-03-07 2023-06-23 中国地质科学院地球物理地球化学勘查研究所 CNN-based frequency domain low-frequency expansion multi-scale full waveform inversion method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101206264A (en) * 2007-11-08 2008-06-25 符力耘 Method for inversion of high resolution non-linear earthquake wave impedance
KR20150035633A (en) * 2013-09-27 2015-04-07 한국전력공사 Apparatus for measuring earthquake intensity and method for the same
US20230186201A1 (en) * 2016-05-09 2023-06-15 Strong Force Iot Portfolio 2016, Llc Industrial digital twin systems providing neural net-based adjustment recommendation with data relevant to role taxonomy
CN112926232A (en) * 2020-12-10 2021-06-08 中国石油大学(华东) Seismic low-frequency component recovery method based on layered fusion
CN116299702A (en) * 2023-03-07 2023-06-23 中国地质科学院地球物理地球化学勘查研究所 CNN-based frequency domain low-frequency expansion multi-scale full waveform inversion method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHEN Zhiming et al.: "Proceedings of the 2021 Geophysical Prospecting Technology Symposium of the Chinese Petroleum Society", "China Academic Journal (CD Edition)" Electronic Magazine Co., Ltd., pages: 988 - 991 *

Also Published As

Publication number Publication date
CN116643310B (en) 2023-11-10

Similar Documents

Publication Publication Date Title
CN111401516B (en) Searching method for neural network channel parameters and related equipment
CN111723732B (en) Optical remote sensing image change detection method, storage medium and computing equipment
CN111562611B (en) Semi-supervised depth learning seismic data inversion method based on wave equation drive
CN112084923B (en) Remote sensing image semantic segmentation method, storage medium and computing device
Li et al. IncepTCN: A new deep temporal convolutional network combined with dictionary learning for strong cultural noise elimination of controlled-source electromagnetic data
CN109784488B (en) Construction method of binary convolution neural network suitable for embedded platform
CN113761805B (en) Controllable source electromagnetic data denoising method, system, terminal and readable storage medium based on time domain convolution network
CN104008420A (en) Distributed outlier detection method and system based on automatic coding machine
CN112990112A (en) Edge-guided cyclic convolution neural network building change detection method and system
CN109948452A (en) Clock signal prediction method and device
Song et al. High-frequency wavefield extrapolation using the Fourier neural operator
CN110954950A (en) Underground transverse wave velocity inversion method, device, computing equipment and storage medium
CN112949944A (en) Underground water level intelligent prediction method and system based on space-time characteristics
CN116643310B (en) Earthquake frequency extension method and device, electronic equipment and computer readable storage medium
CN117214950B (en) Multiple wave suppression method, device, equipment and storage medium
CN112433249B (en) Horizon tracking method and device, computer equipment and computer readable storage medium
Nathaniel et al. Chaosbench: A multi-channel, physics-based benchmark for subseasonal-to-seasonal climate prediction
CN116263735A (en) Robustness assessment method, device, equipment and storage medium for neural network
CN108398719A (en) Seismic data processing method and device
CN112578458B (en) Pre-stack elastic impedance random inversion method and device, storage medium and processor
CN115629413A (en) Physical and data driving-based receiving function inversion method
CN114943189A (en) XGboost-based acoustic velocity profile inversion method and system
Lyu et al. Rough discrete fracture network multi-parameter joint modeling based on improved neural spline flow
CN106405504A (en) Combined shear wave transformation and singular value decomposition ground penetrating radar data denoising method
Barrion et al. Modified Fast and Robust Fuzzy C-means Algorithm for Flood Damage Assessment using Optimal Image Segmentation Cluster Number

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant