CN115903022B - Deep learning chip suitable for real-time seismic data processing - Google Patents

Deep learning chip suitable for real-time seismic data processing

Info

Publication number: CN115903022B
Application number: CN202211556940.XA
Authority: CN (China)
Legal status: Active (granted)
Filing date: 2022-12-06
Grant publication date: 2023-10-31
Earlier publication: CN115903022A (application publication, in Chinese)
Inventors: 薛清峰 (Xue Qingfeng), 王一博 (Wang Yibo), 郑忆康 (Zheng Yikang)
Assignee: Institute of Geology and Geophysics of CAS
Classification: Y02A90/30 (Assessment of water resources; technologies having an indirect contribution to adaptation to climate change)

Abstract

The invention discloses a deep learning chip suitable for real-time seismic data processing, comprising a feature extraction subsystem, a P-wave first-arrival induction subsystem, an S-wave first-arrival induction subsystem, and a microseism estimation subsystem. The feature extraction subsystem extracts microseismic detection data to obtain microseismic feature data; the P-wave first-arrival induction subsystem extracts the P-wave first-arrival time from the microseismic feature data; the S-wave first-arrival induction subsystem extracts the S-wave first-arrival time from the microseismic feature data; and the microseism estimation subsystem estimates the microseismic source from the P-wave and S-wave first-arrival times. This solves the problem that, in the prior art, extracting only the P wave or its first arrival cannot determine the source distance of a microseism.

Description

Deep learning chip suitable for real-time seismic data processing
Technical Field
The invention relates to the technical field of seismic sensing, and in particular to a deep learning chip suitable for real-time seismic data processing.
Background
A microseism is a small-scale earthquake. Rock fracturing and seismic activity during deep mining in underground mines are often unavoidable. Microseisms are generally defined as earthquakes caused by rock failure due to changes in the stress field within the rock mass near production galleries.
The prior documents "A method and system for picking microseismic P-wave first arrivals based on a capsule neural network" and "A method and system for identifying microseismic P waves based on a deep convolutional neural network" identify the P wave or extract its first arrival, but extracting only the P wave or the P-wave first arrival cannot determine the source distance of a microseism.
Disclosure of Invention
Aiming at the above defects in the prior art, the deep learning chip suitable for real-time seismic data processing provided by the invention solves the problem that, in the prior art, extracting only P waves or P-wave first arrivals cannot determine the source distance of a microseism.
To achieve the above object, the invention adopts the following technical scheme: a deep learning chip suitable for real-time seismic data processing, comprising: a feature extraction subsystem, a P-wave first-arrival induction subsystem, an S-wave first-arrival induction subsystem, and a microseism estimation subsystem;
the feature extraction subsystem is used for extracting microseism detection data to obtain microseism feature data; the P-wave first arrival induction subsystem is used for extracting P-wave first arrival time in the microseism characteristic data; the S-wave first arrival induction subsystem is used for extracting S-wave first arrival time in the microseism characteristic data; the micro-seismic estimation subsystem is used for estimating a micro-seismic source according to the P-wave first arrival time and the S-wave first arrival time.
Further, the feature extraction subsystem includes: a CNN unit, a first BiLSTM unit, a second BiLSTM unit, and a first global attention unit;
the input end of the CNN unit is used as the input end of the feature extraction subsystem and is used for inputting microseism detection data; the input end of the first BiLSTM unit is connected with the output end of the CNN unit, and the output end of the first BiLSTM unit is connected with the input end of the second BiLSTM unit; the input end of the first global attention unit is connected with the output end of the second BiLSTM unit, and the output end of the first global attention unit is used as the output end of the feature extraction subsystem.
Further, the CNN unit includes: a first convolution layer, a first maximum pooling layer, a second convolution layer, a second maximum pooling layer, a third convolution layer, a third maximum pooling layer, a fourth convolution layer, a fourth maximum pooling layer, a fifth convolution layer, and a fifth maximum pooling layer;
the input end of the first convolution layer is used as the input end of the CNN unit, and the output end of the first convolution layer is connected with the input end of the first maximum pooling layer; the input end of the second convolution layer is connected with the output end of the first maximum pooling layer, and the output end of the second convolution layer is connected with the input end of the second maximum pooling layer; the input end of the third convolution layer is connected with the output end of the second maximum pooling layer, and the output end of the third convolution layer is connected with the input end of the third maximum pooling layer; the input end of the fourth convolution layer is connected with the output end of the third maximum pooling layer, and the output end of the fourth convolution layer is connected with the input end of the fourth maximum pooling layer; the input end of the fifth convolution layer is connected with the output end of the fourth maximum pooling layer, and the output end of the fifth convolution layer is connected with the input end of the fifth maximum pooling layer; the output end of the fifth maximum pooling layer is used as the output end of the CNN unit.
Further, the P-wave first arrival induction subsystem includes: a third BiLSTM unit, a second global attention unit, and a first full connection layer unit;
the input end of the third BiLSTM unit is used as the input end of the P-wave first arrival induction subsystem; the input end of the second global attention unit is connected with the output end of the third BiLSTM unit, and the output end of the second global attention unit is connected with the input end of the first full-connection layer unit; the output end of the first full-connection layer unit is used as the output end of the P-wave first arrival induction subsystem.
Further, the S-wave first arrival induction subsystem includes: a fourth BiLSTM unit, a third global attention unit, and a second full connection layer unit;
the input end of the fourth BiLSTM unit is connected with the input end of the S-wave first arrival induction subsystem; the input end of the third global attention unit is connected with the output end of the fourth BiLSTM unit, and the output end of the third global attention unit is connected with the input end of the second full connection layer unit; and the output end of the second full-connection layer unit is used as the output end of the S-wave first arrival induction subsystem.
Further, the input/output relationship of the cells in the LSTM module of the BiLSTM unit in the feature extraction subsystem, the P-wave first-arrival induction subsystem or the S-wave first-arrival induction subsystem is as follows:
f_t = σ(W_f·(y_{t-1}, x_t, C_{t-1}) + b_f)
i_t = tanh(W_i·(y_{t-1}, x_t, C_{t-1}) + b_i)
h_t = σ(W_h·(y_{t-1}, x_t, C_{t-1}) + b_h)
C_t = (C_{t-1} ⊙ f_t + (1 − f_t) ⊙ i_t) ⊙ ((1 − i_t) ⊙ h_t)
y_t = σ(W_o·(y_{t-1}, x_t, C_{t-1}, C_t) + b_o) ⊙ tanh(C_t)
where f_t is the output of the forget gate at time t; σ(·) is the sigmoid activation function; W_f and b_f are the weight and bias of the forget gate; y_{t-1} is the output of the cell at time t−1; x_t is the input of the cell at time t; C_{t-1} is the state of the cell at time t−1; i_t is the output of the input gate at time t; tanh(·) is the hyperbolic tangent activation function; W_i and b_i are the weight and bias of the input gate; h_t is the output of the candidate gate at time t; W_h and b_h are the weight and bias of the candidate gate; C_t is the state of the cell at time t; ⊙ denotes the Hadamard product; y_t is the output of the output gate at time t; and W_o and b_o are the weight and bias of the output gate.
The beneficial effects of the above further scheme are: the LSTM module takes into account the state C_{t-1} at the previous time, the input x_t at the current time, and the output y_{t-1} at the previous time, so that the cell fully considers the relationships among state, input, and output in its computation.
Further, the global attention unit in the feature extraction subsystem, the P-wave first-arrival induction subsystem or the S-wave first-arrival induction subsystem comprises: a sixth convolution layer, a Softmax layer, a multiplier, a seventh convolution layer, a ReLU layer, an eighth convolution layer, and an adder;
the input end of the sixth convolution layer is connected with the first input end of the multiplier and the first input end of the adder respectively and is used as the input end of the global attention unit; the input end of the Softmax layer is connected with the output end of the sixth convolution layer, and the output end of the Softmax layer is connected with the second input end of the multiplier; the input end of the seventh convolution layer is connected with the output end of the multiplier, and the output end of the seventh convolution layer is connected with the input end of the ReLU layer; the input end of the eighth convolution layer is connected with the output end of the ReLU layer, and the output end of the eighth convolution layer is connected with the second input end of the adder; the output of the adder acts as the output of the global attention unit.
Further, microseismic detection data and data labels are used to form a training data set; the training data set is used to train the feature extraction subsystem, the P-wave first-arrival induction subsystem, and the S-wave first-arrival induction subsystem to obtain the trained subsystems; and the trained feature extraction subsystem, P-wave first-arrival induction subsystem, and S-wave first-arrival induction subsystem are arranged in the processor.
Further, the weight update formula in the training process is:
where w_{i+1} is the weight at iteration i+1; w_i is the weight at iteration i; η_i is the learning rate at iteration i; η_{i-1} is the learning rate at iteration i−1; J_i is the loss function at iteration i; J_{i-1} is the loss function at iteration i−1; γ is the proportionality coefficient; and ζ is the adjustment constant.
The beneficial effects of the above further scheme are: the formula weights the past and current second derivatives of the loss function. A larger second derivative of the loss function indicates a larger rate of change of its gradient; the weighted accumulation of past gradient change rates, after smoothing filtering, is used to regulate the weights, so that the step size of the weight iteration is tied to the gradient change rate, preventing overshoot while avoiding excessively slow iteration. In the design of the learning-rate parameter, the degree of decrease of the loss function is considered: when the loss function decreases more, the difference J_{i-1} − J_i is larger, so the learning rate η_i changes faster and the step-size adjustment of the weight-update iteration is stronger; when the loss function decreases less, J_{i-1} − J_i is smaller, so η_i changes more slowly and the step-size adjustment is weaker. As a result, the weights iterate adaptively, quickly, and stably, and rapidly reach the optimum.
In summary, the invention has the following beneficial effects: the feature extraction subsystem extracts the microseismic feature data from the microseismic detection data, which reduces the data volume while preserving the data features; the P-wave first-arrival induction subsystem and the S-wave first-arrival induction subsystem extract the P-wave first-arrival time and the S-wave first-arrival time, respectively; and the microseism estimation subsystem estimates the microseismic source location.
Drawings
FIG. 1 is a system block diagram of a deep learning chip suitable for real-time seismic data processing;
FIG. 2 is a system block diagram of a CNN unit;
fig. 3 is a system block diagram of a global attention unit.
Detailed Description
The following description of embodiments of the present invention is provided to facilitate understanding by those skilled in the art. It should be understood, however, that the invention is not limited to the scope of these specific embodiments; to those of ordinary skill in the art, any invention making use of the inventive concept falls within the protection of the spirit and scope of the invention as defined by the appended claims.
As shown in fig. 1, a deep learning chip suitable for real-time seismic data processing includes: the system comprises a feature extraction subsystem, a P-wave first-arrival induction subsystem, an S-wave first-arrival induction subsystem and a microseism estimation subsystem;
the feature extraction subsystem is used for extracting microseism detection data to obtain microseism feature data; the P-wave first arrival induction subsystem is used for extracting P-wave first arrival time in the microseism characteristic data; the S-wave first arrival induction subsystem is used for extracting S-wave first arrival time in the microseism characteristic data; the micro-seismic estimation subsystem is used for estimating a micro-seismic source according to the P-wave first arrival time and the S-wave first arrival time.
The feature extraction subsystem includes: a CNN unit, a first BiLSTM unit, a second BiLSTM unit, and a first global attention unit;
the input end of the CNN unit is used as the input end of the feature extraction subsystem and is used for inputting microseism detection data; the input end of the first BiLSTM unit is connected with the output end of the CNN unit, and the output end of the first BiLSTM unit is connected with the input end of the second BiLSTM unit; the input end of the first global attention unit is connected with the output end of the second BiLSTM unit, and the output end of the first global attention unit is used as the output end of the feature extraction subsystem.
As shown in fig. 2, the CNN unit includes: a first convolution layer, a first maximum pooling layer, a second convolution layer, a second maximum pooling layer, a third convolution layer, a third maximum pooling layer, a fourth convolution layer, a fourth maximum pooling layer, a fifth convolution layer, and a fifth maximum pooling layer;
the input end of the first convolution layer is used as the input end of the CNN unit, and the output end of the first convolution layer is connected with the input end of the first maximum pooling layer; the input end of the second convolution layer is connected with the output end of the first maximum pooling layer, and the output end of the second convolution layer is connected with the input end of the second maximum pooling layer; the input end of the third convolution layer is connected with the output end of the second maximum pooling layer, and the output end of the third convolution layer is connected with the input end of the third maximum pooling layer; the input end of the fourth convolution layer is connected with the output end of the third maximum pooling layer, and the output end of the fourth convolution layer is connected with the input end of the fourth maximum pooling layer; the input end of the fifth convolution layer is connected with the output end of the fourth maximum pooling layer, and the output end of the fifth convolution layer is connected with the input end of the fifth maximum pooling layer; the output end of the fifth maximum pooling layer is used as the output end of the CNN unit.
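As a concrete illustration of the five conv/pool stages described above, the following NumPy sketch traces an input through the chain of convolution and max-pooling layers. The channel progression, kernel size (3), pooling width (2), and input length are assumptions made for the demonstration; the patent does not specify these hyperparameters.

```python
import numpy as np

def conv1d(x, w, b):
    """Valid 1-D convolution: x is (C_in, L), w is (C_out, C_in, K), b is (C_out,)."""
    c_out, c_in, k = w.shape
    length = x.shape[1] - k + 1
    out = np.zeros((c_out, length))
    for i in range(length):
        # Contract the (C_in, K) patch against each output filter.
        out[:, i] = np.tensordot(w, x[:, i:i + k], axes=([1, 2], [0, 1])) + b
    return out

def maxpool1d(x, k=2):
    """Non-overlapping max pooling along the time axis (truncates a ragged tail)."""
    length = (x.shape[1] // k) * k
    return x[:, :length].reshape(x.shape[0], -1, k).max(axis=2)

rng = np.random.default_rng(0)
x = rng.standard_normal((3, 512))      # assumed: 3-component trace, 512 samples
channels = [3, 8, 16, 32, 64, 64]      # assumed channel progression per stage
for c_in, c_out in zip(channels[:-1], channels[1:]):
    w = rng.standard_normal((c_out, c_in, 3)) * 0.1
    x = maxpool1d(conv1d(x, w, np.zeros(c_out)))  # conv layer then max-pool layer
print(x.shape)
```

With these assumed sizes the time axis shrinks 512 → 255 → 126 → 62 → 30 → 14 across the five stages, illustrating how the CNN unit compresses the trace before the BiLSTM units.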
The P-wave first arrival induction subsystem comprises: a third BiLSTM unit, a second global attention unit, and a first full connection layer unit;
the input end of the third BiLSTM unit is used as the input end of the P-wave first arrival induction subsystem; the input end of the second global attention unit is connected with the output end of the third BiLSTM unit, and the output end of the second global attention unit is connected with the input end of the first full-connection layer unit; the output end of the first full-connection layer unit is used as the output end of the P-wave first arrival induction subsystem.
The S-wave first arrival induction subsystem comprises: a fourth BiLSTM unit, a third global attention unit, and a second full connection layer unit;
the input end of the fourth BiLSTM unit is connected with the input end of the S-wave first arrival induction subsystem; the input end of the third global attention unit is connected with the output end of the fourth BiLSTM unit, and the output end of the third global attention unit is connected with the input end of the second full connection layer unit; and the output end of the second full-connection layer unit is used as the output end of the S-wave first arrival induction subsystem.
The input/output relationship of the cells in the LSTM module of the BiLSTM units in the feature extraction subsystem, the P-wave first-arrival induction subsystem, and the S-wave first-arrival induction subsystem is as follows:
f_t = σ(W_f·(y_{t-1}, x_t, C_{t-1}) + b_f)
i_t = tanh(W_i·(y_{t-1}, x_t, C_{t-1}) + b_i)
h_t = σ(W_h·(y_{t-1}, x_t, C_{t-1}) + b_h)
C_t = (C_{t-1} ⊙ f_t + (1 − f_t) ⊙ i_t) ⊙ ((1 − i_t) ⊙ h_t)
y_t = σ(W_o·(y_{t-1}, x_t, C_{t-1}, C_t) + b_o) ⊙ tanh(C_t)
where f_t is the output of the forget gate at time t; σ(·) is the sigmoid activation function; W_f and b_f are the weight and bias of the forget gate; y_{t-1} is the output of the cell at time t−1; x_t is the input of the cell at time t; C_{t-1} is the state of the cell at time t−1; i_t is the output of the input gate at time t; tanh(·) is the hyperbolic tangent activation function; W_i and b_i are the weight and bias of the input gate; h_t is the output of the candidate gate at time t; W_h and b_h are the weight and bias of the candidate gate; C_t is the state of the cell at time t; ⊙ denotes the Hadamard product; y_t is the output of the output gate at time t; and W_o and b_o are the weight and bias of the output gate.
The LSTM module takes into account the state C_{t-1} at the previous time, the input x_t at the current time, and the output y_{t-1} at the previous time, so that the cell fully considers the relationships among state, input, and output in its computation.
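The cell equations above can be sketched directly in NumPy. This is a minimal single-step implementation of the modified LSTM cell; the hidden and input sizes, the weight initialisation, and the representation of the tuple (y_{t-1}, x_t, C_{t-1}) as a concatenated vector are assumptions made for the demonstration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell_step(x_t, y_prev, c_prev, p):
    """One step of the modified LSTM cell per the equations above. The gate
    inputs concatenate (y_{t-1}, x_t, C_{t-1}); the output gate additionally
    sees the new state C_t."""
    z = np.concatenate([y_prev, x_t, c_prev])
    f_t = sigmoid(p["W_f"] @ z + p["b_f"])   # forget gate
    i_t = np.tanh(p["W_i"] @ z + p["b_i"])   # input gate (tanh, per the text)
    h_t = sigmoid(p["W_h"] @ z + p["b_h"])   # candidate gate
    c_t = (c_prev * f_t + (1 - f_t) * i_t) * ((1 - i_t) * h_t)
    zo = np.concatenate([y_prev, x_t, c_prev, c_t])
    y_t = sigmoid(p["W_o"] @ zo + p["b_o"]) * np.tanh(c_t)
    return y_t, c_t

# Tiny usage example with assumed sizes: hidden H = 4, input D = 2.
rng = np.random.default_rng(1)
H, D = 4, 2
p = {
    "W_f": rng.standard_normal((H, 2 * H + D)) * 0.1, "b_f": np.zeros(H),
    "W_i": rng.standard_normal((H, 2 * H + D)) * 0.1, "b_i": np.zeros(H),
    "W_h": rng.standard_normal((H, 2 * H + D)) * 0.1, "b_h": np.zeros(H),
    "W_o": rng.standard_normal((H, 3 * H + D)) * 0.1, "b_o": np.zeros(H),
}
y, c = np.zeros(H), np.zeros(H)
y, c = lstm_cell_step(rng.standard_normal(D), y, c, p)
print(y.shape, c.shape)
```

Note how this differs from a standard LSTM: the forget gate alone blends old state and candidate input, and the state update is gated multiplicatively by (1 − i_t) ⊙ h_t, exactly as the C_t equation specifies.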
As shown in fig. 3, the global attention unit in the feature extraction subsystem, the P-wave first-arrival induction subsystem or the S-wave first-arrival induction subsystem includes: a sixth convolution layer, a Softmax layer, a multiplier, a seventh convolution layer, a ReLU layer, an eighth convolution layer, and an adder;
the input end of the sixth convolution layer is connected with the first input end of the multiplier and the first input end of the adder respectively and is used as the input end of the global attention unit; the input end of the Softmax layer is connected with the output end of the sixth convolution layer, and the output end of the Softmax layer is connected with the second input end of the multiplier; the input end of the seventh convolution layer is connected with the output end of the multiplier, and the output end of the seventh convolution layer is connected with the input end of the ReLU layer; the input end of the eighth convolution layer is connected with the output end of the ReLU layer, and the output end of the eighth convolution layer is connected with the second input end of the adder; the output of the adder acts as the output of the global attention unit.
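The wiring of the global attention unit can be sketched as a residual block: Softmax attention weights are computed from the input, multiplied back onto the input, refined by two convolutions with a ReLU between them, and added to the input. The sketch below assumes the three convolutions are 1x1 (so they reduce to matrix multiplications over the channel axis) and that the Softmax runs over time positions; the patent specifies neither kernel sizes nor the Softmax axis.

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_attention(x, w6, w7, w8):
    """Residual attention block matching the wiring above.
    x: (C, L) feature map; w6, w7, w8 stand in for the sixth, seventh, and
    eighth convolution layers as assumed 1x1 convolutions."""
    attn = softmax(w6 @ x, axis=1)      # sixth conv -> Softmax layer
    weighted = attn * x                 # multiplier: attention weights * input
    h = np.maximum(w7 @ weighted, 0.0)  # seventh conv -> ReLU layer
    return x + w8 @ h                   # eighth conv -> adder (residual path)

rng = np.random.default_rng(2)
C, L = 8, 16
x = rng.standard_normal((C, L))
out = global_attention(x, *(rng.standard_normal((C, C)) * 0.1 for _ in range(3)))
print(out.shape)
```

The residual connection (the adder's first input) lets the unit fall back to the identity when the attended correction is small, which is the usual motivation for this wiring.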
Microseismic detection data and data labels are used to form a training data set; the training data set is used to train the feature extraction subsystem, the P-wave first-arrival induction subsystem, and the S-wave first-arrival induction subsystem to obtain the trained subsystems; and the trained feature extraction subsystem, P-wave first-arrival induction subsystem, and S-wave first-arrival induction subsystem are arranged in the processor.
The data labels are P wave first arrival time and S wave first arrival time.
The weight updating formula in the training process is as follows:
where w_{i+1} is the weight at iteration i+1; w_i is the weight at iteration i; η_i is the learning rate at iteration i; η_{i-1} is the learning rate at iteration i−1; J_i is the loss function at iteration i; J_{i-1} is the loss function at iteration i−1; γ is the proportionality coefficient; and ζ is the adjustment constant.
The formula weights the past and current second derivatives of the loss function. A larger second derivative of the loss function indicates a larger rate of change of its gradient; the weighted accumulation of past gradient change rates, after smoothing filtering, is used to regulate the weights, so that the step size of the weight iteration is tied to the gradient change rate, preventing overshoot while avoiding excessively slow iteration. In the design of the learning-rate parameter, the degree of decrease of the loss function is considered: when the loss function decreases more, the difference J_{i-1} − J_i is larger, so the learning rate η_i changes faster and the step-size adjustment of the weight-update iteration is stronger; when the loss function decreases less, J_{i-1} − J_i is smaller, so η_i changes more slowly and the step-size adjustment is weaker. As a result, the weights iterate adaptively, quickly, and stably, and rapidly reach the optimum.
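The exact update formula did not survive extraction, so the sketch below only illustrates the qualitative behaviour described in this paragraph: the learning rate is scaled up in proportion to the recent loss decrease J_{i-1} − J_i. The specific update rule, the safety cap, and all constants here are assumptions for the demonstration, not the patent's formula.

```python
def adaptive_lr_step(w, eta_prev, j_prev, j_curr, grad,
                     gamma=0.1, zeta=1e-8, eta_max=0.4):
    """Illustrative only: scale the learning rate by the relative loss decrease.
    gamma stands in for the proportionality coefficient and zeta for the
    adjustment constant; eta_max is a safety cap added for this demo."""
    eta = min(eta_prev * (1.0 + gamma * (j_prev - j_curr) / (abs(j_prev) + zeta)),
              eta_max)
    return w - eta * grad, eta

# Minimise J(w) = w^2 (gradient 2w), starting from w = 5.
w, eta = 5.0, 0.1
j_prev = w * w
for _ in range(50):
    j_curr = w * w
    w, eta = adaptive_lr_step(w, eta, j_prev, j_curr, 2.0 * w)
    j_prev = j_curr
print(abs(w) < 1e-3, round(eta, 3))
```

On this toy loss the learning rate grows while the loss is falling steeply and stabilises (here at the cap) as the decrease flattens, mirroring the step-size regulation the text describes.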
According to the invention, the feature extraction subsystem extracts the microseismic feature data from the microseismic detection data, which reduces the data volume while preserving the data features; the P-wave first-arrival induction subsystem and the S-wave first-arrival induction subsystem extract the P-wave first-arrival time and the S-wave first-arrival time, respectively; the P-wave and S-wave velocities can be measured by the sensor; and the microseismic source location can be estimated by combining the time difference between the P-wave and S-wave first arrivals with the P-wave and S-wave velocities.
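The localization step described here, combining the P/S first-arrival time difference with the wave velocities, can be sketched with the standard travel-time relation: from d/v_S − d/v_P = t_S − t_P it follows that d = (t_S − t_P)·v_P·v_S/(v_P − v_S). The velocities and pick times below are illustrative values, not from the patent.

```python
def source_distance(t_p, t_s, v_p, v_s):
    """Hypocentral distance (m) from P and S first-arrival times (s) and
    velocities (m/s): d / v_s - d / v_p = t_s - t_p, hence
    d = (t_s - t_p) * v_p * v_s / (v_p - v_s)."""
    return (t_s - t_p) * v_p * v_s / (v_p - v_s)

# Illustrative hard-rock velocities and picked first arrivals.
d = source_distance(t_p=0.50, t_s=0.86, v_p=6000.0, v_s=3500.0)
print(round(d, 1))  # distance in metres
```

This gives only the distance from a single sensor; a location (as opposed to a range) would combine such distances from several sensors, which is consistent with the estimation role the microseism estimation subsystem plays above.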

Claims (4)

1. A deep learning chip suitable for real-time seismic data processing, comprising: the system comprises a feature extraction subsystem, a P-wave first-arrival induction subsystem, an S-wave first-arrival induction subsystem and a microseism estimation subsystem;
the feature extraction subsystem is used for extracting microseism detection data to obtain microseism feature data; the P-wave first arrival induction subsystem is used for extracting P-wave first arrival time in the microseism characteristic data; the S-wave first arrival induction subsystem is used for extracting S-wave first arrival time in the microseism characteristic data; the micro-seismic estimation subsystem is used for estimating a micro-seismic source according to the P-wave first arrival time and the S-wave first arrival time;
the feature extraction subsystem includes: a CNN unit, a first BiLSTM unit, a second BiLSTM unit, and a first global attention unit;
the input end of the CNN unit is used as the input end of the feature extraction subsystem and is used for inputting microseism detection data; the input end of the first BiLSTM unit is connected with the output end of the CNN unit, and the output end of the first BiLSTM unit is connected with the input end of the second BiLSTM unit; the input end of the first global attention unit is connected with the output end of the second BiLSTM unit, and the output end of the first global attention unit is used as the output end of the feature extraction subsystem;
the P-wave first arrival induction subsystem comprises: a third BiLSTM unit, a second global attention unit, and a first full connection layer unit;
the input end of the third BiLSTM unit is used as the input end of the P-wave first arrival induction subsystem; the input end of the second global attention unit is connected with the output end of the third BiLSTM unit, and the output end of the second global attention unit is connected with the input end of the first full-connection layer unit; the output end of the first full-connection layer unit is used as the output end of the P-wave first arrival induction subsystem;
the S-wave first arrival induction subsystem comprises: a fourth BiLSTM unit, a third global attention unit, and a second full connection layer unit;
the input end of the fourth BiLSTM unit is connected with the input end of the S-wave first arrival induction subsystem;
the input end of the third global attention unit is connected with the output end of the fourth BiLSTM unit, and the output end of the third global attention unit is connected with the input end of the second full connection layer unit; the output end of the second full-connection layer unit is used as the output end of the S-wave first arrival induction subsystem;
the characteristic extraction subsystem, the P-wave first-arrival induction subsystem or the LSTM module of the BiLSTM unit in the S-wave first-arrival induction subsystem has the following input-output relationship:
f_t = σ(W_f·(y_{t-1}, x_t, C_{t-1}) + b_f)
i_t = tanh(W_i·(y_{t-1}, x_t, C_{t-1}) + b_i)
h_t = σ(W_h·(y_{t-1}, x_t, C_{t-1}) + b_h)
C_t = (C_{t-1} ⊙ f_t + (1 − f_t) ⊙ i_t) ⊙ ((1 − i_t) ⊙ h_t)
y_t = σ(W_o·(y_{t-1}, x_t, C_{t-1}, C_t) + b_o) ⊙ tanh(C_t)
where f_t is the output of the forget gate at time t; σ(·) is the sigmoid activation function; W_f and b_f are the weight and bias of the forget gate; y_{t-1} is the output of the cell at time t−1; x_t is the input of the cell at time t; C_{t-1} is the state of the cell at time t−1; i_t is the output of the input gate at time t; tanh(·) is the hyperbolic tangent activation function; W_i and b_i are the weight and bias of the input gate; h_t is the output of the candidate gate at time t; W_h and b_h are the weight and bias of the candidate gate; C_t is the state of the cell at time t; ⊙ denotes the Hadamard product; y_t is the output of the output gate at time t; and W_o and b_o are the weight and bias of the output gate;
the global attention unit in the feature extraction subsystem, the P-wave first-arrival induction subsystem or the S-wave first-arrival induction subsystem comprises: a sixth convolution layer, a Softmax layer, a multiplier, a seventh convolution layer, a ReLU layer, an eighth convolution layer, and an adder;
the input end of the sixth convolution layer is connected with the first input end of the multiplier and the first input end of the adder respectively and is used as the input end of the global attention unit; the input end of the Softmax layer is connected with the output end of the sixth convolution layer, and the output end of the Softmax layer is connected with the second input end of the multiplier; the input end of the seventh convolution layer is connected with the output end of the multiplier, and the output end of the seventh convolution layer is connected with the input end of the ReLU layer; the input end of the eighth convolution layer is connected with the output end of the ReLU layer, and the output end of the eighth convolution layer is connected with the second input end of the adder; the output of the adder acts as the output of the global attention unit.
2. The deep learning chip adapted for real-time seismic data processing of claim 1, wherein the CNN unit comprises: a first convolution layer, a first maximum pooling layer, a second convolution layer, a second maximum pooling layer, a third convolution layer, a third maximum pooling layer, a fourth convolution layer, a fourth maximum pooling layer, a fifth convolution layer, and a fifth maximum pooling layer;
the input end of the first convolution layer is used as the input end of the CNN unit, and the output end of the first convolution layer is connected with the input end of the first maximum pooling layer; the input end of the second convolution layer is connected with the output end of the first maximum pooling layer, and the output end of the second convolution layer is connected with the input end of the second maximum pooling layer; the input end of the third convolution layer is connected with the output end of the second maximum pooling layer, and the output end of the third convolution layer is connected with the input end of the third maximum pooling layer; the input end of the fourth convolution layer is connected with the output end of the third maximum pooling layer, and the output end of the fourth convolution layer is connected with the input end of the fourth maximum pooling layer; the input end of the fifth convolution layer is connected with the output end of the fourth maximum pooling layer, and the output end of the fifth convolution layer is connected with the input end of the fifth maximum pooling layer; the output end of the fifth maximum pooling layer is used as the output end of the CNN unit.
3. The deep learning chip suitable for real-time seismic data processing of claim 1, wherein a training data set is constructed from microseism detection data and corresponding data labels; the feature extraction subsystem, the P-wave first-arrival induction subsystem and the S-wave first-arrival induction subsystem are trained with the training data set; and the trained feature extraction subsystem, P-wave first-arrival induction subsystem and S-wave first-arrival induction subsystem are arranged in the processor.
4. A deep learning chip suitable for real-time seismic data processing according to claim 3, wherein the weight update formula for the training process is:
wherein w_{i+1} is the weight at the (i+1)-th iteration, w_i is the weight at the i-th iteration, η_i is the learning rate of the i-th iteration, η_{i-1} is the learning rate of the (i-1)-th iteration, J_i is the loss function of the i-th iteration, J_{i-1} is the loss function of the (i-1)-th iteration, γ is a proportionality coefficient, and ζ is an adjustment constant.
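The formula itself is rendered as an image in the original publication and is not reproduced in this text; from the variables listed, it couples a gradient-descent weight step with a learning rate that is rescaled each iteration from the previous rate and the ratio of successive loss values. A purely illustrative sketch under that reading — the functional form below, and the roles given to γ and ζ, are guesses for demonstration, not the patented formula:

```python
import numpy as np

def update_weights(w_i, grad_i, eta_prev, J_i, J_prev, gamma=0.9, zeta=1e-8):
    """Hypothetical adaptive update consistent with the listed symbols:
    the learning rate shrinks when the loss stops decreasing
    (J_i grows relative to J_{i-1}); zeta guards against division by
    zero. This is NOT the formula claimed in the patent."""
    eta_i = eta_prev * gamma * (J_prev + zeta) / (J_i + zeta)
    w_next = w_i - eta_i * grad_i   # standard gradient-descent step
    return w_next, eta_i
```

With γ = 1 and an unchanged loss, the scheme reduces to plain gradient descent with a constant learning rate.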
CN202211556940.XA 2022-12-06 2022-12-06 Deep learning chip suitable for real-time seismic data processing Active CN115903022B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211556940.XA CN115903022B (en) 2022-12-06 2022-12-06 Deep learning chip suitable for real-time seismic data processing


Publications (2)

Publication Number Publication Date
CN115903022A (en) 2023-04-04
CN115903022B (en) 2023-10-31

Family

ID=86489479

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211556940.XA Active CN115903022B (en) 2022-12-06 2022-12-06 Deep learning chip suitable for real-time seismic data processing

Country Status (1)

Country Link
CN (1) CN115903022B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112017289A (en) * 2020-08-31 2020-12-01 电子科技大学 Well-seismic combined initial lithology model construction method based on deep learning
CN112068195A (en) * 2019-06-10 2020-12-11 中国石油化工股份有限公司 Automatic first arrival picking method for microseism P & S wave matching event and computer storage medium
CN113158792A (en) * 2021-03-15 2021-07-23 辽宁大学 Microseismic event identification method based on improved model transfer learning
CN114660656A (en) * 2022-03-17 2022-06-24 中国科学院地质与地球物理研究所 Seismic data first arrival picking method and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11947061B2 (en) * 2019-10-18 2024-04-02 Korea University Research And Business Foundation Earthquake event classification method using attention-based convolutional neural network, recording medium and device for performing the method



Similar Documents

Publication Publication Date Title
CN108596327B (en) Seismic velocity spectrum artificial intelligence picking method based on deep learning
CN111723329B (en) Seismic phase feature recognition waveform inversion method based on full convolution neural network
CN106407649B (en) Microseismic signals based on time recurrent neural network then automatic pick method
CN109308522B (en) GIS fault prediction method based on recurrent neural network
CN107544904B (en) Software reliability prediction method based on deep CG-LSTM neural network
CN110705743A (en) New energy consumption electric quantity prediction method based on long-term and short-term memory neural network
CN111538076A (en) Earthquake magnitude rapid estimation method based on deep learning feature fusion
CN111144542A (en) Oil well productivity prediction method, device and equipment
CN112819136A (en) Time sequence prediction method and system based on CNN-LSTM neural network model and ARIMA model
CN110632662A (en) Algorithm for automatically identifying microseism signals by using DCNN-inclusion network
CN111126132A (en) Learning target tracking algorithm based on twin network
CN112836802A (en) Semi-supervised learning method, lithology prediction method and storage medium
CN114723095A (en) Missing well logging curve prediction method and device
CN112257847A (en) Method for predicting geomagnetic Kp index based on CNN and LSTM
CN115983465A (en) Rock burst time sequence prediction model construction method based on small sample learning
CN114818579A (en) Analog circuit fault diagnosis method based on one-dimensional convolution long-short term memory network
CN112539054A (en) Production optimization method for ground pipe network and underground oil reservoir complex system
CN115903022B (en) Deep learning chip suitable for real-time seismic data processing
CN115640526A (en) Drilling risk identification model, building method, identification method and computer equipment
CN110223342B (en) Space target size estimation method based on deep neural network
CN113156492B (en) Real-time intelligent early warning method applied to TBM tunnel rockburst disasters
CN114862015A (en) Typhoon wind speed intelligent prediction method based on FEN-ConvLSTM model
Xu et al. An automatic P-wave onset time picking method for mining-induced microseismic data based on long short-term memory deep neural network
CN113158792B (en) Microseism event identification method based on improved model transfer learning
CN117877587A (en) Deep learning algorithm of whole genome prediction model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant