CN114167487A - Seismic magnitude estimation method and device based on characteristic waveform - Google Patents
- Publication number
- CN114167487A (application number CN202111457319.3A)
- Authority
- CN
- China
- Prior art keywords
- data
- feature
- layer
- earthquake
- waveform
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01V—GEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
- G01V1/00—Seismology; Seismic or acoustic prospecting or detecting
- G01V1/01—Measuring or predicting earthquakes
- G01V1/28—Processing seismic data, e.g. for interpretation or for event detection
- G01V1/282—Application of seismic models, synthetic seismograms
- G01V1/30—Analysis
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The application provides a method and a device for estimating seismic magnitude based on characteristic waveforms, together with an electronic device and a storage medium. The method comprises the following steps: acquiring characteristic waveform data of an earthquake; performing feature extraction on the characteristic waveform data with a feature extraction module in a neural network model to obtain first feature data; performing feature-sequence learning on the first feature data with a sequence learning module in the neural network to obtain second feature data; and performing feature fusion on the second feature data with an output module of the neural network to obtain the magnitude of the earthquake. Because the deep learning model extracts and learns features from the characteristic waveform data, the seismic magnitude can be estimated accurately, improving the accuracy of magnitude estimation.
Description
Technical Field
The application relates to the technical field of seismic estimation, in particular to a method and a device for estimating seismic magnitude based on characteristic waveforms, electronic equipment and a storage medium.
Background
Earthquake early warning uses the initial P-wave signal recorded after an earthquake occurs to estimate the size of the earthquake and the damage it will cause, and issues warning information to a target site before the destructive seismic waves arrive. Magnitude estimation is an important component of a regional earthquake early warning system: both the release of warning information and the estimation of the damaged region depend on accurate and rapid magnitude estimation. Improving the speed and accuracy of magnitude estimation is therefore one of the key scientific and technical problems an earthquake early warning system must solve.
Existing magnitude estimation methods estimate the magnitude from a single characteristic parameter computed from the few seconds of waveform data after the P-wave arrival; because only a single feature of the initial seismic wave is used, the magnitude estimation error is large. Existing machine learning approaches instead compute the magnitude from the raw seismic waveform, but the raw waveform does not directly reflect seismic characteristics such as the amplitude, spectrum, and energy of the ground motion, so the magnitude estimation error is again large.
Disclosure of Invention
An object of the embodiments of the present application is to provide a method and an apparatus for estimating seismic magnitude based on a characteristic waveform, an electronic device, and a storage medium, so as to solve the above-mentioned technical problem of large error in estimating seismic magnitude.
In a first aspect, an embodiment of the present application provides a method for estimating seismic magnitude based on characteristic waveforms, the method comprising: acquiring characteristic waveform data of an earthquake; performing feature extraction on the characteristic waveform data with a feature extraction module in a neural network model to obtain first feature data; performing feature-sequence learning on the first feature data with a sequence learning module in the neural network to obtain second feature data; and performing feature fusion on the second feature data with an output module of the neural network to obtain the magnitude of the earthquake. In this implementation, the deep learning model extracts and learns features from the characteristic waveform data to obtain the seismic magnitude, improving the accuracy of magnitude estimation.
Optionally, in an embodiment of the present application, the characteristic waveform data include: amplitude-class characteristic waveform data, period-class characteristic waveform data, energy-class characteristic waveform data, and the acceleration, velocity, and displacement waveforms. The amplitude-class characteristic waveform data comprise the peak displacement Pd, peak velocity Pv, and peak acceleration Pa; the period-class characteristic waveform data comprise the average period τc, the structural parameter TP, the peak ratio Tva, and the instantaneous frequency ω(n); the energy-class characteristic waveform data comprise the cumulative energy change rate PIv, the velocity-squared integral IV2, the cumulative absolute velocity CVA, the cumulative vertical absolute displacement cvad, the cumulative vertical absolute velocity cvav, and the cumulative vertical absolute acceleration cvaa. In this implementation, the amplitude-class, period-class, and energy-class characteristic waveform data directly reflect the amplitude, spectrum, and energy characteristics of the ground motion, which are of great significance for magnitude estimation, so learning these characteristic waveform data with the neural network model can improve the accuracy of magnitude estimation. Moreover, the waveforms are interpretable, which makes the neural network itself more interpretable: the network acquires a clear, symbolized internal knowledge representation that matches the knowledge framework of humans, so that it can be diagnosed and modified at the semantic level.
The innovation of the technical scheme provided by the application lies in applying these interpretable characteristic waveforms, in combination with a neural network, to the concrete estimation of seismic magnitude, in contrast to prior art that computes or selects features by experience. The technical scheme and the associated earthquake-prediction algorithm therefore offer a repeatability, generality, and later maintainability and upgradability that other prior-art schemes cannot achieve.
Optionally, in an embodiment of the present application, the feature extraction module includes a primary feature extraction unit and a secondary feature extraction unit, each comprising a convolutional layer, a batch normalization layer, and a max pooling layer. Performing feature extraction on the characteristic waveform data with the feature extraction module in the neural network model to obtain the first feature data comprises: performing feature extraction on the characteristic waveform data with the primary feature extraction unit to obtain first sub-feature data; and performing feature extraction on the first sub-feature data with the secondary feature extraction unit to obtain the first feature data. In this implementation, the primary and secondary feature extraction units extract the detail features contained in the characteristic waveform data, improving the accuracy of magnitude estimation.
Optionally, in an embodiment of the present application, the sequence learning module includes a first bidirectional gating unit layer, a second bidirectional gating unit layer, and an attention mechanism layer, where each bidirectional gating unit layer includes a plurality of bidirectional gating units. Performing feature-sequence learning on the first feature data with the sequence learning module in the neural network to obtain the second feature data comprises: performing feature-sequence learning on the first feature data with the plurality of bidirectional gating units in the first bidirectional gating unit layer to obtain first feature learning data; performing feature-sequence learning on the first feature learning data with the plurality of bidirectional gating units in the second bidirectional gating unit layer to obtain second feature learning data; and performing weight calculation on the second feature learning data with the attention mechanism layer to obtain the second feature data. In this implementation, the bidirectional gating unit layers in the sequence learning module learn dynamic feature relationships from the time-series features extracted by the convolutional layers, carrying out deeper feature-sequence learning. The attention mechanism layer, meanwhile, effectively improves the efficiency with which the recurrent neural network mines time-series features; because it carries a weight-optimizing network, it raises the attention paid to the highly correlated features extracted by the recurrent layers.
Optionally, in an embodiment of the present application, the output module includes a flatten layer, a dropout layer, and a fully-connected submodule. Performing feature fusion on the second feature data with the output module of the neural network to obtain the magnitude of the earthquake comprises: flattening the second feature data into one-dimensional data with the flatten layer; randomly selecting from the one-dimensional data with the dropout layer to obtain randomly selected data; and fully connecting the randomly selected data of the dropout layer with the fully-connected submodule to obtain the seismic magnitude of the earthquake.
Optionally, in an embodiment of the present application, the fully-connected submodule includes a linear fully-connected layer and a plurality of nonlinear fully-connected layers, where the nonlinear fully-connected layers have different numbers of neurons and the linear fully-connected layer contains a single neuron. Fully connecting the randomly selected data of the dropout layer with the fully-connected submodule to obtain the seismic magnitude comprises: applying the nonlinear activation functions of the plurality of nonlinear fully-connected layers to the randomly selected data of the dropout layer to obtain nonlinear data; and applying the linear activation function of the linear fully-connected layer to the nonlinear data to obtain the seismic magnitude of the earthquake.
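The output-module steps described above can be sketched in plain numpy. This is an illustrative sketch, not the patent's implementation: the layer sizes, the ReLU activation, and the 0.8 keep probability are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, w, b, activation=None):
    """One fully-connected layer: y = activation(x @ w + b)."""
    y = x @ w + b
    return np.maximum(y, 0.0) if activation == "relu" else y

# Hypothetical second feature data of shape (time_steps, channels).
second_feature = rng.standard_normal((10, 8))

# Flatten layer: collapse to one dimension.
flat = second_feature.reshape(-1)                      # shape (80,)

# Dropout layer (training-time behaviour): randomly zero a fraction of units.
keep_prob = 0.8
mask = rng.random(flat.shape) < keep_prob
dropped = flat * mask / keep_prob                      # inverted dropout scaling

# Fully-connected submodule: two nonlinear layers with different neuron
# counts, then a linear layer with a single neuron producing the magnitude.
w1, b1 = rng.standard_normal((80, 32)), np.zeros(32)
w2, b2 = rng.standard_normal((32, 16)), np.zeros(16)
w3, b3 = rng.standard_normal((16, 1)), np.zeros(1)

h = dense(dropped, w1, b1, "relu")
h = dense(h, w2, b2, "relu")
magnitude = dense(h, w3, b3)                           # linear output, 1 neuron
```

At inference time the dropout mask would be disabled (`keep_prob = 1`); the final single-neuron linear layer is what makes the network a regressor rather than a classifier.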
Optionally, in this embodiment of the present application, acquiring the characteristic waveform data of an earthquake includes: collecting three-component acceleration waveform records of the earthquake; and calculating the characteristic waveform data from the three-component acceleration waveform records.
In a second aspect, an embodiment of the present application further provides a characteristic waveform-based seismic magnitude estimation apparatus, including: the data acquisition module is used for acquiring characteristic waveform data of an earthquake; the data feature extraction module is used for extracting features of the feature waveform data by using the feature extraction module in the neural network model to obtain first feature data; the data sequence learning module is used for performing feature sequence learning on the first feature data by using a sequence learning module in the neural network to obtain second feature data; and the data output module is used for performing feature fusion on the second feature data by using the output module of the neural network to obtain the magnitude of the earthquake.
Optionally, in this embodiment of the present application, the data acquisition module is specifically configured to acquire amplitude-class characteristic waveform data, period-class characteristic waveform data, energy-class characteristic waveform data, and the acceleration, velocity, and displacement waveforms. The amplitude-class characteristic waveform data comprise the peak displacement Pd, peak velocity Pv, and peak acceleration Pa; the period-class characteristic waveform data comprise the average period τc, the structural parameter TP, the peak ratio Tva, and the instantaneous frequency ω(n); the energy-class characteristic waveform data comprise the cumulative energy change rate PIv, the velocity-squared integral IV2, the cumulative absolute velocity CVA, the cumulative vertical absolute displacement cvad, the cumulative vertical absolute velocity cvav, and the cumulative vertical absolute acceleration cvaa.
Optionally, in this embodiment of the present application, the data acquisition module is further configured to collect three-component acceleration waveform records of an earthquake and to calculate the characteristic waveform data from those records.
In a third aspect, an embodiment of the present application further provides an electronic device, including: a processor and a memory, the memory storing processor-executable machine-readable instructions, the machine-readable instructions when executed by the processor, performing the signature-based seismic magnitude estimation method provided in the first aspect of the application.
In a fourth aspect, the present application further provides a storage medium having a computer program stored thereon, where the computer program is executed by a processor to execute the method for estimating seismic magnitude based on a characteristic waveform provided in the first aspect of the present application.
According to the earthquake magnitude estimation method based on the characteristic waveforms, amplitude characteristic waveform data, period characteristic waveform data, energy characteristic waveform data, acceleration waveforms, speed waveforms and displacement waveforms are learned through a neural network model, and the earthquake magnitude is estimated according to the waveform data. In addition, the seismic waveform data has interpretability, so that the interpretability of the neural network can be increased.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and that those skilled in the art can also obtain other related drawings based on the drawings without inventive efforts.
FIG. 1 is a schematic flow chart of a method for estimating seismic magnitude based on a signature according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a preferred embodiment of the present application;
FIG. 3 is a schematic diagram of a model of a neural network provided by an embodiment of the present application;
FIG. 4 is a comparison chart of magnitude estimates provided by the embodiments of the present application;
FIG. 5 is a schematic structural diagram of a device for seismic magnitude estimation based on a signature according to an embodiment of the present application; and
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The terms "first," "second," and the like, are used solely to distinguish one entity or action from another entity or action without necessarily being construed as indicating or implying any actual such relationship or order between such entities or actions.
It should be noted that the method for estimating seismic magnitude based on a characteristic waveform provided in the embodiment of the present application may be executed by an electronic device, where the electronic device refers to a device terminal or a server having a function of executing a computer program, and the device terminal includes, for example: personal Computers (PCs), tablet computers, Personal Digital Assistants (PDAs), Mobile Internet Devices (MIDs), network switches or network routers, etc.
Before introducing the method for estimating seismic magnitude based on the characteristic waveform provided by the embodiment of the application, an application scenario applicable to the method for estimating seismic magnitude based on the characteristic waveform is introduced, where the application scenario is as follows: after an earthquake occurs, before destructive seismic waves reach a target field, the earthquake magnitude needs to be estimated so as to issue early warning information to the target field. Therefore, the earthquake magnitude estimation method based on the characteristic waveform can be used for estimating the earthquake magnitude so as to improve the accuracy of estimating the earthquake magnitude, and the accuracy of earthquake early warning is improved by accurately estimating the earthquake magnitude.
Referring to fig. 1, fig. 1 is a schematic flow chart of a method for estimating seismic magnitude based on a signature provided in an embodiment of the present application, where the method for estimating seismic magnitude based on a signature includes the following steps:
step S100: and acquiring seismic characteristic waveform data.
Step S200: and performing feature extraction on the feature waveform data by using a feature extraction module in the neural network model to obtain first feature data.
Step S300: and performing characteristic sequence learning on the first characteristic data by using a sequence learning module in the neural network to obtain second characteristic data.
Step S400: and performing feature fusion on the second feature data by using an output module of the neural network to obtain the earthquake magnitude of the earthquake.
In this implementation, the deep learning model extracts and learns features from the characteristic waveform data to obtain the seismic magnitude, improving the accuracy of magnitude estimation.
In some optional embodiments, in step S100 the characteristic waveform data include: amplitude-class characteristic waveform data, period-class characteristic waveform data, energy-class characteristic waveform data, and the acceleration, velocity, and displacement waveforms. The amplitude-class characteristic waveform data comprise the peak displacement Pd, peak velocity Pv, and peak acceleration Pa; the period-class characteristic waveform data comprise the average period τc, the structural parameter TP, the peak ratio Tva, and the instantaneous frequency ω(n); the energy-class characteristic waveform data comprise the cumulative energy change rate PIv, the velocity-squared integral IV2, the cumulative absolute velocity CVA, the cumulative vertical absolute displacement cvad, the cumulative vertical absolute velocity cvav, and the cumulative vertical absolute acceleration cvaa. These waveform data directly reflect the amplitude, spectrum, and energy characteristics of the ground motion, which are of great significance for magnitude estimation, so learning the characteristic waveform data with the neural network model can improve the accuracy of magnitude estimation.
Referring to fig. 2, fig. 2 is a schematic diagram of a preferred embodiment of the present application. It shows the characteristic waveform data obtained at three stations for earthquakes of different magnitudes: MJMA 3.0 at station AIC001 (22.19 km from the source), MJMA 5.0 at station FKS010 (22.56 km from the source), and MJMA 7.2 at station IWT010 (24.40 km from the source). In fig. 2 the characteristic waveform data of the three stations are drawn with different line styles: in the first panel, which shows Pa, the data of stations AIC001, FKS010, and IWT010 are indicated by solid, dashed, and dotted lines respectively, and the same convention is used in the other panels of fig. 2.
The waveform data comprise the peak displacement Pd, peak velocity Pv, and peak acceleration Pa of the amplitude-class characteristic waveforms; the average period τc, structural parameter TP, peak ratio Tva, and instantaneous frequency ω(n) of the period-class characteristic waveforms; and the cumulative energy change rate PIv, velocity-squared integral IV2, cumulative absolute velocity CVA, cumulative vertical absolute displacement cvad, cumulative vertical absolute velocity cvav, and cumulative vertical absolute acceleration cvaa of the energy-class characteristic waveforms. These quantities directly reflect the amplitude, spectrum, and energy characteristics of the ground motion, which are of great significance for magnitude estimation, so learning the characteristic waveform data with the neural network model can improve the accuracy of magnitude estimation.
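Several of the characteristic waveform parameters above can be derived from an acceleration record by numerical integration. The numpy sketch below assumes the common definitions (Pd, Pv, Pa as running peak amplitudes of displacement, velocity, and acceleration; IV2 as the integral of squared velocity; CVA as the integral of absolute acceleration); the patent's exact formulas may differ, and the input record here is synthetic.

```python
import numpy as np

# Synthetic stand-in for a vertical acceleration record (the real input would
# be the three-component records described in the text); dt is the sample step.
dt = 0.01
t = np.arange(0.0, 3.0, dt)
acc = np.sin(2 * np.pi * 2.0 * t) * np.exp(-t)         # m/s^2

# Velocity and displacement by cumulative trapezoidal integration.
vel = np.concatenate(([0.0], np.cumsum((acc[1:] + acc[:-1]) * dt / 2.0)))
disp = np.concatenate(([0.0], np.cumsum((vel[1:] + vel[:-1]) * dt / 2.0)))

# Amplitude-class characteristic waveforms: running peaks as time series.
Pd = np.maximum.accumulate(np.abs(disp))
Pv = np.maximum.accumulate(np.abs(vel))
Pa = np.maximum.accumulate(np.abs(acc))

# Energy-class characteristic waveforms: velocity-squared integral IV2 and
# cumulative absolute velocity CVA (integral of |acceleration|).
IV2 = np.cumsum(vel ** 2) * dt
CVA = np.cumsum(np.abs(acc)) * dt
```

Each quantity is kept as a time series of the same length as the record, matching the idea that the network is fed characteristic *waveforms* rather than single scalar parameters.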
Referring to fig. 3, fig. 3 is a schematic diagram of a neural network model provided in an embodiment of the present application; in some optional embodiments, in step S200 (using a feature extraction module in the neural network model to perform feature extraction on the feature waveform data to obtain the first feature data), the feature extraction module may be used to perform the step. As shown in fig. 3, the feature extraction module includes: a primary feature extraction unit and a secondary feature extraction unit; wherein each feature extraction unit comprises: convolutional layers, batch normalization layers, and max pooling layers. Specifically, step S200 is implemented by the following steps.
Step S210: perform feature extraction on the characteristic waveform data with the primary feature extraction unit to obtain first sub-feature data.
Step S220: perform feature extraction on the first sub-feature data with the secondary feature extraction unit to obtain the first feature data.
Step S210 comprises the following steps:
Step S211: convolve the characteristic waveform data with the convolutional layer of the primary feature extraction unit to obtain convolved feature data.
Step S212: normalize the convolved feature data with the batch normalization layer of the primary feature extraction unit to obtain normalized feature data.
In step S212, the batch normalization layer normalizes the output of the convolutional layer. This leaves the network's hyper-parameter settings, including the learning rate and parameter initialization, freer to choose and accelerates convergence, thereby improving the network's performance.
Step S213: pool the normalized feature data with the max pooling layer of the primary feature extraction unit to obtain the first sub-feature data.
In step S213, the max pooling layer reduces the size of the convolutional layer's output matrix along the length and width directions, which reduces the number of parameters in the network, shortens its computation time, and helps prevent overfitting.
Referring to fig. 3, in step S220 (using the two-level feature extraction unit to perform feature extraction on the first sub-feature data to obtain first feature data), the first sub-feature data is taken as input data, and then the first sub-feature data is processed through the convolutional layer, the batch normalization layer, and the max pooling layer in the same manner as in steps S211 to S213, so as to obtain the first feature data.
In a preferred embodiment, the convolutional layer of the primary feature extraction unit has 25 convolution kernels of size 4, and the convolutional layer of the secondary feature extraction unit has 50 convolution kernels of size 4; in both the primary and secondary feature extraction units, the kernel size of the normalization layer is 2 and the kernel size of the pooling layer is 2.
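The convolution/batch-normalization/max-pooling unit of steps S211 to S213 can be sketched at inference time in plain NumPy. This is an illustrative reconstruction, not the patent's implementation: the input channel count (13, one channel per characteristic waveform) and the random weights are placeholder assumptions, while the 25 kernels of size 4 and the pool size of 2 follow the preferred embodiment above.

```python
import numpy as np

def conv1d(x, kernels, biases):
    # x: (time, channels); kernels: (size, channels, filters); "valid" convolution
    size, _, filters = kernels.shape
    steps = x.shape[0] - size + 1
    out = np.empty((steps, filters))
    for t in range(steps):
        out[t] = np.tensordot(x[t:t + size], kernels, axes=([0, 1], [0, 1])) + biases
    return out

def batch_norm(x, gamma, beta, eps=1e-3):
    # per-feature normalization (illustrative inference-time variant)
    mean, var = x.mean(axis=0), x.var(axis=0)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

def max_pool(x, pool=2):
    steps = x.shape[0] // pool
    return x[:steps * pool].reshape(steps, pool, -1).max(axis=1)

def feature_extraction_unit(x, kernels, biases, gamma, beta):
    # conv -> batch norm -> max pool, as in steps S211 to S213
    return max_pool(batch_norm(conv1d(x, kernels, biases), gamma, beta), pool=2)
```

For a 100-sample window with 13 input channels and the primary unit's 25 kernels of size 4, the output has shape (48, 25): 97 valid convolution steps halved by the pool of size 2.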
In some alternative embodiments, step S300 (using a sequence learning module in the neural network to perform feature sequence learning on the first feature data to obtain the second feature data) can be implemented based on the schematic diagram of the model of the neural network shown in fig. 3 by the following steps:
Step S310: perform feature sequence learning on the first feature data by using the plurality of bidirectional gating units in the first bidirectional gating unit layer to obtain first feature learning data.
Step S320: perform feature sequence learning on the first feature learning data by using the plurality of bidirectional gating units in the second bidirectional gating unit layer to obtain second feature learning data.
Step S330: perform a weight calculation on the second feature learning data by using the attention mechanism layer to obtain the second feature data.
In steps S310 to S330, the bidirectional gating unit layers in the sequence learning module learn dynamic feature relationships from the time-series features extracted by the convolutional layers, enabling deeper feature sequence learning. The sequence learning module also uses an attention mechanism layer, which effectively improves the efficiency with which the recurrent neural network mines time-series features; because the attention mechanism layer carries a weight-optimizing network, it increases the attention paid to the highly correlated features extracted by the recurrent layers.
In a preferred embodiment, the first bidirectional gating unit layer comprises 50 bidirectional gating units and the second bidirectional gating unit layer comprises 25 bidirectional gating units.
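The weight calculation of step S330 can be illustrated with a minimal additive-attention sketch in NumPy. The tanh scoring form and the weight shapes are assumptions for illustration (the patent does not spell out its attention formulation); the 50-dimensional input matches the second layer's 25 bidirectional units (25 units × 2 directions).

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def attention(h, w, b, u):
    # h: (timesteps, features) output sequence of the second bidirectional layer
    scores = np.tanh(h @ w + b) @ u   # one relevance score per timestep
    alpha = softmax(scores)           # attention weights, summing to 1
    context = alpha @ h               # weighted sum over timesteps
    return context, alpha
```

The returned context vector is the attention-weighted summary that step S330 passes on as the second feature data; alpha exposes how much weight each timestep received.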
Referring to fig. 3, in some alternative embodiments, the output module includes: a flatten layer, a dropout layer and a full-connection submodule. Specifically, step S400 (performing feature fusion on the second feature data by using the output module of the neural network to obtain the magnitude of the earthquake) may be implemented by the following steps, based on the schematic diagram of the neural network model shown in fig. 3:
Step S410: perform one-dimensional processing on the second feature data by using the flatten layer to obtain one-dimensional data.
In step S410, the flatten layer converts the features extracted by the feature sequence learning module into a one-dimensional vector, which then serves as the input of the full-connection layers.
Step S420: perform random selection processing on the one-dimensional data by using the dropout layer to obtain randomly selected data.
In step S420, it should be noted that the dropout layer randomly disconnects some of the connections between the flatten layer and the full-connection network during each training pass, which reduces the number of model parameters actually participating in the calculation; all connections are restored during testing, which ensures better performance when testing the model and improves its generalization ability.
Step S430: perform full-connection processing on the randomly selected data from the dropout layer by using the full-connection submodule to obtain the seismic magnitude of the earthquake.
In step S430, the full-connection layers perform feature classification on the output of the dropout layer and integrate the category-discriminative local information from the convolutional and pooling layers to obtain the seismic magnitude.
Referring to fig. 3, in some alternative embodiments, the full-connection submodule includes: a linear full-connection layer and a plurality of nonlinear full-connection layers, wherein the numbers of neurons in the nonlinear full-connection layers differ from one another and the linear full-connection layer contains a single neuron. Step S430 (performing full-connection processing on the randomly selected data from the dropout layer to obtain the seismic magnitude) may specifically include the following steps, based on the schematic diagram of the neural network model shown in fig. 3:
Step S431: apply the nonlinear activation functions of the plurality of nonlinear full-connection layers to the data randomly selected by the dropout layer to obtain nonlinear data.
Referring to fig. 3, in a preferred embodiment there are 5 nonlinear full-connection layers, and the nonlinear activation function of the nonlinear full-connection layers is the ReLU function.
Step S432: apply the linear activation function of the linear full-connection layer to the nonlinear data to obtain the seismic magnitude of the earthquake.
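Steps S410 to S432 (flatten, dropout, five ReLU full-connection layers, one linear neuron) can be sketched at inference time as follows. The hidden-layer widths and random weights are hypothetical, since the patent only requires that the five nonlinear layers have differing neuron counts; dropout is the identity at inference because all connections are restored during testing.

```python
import numpy as np

def dense(x, w, b, relu=True):
    y = x @ w + b
    return np.maximum(y, 0.0) if relu else y

def output_module(features, hidden_sizes=(100, 80, 60, 40, 20), seed=0):
    # hidden_sizes are illustrative placeholders with differing neuron counts
    rng = np.random.default_rng(seed)
    x = features.ravel()                       # flatten layer
    # dropout layer: identity at inference (all connections restored)
    for n in hidden_sizes:                     # five nonlinear (ReLU) layers
        w = rng.normal(0.0, 0.05, (x.size, n))
        x = dense(x, w, np.zeros(n), relu=True)
    w = rng.normal(0.0, 0.05, (x.size, 1))     # single linear output neuron
    return float(dense(x, w, np.zeros(1), relu=False)[0])  # estimated magnitude
```

Given, say, a (24, 50) second-feature matrix from the sequence learning module, the sketch flattens it to 1200 values and maps them through the stack to a single scalar magnitude.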
Referring to fig. 4, fig. 4 is a comparison graph of magnitude estimates provided by the embodiments of the present application. In fig. 4, (a) compares the magnitudes estimated by the average period τc method with the true magnitudes, (b) compares the magnitudes estimated by the peak displacement Pd method with the true magnitudes, and (c) compares the magnitudes estimated by the method of the present invention with the true magnitudes. In each panel, the ordinate is the estimated magnitude and the abscissa is the catalogued magnitude; the bubble points are the estimated magnitudes, the straight line represents the true magnitudes of the earthquakes, the dotted lines mark the standard deviation of the error between the estimated and actual magnitudes, and σ denotes that standard deviation. As fig. 4 shows, the magnitudes estimated by the magnitude estimation method provided in the embodiments of the present application are closer to the actual magnitudes than those of the average period τc method and the peak displacement Pd method.
In some alternative embodiments, step S100 (acquiring the characteristic waveform data of the earthquake) includes the following steps:
Step S1: collect the three-direction acceleration waveform records of the earthquake.
In step S1, the three-direction ground-motion acceleration waveforms monitored by a seismic monitoring station are recorded (aUD, aEW, aNS), and the P-wave arrival is picked automatically from the vertical acceleration record (aUD) monitored by the station using an algorithm based on the long- and short-time average ratio (STA/LTA) and the Akaike Information Criterion (AIC). It should be noted that the STA/LTA- and AIC-based algorithm is only illustrative; the present application does not limit the method by which the P-wave arrival is picked automatically from the vertical acceleration record (aUD), and other methods may be adopted in practical scenarios.
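A classical STA/LTA trigger of the kind referenced above can be sketched as follows. The window lengths and threshold are illustrative defaults, not values from the patent, and the AIC refinement stage is omitted.

```python
import numpy as np

def sta_lta_pick(a, fs, sta_win=0.5, lta_win=5.0, threshold=3.0):
    """Return the first sample index where STA/LTA exceeds threshold, else None."""
    e = a ** 2                                   # signal energy
    sta_n, lta_n = int(sta_win * fs), int(lta_win * fs)
    csum = np.concatenate(([0.0], np.cumsum(e)))
    for i in range(lta_n, len(a) - sta_n):
        sta = (csum[i + sta_n] - csum[i]) / sta_n      # short-term average ahead
        lta = (csum[i] - csum[i - lta_n]) / lta_n      # long-term average behind
        if lta > 0 and sta / lta > threshold:
            return i
    return None
```

On a record of low-amplitude noise followed by a strong onset, the trigger fires within the short-term window of the onset sample.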
Step S2: calculate the characteristic waveform data from the three-direction acceleration waveform records.
In step S2, the acceleration, velocity, and displacement waveforms are obtained as follows:
First, the acceleration waveform records are integrated to obtain the velocity waveform records (vUD, vEW, vNS), and the velocity waveform records are integrated to obtain the displacement waveform records (dUD, dEW, dNS); after each integration, 4th-order 0.075 Hz Butterworth high-pass filtering is applied to the integrated records to eliminate the low-frequency drift introduced by integration. The acceleration waveform a(t), velocity waveform v(t), and displacement waveform d(t) are then obtained by synthesizing the waveform records of the three directions.
The amplitude-class, period-class, and energy-class characteristic waveform data are calculated, taking the time at which the P wave arrives at the station as the starting point, as follows:
The amplitude-class characteristic waveform data include the peak displacement Pd, peak velocity Pv, and peak acceleration Pa, each calculated from the corresponding synthesized waveform.
The period-class characteristic waveform data include the average period τc, the structural parameter TP, the peak ratio Tva, and the instantaneous frequency ω(n), which are calculated according to the formulas:

TP = τc · Pd

Tva = 2π(Pv / Pa)

Xg(n) = αXg(n−1) + (x(n−1) + x(n+1))²

X(n) = αX(n−1) + (2x(n))²
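With the caveat that the τc formula image is not reproduced above, the widely used definition of the average period, τc = 2π·sqrt(∫d²dt / ∫v²dt) over the post-P window, together with the TP and Tva formulas from the text, can be sketched as:

```python
import numpy as np

def _integral(y, dt):
    # trapezoidal integral over the whole window
    return float(np.sum(y[1:] + y[:-1]) * 0.5 * dt)

def period_parameters(d, v, a, dt):
    # average period tau_c (standard definition, assumed here)
    tau_c = 2.0 * np.pi * np.sqrt(_integral(d ** 2, dt) / _integral(v ** 2, dt))
    pd = np.max(np.abs(d))
    pv, pa = np.max(np.abs(v)), np.max(np.abs(a))
    tp = tau_c * pd                    # structural parameter TP = tau_c * Pd
    tva = 2.0 * np.pi * (pv / pa)      # peak ratio Tva = 2*pi*(Pv/Pa)
    return tau_c, tp, tva
```

For a pure 1 Hz sinusoidal displacement this definition returns τc of 1 s, which is a quick sanity check on the implementation.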
The energy-class characteristic waveform data include the cumulative energy change rate PIv, the squared-velocity integral IV2, the cumulative absolute velocity CVA, the cumulative vertical absolute displacement cvad, the cumulative vertical absolute velocity cvav, and the cumulative vertical absolute acceleration cvaa, each calculated from the corresponding waveform over the same time window.
In the above formulas, d(t), v(t), and a(t) are the displacement, velocity, and acceleration synthesized from the three-component seismic records; 0 is the P-wave arrival time and T is the length of the time window after the P-wave arrival; x(n) is the vertical acceleration record at time n; and α is a recursive smoothing coefficient, taken as 0.99.
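Using the standard definitions from the early-warning literature (assumed here, since the formula images are not reproduced; the cumulative energy change rate PIv is omitted for the same reason), the remaining energy-class parameters can be sketched as:

```python
import numpy as np

def _integral(y, dt):
    # trapezoidal integral over the post-P window
    return float(np.sum(y[1:] + y[:-1]) * 0.5 * dt)

def energy_parameters(v, a, ud_d, ud_v, ud_a, dt):
    # v, a: synthesized velocity/acceleration over the post-P window;
    # ud_*: vertical-component displacement, velocity, acceleration
    iv2 = _integral(v ** 2, dt)         # squared-velocity integral IV2
    cva = _integral(np.abs(a), dt)      # cumulative absolute velocity CVA
                                        # (conventionally the integral of |a|)
    cvad = _integral(np.abs(ud_d), dt)  # cumulative vertical absolute displacement
    cvav = _integral(np.abs(ud_v), dt)  # cumulative vertical absolute velocity
    cvaa = _integral(np.abs(ud_a), dt)  # cumulative vertical absolute acceleration
    return iv2, cva, cvad, cvav, cvaa
```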
Referring to fig. 5, fig. 5 is a schematic structural diagram of a characteristic waveform-based seismic magnitude estimation apparatus 500 according to an embodiment of the present application, including:
The data acquisition module 501 is used for acquiring the characteristic waveform data of the earthquake.
The data feature extraction module 502 is configured to perform feature extraction on the feature waveform data by using the feature extraction module in the neural network model to obtain the first feature data.
The data sequence learning module 503 is configured to perform feature sequence learning on the first feature data by using the sequence learning module in the neural network to obtain the second feature data.
The data output module 504 is configured to perform feature fusion on the second feature data by using the output module of the neural network to obtain the magnitude of the earthquake.
In some optional embodiments, the data acquisition module 501 is specifically configured to acquire amplitude-class characteristic waveform data, period-class characteristic waveform data, energy-class characteristic waveform data, an acceleration waveform, a velocity waveform, and a displacement waveform; wherein the amplitude-class characteristic waveform data include the peak displacement Pd, peak velocity Pv, and peak acceleration Pa of the amplitude-class characteristic waveform; the period-class characteristic waveform data include the average period τc, structural parameter TP, peak ratio Tva, and instantaneous frequency ω(n) of the period-class characteristic waveform; and the energy-class characteristic waveform data include the cumulative energy change rate PIv of the energy-class characteristic waveform, the squared-velocity integral IV2, the cumulative absolute velocity CVA, the cumulative vertical absolute displacement cvad, the cumulative vertical absolute velocity cvav, and the cumulative vertical absolute acceleration cvaa.
In some optional embodiments, the data acquisition module 501 is further configured to collect the three-direction acceleration waveform records of an earthquake and to calculate the characteristic waveform data from the three-direction acceleration waveform records.
The implementation principle and resulting technical effects of the characteristic waveform-based earthquake magnitude estimation apparatus 500 provided in this embodiment have been described in the foregoing method embodiments; for brevity, where a part of the apparatus embodiment is not mentioned, reference may be made to the corresponding content in the method embodiments.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application. Referring to fig. 6, the electronic device 600 includes: a processor 601, a memory 602, which are interconnected and in communication with each other via a communication bus 603 and/or other form of connection mechanism (not shown).
The memory 602 includes one or more memories (only one is shown in the figure), which may be, but are not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The processor 601, and possibly other components, may access the memory 602 and read and/or write data therein.
The processor 601 includes one or more processors (only one is shown), which may be an integrated circuit chip having signal processing capability. The processor 601 may be a general-purpose processor, including a Central Processing Unit (CPU), a Micro Control Unit (MCU), a Network Processor (NP), or another conventional processor; it may also be a special-purpose processor, including a Neural-Network Processing Unit (NPU), a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. Moreover, when there are a plurality of processors 601, some of them may be general-purpose processors and the others special-purpose processors.
One or more computer program instructions may be stored in the memory 602 and may be read and executed by the processor 601 to implement the characteristic waveform-based seismic magnitude estimation method provided by the embodiments of the present application.
It will be appreciated that the configuration shown in fig. 6 is merely illustrative, and that the electronic device 600 may include more or fewer components than shown in fig. 6, or have a different configuration than shown in fig. 6. The components shown in fig. 6 may be implemented in hardware, software, or a combination thereof. The electronic device 600 may be a physical device, such as a PC, a laptop, a tablet, a mobile phone, a server, or an embedded device, or a virtual device, such as a virtual machine or a virtualized container. Moreover, the electronic device 600 is not limited to a single device and may be a combination of multiple devices or a cluster comprising a large number of devices.
The embodiment of the present application further provides a computer-readable storage medium, where computer program instructions are stored on the computer-readable storage medium, and when the computer program instructions are read and executed by a processor of a computer, the method for estimating seismic magnitude based on a characteristic waveform provided in the embodiment of the present application is executed. For example, the computer-readable storage medium may be embodied as memory 602 in electronic device 600 in FIG. 6.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and system may be implemented in other ways. The above-described system embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and there may be other divisions in actual implementation, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
In addition, units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Furthermore, the functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.
Claims (10)
1. A method for seismic magnitude estimation based on characteristic waveforms, the method comprising:
acquiring characteristic waveform data of an earthquake;
performing feature extraction on the feature waveform data by using a feature extraction module in a neural network model to obtain first feature data;
performing feature sequence learning on the first feature data by using a sequence learning module in the neural network to obtain second feature data; and
performing feature fusion on the second feature data by using an output module of the neural network to obtain the magnitude of the earthquake.
2. The method of claim 1, wherein the characteristic waveform data comprises: amplitude-class characteristic waveform data, period-class characteristic waveform data, energy-class characteristic waveform data, an acceleration waveform, a velocity waveform, and a displacement waveform;
wherein the amplitude-class characteristic waveform data comprise: the peak displacement Pd, peak velocity Pv, and peak acceleration Pa of the amplitude-class characteristic waveform;
the period-class characteristic waveform data comprise: the average period τc, structural parameter TP, peak ratio Tva, and instantaneous frequency ω(n) of the period-class characteristic waveform; and
the energy-class characteristic waveform data comprise: the cumulative energy change rate PIv of the energy-class characteristic waveform, the squared-velocity integral IV2, the cumulative absolute velocity CVA, the cumulative vertical absolute displacement cvad, the cumulative vertical absolute velocity cvav, and the cumulative vertical absolute acceleration cvaa.
3. The method of claim 1, wherein the feature extraction module comprises: a primary feature extraction unit and a secondary feature extraction unit, wherein each feature extraction unit includes: a convolutional layer, a batch normalization layer and a max pooling layer;
wherein performing feature extraction on the feature waveform data by using the feature extraction module in the neural network model to obtain the first feature data comprises:
performing feature extraction on the feature waveform data by using the primary feature extraction unit to obtain first sub-feature data; and
performing feature extraction on the first sub-feature data by using the secondary feature extraction unit to obtain the first feature data.
4. The method of claim 1, wherein the sequence learning module comprises a first bidirectional gating cell layer, a second bidirectional gating cell layer, and an attention mechanism layer; wherein each bidirectional gating unit layer comprises a plurality of bidirectional gating units;
the using a sequence learning module in the neural network to perform feature sequence learning on the first feature data to obtain second feature data includes:
performing feature sequence learning on the first feature data by using a plurality of bidirectional gating units in the first bidirectional gating unit layer to obtain first feature learning data;
performing feature sequence learning on the first feature learning data by using a plurality of bidirectional gating units in the second bidirectional gating unit layer to obtain second feature learning data; and
performing weight calculation on the second feature learning data by using the attention mechanism layer to obtain the second feature data.
5. The method of claim 1, wherein the output module comprises: a flatten layer, a dropout layer and a full-connection submodule;
performing feature fusion on the second feature data by using an output module of the neural network to obtain the magnitude of the earthquake, including:
performing one-dimensional processing on the second characteristic data by using the flatten layer to obtain one-dimensional data;
randomly selecting the one-dimensional data by using the dropout layer to obtain randomly selected data; and
performing full connection processing on the randomly selected data of the dropout layer by using the full-connection submodule to obtain the seismic magnitude of the earthquake.
6. The method of claim 5, wherein the fully connected sub-module comprises: a linear fully-connected layer and a plurality of non-linear fully-connected layers; wherein the number of neurons of the plurality of nonlinear fully-connected layers is different, the linear fully-connected layer comprising one neuron;
wherein performing full connection processing on the randomly selected data of the dropout layer by using the full-connection submodule to obtain the seismic magnitude of the earthquake comprises:
performing nonlinear processing on the data randomly selected by the dropout layer by using nonlinear activation functions of the plurality of nonlinear full-connection layers to obtain nonlinear data; and
performing linear processing on the nonlinear data by using the linear activation function of the linear full-connection layer to obtain the seismic magnitude of the earthquake.
7. The method of claim 1, wherein the acquiring seismic signature data comprises:
collecting the three-direction acceleration waveform records of the earthquake; and
calculating the characteristic waveform data from the three-direction acceleration waveform records.
8. A multi-signature earthquake early warning magnitude estimation device is characterized by comprising:
the data acquisition module is used for acquiring characteristic waveform data of an earthquake;
the data feature extraction module is used for extracting features of the feature waveform data by using a feature extraction module in a neural network model to obtain first feature data;
the data sequence learning module is used for performing feature sequence learning on the first feature data by using a sequence learning module in the neural network to obtain second feature data; and
the data output module is used for performing feature fusion on the second feature data by using the output module of the neural network to obtain the magnitude of the earthquake.
9. An electronic device, comprising: a processor and a memory, the memory storing machine-readable instructions executable by the processor, the machine-readable instructions, when executed by the processor, performing the method of any of claims 1 to 7.
10. A storage medium, having stored thereon a computer program which, when executed by a processor, performs the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111457319.3A CN114167487B (en) | 2021-12-02 | 2021-12-02 | Seismic magnitude estimation method and device based on characteristic waveform |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114167487A true CN114167487A (en) | 2022-03-11 |
CN114167487B CN114167487B (en) | 2022-09-27 |
Family
ID=80482219
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111457319.3A Active CN114167487B (en) | 2021-12-02 | 2021-12-02 | Seismic magnitude estimation method and device based on characteristic waveform |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114167487B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115291281A (en) * | 2022-09-30 | 2022-11-04 | 中国科学院地质与地球物理研究所 | Real-time micro-earthquake magnitude calculation method and device based on deep learning |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011002371A (en) * | 2009-06-19 | 2011-01-06 | Hakusan Kogyo Kk | Seismic intensity estimation method and device |
CN111538076A (en) * | 2020-05-13 | 2020-08-14 | 浙江大学 | Earthquake magnitude rapid estimation method based on deep learning feature fusion |
CN112782762A (en) * | 2021-01-29 | 2021-05-11 | 东北大学 | Earthquake magnitude determination method based on deep learning |
CN113514877A (en) * | 2021-07-07 | 2021-10-19 | 浙江大学 | Self-adaptive quick earthquake magnitude estimation method |
Non-Patent Citations (2)
Title |
---|
Yang Liwei et al., "Early-warning magnitude estimation based on artificial neural networks and multiple characteristic parameters", Journal of Seismological Research, vol. 41, no. 02, 15 April 2018 (2018-04-15), pages 302-310 * |
Hu Andong et al., "Application of machine learning in magnitude estimation for earthquake early warning systems", Chinese Journal of Geophysics, vol. 63, no. 07, 3 July 2020 (2020-07-03), pages 2617-2626 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115291281A (en) * | 2022-09-30 | 2022-11-04 | 中国科学院地质与地球物理研究所 | Real-time micro-earthquake magnitude calculation method and device based on deep learning |
CN115291281B (en) * | 2022-09-30 | 2022-12-20 | 中国科学院地质与地球物理研究所 | Real-time micro-earthquake magnitude calculation method and device based on deep learning |
Also Published As
Publication number | Publication date |
---|---|
CN114167487B (en) | 2022-09-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Liu et al. | Subspace network with shared representation learning for intelligent fault diagnosis of machine under speed transient conditions with few samples | |
Zou et al. | Integration of residual network and convolutional neural network along with various activation functions and global pooling for time series classification | |
Liang et al. | A deep learning model for transportation mode detection based on smartphone sensing data | |
CN112489677B (en) | Voice endpoint detection method, device, equipment and medium based on neural network | |
CN112799128B (en) | Method for seismic signal detection and seismic phase extraction | |
CN104063719A (en) | Method and device for pedestrian detection based on depth convolutional network | |
Jia et al. | Automatic event detection in low SNR microseismic signals based on multi-scale permutation entropy and a support vector machine | |
EP3671575A2 (en) | Neural network processing method and apparatus based on nested bit representation | |
EP3940600A1 (en) | Method and apparatus with neural network operation processing background | |
Jahanjoo et al. | Detection and multi-class classification of falling in elderly people by deep belief network algorithms | |
CN108847941B (en) | Identity authentication method, device, terminal and storage medium | |
CN115376518B (en) | Voiceprint recognition method, system, equipment and medium for real-time noise big data | |
CN113205820B (en) | Method for generating voice coder for voice event detection | |
WO2023274052A1 (en) | Image classification method and related device thereof | |
CN114167487B (en) | Seismic magnitude estimation method and device based on characteristic waveform | |
Giorgi et al. | Walking through the deep: Gait analysis for user authentication through deep learning | |
Jeong et al. | Sensor-data augmentation for human activity recognition with time-warping and data masking | |
CN114259255B (en) | Modal fusion fetal heart rate classification method based on frequency domain signals and time domain signals | |
CN111863276A (en) | Hand-foot-and-mouth disease prediction method using fine-grained data, electronic device, and medium | |
GB2617940A (en) | Spatiotemporal deep learning for behavioral biometrics | |
CN112418173A (en) | Abnormal sound identification method and device and electronic equipment | |
CN116705196A (en) | Drug target interaction prediction method and device based on symbolic graph neural network | |
CN116739154A (en) | Fault prediction method and related equipment thereof | |
Zhao et al. | LCANet: Lightweight context-aware attention networks for earthquake detection and phase-picking on IoT edge devices | |
CN115314239A (en) | Analysis method and related equipment for hidden malicious behaviors based on multi-model fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||