CN117970428A - Seismic signal identification method, device and equipment based on random forest algorithm - Google Patents
Seismic signal identification method, device and equipment based on random forest algorithm
- Publication number
- CN117970428A (Application No. CN202410391433.8A)
- Authority
- CN
- China
- Prior art keywords
- random forest
- seismic
- feature
- data
- decision tree
- Prior art date
- Legal status: Granted
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01V—GEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
- G01V1/00—Seismology; Seismic or acoustic prospecting or detecting
- G01V1/28—Processing seismic data, e.g. for interpretation or for event detection
- G01V1/30—Analysis
- G01V1/307—Analysis for determining seismic attributes, e.g. amplitude, instantaneous phase or frequency, reflection strength or polarity
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01V—GEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
- G01V1/00—Seismology; Seismic or acoustic prospecting or detecting
- G01V1/38—Seismology; Seismic or acoustic prospecting or detecting specially adapted for water-covered areas
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01V—GEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
- G01V2210/00—Details of seismic processing or analysis
- G01V2210/60—Analysis
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T90/00—Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation
Abstract
The application provides a seismic signal identification method, device and equipment based on a random forest algorithm. The method comprises: acquiring seismic data; processing the seismic data with a short-time average/long-time average (STA/LTA) method to extract seismic events; extracting features of the seismic events, obtaining the extracted features of each seismic event, labeling the extracted features, and assigning a category label to each feature to obtain a data set and a feature set; and constructing a random forest classifier from the data set and the feature set, and using the random forest classifier to classify seismic signals, submarine seismic events and noise. By aggregating the results of multiple decision trees, the random forest algorithm can effectively handle multidimensional feature spaces and complex classification tasks with high classification accuracy and robustness, enables automatic identification and classification of different types of signals, and provides an effective method for event identification in marine seismology data.
Description
Technical Field
The invention relates to the technical field of marine geophysics, in particular to a seismic signal identification method, device and equipment based on a random forest algorithm.
Background
In the field of marine seismology, conventional event identification is typically based on the short-term average/long-term average method (STA/LTA algorithm), which identifies seismic events by computing the ratio of the short-term average to the long-term average of the signal to detect abrupt energy changes. However, the conventional STA/LTA algorithm is less effective on marine seismology data because of the large amount of noise interference present in the marine environment. A more efficient and accurate event-recognition method is therefore needed to improve the processing efficiency and accuracy of marine seismology data.
Machine learning, which can learn and identify patterns from data, has been widely applied in marine seismology in recent years and offers new ideas and methods for improving event-identification efficiency. Considerable prior work exists: event-recognition methods in marine seismology have developed from early unsupervised approaches, such as cluster analysis and self-organizing competitive neural networks, to later supervised approaches, such as BP neural networks, convolutional neural networks and genetic-algorithm-optimized BP neural networks, with accuracy continuously improving and recognition performance becoming more and more evident. These methods nevertheless retain some problems and disadvantages: neural network methods suffer from overfitting and slow convergence; support vector machines still depend heavily on the choice of kernel function and penalty coefficient; and unsupervised learning methods make it difficult to control the classification categories and require sufficiently large sample groups to guarantee classification performance. A more robust and more practical machine learning method is therefore needed to address the signal identification problem in the marine seismic field.
The random forest algorithm is a machine learning algorithm based on decision trees. It constructs multiple decision trees by randomly sampling the training data set and randomly selecting features, and determines the final classification result by voting. It therefore retains the classification advantages of decision trees while achieving better fault tolerance: it can effectively handle high-dimensional data and noise interference, offers high classification accuracy and robustness, and is efficient, accurate and easy to interpret.
Chinese patent CN115079258A discloses an online recognition method for submarine seismic signals based on wavelet analysis. Submarine acoustic signals are observed in real time and decomposed by multi-layer wavelet decomposition to obtain the wavelet coefficients of each layer; the coefficients are denoised with a soft-threshold function and reconstructed by wavelet reconstruction to obtain a denoised signal; after further processing, the relative power of each layer and its distribution are obtained, the N layers of relative power are traversed, and whether the relative power distribution of the submarine acoustic signal matches seismic features is verified to determine whether it is a submarine seismic signal. However, this technical solution still suffers from insufficient classification accuracy.
Therefore, it is highly desirable to apply the random forest machine learning algorithm to marine seismology data processing to solve the noise interference problem existing in the conventional method and improve the accuracy and efficiency of event recognition.
Disclosure of Invention
On this basis, a seismic signal identification method, device and equipment based on a random forest algorithm are provided for identifying different types of signals in marine seismic data, including seismic signals, submarine seismic events and various kinds of noise. By extracting signal features and training and classifying with a random forest classifier, the invention can automatically identify and classify different types of signals, improving the event-identification capability for marine seismology data. The invention also discloses a method for improving classifier performance by gradually adding noise examples, to address the noise interference present in marine seismology data: the number of noise examples is increased step by step in an iterative optimization process, so that the classifier adapts better to noise interference and its ability to recognize and classify noise improves, thereby improving overall classifier performance.
In a first aspect, the present invention provides a method for identifying seismic signals based on a random forest algorithm, the method comprising:
Acquiring seismic data;
processing the seismic data by using a short-time average/long-time average method, and extracting a seismic event;
Extracting the characteristics of the seismic events, obtaining the extracted characteristics of each seismic event, marking the extracted characteristics, and formulating a category label for each characteristic to obtain a data set and a characteristic set;
and constructing a random forest classifier by using the data set and the feature set, and classifying the seismic signals, the submarine seismic events and the noise by using the random forest classifier.
In one possible implementation, the processing the seismic data using a short-time average/long-time average method, extracting a seismic event, includes:
s2.1, setting a short-time window length, a long-time window length, a starting threshold value and a termination threshold value;
S2.2, calculating a signal average value STA in a short time window and a signal average value LTA in a long time window at a certain moment;
S2.3, calculating the ratio of the STA to the LTA, if the value of the STA/LTA exceeds an initial threshold, starting data extraction of the seismic data from the position until the value of the STA/LTA is lower than a termination threshold, and terminating the data extraction to obtain a seismic event; if the value of the STA/LTA does not exceed the threshold value, entering the next moment, calculating a signal average value STA in a short time window and a signal average value LTA in a long time window of the moment, calculating the ratio of the STA to the LTA, and extracting data until all the seismic events in the seismic data are extracted.
In one possible implementation, the short window length is set to 0.8 seconds, the long window length is set to 45 seconds, the start threshold is set to 7, and the end threshold is set to 1.5.
In one possible implementation, the features include waveforms, frequencies, and spectra.
In one possible implementation manner, the constructing a random forest classifier by using the data set and the feature set, classifying the seismic signals, the submarine seismic events and the noise by using the random forest classifier includes:
s4.1, dividing the data set into a training set and a testing set;
s4.2, constructing a first random forest model by utilizing the training set and the feature set;
s4.3, evaluating the first random forest model by using the test set;
s4.4, adding a noise example in the training set, and introducing more noise signals of different types to obtain a new training set;
s4.5, reconstructing a random forest model by using the new training set to obtain a second random forest model;
S4.6, performing performance evaluation on the second random forest model by using the test set, and comparing the performance difference between the second random forest model and the first random forest model;
S4.7, if the performance of the second random forest model is improved but still has room for improvement according to the evaluation result, continuing to increase noise examples, and repeatedly executing the steps S4.5 to S4.6 until the random forest model reaches a satisfactory performance level to obtain a random forest classifier, and classifying the seismic signals, the submarine seismic events and the noise by using the random forest classifier; and if the performance of the second random forest model reaches a satisfactory performance level, the second random forest model is a random forest classifier, and the random forest classifier is used for classifying the earthquake signals, the submarine earthquake events and the noises.
In one possible implementation, the evaluation indexes for evaluating the random forest model include accuracy rate, precision rate, recall rate.
In one possible implementation manner, the constructing a random forest classifier by using the data set and the feature set, classifying the seismic signals, the submarine seismic events and the noise by using the random forest classifier includes:
s4.1, dividing the data set into a training set and a testing set;
s4.2, sampling the training set with the replacement by using a Bootstrap method, and randomly generating T first sub-training sets;
S4.3, for each first sub-training set, randomly extracting m features from the M features of the feature set, where m < M, to obtain T first sub-feature sets;
S4.4, T decision trees are generated by one-to-one correspondence of the T first sub-training sets and the T first sub-feature sets, and a first random forest is formed;
S4.5, testing the first random forest according to the test set to obtain an output result of each decision tree of the first random forest;
S4.6, calculating the decision error of the output result of each decision tree;
S4.7, assigning a weight coefficient to the first sub-feature set corresponding to the output result of each decision tree, the weight coefficient being calculated from the decision error of that decision tree;
S4.8, calculating the total weight of each feature in the feature set as the sum, over all first sub-feature sets that contain the feature, of the weight coefficients of those sub-feature sets, and sorting all the features in the feature set in descending order of total weight to obtain a sorted feature set;
S4.9, dividing the features in the sorted feature set into a high-discrimination feature interval and a low-discrimination feature interval in a 5:5 ratio;
s4.10, re-sampling the training set with the replacement by using a Bootstrap method, and randomly generating T second sub-training sets;
S4.11, for each second sub-training set, randomly extracting s features from the high-discrimination feature interval of the feature set and m-s features from the low-discrimination feature interval of the feature set, where m < M, to generate T second sub-feature sets;
s4.12, T decision trees are generated by the T second sub-training sets and the T second sub-feature sets in one-to-one correspondence, and a second random forest is formed;
s4.13, testing the second random forest according to the test set to obtain an output result of each decision tree of the second random forest, and voting to detect the precision of the second random forest;
S4.14, calculating the decision similarity between every two decision trees in the second random forest from four counts over the test set: the number of test data that the i-th and the j-th decision trees both classify correctly, the number classified correctly by the i-th tree but incorrectly by the j-th tree, the number classified incorrectly by the i-th tree but correctly by the j-th tree, and the number that both trees classify incorrectly, the four counts summing to the total number of test data;
S4.15, selecting the two decision trees whose decision similarity is closest to 1 and calculating the decision error of each of the two decision trees; deleting the decision tree with the smaller decision error to obtain a random forest after deletion of that decision tree; if the decision errors of the two decision trees are equal, randomly deleting one of them to obtain the random forest after deletion;
S4.16, testing the random forest after the decision tree deletion according to the test set to obtain an output result of each decision tree in the random forest after the decision tree deletion, and voting to detect the precision of the random forest after the decision tree deletion;
S4.17, if the precision of the random forest after the decision tree is deleted is greater than that of the second random forest, repeating the steps S4.15-S4.16 until the precision of the random forest after the decision tree is deleted is less than that of the random forest before the decision tree is deleted, stopping circulation, and taking the random forest before the decision tree is deleted as a random forest classifier; if the accuracy of the random forest after the decision tree is deleted is smaller than that of the second random forest, the second random forest is used as a random forest classifier;
And S4.18, classifying the seismic signals, the submarine seismic events and the noise by using the random forest classifier.
In a second aspect, the present invention provides an earthquake signal identification device based on a random forest algorithm, including:
The data acquisition module is used for acquiring seismic data;
the event extraction module is used for processing the seismic data by using a short-time average/long-time average method and extracting seismic events;
The feature extraction module is used for extracting features of the seismic events, obtaining extracted features of each seismic event, marking the extracted features, and formulating a category label for each feature to obtain a data set and a feature set;
And the classification module is used for constructing a random forest classifier by utilizing the data set and the feature set, and classifying the seismic signals, the submarine seismic events and the noise by utilizing the random forest classifier.
In a third aspect, the present invention provides an electronic device, comprising:
A processor;
a memory;
And a computer program, wherein the computer program is stored in the memory, the computer program comprising instructions that, when executed by the processor, cause the electronic device to perform the method of any of the first aspects.
Based on the above summary, compared with the prior art, the present invention realizes the following technical effects:
First, by aggregating the results of multiple decision trees, the random forest algorithm can effectively handle multidimensional feature spaces and complex classification tasks with high classification accuracy and robustness. Applying random forest classification to the extracted signal features enables automatic identification and classification of different types of signals and provides an effective method for event identification in marine seismology data.
Second, gradually adding noise examples helps the classifier adapt better to the noise interference in marine seismology data and improves its ability to recognize and classify noise, thereby improving overall classifier performance. This is an iterative optimization process aimed at enabling the classifier to identify different types of signals more accurately, including seismic signals, ocean bottom seismic events and various kinds of noise.
Third, the feature set of the random forest algorithm is divided into a high-discrimination feature interval and a low-discrimination feature interval, striking a balance between high-discrimination and low-discrimination features; this improves the classification accuracy of each decision tree and, in turn, the decision accuracy of the whole random forest. In addition, the decision tree deletion method removes decision trees that make nearly identical decisions, further improving the decision accuracy of the random forest.
Fourth, the method helps the random forest machine learning classifier identify signals other than earthquakes, thereby addressing noise interference in marine seismology data and enabling automatic identification and classification of different types of signals.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of a seismic signal identification method based on a random forest algorithm according to an embodiment of the invention;
FIG. 2 is a block diagram of a seismic signal identification device based on a random forest algorithm according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
For a better understanding of the technical solution of the present invention, the following detailed description of the embodiments of the present invention refers to the accompanying drawings.
It should be understood that the described embodiments are merely some, but not all, embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this embodiment of the invention, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be understood that the term "and/or" used herein merely describes an association between related objects and indicates that three relationships may exist; for example, "a and/or b" may mean: a exists alone, a and b exist simultaneously, or b exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects.
Example 1
Referring to fig. 1, a flow chart of a seismic signal identification method based on a random forest algorithm according to an embodiment of the present invention is shown. As shown in fig. 1, the method includes:
S1, acquiring seismic data;
s2, processing the seismic data by using a short-time average/long-time average method, and extracting seismic events;
S3, extracting features of the seismic events, obtaining extracted features of each seismic event, marking the extracted features, and formulating a category label for each feature to obtain a data set and a feature set;
s4, constructing a random forest classifier by using the data set and the feature set, and classifying the seismic signals, the submarine seismic events and the noise by using the random forest classifier.
In a specific embodiment, the step S2 specifically includes:
s2.1, setting a short-time window length, a long-time window length, a starting threshold value and a termination threshold value;
Short window lengths are typically within a few seconds and long window lengths are typically tens of seconds to a few minutes. Preferably, the short window length is set to 0.8 seconds, the long window length is set to 45 seconds, the start threshold is set to 7, and the end threshold is set to 1.5.
S2.2, calculating a signal average value STA in a short time window and a signal average value LTA in a long time window at a certain moment;
S2.3, calculating the ratio of the STA to the LTA, if the value of the STA/LTA exceeds an initial threshold, indicating that a burst point appears in the seismic data, and starting to extract the seismic data from the position of the burst point until the value of the STA/LTA is lower than a termination threshold, and ending the data extraction to obtain a seismic event; if the value of STA/LTA does not exceed the threshold, then the next time is entered, S2.2 is cycled until all seismic events in the seismic data are extracted.
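As an illustration of steps S2.1 to S2.3, the following sketch shows one way the STA/LTA trigger could be implemented with the preferred parameters (0.8 s short window, 45 s long window, start threshold 7, end threshold 1.5). It is a minimal sketch only: the patent does not prescribe a programming language or library, and the use of Python, NumPy and signal energy as the characteristic function are assumptions made here for illustration.

```python
import numpy as np

def sta_lta_events(trace, fs, sta_win=0.8, lta_win=45.0, on=7.0, off=1.5):
    """Extract event segments from a 1-D trace using the STA/LTA ratio.

    trace: 1-D array of samples; fs: sampling rate in Hz.
    Returns a list of (start_index, end_index) pairs, one per detected event.
    """
    ns, nl = int(sta_win * fs), int(lta_win * fs)
    energy = trace.astype(float) ** 2                          # characteristic function (signal energy)
    sta = np.convolve(energy, np.ones(ns) / ns, mode="same")   # short-time average
    lta = np.convolve(energy, np.ones(nl) / nl, mode="same")   # long-time average
    ratio = sta / np.maximum(lta, 1e-12)                       # avoid division by zero

    events, start = [], None
    for i, r in enumerate(ratio):
        if start is None and r > on:                  # ratio exceeds the start threshold
            start = i                                 # begin data extraction here
        elif start is not None and r < off:           # ratio falls below the end threshold
            events.append((start, i))                 # terminate extraction: one seismic event
            start = None
    if start is not None:                             # event still open at the end of the trace
        events.append((start, len(trace) - 1))
    return events

# usage with synthetic data: 100 Hz sampling, one transient buried in noise
if __name__ == "__main__":
    fs = 100.0
    t = np.arange(0, 120, 1 / fs)
    trace = 0.01 * np.random.randn(t.size)
    trace[6000:6400] += np.sin(2 * np.pi * 8.0 * t[6000:6400])
    print(sta_lta_events(trace, fs))
```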
In a specific embodiment, in step S3, the features include waveforms, frequencies and spectra. Feature extraction comprises the following steps:
Performing waveform analysis on each event, wherein the waveform analysis comprises characteristic extraction in aspects of amplitude, frequency, energy and the like;
Performing spectrum analysis on each event, and extracting spectrum characteristics such as spectrum shape, spectrum energy distribution and the like;
calculating kurtosis for each of the events to describe sharpness of the signal waveform;
Extracting other characteristics possibly related to the event type, such as signal duration, frequency components and the like;
for each of the events, its performance on different features is considered together to form a complete feature vector.
These features will be used as inputs to a machine-learned classifier for distinguishing between different types of signals, including seismic signals, ocean bottom seismic events, and noise. By extracting and analyzing the multi-aspect characteristics of the signals, the characteristics of different types of signals can be more fully described, and richer information is provided for subsequent classification and identification.
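The sketch below illustrates how the waveform, spectral and statistical features described above could be assembled into one feature vector per event. The specific feature names, the use of NumPy/SciPy and the number of spectral bands kept are illustrative assumptions, not requirements of the patent.

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import kurtosis

def extract_features(event, fs):
    """Build one feature vector for an extracted event (1-D array of samples)."""
    amp_max = np.max(np.abs(event))                 # peak amplitude
    energy = np.sum(event.astype(float) ** 2)       # total energy
    duration = len(event) / fs                      # signal duration in seconds

    freqs, psd = welch(event, fs=fs, nperseg=min(256, len(event)))
    dom_freq = freqs[np.argmax(psd)]                # dominant frequency
    centroid = np.sum(freqs * psd) / np.sum(psd)    # spectral centroid (spectrum shape)
    band_energy = psd / np.sum(psd)                 # normalized spectral energy distribution

    kurt = kurtosis(event)                          # sharpness of the signal waveform

    # first 10 normalized spectral bands kept as an example of spectral-shape features
    # (assumes the event is long enough to yield at least 10 frequency bins)
    return np.hstack([amp_max, energy, duration, dom_freq, centroid, kurt, band_energy[:10]])
```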
In a specific embodiment, step S4 specifically includes:
s4.1, dividing the data set into a training set and a testing set;
s4.2, constructing a random forest model by utilizing the training set and the feature set;
S4.3, evaluating the random forest model by using the test set;
and S4.4, optimizing parameters of the random forest model according to the evaluation result to obtain a random forest classifier, and classifying the seismic signals, the submarine seismic events and the noise by using the random forest classifier.
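A minimal sketch of steps S4.1 to S4.4, assuming the scikit-learn toolkit (the patent does not name any library), where X is the feature matrix built from the labeled events and y holds their category labels (seismic signal, submarine seismic event, noise):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

# S4.1: divide the data set into a training set and a test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# S4.2: construct the random forest model from the training set and feature set
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# S4.3: evaluate the model on the test set (accuracy, precision, recall)
y_pred = clf.predict(X_test)
print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred, average="macro"))
print("recall   :", recall_score(y_test, y_pred, average="macro"))

# S4.4: adjust parameters (number of trees, maximum depth, ...) according to the
# evaluation, then use the resulting classifier to label new events as
# seismic signal, submarine seismic event, or noise.
```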
Alternatively, in another embodiment, step S4 specifically includes:
s4.1, dividing the data set into a training set and a testing set;
s4.2, constructing a first random forest model by utilizing the training set and the feature set;
s4.3, evaluating the first random forest model by using the test set;
s4.4, adding a noise example in the training set, and introducing more noise signals of different types to obtain a new training set;
Further, the noise examples may be outliers, mislabeled samples or other interfering signals in the data; they are added to increase the robustness of the model so that it generalizes better to new, unseen data.
S4.5, reconstructing a random forest model by using the new training set to obtain a second random forest model;
S4.6, performing performance evaluation on the second random forest model by using the test set, and comparing the performance difference between the second random forest model and the first random forest model;
S4.7, if the performance of the second random forest model is improved but still has room for improvement according to the evaluation result, continuing to increase noise examples, and repeatedly executing the steps S4.5 to S4.6 until the random forest model reaches a satisfactory performance level to obtain a random forest classifier, and classifying the seismic signals, the submarine seismic events and the noise by using the random forest classifier; and if the performance of the second random forest model reaches a satisfactory performance level, the second random forest model is a random forest classifier, and the random forest classifier is used for classifying the earthquake signals, the submarine earthquake events and the noises.
Further, the evaluation indexes for evaluating the performance of the random forest model comprise accuracy, precision, recall rate and the like.
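The iterative noise-augmentation loop of steps S4.4 to S4.7 could be sketched as follows. The stopping criterion (a target accuracy), the labeled pool of extra noise examples and the batch size are assumptions introduced only to make the sketch runnable; the patent leaves the "satisfactory performance level" to the practitioner.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def train_with_increasing_noise(X_train, y_train, X_test, y_test,
                                X_noise, y_noise, n_add=50, max_rounds=10,
                                target=0.95, seed=0):
    """Iteratively add noise examples (S4.4-S4.7) until performance is satisfactory."""
    rng = np.random.default_rng(seed)
    model = RandomForestClassifier(n_estimators=200, random_state=seed).fit(X_train, y_train)
    best_acc = accuracy_score(y_test, model.predict(X_test))        # first model (S4.2-S4.3)

    for _ in range(max_rounds):
        # S4.4: append n_add noise examples drawn from the labeled noise pool
        idx = rng.choice(len(X_noise), size=min(n_add, len(X_noise)), replace=False)
        X_train = np.vstack([X_train, X_noise[idx]])
        y_train = np.hstack([y_train, y_noise[idx]])

        # S4.5: rebuild the random forest on the enlarged training set
        candidate = RandomForestClassifier(n_estimators=200, random_state=seed).fit(X_train, y_train)

        # S4.6: evaluate and compare with the previous model
        acc = accuracy_score(y_test, candidate.predict(X_test))
        if acc >= best_acc:
            model, best_acc = candidate, acc

        # S4.7: stop once the assumed "satisfactory performance level" is reached
        if best_acc >= target:
            break
    return model, best_acc
```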
Alternatively, in another embodiment, step S4 specifically includes:
s4.1, dividing the data set into a training set and a testing set;
s4.2, sampling the training set with the replacement by using a Bootstrap method, and randomly generating T first sub-training sets;
S4.3, for each first sub-training set, randomly extracting m features from the M features of the feature set, where m < M, to obtain T first sub-feature sets;
S4.4, T decision trees are generated by one-to-one correspondence of the T first sub-training sets and the T first sub-feature sets, and a first random forest is formed;
S4.5, testing the first random forest according to the test set to obtain an output result of each decision tree of the first random forest;
S4.6, calculating the decision error of the output result of each decision tree;
S4.7, assigning a weight coefficient to the first sub-feature set corresponding to the output result of each decision tree, the weight coefficient being calculated from the decision error of that decision tree;
S4.8, calculating the total weight of each feature in the feature set as the sum, over all first sub-feature sets that contain the feature, of the weight coefficients of those sub-feature sets, and sorting all the features in the feature set in descending order of total weight to obtain a sorted feature set;
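The formulas of steps S4.7 and S4.8 are reproduced as images in the original publication and are not legible here. A form consistent with the definitions given above, stated as an assumption rather than a verbatim reproduction of the patent's formulas, is:

```latex
% Assumed reading of steps S4.7-S4.8: w_t is the weight coefficient assigned to the
% t-th first sub-feature set, and w_{i,t} is its contribution to feature i.
\[
  W_i = \sum_{t=1}^{T} w_{i,t},
  \qquad
  w_{i,t} =
  \begin{cases}
    w_t, & \text{if feature } i \text{ belongs to the } t\text{-th first sub-feature set},\\
    0,   & \text{otherwise},
  \end{cases}
  \qquad i = 1,\dots,M,\ t = 1,\dots,T.
\]
% Features are then sorted in descending order of the total weight W_i.
```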
S4.9, dividing the features in the sorted feature set into a high-discrimination feature interval and a low-discrimination feature interval according to a certain ratio;
Preferably, the ratio may be 5:5, 6:4, 4:6, etc.; the ratio is selected by those skilled in the art according to actual needs.
It should be particularly noted that features with low discrimination may, for a number of reasons, fail to discriminate in a single decision tree or in a forest of a given size; this does not necessarily mean that they carry no useful classification information, so they cannot simply be deleted. In this embodiment, the feature set is divided into a high-discrimination feature interval and a low-discrimination feature interval so that a balance is found between high-discrimination and low-discrimination features, which improves the classification accuracy of each decision tree and, in turn, the decision accuracy of the whole random forest.
S4.10, re-sampling the training set with the replacement by using a Bootstrap method, and randomly generating T second sub-training sets;
S4.11, for each second sub-training set, randomly extracting s features from the high-discrimination feature interval of the feature set and m-s features from the low-discrimination feature interval of the feature set, where m < M, to generate T second sub-feature sets;
s4.12, T decision trees are generated by the T second sub-training sets and the T second sub-feature sets in one-to-one correspondence, and a second random forest is formed;
s4.13, testing the second random forest according to the test set to obtain an output result of each decision tree of the second random forest, and voting to detect the precision of the second random forest;
S4.14, calculating the decision similarity between every two decision trees in the second random forest from four counts over the test set: the number of test data that the i-th and the j-th decision trees both classify correctly, the number classified correctly by the i-th tree but incorrectly by the j-th tree, the number classified incorrectly by the i-th tree but correctly by the j-th tree, and the number that both trees classify incorrectly, the four counts summing to the total number of test data;
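The decision-similarity formula is likewise rendered as an image in the original publication. One form consistent with the four counts defined above (essentially the simple matching coefficient, stated here as an assumption) is:

```latex
% Assumed form of the decision similarity between the i-th and j-th decision trees.
% a: both trees correct, b: i-th correct / j-th wrong, c: i-th wrong / j-th correct,
% d: both wrong, with a + b + c + d equal to the number of test samples.
\[
  S_{ij} = \frac{a + d}{a + b + c + d}, \qquad 0 \le S_{ij} \le 1,
\]
% so that S_{ij} close to 1 means the two trees make nearly identical decisions.
```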
S4.15, selecting the two decision trees whose decision similarity is closest to 1 and calculating the decision error of each of the two decision trees; deleting the decision tree with the smaller decision error to obtain a random forest after deletion of that decision tree; if the decision errors of the two decision trees are equal, randomly deleting one of them to obtain the random forest after deletion;
S4.16, testing the random forest after the decision tree deletion according to the test set to obtain an output result of each decision tree in the random forest after the decision tree deletion, and voting to detect the precision of the random forest after the decision tree deletion;
S4.17, if the precision of the random forest after the decision tree is deleted is greater than that of the second random forest, repeating the steps S4.15-S4.16 until the precision of the random forest after the decision tree is deleted is less than that of the random forest before the decision tree is deleted, stopping circulation, and taking the random forest before the decision tree is deleted as a random forest classifier; if the accuracy of the random forest after the decision tree is deleted is smaller than that of the second random forest, the second random forest is used as a random forest classifier;
It should be particularly noted that a decision similarity close to 1 means that the two decision trees make very similar decisions; the more similar their decisions, the more easily the final voting result is affected by the redundancy.
And S4.18, classifying the seismic signals, the submarine seismic events and the noise by using the random forest classifier.
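Pulling the above steps together, the following sketch outlines the improved random forest with discrimination-based feature intervals and similarity-based tree deletion. It is an illustrative reconstruction under stated assumptions (the per-tree sub-feature-set weight is taken as 1 minus the decision error, and the decision similarity as the simple matching coefficient), not an implementation of the patent's exact formulas; integer-coded class labels and scikit-learn decision trees are likewise assumptions.

```python
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def bootstrap(X, y):
    """Bootstrap (sampling with replacement) of the training set."""
    idx = rng.integers(0, len(X), len(X))
    return X[idx], y[idx]

def build_forest(X_tr, y_tr, feature_subsets):
    """Train one decision tree per sub-feature set on its own bootstrap sample."""
    trees = []
    for feats in feature_subsets:
        Xb, yb = bootstrap(X_tr, y_tr)
        trees.append((DecisionTreeClassifier().fit(Xb[:, feats], yb), feats))
    return trees

def vote(trees, X):
    """Majority vote of the forest; assumes integer-coded class labels."""
    preds = np.array([t.predict(X[:, f]) for t, f in trees])
    return np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, preds)

def improved_random_forest(X_tr, y_tr, X_te, y_te, T=50, m=8, s=5):
    M = X_tr.shape[1]                                   # assumes M is large enough for the splits below

    # S4.2-S4.5: first forest, m features drawn uniformly from all M features per tree
    subsets1 = [rng.choice(M, m, replace=False) for _ in range(T)]
    forest1 = build_forest(X_tr, y_tr, subsets1)
    errors = [1.0 - accuracy_score(y_te, t.predict(X_te[:, f])) for t, f in forest1]   # S4.6

    # S4.7-S4.9: feature weighting (assumed weight = 1 - decision error) and 5:5 interval split
    total_w = np.zeros(M)
    for (tree, feats), err in zip(forest1, errors):
        total_w[feats] += 1.0 - err
    order = np.argsort(-total_w)
    high, low = order[:M // 2], order[M // 2:]          # high- / low-discrimination intervals

    # S4.10-S4.13: second forest, s features from the high interval and m-s from the low interval
    subsets2 = [np.concatenate([rng.choice(high, s, replace=False),
                                rng.choice(low, m - s, replace=False)]) for _ in range(T)]
    forest = build_forest(X_tr, y_tr, subsets2)
    acc = accuracy_score(y_te, vote(forest, X_te))

    # S4.14-S4.17: similarity-based tree deletion
    while len(forest) > 2:
        P = np.array([t.predict(X_te[:, f]) == y_te for t, f in forest])   # per-tree correctness
        best, pair = -1.0, None
        for i in range(len(forest)):
            for j in range(i + 1, len(forest)):
                sim = np.mean(P[i] == P[j])             # simple matching coefficient (assumed form)
                if sim > best:
                    best, pair = sim, (i, j)
        i, j = pair
        # step S4.15: delete the tree with the smaller decision error
        # (smaller error = higher accuracy on the test set; ties broken arbitrarily)
        drop = i if P[i].mean() >= P[j].mean() else j
        pruned = [t for k, t in enumerate(forest) if k != drop]
        new_acc = accuracy_score(y_te, vote(pruned, X_te))
        if new_acc < acc:                               # accuracy dropped: keep the previous forest
            break
        forest, acc = pruned, new_acc

    return forest, acc                                  # S4.18: classify new events with vote(forest, X_new)
```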
Corresponding to the embodiment, the invention also provides a seismic signal identification device based on the random forest algorithm.
Referring to fig. 2, a block diagram of an earthquake signal recognition device based on a random forest algorithm is provided in an embodiment of the present invention. As shown in fig. 2, it mainly includes the following modules.
A data acquisition module 201, configured to acquire seismic data;
an event extraction module 202, configured to process the seismic data using a short-time average/long-time average method, and extract a seismic event;
In this module, first, the seismic data will be received and processed. The data may be continuous time series data that is preprocessed to enhance the characteristics of the seismic event using short time averaging and long time averaging methods. Short-time averaging and long-time averaging methods are techniques commonly used for signal processing that can help extract specific patterns and events in the signal. Once the seismic data has been preprocessed, the event extraction module will identify and isolate the seismic event, extracting it from the background noise.
The feature extraction module 203 is configured to perform feature extraction on the seismic events, obtain an extracted feature of each seismic event, mark the extracted feature, and formulate a category label for each feature to obtain a data set and a feature set.
In this module, each event extracted from the seismic data is further processed. For each event, the feature extraction module calculates a series of features that may describe different aspects of the seismic event, such as amplitude, frequency, duration, etc. The selection of these features may be based on domain knowledge or machine learning techniques to ensure that the final extracted features adequately characterize the seismic event.
The classification module 204 is configured to construct a random forest classifier using the data set and the feature set, and classify the seismic signal, the ocean bottom seismic event, and the noise using the random forest classifier.
In this module, the extracted features of each event are classified using a random forest algorithm. Random forest is an ensemble learning method based on decision trees: it performs classification by constructing multiple decision trees and combining their outputs. For the classification of seismic events, the random forest algorithm can determine which category an event belongs to, such as the type or intensity of the earthquake, based on the extracted features. A trained random forest model can classify seismic events effectively, with high accuracy and robustness.
Corresponding to the embodiment, the embodiment of the invention also provides electronic equipment.
Referring to fig. 3, a schematic structural diagram of an electronic device according to an embodiment of the present invention is provided. As shown in fig. 3, the electronic device 300 may include: a processor 301, a memory 302 and a communication unit 303. The components may communicate via one or more buses, and it will be appreciated by those skilled in the art that the electronic device structure shown in the drawings is not limiting of the embodiments of the invention, as it may be a bus-like structure, a star-like structure, or include more or fewer components than shown, or may be a combination of certain components or a different arrangement of components.
Wherein the communication unit 303 is configured to establish a communication channel, so that the electronic device may communicate with other devices.
The processor 301, which is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, performs various functions of the electronic device and/or processes data by running or executing software programs and/or modules stored in the memory 302, and invoking data stored in the memory. The processor may be comprised of integrated circuits (INTEGRATED CIRCUIT, ICs), such as a single packaged IC, or may be comprised of packaged ICs that connect multiple identical or different functions. For example, the processor 301 may include only a central processing unit (central processing unit, CPU). In the embodiment of the invention, the CPU can be a single operation core or can comprise multiple operation cores.
Memory 302 for storing instructions for execution by processor 301, memory 302 may be implemented by any type of volatile or nonvolatile memory device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
The execution of the instructions in memory 302, when executed by processor 301, enables electronic device 300 to perform some or all of the steps of the method embodiments described above.
Corresponding to the above embodiment, the embodiment of the present invention further provides a computer readable storage medium, where the computer readable storage medium may store a program, where when the program runs, the device where the computer readable storage medium is located may be controlled to execute some or all of the steps in the above method embodiment. In particular, the computer readable storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (random access memory, RAM), or the like.
Corresponding to the above embodiments, the present invention also provides a computer program product comprising executable instructions which, when executed on a computer, cause the computer to perform some or all of the steps of the above method embodiments.
In the embodiments of the present invention, "at least one" means one or more, and "a plurality" means two or more. "and/or", describes an association relation of association objects, and indicates that there may be three kinds of relations, for example, a and/or B, and may indicate that a alone exists, a and B together, and B alone exists. Wherein A, B may be singular or plural. The character "/" generally indicates that the context-dependent object is an "or" relationship. "at least one of the following" and the like means any combination of these items, including any combination of single or plural items. For example, at least one of a, b and c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or plural.
Those of ordinary skill in the art will appreciate that the various elements and algorithm steps described in the embodiments disclosed herein can be implemented as a combination of electronic hardware, computer software, and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In several embodiments provided by the present invention, any of the functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a read-only memory (ROM), a random access memory (random access memory RAM), a magnetic disk, or an optical disk, etc., which can store program codes.
The foregoing is merely exemplary embodiments of the present invention, and any person skilled in the art may easily conceive of changes or substitutions within the technical scope of the present invention, which should be covered by the present invention.
Claims (9)
1. A method for identifying seismic signals based on a random forest algorithm, the method comprising:
Acquiring seismic data;
processing the seismic data by using a short-time average/long-time average method, and extracting a seismic event;
Extracting the characteristics of the seismic events, obtaining the extracted characteristics of each seismic event, marking the extracted characteristics, and formulating a category label for each characteristic to obtain a data set and a characteristic set;
and constructing a random forest classifier by using the data set and the feature set, and classifying the seismic signals, the submarine seismic events and the noise by using the random forest classifier.
2. The method for identifying seismic signals based on random forest algorithm according to claim 1, wherein the processing the seismic data using short-time average/long-time average method, extracting seismic events, comprises:
s2.1, setting a short-time window length, a long-time window length, a starting threshold value and a termination threshold value;
S2.2, calculating a signal average value STA in a short time window and a signal average value LTA in a long time window at a certain moment;
S2.3, calculating the ratio of the STA to the LTA, if the value of the STA/LTA exceeds an initial threshold, starting data extraction of the seismic data from the position until the value of the STA/LTA is lower than a termination threshold, and terminating the data extraction to obtain a seismic event; if the value of the STA/LTA does not exceed the threshold value, entering the next moment, calculating a signal average value STA in a short time window and a signal average value LTA in a long time window of the moment, calculating the ratio of the STA to the LTA, and extracting data until all the seismic events in the seismic data are extracted.
3. The method for identifying seismic signals based on a random forest algorithm according to claim 2, wherein the short-time window length is set to 0.8 seconds, the long-time window length is set to 45 seconds, the start threshold is set to 7, and the end threshold is set to 1.5.
4. A method of seismic signal identification based on a random forest algorithm according to claim 1, wherein the characteristics include waveform, frequency and spectrum.
5. The method for identifying seismic signals based on a random forest algorithm according to claim 1, wherein said constructing a random forest classifier using said data set and said feature set, classifying seismic signals, ocean bottom seismic events and noise using said random forest classifier, comprises:
s4.1, dividing the data set into a training set and a testing set;
s4.2, constructing a first random forest model by utilizing the training set and the feature set;
s4.3, evaluating the first random forest model by using the test set;
s4.4, adding a noise example in the training set, and introducing more noise signals of different types to obtain a new training set;
s4.5, reconstructing a random forest model by using the new training set to obtain a second random forest model;
S4.6, performing performance evaluation on the second random forest model by using the test set, and comparing the performance difference between the second random forest model and the first random forest model;
S4.7, if the performance of the second random forest model is improved but still has room for improvement according to the evaluation result, continuing to increase noise examples, and repeatedly executing the steps S4.5 to S4.6 until the random forest model reaches a satisfactory performance level to obtain a random forest classifier, and classifying the seismic signals, the submarine seismic events and the noise by using the random forest classifier; and if the performance of the second random forest model reaches a satisfactory performance level, the second random forest model is a random forest classifier, and the random forest classifier is used for classifying the earthquake signals, the submarine earthquake events and the noises.
6. The method for identifying seismic signals based on a random forest algorithm according to claim 5, wherein the evaluation indexes for evaluating the random forest model comprise accuracy, precision and recall.
7. The method for identifying seismic signals based on a random forest algorithm according to claim 1, wherein said constructing a random forest classifier using said data set and said feature set, classifying seismic signals, ocean bottom seismic events and noise using said random forest classifier, comprises:
s4.1, dividing the data set into a training set and a testing set;
s4.2, sampling the training set with the replacement by using a Bootstrap method, and randomly generating T first sub-training sets;
S4.3, for each first sub-training set, randomly extracting m features from the M features of the feature set, where m < M, to obtain T first sub-feature sets;
S4.4, T decision trees are generated by one-to-one correspondence of the T first sub-training sets and the T first sub-feature sets, and a first random forest is formed;
S4.5, testing the first random forest according to the test set to obtain an output result of each decision tree of the first random forest;
S4.6, calculating the decision error of the output result of each decision tree;
S4.7, assigning a weight coefficient to the first sub-feature set corresponding to the output result of each decision tree, the weight coefficient being calculated from the decision error of that decision tree;
S4.8, calculating the total weight of each feature in the feature set as the sum, over all first sub-feature sets that contain the feature, of the weight coefficients of those sub-feature sets, and sorting all the features in the feature set in descending order of total weight to obtain a sorted feature set;
S4.9, dividing the features in the sorted feature set into a high-discrimination feature interval and a low-discrimination feature interval in a 5:5 ratio;
s4.10, re-sampling the training set with the replacement by using a Bootstrap method, and randomly generating T second sub-training sets;
S4.11, for each second sub-training set, randomly extracting s features from the high-discrimination feature interval of the feature set and m-s features from the low-discrimination feature interval of the feature set, where m < M, to generate T second sub-feature sets;
s4.12, T decision trees are generated by the T second sub-training sets and the T second sub-feature sets in one-to-one correspondence, and a second random forest is formed;
s4.13, testing the second random forest according to the test set to obtain an output result of each decision tree of the second random forest, and voting to detect the precision of the second random forest;
S4.14, calculating the decision similarity between every two decision trees in the second random forest from four counts over the test set: the number of test data that the i-th and the j-th decision trees both classify correctly, the number classified correctly by the i-th tree but incorrectly by the j-th tree, the number classified incorrectly by the i-th tree but correctly by the j-th tree, and the number that both trees classify incorrectly, the four counts summing to the total number of test data;
S4.15, selecting the two decision trees whose decision similarity is closest to 1 and calculating the decision error of each of the two decision trees; deleting the decision tree with the smaller decision error to obtain a random forest after deletion of that decision tree; if the decision errors of the two decision trees are equal, randomly deleting one of them to obtain the random forest after deletion;
S4.16, testing the random forest after the decision tree deletion according to the test set to obtain an output result of each decision tree in the random forest after the decision tree deletion, and voting to detect the precision of the random forest after the decision tree deletion;
S4.17, if the precision of the random forest after the decision tree is deleted is greater than that of the second random forest, repeating the steps S4.15-S4.16 until the precision of the random forest after the decision tree is deleted is less than that of the random forest before the decision tree is deleted, stopping circulation, and taking the random forest before the decision tree is deleted as a random forest classifier; if the accuracy of the random forest after the decision tree is deleted is smaller than that of the second random forest, the second random forest is used as a random forest classifier;
And S4.18, classifying the seismic signals, the submarine seismic events and the noise by using the random forest classifier.
8. An earthquake signal identification device based on a random forest algorithm is characterized by comprising:
The data acquisition module is used for acquiring seismic data;
the event extraction module is used for processing the seismic data by using a short-time average/long-time average method and extracting seismic events;
The feature extraction module is used for extracting features of the seismic events, obtaining extracted features of each seismic event, marking the extracted features, and formulating a category label for each feature to obtain a data set and a feature set;
And the classification module is used for constructing a random forest classifier by utilizing the data set and the feature set, and classifying the seismic signals, the submarine seismic events and the noise by utilizing the random forest classifier.
9. An electronic device, comprising:
A processor;
a memory;
And a computer program, wherein the computer program is stored in the memory, the computer program comprising instructions that, when executed by the processor, cause the electronic device to perform the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410391433.8A CN117970428B (en) | 2024-04-02 | 2024-04-02 | Seismic signal identification method, device and equipment based on random forest algorithm |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117970428A true CN117970428A (en) | 2024-05-03 |
CN117970428B CN117970428B (en) | 2024-06-14 |
Family
ID=90861373
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410391433.8A Active CN117970428B (en) | 2024-04-02 | 2024-04-02 | Seismic signal identification method, device and equipment based on random forest algorithm |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117970428B (en) |
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107766883A (en) * | 2017-10-13 | 2018-03-06 | 华中师范大学 | A kind of optimization random forest classification method and system based on weighted decision tree |
CN108874927A (en) * | 2018-05-31 | 2018-11-23 | 桂林电子科技大学 | Intrusion detection method based on hypergraph and random forest |
US20200219619A1 (en) * | 2018-12-20 | 2020-07-09 | Oregon Health & Science University | Subtyping heterogeneous disorders using functional random forest models |
CN109934420A (en) * | 2019-04-17 | 2019-06-25 | 重庆大学 | A kind of method and system for predicting labor turnover |
CN111126434A (en) * | 2019-11-19 | 2020-05-08 | 山东省科学院激光研究所 | Automatic microseism first arrival time picking method and system based on random forest |
CN111916066A (en) * | 2020-08-13 | 2020-11-10 | 山东大学 | Random forest based voice tone recognition method and system |
CN113095511A (en) * | 2021-04-16 | 2021-07-09 | 广东电网有限责任公司 | Method and device for judging in-place operation of automatic master station |
CN113256066A (en) * | 2021-04-23 | 2021-08-13 | 新疆大学 | PCA-XGboost-IRF-based job shop real-time scheduling method |
CN113239880A (en) * | 2021-06-02 | 2021-08-10 | 西安电子科技大学 | Radar radiation source identification method based on improved random forest |
CN115481841A (en) * | 2021-06-15 | 2022-12-16 | 深圳供电局有限公司 | Material demand prediction method based on feature extraction and improved random forest |
WO2023138140A1 (en) * | 2022-01-19 | 2023-07-27 | 北京工业大学 | Soft-sensing method for dioxin emission during mswi process and based on broad hybrid forest regression |
CN114819369A (en) * | 2022-05-05 | 2022-07-29 | 国网吉林省电力有限公司 | Short-term wind power prediction method based on two-stage feature selection and random forest improvement model |
CN117408699A (en) * | 2023-10-25 | 2024-01-16 | 西安石油大学 | Telecom fraud recognition method based on bank card data |
Non-Patent Citations (3)
Title |
---|
HIBERT: "Event recognition in marine seismological data using Random Forest machine learning classifier", GEOPHYSICAL JOURNAL INTERNATIONAL, vol. 235, 30 July 2023 (2023-07-30), pages 589 - 609 * |
阳铖权: "Research and Application of Intelligent Recognition Technology for Mine Microseismic Signals", China Master's Theses Full-text Database, 15 February 2024 (2024-02-15) * |
靖美元: "Research on Decision Tree Algorithms Based on Decision Paths", China Master's Theses Full-text Database, 15 February 2023 (2023-02-15) * |
Also Published As
Publication number | Publication date |
---|---|
CN117970428B (en) | 2024-06-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Lee et al. | | Convolutional neural net and bearing fault analysis | |
CN109116203A (en) | Power equipment partial discharges fault diagnostic method based on convolutional neural networks | |
CN111126471A (en) | Microseism event detection method and system | |
CN109065027B (en) | Voice distinguishing model training method and device, computer equipment and storage medium | |
CN107305774A (en) | Speech detection method and device | |
CN108303253A (en) | Bearing initial failure recognition methods based on long short-term memory Recognition with Recurrent Neural Network | |
CN111401507B (en) | Adaptive decision tree fall detection method and system | |
CN111160106B (en) | GPU-based optical fiber vibration signal feature extraction and classification method and system | |
CN110188196B (en) | Random forest based text increment dimension reduction method | |
CN117727307B (en) | Bird voice intelligent recognition method based on feature fusion | |
CN112464721A (en) | Automatic microseism event identification method and device | |
Wang et al. | Radar HRRP target recognition in frequency domain based on autoregressive model | |
CN104977602B (en) | A kind of control method and device of earthquake data acquisition construction | |
CN112235816B (en) | WIFI signal CSI feature extraction method based on random forest | |
CN117970428B (en) | Seismic signal identification method, device and equipment based on random forest algorithm | |
CN112560674A (en) | Method and system for detecting quality of sound signal | |
CN117262942A (en) | Elevator abnormality detection method, elevator abnormality detection device, computer equipment and storage medium | |
CN111880957A (en) | Program error positioning method based on random forest model | |
CN116541771A (en) | Unbalanced sample bearing fault diagnosis method based on multi-scale feature fusion | |
CN115563480A (en) | Gear fault identification method for screening octave geometric modal decomposition based on kurtosis ratio coefficient | |
CN112329535B (en) | CNN-based quick identification method for low-frequency oscillation modal characteristics of power system | |
CN114692693A (en) | Distributed optical fiber signal identification method, device and storage medium based on fractal theory | |
CN114860617A (en) | Intelligent pressure testing method and system | |
CN114626412A (en) | Multi-class target identification method and system for unattended sensor system | |
RU2090928C1 (en) | Object condition signal analyzing method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |