CN112244863A - Signal identification method, signal identification device, electronic device and readable storage medium - Google Patents

Signal identification method, signal identification device, electronic device and readable storage medium

Info

Publication number
CN112244863A
CN112244863A (application number CN202011150006.9A)
Authority
CN
China
Prior art keywords
neural network
trained
signal data
data
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011150006.9A
Other languages
Chinese (zh)
Inventor
欧歌
唐大伟
马小惠
沈鸿翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BOE Technology Group Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd filed Critical BOE Technology Group Co Ltd
Priority to CN202011150006.9A priority Critical patent/CN112244863A/en
Publication of CN112244863A publication Critical patent/CN112244863A/en
Pending legal-status Critical Current

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/72: Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235: Details of waveform analysis
    • A61B5/7264: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/02: Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B5/024: Detecting, measuring or recording pulse rate or heart rate

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Physics & Mathematics (AREA)
  • Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Physiology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Pathology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Cardiology (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure relates to the technical field of data processing, and provides a signal identification method and device, a computer-readable storage medium, and an electronic device. The method comprises the following steps: inputting signal data to be recognized into a pre-trained lightweight neural network model to obtain feature data of the signal data to be recognized; and inputting the feature data of the signal data to be recognized into a pre-trained integrated classifier to obtain a classification recognition result of the signal data to be recognized. Because the scheme is based on a lightweight neural network model, the number of model parameters can be reduced and the efficiency of signal identification improved; because classification is performed by the integrated classifier, the accuracy of the classification result can be improved, so the scheme guarantees the accuracy of signal identification while improving its efficiency.

Description

Signal identification method, signal identification device, electronic device and readable storage medium
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to a signal identification method, a signal identification device, an electronic device, and a computer-readable storage medium.
Background
In existing deep-learning-based signal identification methods, taking heart rate signal identification as an example, the deep-learning approach achieves high accuracy, but the number of network parameters is too large and the model complexity too high. The computational load of such a heart rate identification algorithm is therefore excessive, which reduces identification efficiency and raises the requirements on, and thus the cost of, the hardware. Conversely, simply reducing the number of model parameters harms the accuracy of identification.
Therefore, how to ensure the accuracy of signal identification while improving the efficiency of signal identification and reducing the hardware cost is a technical problem that needs to be solved urgently at present.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present disclosure is directed to a signal identification method and apparatus, an electronic device, and a computer-readable storage medium, so as to improve signal identification efficiency while ensuring identification accuracy, at least to some extent, thereby solving one or more of the above-mentioned technical problems.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to a first aspect of the present disclosure, there is provided a signal identification method, including:
inputting signal data to be recognized into a pre-trained lightweight neural network model to obtain characteristic data of the signal data to be recognized;
and inputting the characteristic data of the signal data to be recognized into a pre-trained integrated classifier to obtain a classification recognition result of the signal data to be recognized.
In an exemplary embodiment of the present disclosure, based on the foregoing scheme, the lightweight neural network model includes an input layer, one or more depth-separable convolution sub-networks, and a global pooling layer.
In an exemplary embodiment of the present disclosure, based on the foregoing scheme, the depth-separable convolution sub-network includes a depth-separable convolution layer, a sub-network pooling layer, and a batch normalization layer.
In an exemplary embodiment of the present disclosure, based on the foregoing scheme, the lightweight neural network model includes four depth-separable convolution sub-networks, and the number of convolution kernels of each depth-separable convolution sub-network is different.
In an exemplary embodiment of the disclosure, based on the foregoing scheme, the lightweight neural network model is obtained by training through the following steps:
acquiring sample signal data and a label corresponding to the sample signal data;
performing supervised learning training on the constructed lightweight neural network classification model by using the sample signal data and the labels to obtain a pre-trained lightweight neural network classification model, wherein the constructed lightweight neural network classification model comprises an input layer, one or more deep separable convolution sub-networks, a global pooling layer, a full connection layer, a classification layer and an output layer;
and extracting an input layer, one or more deep separable convolution sub-networks and a global pooling layer in the pre-trained lightweight neural network classification model to obtain the pre-trained lightweight neural network model.
In an exemplary embodiment of the present disclosure, based on the foregoing scheme, the pre-trained ensemble classifier is trained by the following steps:
obtaining sample characteristic data output by a global pooling layer of the pre-trained lightweight neural network model;
and carrying out supervised learning training on the integrated classifier by using the sample characteristic data and the label to obtain a pre-trained integrated classifier.
In an exemplary embodiment of the present disclosure, based on the foregoing scheme, the integrated classifier includes an extreme gradient boosting (XGBoost) classifier that uses the classification and regression tree (CART) model as its tree model.
In an exemplary embodiment of the present disclosure, based on the foregoing scheme, the signal data to be identified includes currently acquired heart rate signal data.
According to a second aspect of the present disclosure, there is provided an electronic device comprising:
one or more processors;
a storage device for storing model weights of a pre-trained lightweight neural network model and a pre-trained integrated classifier, wherein, when the one or more processors process signal data to be recognized, the one or more processors are caused to execute the method of the first aspect to obtain a classification recognition result of the signal data according to the stored model weights of the pre-trained lightweight neural network model and the pre-trained integrated classifier.
According to a third aspect of the present disclosure, there is provided a signal identifying apparatus comprising:
the characteristic data extraction module is configured to input signal data to be recognized into a pre-trained lightweight neural network model to obtain characteristic data of the signal data to be recognized;
the class identification module is configured to input the feature data of the signal data to be identified into a pre-trained integrated classifier to obtain a classification identification result of the signal data to be identified;
wherein the lightweight neural network model comprises an input layer, one or more deep separable convolution sub-networks, a global pooling layer.
According to a fourth aspect of the present disclosure, there is provided a computer-readable storage medium, on which a computer program is stored, which when executed by a processor, implements the signal identification method as described in the first aspect of the embodiments above.
As can be seen from the foregoing technical solutions, the signal identification method, the signal identification apparatus, the electronic device, and the computer-readable storage medium for implementing the signal identification method in the exemplary embodiments of the present disclosure have at least the following advantages and positive effects:
in the technical scheme provided by some embodiments of the present disclosure, signal data to be recognized is first input into a pre-trained lightweight neural network model to obtain feature data of the signal data to be recognized; the feature data is then input into a pre-trained integrated classifier to obtain a classification recognition result of the signal data to be recognized, wherein the lightweight neural network model comprises an input layer, one or more depth-separable convolution sub-networks, and a global pooling layer. Compared with the prior art: first, the lightweight neural network model used for feature extraction is built from depth-separable convolution sub-networks, which reduces the parameter count of the feature-extraction model, lowers the computational load, and improves the efficiency of signal identification; second, the extracted feature data are classified by the integrated classifier, which improves the accuracy of the classification result, so identification accuracy is guaranteed while identification efficiency is improved; third, because the lightweight neural network model has fewer parameters and lower computational complexity, it can run on lower-configuration devices, saving hardware cost.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty.
Fig. 1 shows a schematic flow diagram of a signal identification method in an exemplary embodiment of the present disclosure;
FIG. 2 illustrates a flow diagram of a method of deriving a pre-trained lightweight neural network model in an exemplary embodiment of the present disclosure;
FIG. 3 is a schematic diagram illustrating a lightweight neural network classification model constructed in an exemplary embodiment of the present disclosure;
FIG. 4 shows a schematic structural diagram of a lightweight neural network classification model comprising four deep separable sub-networks constructed in an exemplary embodiment of the present disclosure;
FIG. 5 illustrates a flow diagram of a method of deriving a pre-trained ensemble classifier in an exemplary embodiment of the present disclosure;
fig. 6 shows a schematic structural diagram of a heart rate signal recognition model in an exemplary embodiment of the present disclosure;
FIG. 7 illustrates a schematic flowchart of a method of identifying currently acquired heart rate signal data in an exemplary embodiment of the present disclosure;
fig. 8 shows a schematic structural diagram of a heart rate signal identification device in an exemplary embodiment of the present disclosure;
FIG. 9 shows a schematic diagram of a structure of a computer storage medium in an exemplary embodiment of the disclosure; and
fig. 10 shows a schematic structural diagram of an electronic device in an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and the like. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
The terms "a," "an," "the," and "said" are used in this specification to denote the presence of one or more elements/components/parts/etc.; the terms "comprising" and "having" are intended to be inclusive and mean that there may be additional elements/components/etc. other than the listed elements/components/etc.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities.
In related signal identification technology, taking deep-learning-based heart rate signal classification and identification as an example, a deep learning algorithm identifies heart rate with high accuracy, but because the network has too many parameters and the model is complex, the computational load of the algorithm is excessive and identification efficiency drops. Maintaining identification efficiency then requires upgrading the hardware configuration, which increases hardware cost, while reducing the number of model parameters degrades the model's identification accuracy.
In an embodiment of the present disclosure, a signal identification method is first provided, which overcomes, at least to some extent, the above-mentioned drawbacks in the related art.
Fig. 1 shows a schematic flow chart of a signal identification method in an exemplary embodiment of the present disclosure. Referring to fig. 1, the method includes:
step S110, inputting signal data to be recognized into a pre-trained lightweight neural network model to obtain characteristic data of the signal data to be recognized;
and step S120, inputting the characteristic data of the signal data to be recognized into a pre-trained integrated classifier to obtain a classification recognition result of the signal data to be recognized.
In the technical solution provided in the embodiment shown in fig. 1, signal data to be recognized is first input into a pre-trained lightweight neural network model to obtain feature data of the signal data to be recognized; the feature data is then input into a pre-trained integrated classifier to obtain a classification recognition result of the signal data to be recognized, wherein the lightweight neural network model comprises an input layer, one or more depth-separable convolution sub-networks, and a global pooling layer. Compared with the prior art: first, constructing the feature-extraction model from depth-separable convolution sub-networks reduces its parameter count, lowers the computational load, and improves the efficiency of signal identification; second, classifying the extracted feature data with the integrated classifier improves the accuracy of the classification result, so identification accuracy is guaranteed while identification efficiency is improved; third, the reduced parameters and computational complexity of the lightweight neural network model allow it to run on lower-configuration devices, saving hardware cost.
The following detailed description of the various steps in the example shown in fig. 1:
in step S110, the signal data to be recognized is input into a lightweight neural network model trained in advance, and feature data of the signal data to be recognized is obtained.
Wherein the signal data to be identified comprises one-dimensional sequence data, for example one-dimensional heart rate signal sequence data. The heart rate signal data can be currently acquired heart rate signal data, and specifically, the heart rate signal data acquired by the electronic equipment or the terminal in real time can be classified and identified.
In an exemplary embodiment, the lightweight neural network model includes an input layer, one or more deep separable convolution sub-networks, a global pooling layer.
Specifically, the lightweight neural network model may include four depth-separable convolution sub-networks, and the number of convolution kernels of each depth-separable convolution sub-network is different. Wherein each depth-separable convolution sub-network may include a depth-separable convolution layer, a sub-network pooling layer, a batch normalization layer.
In an exemplary embodiment, the lightweight neural network classification model may be trained to obtain the above-described pre-trained lightweight neural network model. Illustratively, fig. 2 shows a flow diagram of a method for obtaining a pre-trained lightweight neural network model in an exemplary embodiment of the disclosure. Referring to fig. 2, the method may include steps S210 to S230.
In step S210, sample signal data and a label corresponding to the sample signal data are acquired.
The sample signal data may include the obtained signal data, and the label corresponding to the sample signal data may include the actual category corresponding to the obtained signal data.
In an exemplary embodiment, sample signal data and a label corresponding to the sample signal data can be obtained in an open source dataset.
Taking heart rate signal data as an example, the sample heart rate signal data and the corresponding labels can be acquired from the public heart rate identification dataset of the Physionet Challenge 2017. Specifically, the Physionet dataset already contains training samples and labels for heart rate signal data: it contains 8528 ECG (electrocardiogram) signals, of which 7528 serve as the training set and 1000 as the test set, and the heart rate signal data carry 4 labels, namely normal, atrial fibrillation, other abnormal rhythms, and noise.
In another exemplary embodiment, when the signal data to be identified has no corresponding open source data set, manual labeling may be performed. Specifically, a certain amount of signal data may be randomly selected from the obtained signal data as sample signal data, and then the category corresponding to each sample signal data is labeled to generate the sample signal data and the label corresponding to the sample signal data. Wherein the randomly selected number can be customized.
In general, the greater the number of sample signal data, the more accurate the training result of the model, but the longer the training time; the user can therefore determine an appropriate amount of sample signal data as needed, for example 1000.
Taking the heart rate signal data as an example, a certain amount of heart rate signal data can be randomly selected from the existing heart rate signal data to serve as sample heart rate signal data, and the actual category corresponding to each sample heart rate signal data is manually marked to generate the sample heart rate signal data and the label corresponding to the sample heart rate signal data. Wherein the existing heart rate signal data may be a one-dimensional sequence of heart rate data over a period of time that has been acquired according to a certain acquisition frequency.
Next, in step S220, supervised learning training is performed on the constructed lightweight neural network classification model using the sample signal data and the labels to obtain a pre-trained lightweight neural network classification model.
In an exemplary embodiment, the constructed lightweight neural network classification model may include an input layer, one or more depth-separable convolution sub-networks, a global pooling layer, a fully connected layer, a classification layer, and an output layer. The global pooling layer may be a global average pooling layer or a global maximum pooling layer. For example, fig. 3 shows a schematic structure of a lightweight neural network classification model including an input layer 31, a depth-separable convolution sub-network 32, a global pooling layer 33, a fully connected layer 34, a classification layer 35, and an output layer 36.
Specifically, the deep separable convolution sub-networks in the lightweight neural network classification model include a deep separable convolution layer, a sub-network pooling layer, and a batch normalization layer. The sub-network pooling layer may be a maximum value pooling layer or an average value pooling layer, which is not particularly limited in this exemplary embodiment.
Further, the lightweight neural network model can include four depth-separable convolution sub-networks, wherein the number of convolution kernels of each depth-separable convolution sub-network is different.
For example, fig. 4 shows a schematic structural diagram of a lightweight neural network classification model including four deep separable sub-networks constructed in an exemplary embodiment of the present disclosure.
In fig. 4, a first depth-separable convolution sub-network 41, a second depth-separable convolution sub-network 42, a third depth-separable convolution sub-network 43, and a fourth depth-separable convolution sub-network 44 are included. The sub-network pooling layer in each depth-separable convolution sub-network is a maximum pooling layer, MaxPooling1D(3), and the global pooling layer in fig. 4 is the global average pooling layer GlobalAveragePooling1D. The number in parentheses after each depth-separable convolution layer SeparableConv1D is its number of convolution kernels; e.g., the number 32 in SeparableConv1D(32) indicates that the layer has 32 convolution kernels. The number in parentheses after the maximum pooling layer MaxPooling1D is the size of the max-pooling window; e.g., the number 3 in MaxPooling1D(3) indicates a window of size 3. The number in parentheses after the fully connected layer Dense is its number of nodes; e.g., Dense(128) has 128 nodes and maps the output value of GlobalAveragePooling1D to a vector of size 128.
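The layer arrangement above can be sanity-checked with a short shape trace. This is a sketch under assumptions the patent does not state ('same' convolution padding and a pooling window of 3 at every stage), with the 18000-sample input length used in the preprocessing step:

```python
# Rough shape trace (not from the patent text) through the Fig. 4 model,
# assuming 'same' padding in every SeparableConv1D layer and an input
# length of 18000 samples on a single channel.

length, channels = 18000, 1
for filters in (32, 64, 128, 256):      # the four depth-separable sub-networks
    # SeparableConv1D with 'same' padding keeps the sequence length;
    # MaxPooling1D(3) then divides it by the window size (floor division).
    length //= 3
    channels = filters

# GlobalAveragePooling1D collapses the time axis to one value per channel,
# so the feature vector fed to Dense(128) has `channels` entries.
feature_dim = channels
print(length, feature_dim)
```

Under these assumptions the last sub-network outputs 222 time steps of 256 channels, which global average pooling reduces to a 256-dimensional feature vector.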
After the lightweight neural network classification model is constructed, the constructed lightweight neural network classification model can be trained by using the obtained sample signal data and the labels corresponding to the sample signal data.
Taking the identification and classification of heart rate signal data as an example, supervised learning training can be performed on the constructed lightweight neural network classification model using the training samples and labels of heart rate signal data in the acquired open dataset of the Physionet Challenge 2017.
As mentioned previously, the Physionet dataset contains 8528 ECG signals, 7528 as the training set and 1000 as the test set, and there are 4 label categories of heart rate signal data: normal, atrial fibrillation, other abnormal rhythms, and noise. The classification layer softmax in the lightweight neural network classification model therefore has 4 nodes. During training, the softmax layer maps the feature data output by the fully connected layer Dense(64) to the probabilities that the input heart rate data belong to each of the 4 categories, and the output layer then selects the category with the highest probability as the predicted category of the input heart rate data.
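The softmax-then-argmax behaviour just described can be sketched in a few lines; the logit values below are invented stand-ins for the Dense(64)-derived output, not values from the patent:

```python
import numpy as np

# The 4 label categories of the Physionet heart-rate data.
CLASSES = ["normal", "atrial fibrillation", "other abnormal rhythm", "noise"]

def softmax(logits):
    z = logits - np.max(logits)   # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([2.0, 0.5, 0.1, -1.0])   # illustrative stand-in logits
probs = softmax(logits)                     # probabilities over the 4 classes
predicted = CLASSES[int(np.argmax(probs))]  # output layer picks the max
print(predicted)
```

With these stand-in logits the first class dominates, so the predicted category is "normal".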
For example, the heart rate signal data in the Physionet dataset may be pre-processed before training the lightweight neural network classification model. Specifically, the length of each heart rate record in the Physionet dataset lies in the interval [2000, 18000], and the purpose of preprocessing is to bring every record to the same data length.
For example, the lengths of all the heart rate data in the Physionet dataset can be unified to 18000 by appending zeros to the end of any record shorter than 18000. Of course, the heart rate signal data may instead be preprocessed to another common length, which is not limited in this exemplary embodiment.
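A minimal sketch of this zero-padding step follows; the record lengths in the example batch are invented for illustration (real Physionet records range from 2000 to 18000 samples):

```python
import numpy as np

TARGET_LEN = 18000  # common length chosen in the text above

def pad_to_length(signal, target_len=TARGET_LEN):
    """Append trailing zeros so the record reaches target_len samples."""
    signal = np.asarray(signal, dtype=np.float32)
    if len(signal) >= target_len:
        return signal[:target_len]
    pad = np.zeros(target_len - len(signal), dtype=np.float32)
    return np.concatenate([signal, pad])

# Three records of different (made-up) lengths, padded to one batch.
batch = [np.random.randn(n) for n in (2000, 9000, 18000)]
padded = np.stack([pad_to_length(s) for s in batch])
print(padded.shape)
```

After padding, every record has shape (18000,) and the batch stacks cleanly for training.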
After the data preprocessing is completed, the 7528 preprocessed sample heart rate records in the training set can be input into the lightweight neural network classification model shown in fig. 4 for supervised learning training. Specifically, the loss of each training iteration is calculated from the cross entropy loss function, and the weights of the network are updated through the back-propagation algorithm until a preset number of iterations is reached, at which point training ends and the pre-trained lightweight neural network classification model is obtained.
Wherein, the cross entropy loss function is shown as formula (1):

C = -(1/n) * Σ_x [ a·ln(y) + (1 - a)·ln(1 - y) ]    (1)
In formula (1), y represents the predicted output of the lightweight neural network classification model, namely the predicted class of the input sample heart rate data; a is the actual (label) value for the input sample heart rate data; and n is the number of training samples.
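Formula (1) can be written out directly in numpy. This is an illustrative implementation, not the patent's code; the small epsilon clip is an added numerical-safety assumption the patent does not mention:

```python
import numpy as np

def cross_entropy(y_pred, a_true, eps=1e-12):
    """Mean cross entropy between predictions y and labels a, per formula (1)."""
    y = np.clip(np.asarray(y_pred, dtype=np.float64), eps, 1 - eps)
    a = np.asarray(a_true, dtype=np.float64)
    return float(-np.mean(a * np.log(y) + (1 - a) * np.log(1 - y)))

# A confident correct prediction yields a small loss;
# a confident wrong prediction yields a large one.
good = cross_entropy([0.99, 0.01], [1, 0])
bad = cross_entropy([0.01, 0.99], [1, 0])
print(good, bad)
```

Minimizing this quantity by back-propagation is what drives the weight updates described above.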
The depth-separable convolution layer in fig. 4 decomposes the conventional convolution into a depthwise convolution and a pointwise convolution: the depthwise convolution is a channel-wise convolution operation in which each convolution kernel corresponds to one input channel, and the pointwise convolution then combines the input channels using a 1×1 convolution kernel.
Each sample heart rate record in the above Physionet dataset is one-dimensional sequence data, so the corresponding number of channels is 1. Therefore, in the depth-separable convolution layer SeparableConv1D(32) of the first depth-separable convolution sub-network 41, the depthwise convolution operation is first performed on the single channel and the pointwise convolution follows; since the number of input channels is 1, the feature data before and after the pointwise convolution are completely the same.
In the depth-separable convolution layers SeparableConv1D(32), SeparableConv1D(64), SeparableConv1D(128), and SeparableConv1D(256), the channel-wise depthwise convolution is performed first, each convolution kernel corresponding to one input channel, and a 1×1 convolution kernel is then used for the pointwise convolution that combines the feature values of the input channels. The number of input channels of SeparableConv1D(64) is the number of output channels of the maximum pooling layer of the first depth-separable convolution sub-network 41; by analogy, the number of input channels of each subsequent depth-separable convolution layer is the number of output channels of the maximum pooling layer of the preceding sub-network.
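The depthwise-then-pointwise decomposition just described can be written out explicitly. The shapes and kernel size below are chosen for illustration and are not taken from the patent:

```python
import numpy as np

def depthwise_separable_conv1d(x, depth_kernels, point_kernels):
    """x: (length, c_in); depth_kernels: (k, c_in); point_kernels: (c_in, c_out)."""
    length, c_in = x.shape
    k = depth_kernels.shape[0]
    out_len = length - k + 1            # 'valid' padding for simplicity
    # Depthwise step: each kernel filters exactly one input channel.
    depth_out = np.empty((out_len, c_in))
    for c in range(c_in):
        for t in range(out_len):
            depth_out[t, c] = x[t:t + k, c] @ depth_kernels[:, c]
    # Pointwise step: a 1x1 convolution mixes the channels at every time step.
    return depth_out @ point_kernels

rng = np.random.default_rng(0)
x = rng.normal(size=(20, 4))    # 20 time steps, 4 input channels
dk = rng.normal(size=(3, 4))    # one length-3 kernel per input channel
pk = rng.normal(size=(4, 8))    # pointwise mixing of 4 channels into 8
y = depthwise_separable_conv1d(x, dk, pk)
print(y.shape)
```

The depthwise loop never mixes channels, and only the final matrix product does, which is where the parameter savings over a conventional convolution come from.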
By changing the way the convolution is computed, the depth separable convolution reduces the amount of computation and the number of parameters of the network model without degrading the effect of feature extraction; a lightweight neural network can therefore be built from depth separable convolution layers. Table 1 compares the network parameters of a signal data classification model constructed with conventional convolution against one constructed with depth separable convolution.
As the comparison in table 1 shows, the total parameter count of the conventional convolution is 2.96 times that of the depth separable convolution. Because the present exemplary embodiment uses depth separable convolution layers, the parameter count of the constructed lightweight neural network classification model is greatly reduced, which in turn lowers the computational complexity of the model and shrinks its memory footprint on the device.
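The parameter savings can be checked against the standard counting formulas for 1-D convolution layers. The kernel size k = 3 and channel counts below are hypothetical; only the totals in the last line come from Table 1.

```python
# Parameter counts (with bias) for a 1-D convolutional layer.
# The kernel size k=3 and channel counts are hypothetical examples.

def conv1d_params(k, c_in, c_out):
    # conventional convolution: one k x c_in kernel per output channel, plus bias
    return k * c_in * c_out + c_out

def separable_conv1d_params(k, c_in, c_out):
    # depthwise: k weights + 1 bias per input channel;
    # pointwise: a 1x1 convolution mixing c_in channels into c_out
    depthwise = k * c_in + c_in
    pointwise = c_in * c_out + c_out
    return depthwise + pointwise

print(conv1d_params(3, 32, 64))            # 6208
print(separable_conv1d_params(3, 32, 64))  # 2240

# Ratio of the totals reported in Table 1:
print(round(1050820 / 354873, 2))  # 2.96
```

The per-layer savings grow with the number of channels, which is why the full-model ratio in Table 1 reaches 2.96.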
TABLE 1

                                 Depth separable convolution    Conventional convolution
Total params                     354873                         1050820
Trainable params                 352857                         1048804
Non-trainable params             2016                           2016
After the lightweight neural network classification model is obtained through training, in step S230, the input layer, one or more deep separable convolution sub-networks, and the global pooling layer in the pre-trained lightweight neural network classification model are extracted to obtain the pre-trained lightweight neural network model.
Taking fig. 4 as an example, after the training of the lightweight neural network classification model shown in fig. 4 is completed, the input layer, the first depth-separable convolution sub-network 41, the second depth-separable convolution sub-network 42, the third depth-separable convolution sub-network 43, the fourth depth-separable convolution sub-network 44, and the global average pooling layer GlobalAveragePooling1D in fig. 4 may be extracted, so as to obtain a lightweight neural network model trained in advance. The pre-trained lightweight neural network model may be used to extract feature data of the input signal data.
The function of the pre-trained lightweight neural network model is to extract feature data from the input signal data. Therefore, after the pre-trained lightweight neural network classification model is obtained, the output data of any one of the fully connected layers can also be used as the extracted feature data.
That is, after the training of the network model in fig. 4 is completed, the input layer, the first depth-separable convolution sub-network 41, the second depth-separable convolution sub-network 42, the third depth-separable convolution sub-network 43, the fourth depth-separable convolution sub-network 44, the global average pooling layer GlobalAveragePooling1D, and the fully connected layer Dense (128) in fig. 4 may be extracted to obtain a pre-trained lightweight neural network model; alternatively, the classification layer softmax and the output layer in fig. 4 may simply be deleted to the same end. The present exemplary embodiment is not particularly limited in this respect.
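The extraction step amounts to keeping a prefix of the trained layer stack and dropping the classification head. A toy sketch with plain functions standing in for layers: the layer names mirror fig. 4, but the computations are placeholders, not real convolutions.

```python
# Toy sketch: a trained model as an ordered list of (name, layer_fn) pairs.
# Truncating after the global pooling layer yields the feature extractor.
model = [
    ("input", lambda x: x),
    ("sepconv_blocks", lambda x: [v * 2 for v in x]),  # placeholder for sub-networks 41-44
    ("global_avg_pool", lambda x: sum(x) / len(x)),
    ("dense_128", lambda x: x + 1.0),                  # placeholder fully connected layer
    ("softmax_output", lambda x: x),                   # placeholder classification head
]

def truncate_after(layers, layer_name):
    """Keep the prefix of the layer stack up to and including layer_name."""
    idx = [name for name, _ in layers].index(layer_name)
    return layers[: idx + 1]

def run(layers, x):
    for _, layer in layers:
        x = layer(x)
    return x

feature_extractor = truncate_after(model, "global_avg_pool")
features = run(feature_extractor, [1.0, 2.0, 3.0])
print(features)  # 4.0 - mean of the doubled inputs
```

In a real deep-learning framework the same idea is expressed by building a new model that reuses the trained layers up to the pooling layer.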
Through the above steps S210 to S230, a pre-trained lightweight neural network model can be obtained. Inputting the currently acquired signal data to be identified into this model yields the feature data of that signal data. When the signal data to be identified are collected, the acquisition frequency and the length of the resulting sequence should match the acquisition frequency of the training data set and the length of the preprocessed training data, so as to preserve the feature extraction performance of the model.
For example, feature extraction may be performed on currently acquired heart rate signal data using the pre-trained neural network model obtained from fig. 4. Since the acquisition frequency of the heart rate signal data in the above Physionet data set is 300 Hz and the length of the preprocessed heart rate signal training data is 18000, the current heart rate signal data should be acquired at 300 Hz as a one-dimensional sequence of length 18000.
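A minimal pre-processing sketch that enforces this frequency/length contract (300 Hz, 18000 samples, i.e. a 60-second window). The zero-padding/truncation policy is an assumption for illustration; the patent does not specify how off-length sequences are handled.

```python
SAMPLE_RATE_HZ = 300
WINDOW_SECONDS = 60
TARGET_LEN = SAMPLE_RATE_HZ * WINDOW_SECONDS  # 18000, matching the training data

def to_fixed_length(seq, target_len=TARGET_LEN, pad_value=0.0):
    """Truncate or zero-pad a 1-D signal so it matches the model's input length."""
    if len(seq) >= target_len:
        return seq[:target_len]
    return seq + [pad_value] * (target_len - len(seq))

short = to_fixed_length([0.1] * 17000)  # padded up to 18000
long_ = to_fixed_length([0.1] * 20000)  # truncated down to 18000
print(len(short), len(long_))
```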
After the feature data of the signal data to be recognized is obtained, in step S120, the feature data of the signal data to be recognized is input into the pre-trained integrated classifier, so as to obtain the classification recognition result of the signal data to be recognized.
In an exemplary embodiment, the feature data of the signal to be recognized may include the feature data output from the global pooling layer described above, for example, the feature data of the heart rate signal data to be recognized output from the global average pooling layer GlobalAveragePooling1D of the trained model in fig. 4.
Before the feature data of the signal data to be recognized can be input into the pre-trained integrated classifier, the integrated classifier is first trained on sample feature data to obtain the pre-trained integrated classifier. FIG. 5 is a flow chart illustrating a method for obtaining a pre-trained integrated classifier in an exemplary embodiment of the present disclosure. Referring to fig. 5, the method may include steps S510 to S520.
In step S510, sample feature data output by the global pooling layer of the pre-trained lightweight neural network model is obtained.
In an exemplary embodiment, the pre-trained lightweight neural network model in step S510 may include the pre-trained lightweight neural network model obtained in step S230 described above. For example, after the training of the lightweight neural network classification model shown in fig. 4 is completed, the input layer, the first depth-separable convolution sub-network 41, the second depth-separable convolution sub-network 42, the third depth-separable convolution sub-network 43, the fourth depth-separable convolution sub-network 44, and the global average pooling layer GlobalAveragePooling1D in fig. 4 are extracted to obtain the pre-trained lightweight neural network model.
For example, after the lightweight neural network classification model shown in fig. 4 is trained using the sample heart rate signal data to obtain a lightweight neural network model trained in advance, the feature data of the sample heart rate signal data output from the global average pooling layer of the lightweight neural network model may be obtained.
Next, in step S520, supervised learning training is performed on the ensemble classifier using the sample feature data and the labels to obtain a pre-trained ensemble classifier.
In an exemplary embodiment, the integrated classifier includes an eXtreme Gradient Boosting (XGBoost) classifier that uses the classification and regression tree model CART as its tree model. Of course, other integrated classifiers may be used, such as a random forest or a GBDT (Gradient Boosting Decision Tree), which is not limited in this exemplary embodiment.
Taking the integrated classifier XGBoost as an example, XGBoost combines a plurality of tree models into a strong classifier, and the tree model used is the CART (Classification And Regression Tree) model.
The XGBoost classifier may be expressed as the following formula (2):

$$\hat{y}_i = \sum_{k=1}^{K} f_k(x_i) \qquad (2)$$

In formula (2), $K$ is the total number of trees, $f_k$ represents the k-th tree, and $\hat{y}_i$ represents the prediction result for sample $x_i$.
In an exemplary embodiment, the number of trees K may be determined by a random grid algorithm from hyperparameters such as the amount of training data, the depth of the trees, and the number of leaf nodes.
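A random grid search can be sketched as drawing candidate configurations from hyperparameter grids instead of enumerating the full Cartesian product. The grid values below are hypothetical, not taken from the patent.

```python
import random

# Hypothetical hyperparameter grids; a random grid search samples a few
# combinations rather than trying every one.
GRID = {
    "n_trees": [50, 100, 200, 400],
    "max_depth": [3, 4, 6, 8],
    "max_leaves": [15, 31, 63],
}

def sample_configs(grid, n, seed=0):
    rng = random.Random(seed)
    return [{k: rng.choice(v) for k, v in grid.items()} for _ in range(n)]

configs = sample_configs(GRID, n=5)
for cfg in configs:
    print(cfg)
```

Each sampled configuration would be evaluated (e.g. by cross-validation accuracy), and the best one, including its number of trees K, retained.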
The loss function during XGBoost classifier training may be expressed as the following formula (3):

$$L = \sum_{i} l\bigl(y_i, \hat{y}_i\bigr) + \sum_{k=1}^{K} \Omega(f_k) \qquad (3)$$

In formula (3), $y_i$ is the label of training sample $x_i$, $l(y_i, \hat{y}_i)$ is the training error for sample $x_i$, which may be obtained from a squared loss function, and $\Omega(f_k)$ is the regularization term of the k-th tree.
The idea of XGBoost is to add tree models continually: each time a tree is added, a new function is learned to fit the residual of the previous prediction. Specifically, with the CART tree as the base classifier, the next tree (the k-th tree) is built with reference to the error produced by the previous prediction of the model (the ensemble of the first k-1 trees). Thus, the loss function decreases each time a tree is added.
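The residual-fitting idea can be demonstrated with a minimal boosting loop over depth-1 regression stumps under squared loss. This is a didactic sketch, not XGBoost itself: there is no regularization term, the learning rate is 1, and the toy data set is invented.

```python
def fit_stump(xs, residuals):
    """Pick the threshold and leaf values minimizing squared error on residuals."""
    best = None
    for t in xs:
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: lm if x <= t else rm

xs = [1, 2, 3, 4, 5, 6]
ys = [1.0, 1.2, 0.9, 3.1, 3.0, 3.2]
trees, losses = [], []
for _ in range(3):
    pred = [sum(tree(x) for tree in trees) for x in xs]
    residuals = [y - p for y, p in zip(ys, pred)]
    trees.append(fit_stump(xs, residuals))  # the k-th tree fits the residual
    pred = [sum(tree(x) for tree in trees) for x in xs]
    losses.append(sum((y - p) ** 2 for y, p in zip(ys, pred)))
print(losses)  # squared loss shrinks (or stays flat) as trees are added
```

Because each new stump fits the current residuals, the training loss can never increase from one round to the next, mirroring the statement above.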
For example, when the XGBoost classifier is trained, the booster may be configured as a tree-based model and the number of categories per tree model set to 4. The feature data of the sample heart rate signal data output by the global average pooling layer serves as the training data of the XGBoost classifier, and the label of each feature data item corresponds to the label of the input heart rate signal data, i.e., the label of each heart rate signal record in the Physionet data set.
After the feature data of each heart rate signal record in the Physionet data set and the corresponding labels are obtained, the feature data are input into the XGBoost classifier, and supervised learning training is carried out for a preset number of iterations. During training, the model weights are adjusted continually with the goal of reducing the error between the predicted value and the true value given by the label, until the preset number of iterations is reached and training ends. The preset number of iterations can be set according to the actual situation, for example, 30, 50, 300, or 500.
Furthermore, after the training of the XGBoost classifier is completed, in order to ensure the accuracy of the prediction results, a test set can be used to perform model validation on the trained lightweight neural network model and the XGBoost classifier.
For example, model validation can be performed using 1000 test samples from the above Physionet data set. Specifically, each heart rate signal test sample is input into the trained lightweight neural network model to obtain the feature data corresponding to that sample; the feature data is then input into the XGBoost classifier to obtain the predicted category of the test heart rate signal data. Comparing the predicted category of each test sample with the true category in the corresponding label yields the heart rate signal recognition accuracy of the trained model on the test set, which verifies the recognition performance of the model.
If the recognition accuracy reaches a preset threshold, the model is considered effective and the training result is taken as the final model. If the recognition accuracy is below the preset threshold, the model is retrained; specifically, the number of iterations is adjusted and training is repeated until the recognition accuracy on the test set reaches the preset threshold, at which point the final model is determined.
Through the steps S510 to S520, a pre-trained ensemble classifier can be obtained, and then, the classification result is determined according to the ensemble classifier. Since the integrated classifier can integrate the results of multiple classifiers, it can improve the accuracy of classification recognition.
After the integrated classifier is trained, when the category of signal data to be recognized needs to be predicted, the feature data of the signal data is input into the XGBoost classifier. The feature data falls to a corresponding leaf node in each tree model, and each leaf node carries a score for every category; the scores of all trees are then added up to obtain the prediction score of the input data for each category. Finally, the category with the largest summed prediction score is taken as the category of the signal data to be identified.
In the exemplary embodiment, the lightweight neural network model is constructed by the deep separable convolutional layer to extract the feature data of the signal data to be recognized, so that the network parameters of the feature extraction model can be reduced, the computational complexity can be reduced, the configuration requirement on hardware equipment can be reduced, and the model can be transplanted to equipment with lower hardware configuration, for example, wearable electronic equipment with a processor, such as a smart watch. Meanwhile, the extracted feature data are classified through the integrated classifier, and the classification accuracy can be guaranteed.
Therefore, the signal identification method provided by the exemplary embodiment can reduce the number of network parameters, save the occupied space of the model for the memory resource, reduce the computational complexity, improve the computational efficiency, and ensure the accuracy of signal identification.
Further, taking heart rate signal data identification as an example, a final lightweight heart rate signal identification model can be determined according to a lightweight neural network model obtained through training and the XGBoost. Specifically, as shown in fig. 6, the finally determined lightweight heart rate signal recognition model may include a heart rate signal input layer 61, a depth-separable convolution sub-network 62 including depth-separable convolution layers, a global average pooling layer 63, an XGBoost classifier 64, and an output layer 65. Among other things, depth-separable convolution sub-networks 62 may include first depth-separable convolution sub-network 41, second depth-separable convolution sub-network 42, third depth-separable convolution sub-network 43, and fourth depth-separable convolution sub-network 44 of fig. 4, described above.
The heart rate signal identification model shown in fig. 6 may be used to directly classify and identify the input heart rate data, and determine the final heart rate signal category according to the score output by the XGBoost.
Fig. 7 shows a flow chart of a method of identifying currently acquired heart rate signal data in an exemplary embodiment of the disclosure. The method may include steps S701 to S711.
Specifically, the training and evaluation of the heart rate signal recognition model can be completed in the server 71 through steps S701 to S706, for example, by training the classification model shown in fig. 4 and the XGBoost integrated classifier on the Physionet data set and verifying model validity according to the methods of steps S210 to S230 and steps S510 to S520.
After the validity verification passes, the heart rate signal recognition model with the structure shown in fig. 6 is obtained, and in step S707, the weights of the trained heart rate signal recognition model may be exported from the server 71. Then, in step S708, the exported heart rate signal recognition model weights are imported into the hardware device 72 to implement model migration.
Further, after the model transplantation is completed, in steps S709 to S711, the hardware device 72 may classify and identify the currently acquired heart rate signal data according to the imported model weight.
Among other things, the hardware devices 72 may include wearable electronic devices, smart phones, portable computers, and the like. The hardware device 72 may collect and identify heart rate signal data by itself, or may identify heart rate signal data transmitted by other devices.
Those skilled in the art will appreciate that all or part of the steps implementing the above embodiments are implemented as computer programs executed by a CPU. The computer program, when executed by the CPU, performs the functions defined by the method provided by the present invention. The program may be stored in a computer readable storage medium, which may be a read-only memory, a magnetic or optical disk, or the like.
Furthermore, it should be noted that the above-mentioned figures are only schematic illustrations of the processes involved in the method according to exemplary embodiments of the invention, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Fig. 8 shows a schematic structural diagram of a heart rate signal identification device in an exemplary embodiment of the disclosure. Referring to fig. 8, the apparatus 800 may include a feature data extraction module 810 and a category identification module 820. Wherein:
the above-mentioned feature data extraction module 810 is configured to input the signal data to be recognized into a pre-trained lightweight neural network model, so as to obtain feature data of the signal data to be recognized.
The class identification module 820 is configured to input the feature data of the signal data to be identified into a pre-trained integrated classifier, so as to obtain a classification identification result of the signal data to be identified.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiment, the lightweight neural network model in the feature data extraction module 810 described above may be determined by:
acquiring sample signal data and a label corresponding to the sample signal data;
carrying out supervised learning training on the constructed lightweight neural network classification model by using sample signal data and labels to obtain a pre-trained lightweight neural network classification model, wherein the constructed lightweight neural network classification model comprises an input layer, one or more deep separable convolution sub-networks, a global pooling layer, a full-connection layer, a classification layer and an output layer;
and extracting an input layer, one or more deep separable convolution sub-networks and a global pooling layer in the pre-trained lightweight neural network classification model to obtain the pre-trained lightweight neural network model.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiments, the depth-separable convolution sub-network described above includes a depth-separable convolution layer, a sub-network pooling layer, and a batch normalization layer.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiments, the lightweight neural network model includes four depth-separable convolution sub-networks, each of which has a different number of convolution kernels.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiment, the integrated classifier in the category identification module 820 may be determined by:
obtaining sample characteristic data output by a global pooling layer of the pre-trained lightweight neural network model;
and carrying out supervised learning training on the integrated classifier by using the sample characteristic data and the label to obtain a pre-trained integrated classifier.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiment, the integrated classifier in the category identification module 820 includes an extreme gradient boosting XGBoost classifier using the classification regression tree model CART as a tree model.
The specific details of each unit in the heart rate signal identification device have been described in detail in the corresponding heart rate signal identification method, and therefore are not described herein again.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Moreover, although the steps of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that the steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a mobile terminal, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, there is also provided a computer storage medium capable of implementing the above method. On which a program product capable of implementing the above-described method of the present specification is stored. In some possible embodiments, various aspects of the disclosure may also be implemented in the form of a program product comprising program code for causing a terminal device to perform the steps according to various exemplary embodiments of the disclosure described in the "exemplary methods" section above of this specification, when the program product is run on the terminal device.
Referring to fig. 9, a program product 900 for implementing the above method according to an embodiment of the present disclosure is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
In addition, in an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided.
As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method or program product. Accordingly, various aspects of the present disclosure may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.) or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," module "or" system.
An electronic device 1000 according to this embodiment of the disclosure is described below with reference to fig. 10. The electronic device 1000 shown in fig. 10 is only an example and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 10, the electronic device 1000 is embodied in the form of a general purpose computing device. The components of the electronic device 1000 may include, but are not limited to: the at least one processing unit 1010, the at least one memory unit 1020, a bus 1030 connecting different system components (including the memory unit 1020 and the processing unit 1010), and a display unit 1040.
When the processing unit 1010 processes signal data to be recognized, the processing unit 1010 is enabled to execute the steps shown in fig. 1, so as to obtain a classification recognition result of the signal data according to the pre-trained lightweight neural network model and the model weight of the pre-trained ensemble classifier.
The display unit 1040 may display the classification recognition result of the signal data to be recognized.
The storage unit 1020 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM)10201 and/or a cache memory unit 10202, and may further include a read-only memory unit (ROM) 10203.
The memory unit 1020 may also include a program/utility 10204 having a set (at least one) of program modules 10205, such program modules 10205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 1030 may be any one or more of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, and a local bus using any of a variety of bus architectures.
The electronic device 1000 may also communicate with one or more external devices 1100 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 1000, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 1000 to communicate with one or more other computing devices. Such communication may occur through input/output (I/O) interfaces 1050. Also, the electronic device 1000 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the internet) via the network adapter 1060. As shown, the network adapter 1060 communicates with the other modules of the electronic device 1000 over the bus 1030. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 1000, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
Furthermore, the above-described figures are merely schematic illustrations of processes included in methods according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (10)

1. A signal identification method, comprising:
inputting signal data to be recognized into a pre-trained lightweight neural network model to obtain characteristic data of the signal data to be recognized;
inputting the characteristic data of the signal data to be recognized into a pre-trained integrated classifier to obtain a classification recognition result of the signal data to be recognized;
wherein the lightweight neural network model comprises an input layer, one or more deep separable convolution sub-networks, a global pooling layer.
2. The signal identification method of claim 1, wherein the depth-separable convolution sub-networks comprise a depth-separable convolution layer, a sub-network pooling layer, and a batch normalization layer.
3. The signal identification method of claim 1, wherein the lightweight neural network model comprises four depth-separable convolution sub-networks, each having a different number of convolution kernels.
4. The signal identification method according to any one of claims 1 to 3, wherein the lightweight neural network model is trained by:
acquiring sample signal data and labels corresponding to the sample signal data;
performing supervised learning training on a constructed lightweight neural network classification model by using the sample signal data and the labels to obtain a pre-trained lightweight neural network classification model, wherein the constructed lightweight neural network classification model comprises an input layer, one or more depthwise separable convolution sub-networks, a global pooling layer, a fully connected layer, a classification layer, and an output layer;
and extracting the input layer, the one or more depthwise separable convolution sub-networks, and the global pooling layer from the pre-trained lightweight neural network classification model to obtain the pre-trained lightweight neural network model.
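The two-stage procedure of claim 4 — train a full classification network end-to-end, then keep only the layers up to global pooling as a feature extractor — can be sketched as follows. The single-channel layer implementations and names here are illustrative stand-ins, not the patent's actual model:

```python
import numpy as np

def conv1d(x, w):
    """Valid 1-D convolution of a single-channel signal x with kernel w."""
    k = len(w)
    return np.array([np.dot(x[i:i + k], w) for i in range(len(x) - k + 1)])

def global_pool(h):
    """Global average pooling: collapses the time axis to a fixed-length vector."""
    return np.array([h.mean()])

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def full_classification_model(x, conv_w, fc_w):
    """Trained end-to-end with labels (claim 4, second step)."""
    return softmax(global_pool(conv1d(x, conv_w)) @ fc_w)

def feature_extractor(x, conv_w):
    """Layers kept after training: input -> convolution -> global pooling
    (claim 4, third step); its output feeds the ensemble classifier."""
    return global_pool(conv1d(x, conv_w))
```

Because global pooling produces a fixed-length vector regardless of signal length, the truncated model yields features the downstream classifier of claim 5 can consume directly.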
5. The signal identification method of claim 4, wherein the pre-trained ensemble classifier is trained by:
obtaining sample feature data output by the global pooling layer of the pre-trained lightweight neural network model;
and performing supervised learning training on an ensemble classifier by using the sample feature data and the labels to obtain the pre-trained ensemble classifier.
6. The signal identification method of claim 1, wherein the ensemble classifier comprises an extreme gradient boosting (XGBoost) classifier using a classification and regression tree (CART) model as its tree model.
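Claim 6's classifier is XGBoost with CART tree models. Real XGBoost adds regularized, second-order boosting, but the underlying idea — an additive ensemble of small regression trees, each fit to the residual of the ensemble so far — can be sketched with depth-1 CART stumps on a single feature. All names here are illustrative:

```python
import numpy as np

def fit_stump(x, residual):
    """Depth-1 CART: best threshold split minimizing squared error."""
    best = (np.inf, x[0], residual.mean(), residual.mean())
    for t in np.unique(x):
        left, right = residual[x <= t], residual[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        err = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if err < best[0]:
            best = (err, t, left.mean(), right.mean())
    return best[1:]                    # (threshold, left value, right value)

def boost(x, y, rounds=10, lr=0.5):
    """Additive boosting: each stump is fit to the current residual.
    Simplified — no regularization or second-order terms, unlike real XGBoost."""
    pred = np.zeros_like(y, dtype=float)
    stumps = []
    for _ in range(rounds):
        t, lv, rv = fit_stump(x, y - pred)
        stumps.append((t, lv, rv))
        pred += lr * np.where(x <= t, lv, rv)
    return stumps, pred
```

In practice one would use the xgboost library's `XGBClassifier` on the pooled feature vectors; this sketch only shows why CART trees serve as the base learners.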
7. The signal identification method according to claim 1, wherein the signal data to be identified includes currently acquired heart rate signal data.
8. An electronic device, comprising:
one or more processors;
a storage device for storing model weights of a pre-trained lightweight neural network model and a pre-trained ensemble classifier, wherein, when the one or more processors process signal data to be identified, the one or more processors are caused to execute the method of claim 1 to obtain a classification and identification result of the signal data according to the model weights of the pre-trained lightweight neural network model and the pre-trained ensemble classifier.
9. A signal identifying apparatus, comprising:
a feature data extraction module configured to input signal data to be identified into a pre-trained lightweight neural network model to obtain feature data of the signal data to be identified;
a class identification module configured to input the feature data of the signal data to be identified into a pre-trained ensemble classifier to obtain a classification and identification result of the signal data to be identified;
wherein the lightweight neural network model comprises an input layer, one or more depthwise separable convolution sub-networks, and a global pooling layer.
10. A computer-readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the signal identification method according to any one of claims 1 to 7.
CN202011150006.9A 2020-10-23 2020-10-23 Signal identification method, signal identification device, electronic device and readable storage medium Pending CN112244863A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011150006.9A CN112244863A (en) 2020-10-23 2020-10-23 Signal identification method, signal identification device, electronic device and readable storage medium


Publications (1)

Publication Number Publication Date
CN112244863A true CN112244863A (en) 2021-01-22

Family

ID=74262350

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011150006.9A Pending CN112244863A (en) 2020-10-23 2020-10-23 Signal identification method, signal identification device, electronic device and readable storage medium

Country Status (1)

Country Link
CN (1) CN112244863A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113362070A (en) * 2021-06-03 2021-09-07 中国工商银行股份有限公司 Method, apparatus, electronic device, and medium for identifying operating user
CN113712571A * 2021-06-18 2021-11-30 陕西师范大学 Abnormal electroencephalogram signal detection method based on Rényi phase transfer entropy and a lightweight convolutional neural network

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108898579A * 2018-05-30 2018-11-27 腾讯科技(深圳)有限公司 Image sharpness recognition method, apparatus, and storage medium
CN109770861A * 2019-03-29 2019-05-21 广州视源电子科技股份有限公司 Training and detection method, apparatus, device, and storage medium for an ECG rhythm model
CN109977904A * 2019-04-04 2019-07-05 成都信息工程大学 A lightweight human motion recognition method based on deep learning
CN110037684A * 2019-04-01 2019-07-23 上海数创医疗科技有限公司 Device for heart rhythm type identification based on an improved convolutional neural network
CN110163275A * 2019-05-16 2019-08-23 西安电子科技大学 SAR image target classification method based on deep convolutional neural networks
CN110263684A * 2019-06-06 2019-09-20 山东省计算中心(国家超级计算济南中心) Electrocardiogram classification method based on a lightweight neural network
CN110367967A * 2019-07-19 2019-10-25 南京邮电大学 A portable lightweight human brain state detection method based on data fusion
CN111261283A * 2020-01-21 2020-06-09 浙江理工大学 Deep neural network modeling method for electrocardiosignals based on pyramid-type convolution layers


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Yun (李云): "Research on an arrhythmia classification algorithm based on the XGBoost model" (基于XGBoost模型的心律失常分类算法研究), China Medical Devices (中国医疗设备), vol. 34, no. 7, pages 24-28 *


Similar Documents

Publication Publication Date Title
US10303978B1 (en) Systems and methods for intelligently curating machine learning training data and improving machine learning model performance
CN113378632B (en) Pseudo-label optimization-based unsupervised domain adaptive pedestrian re-identification method
CN110472675B (en) Image classification method, image classification device, storage medium and electronic equipment
CN107221320A (en) Train method, device, equipment and the computer-readable storage medium of acoustic feature extraction model
CN110147878B (en) Data processing method, device and equipment
CN112732871B (en) Multi-label classification method for acquiring client intention labels through robot induction
EP3620982B1 (en) Sample processing method and device
CN111368878B (en) Optimization method based on SSD target detection, computer equipment and medium
CN112244863A (en) Signal identification method, signal identification device, electronic device and readable storage medium
CN111950294A (en) Intention identification method and device based on multi-parameter K-means algorithm and electronic equipment
CN111553186A (en) Electromagnetic signal identification method based on depth long-time and short-time memory network
CN112560993A (en) Data screening method and device, electronic equipment and storage medium
CN114781611A (en) Natural language processing method, language model training method and related equipment
CN114358257A (en) Neural network pruning method and device, readable medium and electronic equipment
CN113870863A (en) Voiceprint recognition method and device, storage medium and electronic equipment
CN110705279A (en) Vocabulary selection method and device and computer readable storage medium
CN111403028B (en) Medical text classification method and device, storage medium and electronic equipment
CN113569018A (en) Question and answer pair mining method and device
CN111966798A (en) Intention identification method and device based on multi-round K-means algorithm and electronic equipment
CN116467461A (en) Data processing method, device, equipment and medium applied to power distribution network
CN113590774B (en) Event query method, device and storage medium
CN115587616A (en) Network model training method and device, storage medium and computer equipment
CN114118411A (en) Training method of image recognition network, image recognition method and device
CN114119972A (en) Model acquisition and object processing method and device, electronic equipment and storage medium
CN111950615A (en) Network fault feature selection method based on tree species optimization algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination