CN115526249A - Time series early classification method, terminal device and storage medium - Google Patents

Time series early classification method, terminal device and storage medium Download PDF

Info

Publication number
CN115526249A
Authority
CN
China
Prior art keywords
classification
convolution
probability
data
early
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211163048.5A
Other languages
Chinese (zh)
Inventor
侯毅
安玮
陈慧玲
盛卫东
马超
林再平
曾瑶源
李振
李骏
周石琳
黄源
乔木
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology filed Critical National University of Defense Technology
Priority to CN202211163048.5A
Publication of CN115526249A
Legal status: Pending

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a time series early classification method, a terminal device and a storage medium. A training set is constructed from time series data of human body actions; a neural network is trained with the training set; the training set is then input into the trained neural network to obtain the classification probabilities of all training data at all times, and a probability exit threshold is calculated from these classification probabilities. At inference, the observable data at time t are input into the trained neural network to obtain the classification probability Pt at time t; the maximum value of Pt is taken, and if it is greater than the probability exit threshold, the input of observable data stops and the classification result at time t is taken as the final classification result. The invention adapts to the continually arriving new data and extracts more discriminative features, improving the accuracy of early classification of time series data; it also adapts to sample content and difficulty, extracting more class-specific features to further improve early-classification accuracy.

Description

Time series early classification method, terminal device and storage medium
Technical Field
The invention relates to the technical field of time series data classification, and in particular to a time series early classification method, a terminal device and a storage medium.
Background
In recent years, with the development of intelligent wearable devices, time series data of human body actions can be acquired anywhere for personal health monitoring, smart home control and the like, and time series classification tasks have attracted wide attention. For some time-sensitive applications, however, such as detecting falls among the elderly, it is desirable to classify the time series as early as possible; moreover, early classification of human activities helps minimize the response time of the system and thus improves the user experience. Classifying time series data as quickly and as accurately as possible is therefore of great research interest.
In early classification, data arrive continuously over time, so the data length keeps changing and the features at different moments differ greatly, which makes it difficult for a classifier to handle time series of arbitrary length.
Traditional early time series classification methods can be divided into prefix-based methods, shapelet-based methods, and posterior-probability-based methods. However, these methods typically spend a significant amount of time training multiple classifiers for time series of different lengths, and require extensive expert experience to design manual features or set exit thresholds. Compared with such traditional methods, deep-learning-based methods can automatically extract more effective features in the big data era.
Current deep-learning-based early time series classification methods can be divided into one-stage and two-stage methods. A two-stage method usually trains a classification model on the training set in the first stage, then formulates an exit rule or sets a fixed exit threshold in the second stage, and outputs a classification result early once the classification probability meets the exit condition. A one-stage method generally builds a classification subnet and an exit subnet at the same time; the classification subnet is trained to produce the classification result, while the exit subnet indicates whether to exit at the current moment.
Since the input data for early classification are constantly changing, deep-learning-based methods typically adapt to data of ever-increasing length with recurrent neural networks. However, a recurrent neural network cannot classify longer time series well because of the forgetting defect of its recurrent structure, and its ability to extract local features is poor. Some methods add convolutional neural networks to extract local features; unfortunately, the parameters of a conventional convolution kernel are fixed, giving the same feature-matching template for every moment and every sample, so intra-class differences and inter-class differences are not fully considered.
Disclosure of Invention
The invention aims to overcome the above deficiencies of the prior art by providing a time series early classification method, a terminal device and a storage medium that fully consider intra-class and inter-class differences and improve the accuracy of classifying time series data of human body actions.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows: a time series early classification method comprises the following steps:
S1, constructing a training set by using time series data of human body actions;
S2, training a neural network by using the training set;
S3, inputting the training set into the trained neural network to obtain the classification probabilities of all training data at all times, and calculating a probability exit threshold from these classification probabilities;
S4, inputting the observable data at time t into the trained neural network to obtain the classification probability Pt at time t; taking the maximum value of Pt; if the maximum value is greater than the probability exit threshold, stopping the input of observable data and taking the classification result at time t as the final classification result; otherwise, increasing t by 1 and repeating step S4 until the exit condition is met.
After the classification probabilities are obtained, the probability exit threshold is calculated from them, and whether to exit is then determined by comparing the classification probability against the probability exit threshold. The threshold of the invention is not fixed but is calculated from the classification probabilities, so it adapts to the continuously changing input data of early classification. If the exit condition is not met, data continue to be input into the classifier (the trained neural network) over time until the exit condition is met; intra-class and inter-class differences are fully considered, and the accuracy of early time series classification is greatly improved.
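As an illustration of steps S3 and S4, the following is a minimal Python/PyTorch sketch of the exit loop. The names `model` (the trained network), `stream` (a non-empty iterator yielding progressively longer observable prefixes) and `threshold` (the probability exit threshold calibrated on the training set) are assumptions of this sketch, not identifiers from the patent.

    import torch

    def classify_early(model, stream, threshold):
        """Step S4: read ever-longer prefixes until the exit condition is met."""
        model.eval()
        with torch.no_grad():
            for t, prefix in enumerate(stream, start=1):
                probs = model(prefix)              # classification probability P_t at time t
                conf, label = probs.max(dim=-1)    # maximum value of P_t and its class
                if conf.item() > threshold:        # exit condition met
                    break                          # stop inputting observable data
        return label.item(), t                     # classification result and exit time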
In the present invention, the neural network includes:
the system comprises a plurality of cascaded first convolution modules and a plurality of cascade second convolution modules, wherein the plurality of cascaded first convolution modules are used for extracting characteristics of input data to obtain first characteristics;
a plurality of cascaded second convolution modules, which input the first features and are used for extracting high-level features of the input data;
the average pooling layer inputs the high-level features and outputs fusion features corresponding to different time length sequences;
the linear layer inputs the fusion characteristics and outputs prediction scores of different categories;
and the index normalization layer is used for normalizing the prediction scores and outputting classification probabilities.
In the invention, the input data are first fed into the convolution blocks (first convolution modules) to extract underlying features; the underlying features are then fed into the dynamic convolution blocks (second convolution modules) to extract time-adaptive high-level features; the high-level features pass through the average pooling layer to obtain fusion features corresponding to sequences of different time lengths; the fusion features pass through the linear layer to obtain prediction scores for the different classes; and finally the scores pass through the exponential normalization layer to obtain the output probabilities.
In the invention, the first convolution module is an underlying convolution module used for extracting underlying features (the first features, or primary features), and the second convolution module is a dynamic convolution module used for extracting high-level features.
In the present invention, cascaded means connected in sequence: the output of one module is connected to the input of the next module, and so on.
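As a rough, non-authoritative illustration of this pipeline, the sketch below composes the stages in the order described (first modules, then second modules, then average pooling, a linear layer and exponential normalization). The first and second convolution modules are stood in by plain Conv1d stacks so the sketch runs on its own; their actual designs are sketched after the following paragraphs, and all channel counts and kernel sizes are illustrative assumptions.

    import torch
    import torch.nn as nn

    class DTCNSketch(nn.Module):
        def __init__(self, in_ch=6, hidden=64, n_classes=5):
            super().__init__()
            self.first = nn.Sequential(            # cascaded "first" modules: low-level features
                nn.Conv1d(in_ch, hidden, 3, padding=1), nn.ReLU(),
                nn.Conv1d(hidden, hidden, 3, padding=1), nn.ReLU(),
            )
            self.second = nn.Sequential(           # cascaded "second" modules: high-level features
                nn.Conv1d(hidden, hidden, 3, padding=1), nn.ReLU(),
            )
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, x):                      # x: (batch, channels, observed length t)
            h = self.second(self.first(x))
            fused = h.mean(dim=-1)                 # average pooling: one fusion feature per length
            scores = self.head(fused)              # per-class prediction scores
            return torch.softmax(scores, dim=-1)   # exponential normalization -> probabilities

    probs = DTCNSketch()(torch.randn(1, 6, 40))    # class probabilities for a length-40 prefix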
In the invention, the first convolution module comprises a first dilated causal convolution layer, a first normalization layer, a second dilated causal convolution layer, a second normalization layer and a first linear activation unit, connected in sequence; the input features and output features of the first convolution module are joined by a residual connection.
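A minimal sketch of such a module, assuming PyTorch, batch normalization for the normalization layers and ReLU for the linear activation unit (the patent does not fix these choices), might look as follows; the left-only padding keeps the convolution causal.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CausalConv1d(nn.Conv1d):
        """Dilated 1-D convolution padded on the left only, so the output at
        time t never depends on inputs after t."""
        def __init__(self, ch, kernel_size=3, dilation=1):
            super().__init__(ch, ch, kernel_size, dilation=dilation)
            self.left_pad = (kernel_size - 1) * dilation

        def forward(self, x):                      # x: (batch, channels, time)
            return super().forward(F.pad(x, (self.left_pad, 0)))

    class FirstConvModule(nn.Module):
        """conv -> norm -> conv -> norm -> ReLU, with a residual connection."""
        def __init__(self, ch, dilation=1):
            super().__init__()
            self.body = nn.Sequential(
                CausalConv1d(ch, dilation=dilation), nn.BatchNorm1d(ch),
                CausalConv1d(ch, dilation=dilation), nn.BatchNorm1d(ch),
            )

        def forward(self, x):
            return torch.relu(self.body(x) + x)    # residual connection, input to output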
In the invention, the second convolution module comprises a dynamic convolution layer, a third normalization layer and a second linear activation unit; the dynamic convolution layer contains a convolution kernel generation module, which generates a convolution kernel for each time step from the input data, and feature extraction is then performed on the input data with the generated kernels.
The convolution parameters of a conventional convolution block are fixed and unrelated to the content and length of the sample, which is unfavorable for extracting features from early-classification streaming data.
In the invention, the convolution kernel generation module comprises a first convolution layer, a rectified linear unit, a batch normalization layer and a second convolution layer, connected in sequence.
For the same sample, a conventional convolution applies an identical kernel over the whole time span, so the extracted features lack distinctiveness and the information gain is limited as the amount of data grows.
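A possible realization of this dynamic convolution, as a hedged PyTorch sketch: a small generator (two 1x1 convolutions with a ReLU and batch normalization between them, following the order stated above) predicts a channel-shared kernel of length K at every time step, and each output sample is the kernel-weighted sum of the K most recent inputs, so the operation stays causal. The kernel size K and the einsum-based implementation are illustrative choices, not details fixed by the patent.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DynamicConv1d(nn.Module):
        """Time-adaptive, channel-shared convolution: a generator predicts a
        length-K kernel for every time step of every sample."""
        def __init__(self, ch, K=3):
            super().__init__()
            self.K = K
            self.gen = nn.Sequential(               # convolution kernel generation module
                nn.Conv1d(ch, ch, kernel_size=1),
                nn.ReLU(),
                nn.BatchNorm1d(ch),
                nn.Conv1d(ch, K, kernel_size=1),    # K kernel weights per time step
            )

        def forward(self, x):                       # x: (B, C, T)
            kernels = self.gen(x)                   # (B, K, T): one kernel per time step
            patches = F.pad(x, (self.K - 1, 0)).unfold(-1, self.K, 1)  # (B, C, T, K) causal windows
            return torch.einsum('bctk,bkt->bct', patches, kernels)

    class SecondConvModule(nn.Module):
        """Second convolution module: dynamic convolution -> normalization -> ReLU."""
        def __init__(self, ch, K=3):
            super().__init__()
            self.dyn, self.norm = DynamicConv1d(ch, K), nn.BatchNorm1d(ch)

        def forward(self, x):
            return torch.relu(self.norm(self.dyn(x)))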
The implementation process of calculating the probability exit threshold by using the classification probability comprises the following steps:
sorting the classification probabilities and removing repeated items;
taking the medians of adjacent classification probabilities to obtain a series of threshold candidate values;
and selecting the threshold candidate value with the minimum cost as the probability exit threshold.
Directly specifying a threshold for a data set requires extensive expert experience, and such a threshold does not apply to all data sets and generalizes poorly. The invention adopts a cost-formula method that automatically obtains thresholds suited to different data sets, and the formula can be adjusted to actual requirements by tuning the value of α: when the requirement on classification accuracy is higher, α should be increased; when an earlier exit is more important, α should be decreased. Calculating the threshold with a cost formula also has the advantage of interpretability.
The cost of a threshold candidate is calculated as: Cost_β = α·(1 - Acc_β) + (1 - α)·Earliness_β, where Acc_β is the accuracy obtained when exiting occurs once the maximum classification probability exceeds the threshold candidate β, Earliness_β is the earliness obtained under the same exit rule, and α is a weighting factor.
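The calibration can be sketched compactly in Python/NumPy. Here `max_probs[n, t]` holds the maximum classification probability of training sample n at time t, `preds[n, t]` the predicted class and `labels[n]` the true class; these array names, and the definition of earliness as the fraction of the series observed at exit, are assumptions of this sketch.

    import numpy as np

    def calibrate_threshold(max_probs, preds, labels, alpha=0.8):
        N, T = max_probs.shape
        p = np.unique(max_probs)                     # sort and remove repeated items
        candidates = (p[:-1] + p[1:]) / 2            # medians of adjacent probabilities
        best_beta, best_cost = None, np.inf
        for beta in candidates:
            above = max_probs > beta
            exits = np.argmax(above, axis=1)         # first exit time of each sample
            exits[~above.any(axis=1)] = T - 1        # never exits: use the full series
            acc = np.mean(preds[np.arange(N), exits] == labels)   # Acc_beta
            earliness = np.mean((exits + 1) / T)                  # Earliness_beta
            cost = alpha * (1 - acc) + (1 - alpha) * earliness    # the cost formula
            if cost < best_cost:
                best_beta, best_cost = beta, cost
        return best_beta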
In the invention, extensive experimental research and analysis show that a value of α = 0.8 works well.
As an inventive concept, the present invention also provides a terminal device comprising a memory, a processor and a computer program stored on the memory; the processor executes the computer program to implement the steps of the above-described method of the present invention.
A computer readable storage medium having stored thereon a computer program/instructions; which when executed by a processor implement the steps of the above-described method of the present invention.
Compared with the prior art, the invention has the beneficial effects that:
1. For data whose length keeps growing, the method adapts to the continually arriving new data and extracts more discriminative features, improving the accuracy of early classification of time series data;
2. For data of different classes, the method adapts to sample content and difficulty and extracts more class-specific features, further improving the accuracy of early classification.
Drawings
FIG. 1 is a flowchart of a method of example 1 of the present invention;
FIG. 2 is a diagram showing a structure of a neural network according to embodiment 1 of the present invention;
fig. 3 is a structural diagram of a dynamic convolution module in embodiment 1 of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As used herein, the terms "first," "second," and the like are not intended to imply any order, quantity, or importance, but rather are used to distinguish one element from another. As used herein, the terms "a," "an," and other similar words are not intended to mean that there is only one of the referenced item; there may be one or more of them. As used herein, the terms "comprises," "comprising," and other similar words are intended to refer to logical interrelationships and are not to be construed as referring to spatial structural relationships. For example, "A includes B" is intended to mean that logically B belongs to A, not that spatially B is located inside A. Furthermore, the terms "comprising," "including," and other similar words are to be construed as open-ended rather than closed-ended. For example, "A includes B" is intended to mean that B belongs to A, but B does not necessarily constitute all of A, and A may also include other elements such as C, D, E, and the like.
Example 1
As shown in fig. 1, in this embodiment, the network designed by the invention (DTCN; its specific structure is given in the next subsection) is first trained in the training phase: all time series data of the training set are input into the network, and a loss function (given in the original publication only as an equation image) is used to train all parameters θ of the model, where N is the number of samples in the training set and T is the length of the complete time series data.
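Since the loss appears only as an image in the source, its exact form is not recoverable here. A plausible reconstruction, assuming the standard choice of cross-entropy averaged over all N training samples and all T prefix lengths (so that every partial series contributes a supervised signal), would be:

    \mathcal{L}(\theta) = -\frac{1}{N\,T} \sum_{n=1}^{N} \sum_{t=1}^{T} \log P_t^{(n)}\bigl(y^{(n)}\bigr)

where P_t^{(n)}(y^{(n)}) denotes the probability that the network assigns at time t to the true class y^{(n)} of sample n.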
Then the training set is input into the trained DTCN to obtain the classification probabilities of all training data at all times, and the probability exit threshold for this data set is calculated according to the exit rule formulated by the invention. The specific process of calculating the threshold from the exit rule is as follows: the classification probabilities are first sorted and repeated items are removed, giving {P_1, P_2, …, P_T}; the medians of adjacent classification probabilities are then taken, e.g. β_1 = (P_1 + P_2)/2, yielding a series of threshold candidates {β_1, β_2, …, β_T}. For each candidate threshold β, the invention defines a cost formula:
Cost_β = α·(1 - Acc_β) + (1 - α)·Earliness_β
where Acc_β is the accuracy obtained by exiting whenever the prediction probability P on the training set exceeds the threshold β, and Earliness_β is the earliness obtained under the same exit rule. After the costs of all candidate thresholds are computed, the threshold with the minimum cost is selected as the final exit threshold for the data. In this example, α = 0.8 was selected.
In the testing stage, taking sample A as an example, the observable data at time t (i.e., the data of length t that the device can obtain at time t; data of length t+1 become available at time t+1, so the observable data grow gradually over time) are directly input into the trained network model to obtain the classification probability Pt at that time. The maximum value of Pt is taken; if it is greater than the exit threshold calculated in the training stage, the classification result at that moment is considered reliable, the process exits, and that classification result is taken as the final classification result for sample A. Otherwise, data continue to be input into the classifier over time until the exit condition is satisfied.
The DTCN architecture proposed in this embodiment mainly consists of convolution blocks and dynamic convolution blocks, as shown in fig. 2. Specifically, the input data are fed into the convolution blocks to extract underlying features; the underlying features are fed into the dynamic convolution blocks to extract time-adaptive high-level features; the high-level features pass through an average pooling layer to obtain fusion features corresponding to sequences of different time lengths; the fusion features pass through a linear layer to obtain prediction scores for the different classes; and finally the scores pass through an exponential normalization layer to obtain the output probabilities.
The underlying convolution block (first convolution block) is a convolution module for extracting underlying primary features (first features); it includes dilated causal convolutions, normalization layers and rectified linear units (ReLU). The purpose of the causal convolution is to avoid information leakage, so that features at a given time are independent of features after that time; the dilated convolution effectively enlarges the receptive field; the normalization layers prevent overfitting during training; and the ReLU enhances the non-linearity of the extracted features. By contrast, the convolution parameters of a conventional convolution block are fixed and unrelated to the content and length of the sample, which is unfavorable for extracting features from early-classification streaming data.
The design of the dynamic convolution module in this embodiment is shown in fig. 3. The convolution kernel generation module generates a kernel specific to each time step from the input features; during training, this module learns how to generate convolution kernels adapted to the data content. The kernel generation module consists of a size-1 convolution, a batch normalization layer and a ReLU, and produces a time-adaptive, channel-shared kernel of size K. The newly generated time-adaptive kernel is convolved with the input features to obtain the output features. For samples of different classes, the dynamic kernel generation yields content-adaptive kernels that can extract class-specific features. For a single sample, a conventional convolution uses the same kernel over the whole time span, so its features lack distinctiveness and its information gain is limited as data accumulate; in contrast, the designed time-adaptive convolution module generates kernels specific to the growing data over time and can therefore extract more discriminative features.
And finally, the fusion characteristics at different moments pass through a linear layer and a normalized index layer to obtain the classification probability of each moment.
Experiments were carried out on two commonly used human action recognition data sets and achieved good results (the HM index used in the table below is a comprehensive index computed from earliness and accuracy; the method of this embodiment is denoted DETSCN).
TABLE 1 comparison of classification results
The ECLN, ETMD and EARLIEST methods compared in Table 1 are from the following references:
ECLN: Rußwurm M, Tavenard R, Lefèvre S, et al. Early classification for agricultural monitoring from satellite time series[J]. arXiv preprint arXiv:1908.10283, 2019.
ETMD: Sharma A, Singh S K, Udmale S S, et al. Early Transportation Mode Detection Using Smartphone Sensing Data[J]. IEEE Sensors Journal, 2021, 21(14): 15651-15659.
EARLIEST: Hartvigsen T, Sen C, Kong X, et al. Adaptive-halting policy network for early classification[C]// Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2019: 101-110.
example 2
Embodiment 2 of the present invention provides a terminal device corresponding to embodiment 1. The terminal device may be a client-side processing device, such as a mobile phone, a notebook computer, a tablet computer or a desktop computer, capable of executing the method of the above embodiment.
The terminal device of the embodiment comprises a memory, a processor and a computer program stored on the memory; the processor executes the computer program on the memory to implement the steps of the method of embodiment 1 described above.
In some implementations, the Memory may be a high-speed Random Access Memory (RAM), and may also include a non-volatile Memory, such as at least one disk Memory.
In other implementations, the processor may be various general-purpose processors such as a Central Processing Unit (CPU), a Digital Signal Processor (DSP), and the like, and is not limited herein.
Example 3
Embodiment 3 of the present invention provides a computer-readable storage medium corresponding to embodiment 1 above, on which a computer program/instructions are stored. The computer program/instructions, when executed by the processor, implement the steps of the method of embodiment 1 described above.
The computer readable storage medium may be a tangible device that retains and stores instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any combination of the foregoing.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein. The solution in the embodiments of the present application may be implemented in various computer languages, for example, the object-oriented programming language Java and the interpreted scripting language JavaScript.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A time series early classification method is characterized by comprising the following steps:
s1, constructing a training set by using time series data of human body actions;
s2, training a neural network by using the training set;
s3, inputting the training set into the trained neural network to obtain the classification probability of all data of the training set at all times, and calculating a probability exit threshold by using the classification probability;
s4, inputting the observable data at the moment t into a trained neural network to obtain the classification probability Pt at the moment t, taking the maximum value of Pt, stopping inputting the observable data continuously if the maximum value is greater than the probability exit threshold, and taking the classification result at the moment t as the final classification result of the observable data at the moment t; otherwise, the value of t is increased by 1 and step S4 is repeated.
2. The time series early classification method according to claim 1, characterized in that the neural network comprises:
the cascade connection first convolution modules are used for carrying out feature extraction on input data to obtain first features;
a plurality of cascaded second convolution modules, the input of which is the first feature, for extracting the high-level features of the input data;
the average pooling layer inputs the high-level features and outputs fusion features corresponding to different time length sequences;
the linear layer inputs the fusion characteristics and outputs prediction scores of different categories;
and the index normalization layer is used for normalizing the prediction scores and outputting classification probabilities.
3. The time series early classification method according to claim 2, characterized in that the first convolution module comprises a first dilated causal convolution layer, a first normalization layer, a second dilated causal convolution layer, a second normalization layer and a first linear activation unit, connected in sequence; and the input features and output features of the first convolution module are joined by a residual connection.
4. The time series early classification method according to claim 2, characterized in that the second convolution module comprises a dynamic convolution layer, a third normalization layer and a second linear activation unit; the dynamic convolution layer comprises a convolution kernel generation module, which generates a convolution kernel for each time step from the input data; and feature extraction is performed on the input data by using the generated convolution kernels.
5. The time series early classification method according to claim 4, characterized in that the convolution kernel generation module comprises a first convolution layer, a rectified linear unit, a batch normalization layer and a second convolution layer, connected in sequence.
6. The method for early classification of time series according to one of claims 1 to 4, wherein the implementation process of calculating the probability exit threshold using the classification probability comprises:
sorting the classification probabilities and removing repeated items in the classification probabilities;
taking the median of adjacent classification probabilities to obtain a series of threshold candidate values;
and selecting the threshold candidate value with the minimum cost as the probability exit threshold.
7. The method of early classification of time series according to claim 5, characterized in that the cost of a threshold candidate value is calculated using the following formula: Cost_β = α·(1 - Acc_β) + (1 - α)·Earliness_β; wherein Acc_β is the accuracy calculated when exiting occurs once the maximum value of the classification probability is greater than the threshold candidate value β, Earliness_β is the earliness calculated under the same exit rule, and α is a weighting factor.
8. The method for early classification of time series according to claim 6, wherein α is 0.8.
9. A terminal device comprising a memory, a processor and a computer program stored on the memory; characterized in that said processor executes said computer program to implement the steps of the method according to one of claims 1 to 8.
10. A computer readable storage medium having stored thereon a computer program/instructions; characterized in that the computer program/instructions, when executed by a processor, implement the steps of the method of one of claims 1 to 8.
CN202211163048.5A 2022-09-23 2022-09-23 Time series early classification method, terminal device and storage medium Pending CN115526249A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211163048.5A CN115526249A (en) 2022-09-23 2022-09-23 Time series early classification method, terminal device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211163048.5A CN115526249A (en) 2022-09-23 2022-09-23 Time series early classification method, terminal device and storage medium

Publications (1)

Publication Number Publication Date
CN115526249A (en) 2022-12-27

Family

ID=84699301

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211163048.5A Pending CN115526249A (en) 2022-09-23 2022-09-23 Time series early classification method, terminal device and storage medium

Country Status (1)

Country Link
CN (1) CN115526249A (en)

Similar Documents

Publication Publication Date Title
CN109471938B (en) Text classification method and terminal
Du et al. Novel efficient RNN and LSTM-like architectures: Recurrent and gated broad learning systems and their applications for text classification
CN109961107B (en) Training method and device for target detection model, electronic equipment and storage medium
Patil et al. A perspective view of cotton leaf image classification using machine learning algorithms using WEKA
CN112488214A (en) Image emotion analysis method and related device
CN116596095B (en) Training method and device of carbon emission prediction model based on machine learning
US11893473B2 (en) Method for model adaptation, electronic device and computer program product
Freytag et al. Labeling examples that matter: Relevance-based active learning with gaussian processes
CN110147444A (en) Neural network language model, text prediction method, apparatus and storage medium
CN103927550A (en) Handwritten number identifying method and system
CN116451139B (en) Live broadcast data rapid analysis method based on artificial intelligence
EP4343616A1 (en) Image classification method, model training method, device, storage medium, and computer program
CN115294397A (en) Classification task post-processing method, device, equipment and storage medium
Liu et al. Modal-regression-based broad learning system for robust regression and classification
CN117523218A (en) Label generation, training of image classification model and image classification method and device
Yangzhen et al. A software reliability prediction model: Using improved long short term memory network
Rusiecki Standard dropout as remedy for training deep neural networks with label noise
Ma et al. Temporal pyramid recurrent neural network
CN111768803B (en) General audio steganalysis method based on convolutional neural network and multitask learning
CN111783688A (en) Remote sensing image scene classification method based on convolutional neural network
CN115526249A (en) Time series early classification method, terminal device and storage medium
Jakhar et al. Classification and Measuring Accuracy of Lenses Using Inception Model V3
CN114239750A (en) Alarm data processing method, device, storage medium and equipment
Zhou et al. Difficult novel class detection in semisupervised streaming data
CN112825118A (en) Rotation invariance face detection method and device, readable storage medium and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination