CN116842447A - Post-processing method, device and system for classified data and electronic device - Google Patents


Info

Publication number
CN116842447A
CN116842447A (application CN202310827526.6A)
Authority
CN
China
Prior art keywords
target
data
post
processing
prediction result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310827526.6A
Other languages
Chinese (zh)
Inventor
刘洪涛
苏毅
刘宇巍
李文佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen United Imaging Research Institute of Innovative Medical Equipment
Original Assignee
Shenzhen United Imaging Research Institute of Innovative Medical Equipment
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen United Imaging Research Institute of Innovative Medical Equipment filed Critical Shenzhen United Imaging Research Institute of Innovative Medical Equipment
Priority to CN202310827526.6A priority Critical patent/CN116842447A/en
Publication of CN116842447A publication Critical patent/CN116842447A/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology


Abstract

The application relates to a post-processing method, device, and system for classified data and an electronic device, wherein the post-processing method for classified data comprises the following steps: acquiring data to be classified; inputting the data to be classified into a pre-trained target classification model to obtain an initial classification prediction result, the target classification model being generated through iterative training on training data; determining a target sample in the training data whose confidence is greater than a preset confidence threshold by using the target classification model, and acquiring a target prior distribution feature corresponding to the training data based on the target sample; and post-processing the initial classification prediction result according to the target sample and the target prior distribution feature to obtain a target classification prediction result for the data to be classified. The method solves the problem of low accuracy of deep-learning-based classification models and realizes accurate and efficient data classification.

Description

Post-processing method, device and system for classified data and electronic device
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a method, an apparatus, a system, and an electronic device for post-processing classified data.
Background
Classification algorithms are among the most commonly used algorithms and play a vital role in many application scenarios. At present, classification algorithms mainly comprise traditional machine learning classification algorithms and neural-network-based classification algorithms. However, regardless of the type of algorithm, its performance suffers when classifying a sample that lies close to its decision boundary. For example, in the heart beat classification problem, when the class probability output by the deep learning model is [0.5, 0.5], the two largest probabilities are identical, so it is difficult to determine which class the heart beat data belongs to, resulting in lower accuracy of the deep-learning-based classification model.
At present, no effective solution has been proposed in the related art for the problem of low accuracy of deep-learning-based classification models.
Disclosure of Invention
The embodiment of the application provides a post-processing method, device and system for classified data, an electronic device and a storage medium, which are used for at least solving the problem of low accuracy of a deep learning-based classification model in the related art.
In a first aspect, an embodiment of the present application provides a method for post-processing classified data, where the method includes:
Acquiring data to be classified;
inputting the data to be classified into a pre-trained target classification model to obtain an initial classification prediction result; the target classification model is generated according to training data through iterative training;
determining a target sample with the confidence degree larger than a preset confidence degree threshold value in the training data by using the target classification model, and acquiring a target prior distribution characteristic corresponding to the training data based on the target sample;
and carrying out post-processing on the initial classification prediction result according to the target sample and the target prior distribution characteristics to obtain a target classification prediction result aiming at the data to be classified.
In some of these embodiments, the training data comprises a training set and a test set; the obtaining the target prior distribution feature corresponding to the training data based on the target sample includes:
acquiring a training set prediction result corresponding to the training set and a testing set prediction result corresponding to the testing set by using the target classification model;
determining initial category distribution characteristics of the training set according to the training set prediction result, and determining the target sample and the test category distribution characteristics of the target sample according to the test set prediction result;
And calculating the target prior distribution characteristics according to the initial category distribution characteristics and the test category distribution characteristics.
In some of these embodiments, the determining the target sample from the test set prediction results comprises:
traversing all test samples in the test set; when traversing to a current test sample, calculating to obtain a current entropy value according to a sample prediction result of the current test sample, and determining whether the current test sample is the target sample according to a comparison result between the current entropy value and the confidence threshold;
and traversing the next test sample, repeating the calculation steps until all the test samples are traversed, and determining all the target samples in the test samples.
In some embodiments, the calculating the target prior distribution feature according to the initial category distribution feature and the test category distribution feature includes:
calculating according to entropy values corresponding to all the target samples to obtain target entropy value information;
determining an association relation between the confidence threshold and the target entropy information, and respectively distributing weight values for the initial category distribution feature and the test category distribution feature based on the association relation;
And carrying out fusion processing on the initial category distribution characteristics and the test category distribution characteristics based on the weight values to obtain the target prior distribution characteristics.
In some embodiments, the post-processing the initial classification prediction result according to the target sample and the target prior distribution feature to obtain a target classification prediction result for the data to be classified includes:
acquiring test category distribution characteristics of the target sample;
splicing the test category distribution characteristics corresponding to all the target samples to generate a probability matrix, and generating a matrix to be processed according to the probability matrix and the initial classification prediction result;
acquiring a preset iteration constraint condition;
and carrying out normalized iterative processing on the matrix to be processed according to the target prior distribution characteristics based on the iterative constraint condition to obtain a post-processing matrix, and obtaining the target classification prediction result according to the post-processing matrix.
In some embodiments, the performing normalized iterative processing on the matrix to be processed according to the target prior distribution feature based on the iterative constraint condition to obtain a post-processing matrix includes:
Under the condition of performing the current loop iteration, performing normalization post-processing on the column matrix in the matrix to be processed to obtain a current column normalization matrix, and performing normalization post-processing on the row matrix of the current column normalization matrix according to the target prior distribution characteristics to obtain a current row normalization matrix;
and entering the next loop iteration, repeating the above normalization steps to perform alternate normalization iteration processing on the current row normalization matrix until the iteration constraint condition is reached, thereby obtaining the post-processing matrix.
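The alternating column/row normalization described above is, in effect, a Sinkhorn-style iteration. The following is a minimal sketch, not the patent's actual implementation: it assumes the matrix to be processed is a samples-by-classes probability matrix, that the target prior distribution feature is a vector of class proportions summing to 1, and that the iteration constraint condition is simply a fixed iteration count. All names are illustrative.

```python
def alternate_normalize(matrix, target_prior, num_iters=50):
    """Alternately normalize columns (to match the target prior
    proportions) and rows (each sample's probabilities sum to 1),
    a Sinkhorn-style iteration over a samples-by-classes matrix."""
    m = [row[:] for row in matrix]
    n_rows, n_cols = len(m), len(m[0])
    for _ in range(num_iters):
        # Column step: scale each column so its total mass equals
        # target_prior[j] * n_rows.
        col_sums = [sum(row[j] for row in m) for j in range(n_cols)]
        for row in m:
            for j in range(n_cols):
                if col_sums[j] > 0:
                    row[j] *= target_prior[j] * n_rows / col_sums[j]
        # Row step: renormalize each row to sum to 1.
        for row in m:
            rs = sum(row)
            if rs > 0:
                for j in range(n_cols):
                    row[j] /= rs
    return m
```

For example, starting from a uniform two-sample matrix with a prior of [0.9, 0.1], the iteration converges to rows of [0.9, 0.1], i.e. each ambiguous prediction is pulled toward the prior class proportions.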
In some of these embodiments, the method further comprises:
determining a category label of the training data;
inputting the training data into an initial classification model, and outputting an initial training prediction result; and calculating a loss function result according to the class label and the initial training prediction result, and reversely transmitting the gradient of the loss function result to the initial classification model to generate the target classification model with complete training.
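As a rough sketch of the training procedure just described (forward prediction, loss computation from the class labels, and back-propagation of the loss gradient), consider a minimal linear softmax classifier trained with cross-entropy. This is an illustrative stand-in, not the ResNet/BiLSTM-style models named above, and all names are hypothetical:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    mx = max(logits)
    exps = [math.exp(v - mx) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def train_step(weights, features, label, lr=0.1):
    """One iteration: forward pass, cross-entropy loss, gradient step.

    For a linear classifier, the gradient of cross-entropy w.r.t. the
    logits is (probs - one_hot(label)); weights is a classes-by-features
    list of lists, updated in place."""
    logits = [sum(w * x for w, x in zip(w_row, features)) for w_row in weights]
    probs = softmax(logits)
    loss = -math.log(probs[label])
    for c, w_row in enumerate(weights):
        grad = probs[c] - (1.0 if c == label else 0.0)
        for i in range(len(features)):
            w_row[i] -= lr * grad * features[i]
    return loss
```

Repeated calls on the training data drive the loss down, which is the iterative training that produces the trained target classification model.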
In a second aspect, an embodiment of the present application provides a post-processing apparatus for classifying data, where the apparatus includes: the device comprises an acquisition module, an initial prediction module, a priori distribution module and a post-processing module;
The acquisition module is used for acquiring data to be classified;
the initial prediction module is used for inputting the data to be classified into a pre-trained target classification model to obtain an initial classification prediction result; the target classification model is generated according to training data through iterative training;
the prior distribution module is used for determining a target sample with the confidence coefficient larger than a preset confidence coefficient threshold value in the training data by utilizing the target classification model, and acquiring target prior distribution characteristics corresponding to the training data based on the target sample;
and the post-processing module is used for carrying out post-processing on the initial classification prediction result according to the target sample and the target prior distribution characteristics to obtain a target classification prediction result aiming at the data to be classified.
In a third aspect, an embodiment of the present application provides a post-processing system for classifying data, the system including: a terminal device and a server device;
the terminal equipment is used for acquiring data to be classified;
the server device is configured to receive the data to be classified and perform the post-processing method of the classified data according to the first aspect.
In a fourth aspect, an embodiment of the present application provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the method for post-processing classification data according to the first aspect when the processor executes the computer program.
Compared with the related art, the post-processing method, the device, the system and the electronic device for the classified data provided by the embodiment of the application acquire the data to be classified; inputting the data to be classified into a pre-trained target classification model to obtain an initial classification prediction result; the target classification model is generated according to training data through iterative training; determining a target sample with the confidence coefficient larger than a preset confidence coefficient threshold value in the training data by using the target classification model, and acquiring a target prior distribution characteristic corresponding to the training data based on the target sample; and carrying out post-processing on the initial classification prediction result according to the target sample and the target prior distribution characteristic to obtain a target classification prediction result aiming at the data to be classified, so that the data distribution is more close to sample data with high confidence, the phenomenon that a classification model is easy to make mistakes when classifying the data close to a decision boundary is effectively avoided, the problem that the accuracy of the classification model based on deep learning is low is solved, and accurate and efficient data classification is realized.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below; other features, objects, and advantages of the application will become more apparent therefrom.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is an application environment diagram of a method of post-processing classified data according to an embodiment of the present application;
FIG. 2 is a flow chart of a method of post-processing classification data according to an embodiment of the application;
FIG. 3 is a flow chart of another method of post-processing classification data according to an embodiment of the application;
FIG. 4 is a schematic diagram of heart beat data according to an embodiment of the application;
FIG. 5 is a block diagram of a post-processing apparatus for classifying data according to an embodiment of the present application;
fig. 6 is a block diagram of the interior of a computer device according to an embodiment of the present application.
Detailed Description
The present application will be described and illustrated with reference to the accompanying drawings and examples in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application. All other embodiments, which can be made by a person of ordinary skill in the art based on the embodiments provided by the present application without making any inventive effort, are intended to fall within the scope of the present application. Moreover, it should be appreciated that while such a development effort might be complex and lengthy, it would nevertheless be a routine undertaking of design, fabrication, or manufacture for those of ordinary skill having the benefit of this disclosure.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is to be expressly and implicitly understood by those of ordinary skill in the art that the described embodiments of the application can be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms used herein should be given the ordinary meaning as understood by one of ordinary skill in the art to which this application belongs. The terms "a," "an," "the," and similar referents in the context of the application are not to be construed as limiting the quantity, but rather as singular or plural. The terms "comprising," "including," "having," and any variations thereof, are intended to cover a non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to only those steps or elements but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. The terms "connected," "coupled," and the like in connection with the present application are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. The term "plurality" as used herein means greater than or equal to two. "and/or" describes an association relationship of an association object, meaning that there may be three relationships, e.g., "a and/or B" may mean: a exists alone, A and B exist together, and B exists alone. The terms "first," "second," "third," and the like, as used herein, are merely distinguishing between similar objects and not representing a particular ordering of objects.
The post-processing method of the classified data provided by the embodiment of the application can be applied to an application environment shown in fig. 1, wherein the terminal device 102 communicates with the server device 104 via a network. The data storage system may store data that the server device 104 needs to process; the data storage system may be integrated on the server device 104 or may be located on a cloud or other network server. The server device 104 acquires the data to be classified transmitted by the terminal device 102, inputs the data to be classified into a target classification model generated through iterative training on training data for prediction processing, and outputs an initial classification prediction result; the server device 104 determines a target sample with a confidence greater than a preset confidence threshold in the training data by utilizing the target classification model, and acquires a target prior distribution feature corresponding to the training data based on the target sample; the server device 104 performs post-processing on the initial classification prediction result according to the target sample and the target prior distribution feature to obtain a target classification prediction result for the data to be classified. The terminal device 102 may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices; the wearable devices may be smart watches, smart bracelets, headsets, etc. The server device 104 may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
The present embodiment provides a method for post-processing classified data, and fig. 2 is a flowchart of a method for post-processing classified data according to an embodiment of the present application, as shown in fig. 2, where the flowchart includes the following steps:
step S210, obtaining data to be classified.
The data to be classified refers to data that needs to be adaptively classified, such as heart beat data input by a user, gesture image data, or commodity data.
Step S220, inputting the data to be classified into a pre-trained target classification model to obtain an initial classification prediction result; the target classification model is generated according to training data through iterative training.
The target classification model can be any of various trained neural network models applicable to classification scenarios such as heart beat data classification, gesture classification, or commodity classification. Further, the target classification model may be obtained by inputting training data into an initial classification model for iterative training, where the initial classification model may be a neural network model such as a Deep Residual Network (ResNet) model, a VGG-19 model, a DenseNet model, a High-Resolution Network (HRNet) model, or a Bi-directional Long Short-Term Memory (BiLSTM) model. After the data to be classified is input into the target classification model, the target classification model can perform prediction processing on the category attribute of the data to be classified so as to output the initial classification prediction result.
Step S230, determining a target sample with the confidence degree larger than a preset confidence degree threshold value in the training data by using the target classification model, and acquiring a target prior distribution feature corresponding to the training data based on the target sample.
The confidence threshold may be preset by a worker in combination with the actual situation. Specifically, after the target classification model is trained based on the training data, the samples in the training data whose confidence is greater than the confidence threshold can be determined through the training prediction results output by the model for the training data; that is, the determined samples are high-confidence samples. The target prior distribution feature is then determined based on the test results corresponding to these high-confidence target samples, thereby improving the accuracy of the prior distribution.
And step S240, performing post-processing on the initial classification prediction result according to the target sample and the target prior distribution characteristics to obtain a target classification prediction result aiming at the data to be classified.
After alternate matrix normalization processing is performed on the initial classification prediction result through the above steps, the initial classification prediction result can be unified to the range corresponding to the target prior distribution feature, so that the prediction result for the data to be classified is closer to the classification distribution feature of the high-confidence samples, improving the accuracy of the classified data.
Through the above steps S210 to S240, after the initial classification prediction result of the data to be classified is obtained, the target prior distribution feature obtained by predicting the training data with the target classification model, together with the determined high-confidence samples in the training data, is used to post-process the initial classification prediction result, finally obtaining the target classification prediction result for the data to be classified. In this way, the data distribution is drawn closer to the high-confidence sample data, effectively avoiding the phenomenon that a classification model is prone to errors when classifying data close to its decision boundary, solving the problem of low accuracy of deep-learning-based classification models, and realizing accurate and efficient post-processing of classified data.
In some embodiments, a method for post-processing classified data is provided, fig. 3 is a flowchart of another method for post-processing classified data according to an embodiment of the present application, where, as shown in fig. 3, the training data includes a training set and a test set, and the flowchart includes all the steps shown in fig. 2, and further includes the following steps:
step S310, obtaining a training set prediction result corresponding to the training set and a testing set prediction result corresponding to the testing set by using the target classification model.
After the training data is used for training the target classification model, the training set prediction result of the category information of each training set and the test set prediction result of the category information of each test set can be obtained and stored.
Step S320, determining initial category distribution characteristics of the training set according to the training set prediction result, and determining the target sample and the test category distribution characteristics of the target sample according to the test set prediction result.
The initial category distribution feature is the proportion of each category in the training set, determined based on the training set prediction result. For example, suppose a training set of gesture image data is predicted by the classification model and the training set prediction result contains two categories, the fist-making gesture and the palm-spreading gesture, in a proportion of 1:9; the initial category distribution feature, stored in matrix form, can then be obtained from this prediction result and expressed as [[0.1, 0], [0, 0.9]]. Similarly, the test category distribution feature is the proportion of each category among the higher-confidence target samples in the test set, which will not be described again herein.
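A sketch of computing such a category distribution feature from predicted labels follows. It uses a plain proportion vector rather than the diagonal-matrix form shown above, and all names are illustrative:

```python
from collections import Counter

def class_distribution(predicted_labels, num_classes):
    """Proportion of each class among the predicted labels of a data set."""
    counts = Counter(predicted_labels)
    total = len(predicted_labels)
    return [counts.get(c, 0) / total for c in range(num_classes)]
```

For one fist-making prediction and nine palm-spreading predictions this yields [0.1, 0.9], the diagonal entries of the matrix form used in the example above.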
Step S330, calculating the target prior distribution feature according to the initial category distribution feature and the test category distribution feature.
After the initial category distribution characteristics and the test category distribution characteristics are determined, the two category distribution characteristics can be added and fused to obtain the adjusted target prior distribution characteristics.
Through the above steps S310 to S330, the initial category distribution feature of the training set and the test category distribution feature of the test set are comprehensively analyzed to obtain the dynamically adjusted target prior distribution feature, avoiding the low reliability of a prior distribution feature derived from the category distribution of a single data set alone, and effectively improving the accuracy and reliability of the deep-learning-based classification model.
In some embodiments, the determining the target sample according to the test set prediction result further includes the following steps:
step S321, traversing all test samples in the test set; when traversing to the current test sample, calculating to obtain a current entropy value according to a sample prediction result of the current test sample, and determining whether the current test sample is the target sample according to a comparison result between the current entropy value and the confidence threshold.
Specifically, traversal calculation is performed on all test samples in the test set to obtain a current entropy value corresponding to each test sample; further, the current entropy value can be obtained by the Shannon entropy calculation formula, as shown in the following formulas 1 and 2:

p̂_i = p_i / Σ_{j=1}^{k} p_j (Formula 1)

H = −Σ_{i=1}^{k} p̂_i log p̂_i (Formula 2)

In the above formulas, p̂_i represents the probability after normalization of the ith probability value among the selected top-k categories, p_i represents the probability prediction value of the model for the ith class, and i is a positive integer; H represents the entropy value, calculated from the prediction probabilities of the top-k categories, where k represents the number of selected categories. In this embodiment, the current entropy value serves as a confidence index for the current test sample: the preset confidence threshold is converted into a corresponding current entropy threshold, and the comparison result is obtained by calculating the difference or ratio between the current entropy value and the current entropy threshold. If the current entropy value is less than or equal to the current entropy threshold, the confidence of the current test sample is high and its prediction is correspondingly reliable, so the current test sample can be selected as a target sample; if the current entropy value is greater than the current entropy threshold, the current test sample is a low-confidence sample, and it may be discarded before traversing to the next test sample. Further, taking heart beat data as an example of the test sample data and setting the length of the adaptive adjustment window to T: if the entropy value of the sample data within the current T time window is less than or equal to the current entropy threshold, the sample data is treated as a high-confidence sample; if the entropy value is greater than the current entropy threshold, the sample data is treated as a low-confidence sample.
Step S322, traversing the next test sample, repeating the above calculation steps until all the test samples are traversed, and determining all the target samples in the test samples.
Specifically, all test samples are traversed, each sample is checked for membership in the high-confidence samples, and the number of high-confidence target samples among all test samples is counted. If the number of target samples is less than or equal to a preset threshold, prior-sample adaptive adjustment is not needed, and the initial classification prediction data is directly post-processed using the basic prior distribution; if the number of target samples is greater than the preset threshold, the target prior distribution feature is determined based on the two types of distribution features, as described in the above embodiments. The preset threshold may be a value preset by a worker for evaluating the number of samples.
Through the above steps S321 to S323, a traversal search is performed on all test samples in the test set to determine all high-confidence target samples, which improves the reliability of the selected samples and thus the accuracy and reliability of the deep-learning-based classification model.
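As an illustrative sketch only (not the patent's reference implementation), the top-k entropy confidence filter of formulas 1 and 2 and step S322 might look like the following, where the function names and the default k = 3 are assumptions:

```python
import numpy as np

def topk_entropy(probs, k=3):
    """Entropy of the re-normalized top-k class probabilities (formulas 1 and 2).
    Assumes the selected top-k probabilities are strictly positive."""
    top = np.sort(np.asarray(probs, dtype=float))[-k:]  # k largest predicted probabilities
    top = top / top.sum()                               # formula 1: renormalize within top-k
    return float(-(top * np.log(top)).sum())            # formula 2: entropy H

def select_target_samples(prob_matrix, thred):
    """Keep the indices of test samples whose top-k entropy does not exceed
    the entropy (confidence) threshold, i.e. the high-confidence samples."""
    return [i for i, p in enumerate(prob_matrix) if topk_entropy(p) <= thred]
```

A sharply peaked prediction yields a low entropy and is retained; a near-uniform prediction yields an entropy near log k and is discarded.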
In some embodiments, the calculating the target prior distribution feature according to the initial category distribution feature and the test category distribution feature further includes the following steps:
Step S331, calculating to obtain target entropy information according to the entropy values corresponding to all the target samples.
The entropy value of each target sample can be calculated by formula 1 and formula 2. After the target samples are determined, the entropy values corresponding to all the target samples can be collected, and a statistic indicating their entropy characteristics, such as the mean entropy value, can be calculated to obtain the target entropy information.
Step S332, determining an association relationship between the confidence threshold and the target entropy information, and respectively assigning weight values to the initial category distribution feature and the test category distribution feature based on the association relationship.
Step S333, fusion processing is carried out on the initial category distribution feature and the test category distribution feature based on the weight value, and the target prior distribution feature is obtained.
In the above steps S332 to S333, the association relationship between the confidence threshold and the target entropy information may be determined by calculating the ratio or the difference between them. Specifically, the difference between the confidence threshold and the target entropy information may be calculated; a weight value is assigned to the test category distribution feature based on the ratio of this difference to the confidence threshold, and a weight value is assigned to the initial category distribution feature based on the ratio of the target entropy information to the confidence threshold. Thus, the smaller the target entropy information, the higher the confidence of the test samples, the higher the weight coefficient assigned to the test category distribution feature, and the lower the weight coefficient assigned to the initial category distribution feature, as shown in the following formula 3:

λ_Target = ((Thred − H̄) / Thred) · Λ_q′ + (H̄ / Thred) · Λ_q (formula 3)

In the above formula, λ_Target is used to represent the target prior distribution feature; Thred is used to represent the entropy threshold; H̄ represents the average entropy value of the high-confidence target samples; Λ_q′ is used to represent the test category distribution feature; Λ_q is used to represent the initial category distribution feature. Further, rearranging formula 3 yields the following formula 4:

λ_Target = Λ_q + ((Thred − H̄) / Thred) · (Λ_q′ − Λ_q) (formula 4)
Through the above steps S331 to S333, weight values are assigned to the initial category distribution feature and the test category distribution feature based on the confidence threshold and the target entropy information of the target samples, and the two category distribution features are fused by weighting. This integrates the dynamic adjustment of the basic prior distribution by both the test set samples and the training set samples, while bringing the adjusted prior distribution feature closer to the high-confidence sample data, which further improves the accuracy of the prior distribution and thus the accuracy of the data classification.
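A minimal sketch of the weighted fusion in formula 3, assuming the two distribution features are probability vectors over the categories and the function name is illustrative:

```python
import numpy as np

def fuse_prior(lam_q, lam_q_test, thred, mean_entropy):
    """Formula 3: the test category distribution gets weight (Thred - H_bar)/Thred,
    the initial (training) category distribution gets weight H_bar/Thred."""
    w_test = (thred - mean_entropy) / thred   # grows as target entropy shrinks
    w_init = mean_entropy / thred             # shrinks as confidence rises
    return w_test * np.asarray(lam_q_test) + w_init * np.asarray(lam_q)
```

When the mean entropy of the target samples is small (high confidence), the fused prior leans toward the test category distribution, as the text above describes.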
In some embodiments, the post-processing the initial classification prediction result according to the target sample and the target prior distribution feature to obtain a target classification prediction result for the data to be classified, further includes the following steps:
Step S241, obtaining the test class distribution feature of the target sample.
Step S242, the test category distribution characteristics corresponding to all the target samples are spliced to generate a probability matrix, and a matrix to be processed is generated according to the probability matrix and the initial classification prediction result.
The probability matrix is generated by splicing the category distribution features of the high-confidence target samples among the test samples; its size is n × m, where n represents the number of target samples and m represents the number of categories per sample. The probability matrix and the initial classification prediction result, stored in matrix form, are then spliced into the matrix to be processed, as shown in the following formula 5:

L_0 = [A_0; b_0^T] (formula 5)

In the above formula, L_0 is used to represent the matrix to be processed, A_0 is used to represent the probability matrix, b_0 is used to represent the matrix corresponding to the initial classification prediction result, and b_0^T is the transposed matrix of b_0; that is, the 1 × m row b_0^T is stacked below A_0.
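The stacking of formula 5 can be sketched as follows, with the shapes assumed from the description (A0 is n × m, b0 is a length-m prediction vector):

```python
import numpy as np

def build_pending_matrix(A0, b0):
    """Formula 5: stack the transposed initial prediction b0 (as a 1 x m row)
    below the n x m high-confidence probability matrix A0."""
    return np.vstack([np.asarray(A0), np.asarray(b0).reshape(1, -1)])
```

The resulting matrix has n + 1 rows; the last row is the prediction to be post-processed.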
Step S243, obtaining preset iteration constraint conditions.
The iteration constraint condition refers to condition information limiting the duration or the number of normalization iterations on the matrix. Further, the iteration constraint condition may be preset as condition information controlling the normalization loop to iterate d times (preferably, d may default to 3); alternatively, it may be preset as a condition controlling the normalization loop to iterate for a duration t, or as a condition that the normalization result converges, and so on.
Step S244, based on the iteration constraint condition, carrying out normalized iteration processing on the matrix to be processed according to the target prior distribution characteristics to obtain a post-processing matrix, and obtaining the target classification prediction result according to the post-processing matrix.
Specifically, the matrix to be processed is subjected to normalization iterative processing until the iteration constraint condition is reached, the matrix is unified under the range of target priori distribution characteristics, a post-processing matrix is obtained, and new sample prediction probability is obtained based on a matrix block corresponding to the initial classification prediction result in the post-processing matrix, so that the target classification prediction result is obtained.
Through the above steps S241 to S244, a matrix to be processed is generated from the probability matrix and the initial classification prediction result, and normalized iterative processing is performed on it, so that the plurality of high-confidence samples and the initial classification prediction result together form a distribution matrix that can be matched to the prior distribution; the matrix to be processed is thus unified under the prior distribution feature range, effectively improving the accuracy and reliability of the classification model.
In some embodiments, the performing normalized iterative processing on the matrix to be processed according to the target prior distribution feature based on the iteration constraint condition to obtain a post-processing matrix further includes the following steps:
Under the current loop iteration, normalization post-processing is performed on the columns of the matrix to be processed to obtain a current column normalization matrix, and normalization post-processing is then performed on the rows of the current column normalization matrix according to the target prior distribution feature to obtain a current row normalization matrix.
Then the next loop iteration is entered, and the above normalization steps are repeated on the current row normalization matrix until the iteration constraint condition is reached, obtaining the post-processing matrix.
Specifically, under the current loop iteration, column normalization is performed on the matrix to be processed, as shown in the following formula 6 and formula 7:

Λ_s = D((L_{d−1}^a)^T e) (formula 6)

S_d = L_{d−1}^a Λ_s^{−1} (formula 7)

In the above formulas, a represents an exponent used to control the influence of b_0 and defaults to 1 in this embodiment, so L_{d−1}^a is simply L_{d−1}; e represents a unit column vector; D(·) is used to represent the conversion of a column vector into a diagonal matrix, and Λ_s represents the diagonal matrix of the current column-sum vector after diagonalization; S_d is used to represent the current column normalization matrix, in which each column sums to 1. Then, after the current column normalization matrix is obtained through the above formulas, row normalization is performed on it according to the target prior distribution feature, as shown in the following formula 8 and formula 9:

Λ_L = D(S_d Λ_Target e) (formula 8)

L_d = Λ_L^{−1} S_d Λ_Target (formula 9)

In the above formulas, Λ_Target = D(λ_Target) is the diagonal matrix of the target prior distribution feature; Λ_L represents the diagonal matrix of the current row-sum vector after diagonalization; L_d is used to represent the current row normalization matrix, in which each category column is weighted by the target prior and each row sums to 1. If the current loop is not finished, the next loop iteration is entered, and column normalization and row normalization continue to be performed in turn on the current row normalization matrix through formulas 6 to 9 until the iteration constraint condition is reached, obtaining the post-processing matrix, as shown in the following formula 10:

L_d = [A_d; b_d^T] (formula 10)
In the above formula, L_d is used to represent the post-processing matrix, A_d is used to represent the alternately normalized probability matrix, and b_d is used to represent the alternately normalized prediction result, that is, the new sample prediction probability obtained according to the target prior distribution feature; the category with the maximum probability is the target classification prediction result.
Through the embodiment, the matrix to be processed is subjected to row-column alternate normalization processing, so that each element of the matrix is unified to the range corresponding to the target prior distribution characteristics, and the accuracy of the classification model based on deep learning is further improved.
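Under the reconstruction of formulas 6 to 10 above — an interpretation of the alternating normalization, not the patent's verbatim algorithm — the loop resembles a Sinkhorn-style iteration and might be sketched as:

```python
import numpy as np

def sinkhorn_post_process(L0, lam_target, d=3):
    """Alternately normalize columns (formulas 6-7) and rescale by the target
    prior before row normalization (formulas 8-9), for d iterations; the last
    row of the result is the new prediction b_d (formula 10)."""
    L = np.asarray(L0, dtype=float)
    lam = np.asarray(lam_target, dtype=float)
    for _ in range(d):
        S = L / L.sum(axis=0, keepdims=True)   # formulas 6-7: each column sums to 1
        R = S * lam                            # weight each category column by the prior
        L = R / R.sum(axis=1, keepdims=True)   # formulas 8-9: each row sums to 1
    b_d = L[-1]                                # formula 10: last row is the new prediction
    return b_d, int(np.argmax(b_d))
```

For example, a borderline prediction near the decision boundary is pulled toward the class favored by a strong target prior, which is exactly the post-processing effect the embodiment describes.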
In some embodiments, the method for post-processing classified data further includes the following steps:
Step S201, determining a category label of the training data.
Wherein the category label refers to the known category information corresponding to the training data. Further, after a certain amount of basic data with known category information is collected as the training data, the training data may be preprocessed and the data set divided. Taking the training data as heart beat data to be trained as an example, the preprocessing may proceed as follows: different heart beat data to be preprocessed are converted to the same sampling frequency by a cubic spline resampling algorithm, for example unified to 250 Hz, obtaining heart beat data to be trained in a uniform data format; the heart beat data in the uniform format are filtered, for example with a 0.05 Hz–100 Hz Butterworth band-pass filter; R-wave detection is performed on the filtered heart beat data using an adaptive double-threshold QRS detection algorithm such as the Pan-Tompkins algorithm, and the RR intervals before and after each detected R-wave position are calculated. As shown in fig. 4, taking the R-wave position as the center, the heart beat data 0.25 s forward and 0.45 s backward may be taken as single heart beat sample data to realize data segmentation, obtaining a plurality of preprocessed heart beat sample data. After preprocessing, the heart beat data are divided, with each person as a unit, into a training set, a verification set and a test set according to a certain proportion; the division ratio may be set to training set : verification set : test set = 8 : 1 : 1. The normalization parameters of the heart beat data are calculated from the training set and applied to all data sets, as shown in the following formula 11:

x′ = (x − μ) / σ (formula 11)
In the formula, x is the heart beat data, x′ is the normalized heart beat data to be trained, μ is the mean value, and σ is the standard deviation. Preprocessing the heart beat data and dividing the data sets through the above steps can effectively improve the efficiency and accuracy of subsequent model training.
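The z-score normalization of formula 11 — with the parameters computed on the training set only, as the text specifies — can be sketched as (function name assumed):

```python
import numpy as np

def zscore_normalize(train_beats, beats):
    """Formula 11: compute mu and sigma on the training set only, then apply
    x' = (x - mu) / sigma to any data set (train, verification, or test)."""
    mu = train_beats.mean()
    sigma = train_beats.std()
    return (beats - mu) / sigma
```

Applying the same training-set parameters to the verification and test sets avoids leaking test statistics into the model.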
Step S202, inputting the training data into an initial classification model, and outputting an initial training prediction result; and calculating a loss function result according to the class label and the initial training prediction result, and reversely transmitting the gradient of the loss function result to the initial classification model to generate the target classification model with complete training.
The training set in the training data is input into the initial classification model for training, and the initial training prediction result corresponding to the training set is output; the loss function result is calculated from the initial training prediction result and the category labels based on a cross-entropy loss function, and the initial classification model is iteratively trained based on the loss function result until the number of training iterations is reached or the classification model converges, obtaining the target classification model. Meanwhile, the learning rate may be set to 0.001, and learning-rate decay may be realized by setting an initial value and a decay schedule for the learning rate; after training reaches a certain stage, using a small learning rate to refine the accuracy allows the training model to converge faster, effectively optimizing the algorithm for training the classification model. Further, after the model is trained with the divided training set, it may be verified with the verification set to adjust the parameters, obtaining an optimized target classification model, and the performance of the model may be evaluated with the test set.
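As a toy stand-in for the deep classification model — a single linear softmax layer, with all names and hyperparameters assumed — the described loop of cross-entropy loss, gradient back-propagation, and learning-rate decay might look like:

```python
import numpy as np

def train_softmax_classifier(X, y, n_classes, lr=0.001, epochs=200, decay=0.99):
    """Minimal sketch: forward pass, cross-entropy gradient, weight update,
    and exponential learning-rate decay per epoch."""
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.01, size=(X.shape[1], n_classes))
    onehot = np.eye(n_classes)[y]
    for _ in range(epochs):
        logits = X @ W
        logits -= logits.max(axis=1, keepdims=True)        # numerical stability
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)                  # softmax probabilities
        grad = X.T @ (p - onehot) / len(X)                 # gradient of mean cross-entropy
        W -= lr * grad                                     # back-propagated update
        lr *= decay                                        # learning-rate decay
    return W
```

A real implementation would use a deep network and a framework optimizer, but the loss/gradient/decay structure is the same.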
Through the steps S201 to S202, the loss function result is obtained by calculating the initial training prediction result output by the initial classification model, and the initial classification model is iteratively trained based on the loss function result to obtain the optimized target classification model, so that the accuracy of the output data of the target classification model is improved, and the accuracy of the data classification is further improved.
It should be noted that the steps illustrated in the above-described flow or flow diagrams of the figures may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flow diagrams, in some cases, the steps illustrated or described may be performed in an order other than that illustrated herein.
The embodiment also provides a post-processing device for classified data, which is used for implementing the above embodiment and the preferred embodiment, and is not described in detail. As used below, the terms "module," "unit," "sub-unit," and the like may be a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 5 is a block diagram of a post-processing apparatus for classifying data according to an embodiment of the present application, as shown in fig. 5, the apparatus including: an acquisition module 52, an initial prediction module 54, a prior distribution module 56, and a post-processing module 58;
the acquiring module 52 is configured to acquire data to be classified; the initial prediction module 54 is configured to input the data to be classified into a pre-trained target classification model to obtain an initial classification prediction result; the target classification model is generated according to training data through iterative training; the prior distribution module 56 is configured to determine, using the target classification model, a target sample in the training data with a confidence coefficient greater than a preset confidence coefficient threshold, and obtain a target prior distribution feature corresponding to the training data based on the target sample; the post-processing module 58 is configured to post-process the initial classification prediction result according to the target sample and the target prior distribution feature, so as to obtain a target classification prediction result for the data to be classified.
Through the above embodiment, after the initial prediction module 54 obtains the initial classification prediction result of the data to be classified, the prior distribution module 56 predicts the training data by using the target classification model to obtain the target prior distribution feature and the determined high-confidence samples in the training data, and the post-processing module 58 post-processes the initial classification prediction result to finally obtain the target classification prediction result of the data to be classified. This brings the data distribution closer to the high-confidence sample data, effectively avoids the tendency of classification models to err when classifying data close to the decision boundary, solves the problem of low accuracy in deep-learning-based classification models, and realizes an accurate and efficient post-processing device for classified data.
In some of these embodiments, the training data includes a training set and a test set; the prior distribution module 56 is further configured to obtain a training set prediction result corresponding to the training set and a testing set prediction result corresponding to the testing set by using the target classification model; the prior distribution module 56 determines initial category distribution characteristics of the training set based on the training set prediction results, determines the target sample based on the testing set prediction results, and tests category distribution characteristics of the target sample; the prior distribution module 56 calculates the target prior distribution feature from the initial category distribution feature and the test category distribution feature.
In some of these embodiments, the prior distribution module 56 is also used to traverse all test samples in the test set; when traversing to the current test sample, the prior distribution module 56 calculates a current entropy value according to a sample prediction result of the current test sample, and determines whether the current test sample is the target sample according to a comparison result between the current entropy value and the confidence threshold; the prior distribution module 56 traverses the next test sample, repeats the above steps until all the test samples have been traversed, and determines all the target samples in the test samples.
In some embodiments, the prior distribution module 56 is further configured to calculate target entropy value information according to entropy values corresponding to all the target samples; the prior distribution module 56 obtains the confidence threshold, determines an association relationship between the confidence threshold and the target entropy information, and respectively assigns weight values to the initial category distribution feature and the test category distribution feature based on the association relationship; the prior distribution module 56 performs fusion processing on the initial category distribution feature and the test category distribution feature based on the weight value to obtain the target prior distribution feature.
In some embodiments, the post-processing module 58 is further configured to obtain a test class distribution characteristic of the target sample; the post-processing module 58 performs a stitching process on the test class distribution features corresponding to all the target samples to generate a probability matrix, and generates a matrix to be processed according to the probability matrix and the initial classification prediction result; the post-processing module 58 obtains preset iteration constraints; the post-processing module 58 performs normalized iterative processing on the matrix to be processed according to the target prior distribution feature based on the iterative constraint condition to obtain a post-processing matrix, and obtains the target classification prediction result according to the post-processing matrix.
In some embodiments, the post-processing module 58 is further configured to perform normalization post-processing on the columns of the matrix to be processed under the current loop iteration to obtain a current column normalization matrix, and to perform normalization post-processing on the rows of the current column normalization matrix according to the target prior distribution feature to obtain a current row normalization matrix; the post-processing module 58 proceeds to the next loop iteration and repeats the above steps to perform normalization iteration processing on the current row normalization matrix until the iteration constraint condition is reached and the post-processing matrix is obtained.
In some embodiments, the post-processing device of the classification data further includes a training module; the training module is used for determining the category label of the training data; the training module inputs the training data to an initial classification model and outputs an initial training prediction result; and calculating a loss function result according to the class label and the initial training prediction result, and reversely transmitting the gradient of the loss function result to the initial classification model to generate the target classification model with complete training.
The above-described respective modules may be functional modules or program modules, and may be implemented by software or hardware. For modules implemented in hardware, the various modules described above may be located in the same processor; or the above modules may be located in different processors in any combination.
The embodiment also provides a post-processing system for classified data, which comprises: a terminal device and a server device; the terminal equipment is used for acquiring data to be classified; the server device is configured to receive the data to be classified, and perform the steps in any of the method embodiments described above. Further, data transmission can be performed between the server device and the terminal device through a transmission device; in one embodiment, the transmission device may include a network adapter (Network Interface Controller, simply referred to as NIC) that may connect to other network devices through the base station to communicate with the internet; in another embodiment, the transmission device may be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
Through the above embodiment, after the server device obtains the initial classification prediction result of the data to be classified, it post-processes that result using the target prior distribution feature obtained by predicting the training data with the target classification model and the determined high-confidence samples in the training data, finally obtaining the target classification prediction result of the data to be classified. This brings the data distribution closer to the high-confidence sample data, effectively avoids the tendency of classification models to err when classifying data close to the decision boundary, solves the problem of low accuracy in deep-learning-based classification models, and realizes an accurate and efficient post-processing system for classified data.
In some of these embodiments, a computer device is provided, which may be a server; fig. 6 is a block diagram of the interior of a computer device according to an embodiment of the present application. As shown in fig. 6, the computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing the target classification prediction result. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by a processor, implements the above-described post-processing method of classification data.
It will be appreciated by those skilled in the art that the structure shown in FIG. 6 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
The present embodiment also provides an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, where the transmission device is connected to the processor, and the input/output device is connected to the processor.
Alternatively, in the present embodiment, the above-described processor may be configured to execute the following steps by a computer program:
s1, obtaining data to be classified.
S2, inputting the data to be classified into a pre-trained target classification model to obtain an initial classification prediction result; the target classification model is generated according to training data through iterative training.
S3, determining a target sample with the confidence degree larger than a preset confidence degree threshold value in the training data by utilizing the target classification model, and acquiring target prior distribution characteristics corresponding to the training data based on the target sample.
And S4, carrying out post-processing on the initial classification prediction result according to the target sample and the target prior distribution characteristics to obtain a target classification prediction result aiming at the data to be classified.
It should be noted that, specific examples in this embodiment may refer to examples described in the foregoing embodiments and alternative implementations, and this embodiment is not repeated herein.
In addition, in combination with the post-processing method of the classification data in the above embodiment, the embodiment of the present application may be implemented by providing a storage medium. The storage medium has a computer program stored thereon; the computer program, when executed by a processor, implements any of the methods of post-processing classification data of the above embodiments.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by way of a computer program stored on a non-transitory computer-readable storage medium, which, when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
It should be understood by those skilled in the art that the technical features of the above-described embodiments may be combined in any manner, and for brevity, all of the possible combinations of the technical features of the above-described embodiments are not described, however, they should be considered as being within the scope of the description provided herein, as long as there is no contradiction between the combinations of the technical features.
The above examples illustrate only a few embodiments of the application, which are described in detail and are not to be construed as limiting the scope of the application. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the application, which are all within the scope of the application. Accordingly, the scope of protection of the present application is to be determined by the appended claims.

Claims (10)

1. A method of post-processing classified data, the method comprising:
acquiring data to be classified;
inputting the data to be classified into a pre-trained target classification model to obtain an initial classification prediction result; the target classification model is generated according to training data through iterative training;
Determining a target sample with the confidence degree larger than a preset confidence degree threshold value in the training data by using the target classification model, and acquiring a target prior distribution characteristic corresponding to the training data based on the target sample;
and carrying out post-processing on the initial classification prediction result according to the target sample and the target prior distribution characteristics to obtain a target classification prediction result aiming at the data to be classified.
2. The post-processing method according to claim 1, wherein the training data comprises a training set and a test set, and acquiring the target prior distribution feature corresponding to the training data based on the target sample comprises:
acquiring a training-set prediction result for the training set and a test-set prediction result for the test set by using the target classification model;
determining an initial category distribution feature of the training set according to the training-set prediction result, and determining the target sample and a test category distribution feature of the target sample according to the test-set prediction result;
and calculating the target prior distribution feature according to the initial category distribution feature and the test category distribution feature.
3. The post-processing method according to claim 2, wherein determining the target sample according to the test-set prediction result comprises:
traversing all test samples in the test set; for the current test sample, calculating a current entropy value from its sample prediction result, and determining whether the current test sample is a target sample by comparing the current entropy value with the confidence threshold;
and proceeding to the next test sample and repeating the calculation until all test samples have been traversed, thereby determining all target samples among the test samples.
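The traversal in claim 3 amounts to an entropy filter over the test-set predictions. A sketch, under the assumption that "confidence greater than the threshold" corresponds to prediction entropy falling below a threshold (the claim states only that the entropy is compared against the confidence threshold):

```python
import numpy as np

def select_target_samples(test_probs, entropy_threshold):
    """Return indices of test samples with low prediction entropy.

    test_probs: (n_samples, n_classes) array of per-sample class
    probabilities. Low entropy indicates a confident prediction, so
    samples below the threshold are kept as target samples.
    """
    eps = 1e-12  # avoid log(0)
    entropies = -np.sum(test_probs * np.log(test_probs + eps), axis=1)
    return np.flatnonzero(entropies < entropy_threshold), entropies
```

A peaked prediction such as [0.98, 0.01, 0.01] has entropy near 0 and is selected, while a near-uniform one (entropy near ln 3 ≈ 1.1) is rejected.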
4. The post-processing method according to claim 2, wherein calculating the target prior distribution feature according to the initial category distribution feature and the test category distribution feature comprises:
calculating target entropy information from the entropy values of all target samples;
determining an association between the confidence threshold and the target entropy information, and assigning weight values to the initial category distribution feature and the test category distribution feature based on that association;
and fusing the initial category distribution feature and the test category distribution feature according to the weight values to obtain the target prior distribution feature.
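The weighted fusion in claim 4 could look like the following sketch. The weight rule here is a hypothetical choice — the lower the mean entropy of the target samples relative to the threshold, the more the fused prior trusts the test-side distribution; the claim itself only states that the weights follow from the relation between the confidence threshold and the target entropy information.

```python
import numpy as np

def fuse_prior(train_dist, test_dist, target_entropies, entropy_threshold):
    """Fuse training-set and target-sample class distributions.

    Assumed weight rule: w_test grows as the mean target entropy
    drops below the threshold, i.e. confident target samples shift
    the prior toward the test-side distribution.
    """
    mean_entropy = float(np.mean(target_entropies))
    w_test = max(0.0, 1.0 - mean_entropy / entropy_threshold)
    w_train = 1.0 - w_test
    prior = w_train * np.asarray(train_dist) + w_test * np.asarray(test_dist)
    return prior / prior.sum()  # renormalize to a valid distribution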
5. The post-processing method according to claim 1, wherein post-processing the initial classification prediction result according to the target sample and the target prior distribution feature to obtain the target classification prediction result for the data to be classified comprises:
acquiring the test category distribution feature of the target sample;
concatenating the test category distribution features of all target samples to generate a probability matrix, and generating a matrix to be processed from the probability matrix and the initial classification prediction result;
acquiring a preset iteration constraint condition;
and performing normalized iterative processing on the matrix to be processed according to the target prior distribution feature under the iteration constraint condition to obtain a post-processing matrix, and obtaining the target classification prediction result from the post-processing matrix.
6. The post-processing method according to claim 5, wherein performing normalized iterative processing on the matrix to be processed according to the target prior distribution feature under the iteration constraint condition to obtain the post-processing matrix comprises:
in the current loop iteration, performing normalization post-processing on the columns of the matrix to be processed to obtain a current column-normalized matrix, and performing normalization post-processing on the rows of the current column-normalized matrix according to the target prior distribution feature to obtain a current row-normalized matrix;
and entering the next loop iteration, repeating the normalization steps to alternately normalize the current row-normalized matrix, until the iteration constraint condition is met, thereby obtaining the post-processing matrix.
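The loop in claims 5–6 resembles Sinkhorn-style alternating normalization: columns are normalized so each sample's class probabilities sum to one, and rows are rescaled so each class's total mass matches the target prior. The exact rescaling rule and the stopping criterion below are assumptions, since the claim describes the constraint condition only as "preset".

```python
import numpy as np

def alternate_normalize(matrix, target_prior, max_iters=100, tol=1e-6):
    """Alternately normalize a classes-by-samples probability matrix.

    Column step: each sample's class probabilities sum to 1.
    Row step: each class's total mass is rescaled toward the target
    prior times the number of samples. Stops when the matrix changes
    by less than tol or the iteration cap is reached.
    """
    m = np.asarray(matrix, dtype=float).copy()
    prior = np.asarray(target_prior, dtype=float)
    n_samples = m.shape[1]
    for _ in range(max_iters):
        prev = m.copy()
        m = m / m.sum(axis=0, keepdims=True)  # column normalization
        m = m * (prior[:, None] * n_samples / m.sum(axis=1, keepdims=True))  # row rescale
        if np.abs(m - prev).max() < tol:
            break
    return m
```

At convergence the matrix approximately satisfies both constraints: per-sample columns sum to one and per-class row sums follow the prior.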
7. The post-processing method according to any one of claims 1 to 6, wherein the method further comprises:
determining category labels of the training data;
inputting the training data into an initial classification model and outputting an initial training prediction result; calculating a loss function result from the category labels and the initial training prediction result, and back-propagating the gradient of the loss function result through the initial classification model to generate the fully trained target classification model.
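The train/loss/backpropagation loop of claim 7, with the model taken — purely as an assumption, since the claim leaves it unspecified — to be a linear softmax classifier under cross-entropy:

```python
import numpy as np

def train_step(weights, x, labels, lr=0.1):
    """One forward/backward step for a linear softmax classifier.

    Forward pass, cross-entropy loss against the class labels, and a
    gradient-descent update; a minimal stand-in for the claimed loop.
    """
    logits = x @ weights
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits)
    probs = probs / probs.sum(axis=1, keepdims=True)
    n, n_classes = x.shape[0], weights.shape[1]
    onehot = np.eye(n_classes)[labels]
    loss = -np.mean(np.sum(onehot * np.log(probs + 1e-12), axis=1))
    grad = x.T @ (probs - onehot) / n  # backpropagated gradient of the loss
    return weights - lr * grad, loss
```

Repeating the step on separable toy data drives the loss down from its initial value of ln 2 per sample.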
8. A post-processing apparatus for classified data, the apparatus comprising an acquisition module, an initial prediction module, a prior distribution module and a post-processing module, wherein:
the acquisition module is configured to acquire data to be classified;
the initial prediction module is configured to input the data to be classified into a pre-trained target classification model to obtain an initial classification prediction result, the target classification model being generated by iterative training on training data;
the prior distribution module is configured to determine, using the target classification model, a target sample in the training data whose confidence is greater than a preset confidence threshold, and to acquire a target prior distribution feature corresponding to the training data based on the target sample;
and the post-processing module is configured to post-process the initial classification prediction result according to the target sample and the target prior distribution feature to obtain a target classification prediction result for the data to be classified.
9. A post-processing system for classified data, the system comprising a terminal device and a server device, wherein:
the terminal device is configured to acquire data to be classified;
and the server device is configured to receive the data to be classified and perform the post-processing method for classified data according to any one of claims 1 to 7.
10. An electronic device comprising a memory and a processor, wherein the memory stores a computer program and the processor is configured to run the computer program to perform the post-processing method for classified data according to any one of claims 1 to 7.
CN202310827526.6A 2023-07-06 2023-07-06 Post-processing method, device and system for classified data and electronic device Pending CN116842447A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310827526.6A CN116842447A (en) 2023-07-06 2023-07-06 Post-processing method, device and system for classified data and electronic device

Publications (1)

Publication Number Publication Date
CN116842447A true CN116842447A (en) 2023-10-03

Family

ID=88159694

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310827526.6A Pending CN116842447A (en) 2023-07-06 2023-07-06 Post-processing method, device and system for classified data and electronic device

Country Status (1)

Country Link
CN (1) CN116842447A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117725231A (en) * 2024-02-08 2024-03-19 中国电子科技集团公司第十五研究所 Content generation method and system based on semantic evidence prompt and confidence
CN117725231B (en) * 2024-02-08 2024-04-23 中国电子科技集团公司第十五研究所 Content generation method and system based on semantic evidence prompt and confidence

Similar Documents

Publication Publication Date Title
WO2021204272A1 (en) Privacy protection-based target service model determination
CN111507521B (en) Method and device for predicting power load of transformer area
CN108229667A (en) Trimming based on artificial neural network classification
US20210081763A1 (en) Electronic device and method for controlling the electronic device thereof
US20220383627A1 (en) Automatic modeling method and device for object detection model
CN112149797B (en) Neural network structure optimization method and device and electronic equipment
CN110555526B (en) Neural network model training method, image recognition method and device
JP2023523029A (en) Image recognition model generation method, apparatus, computer equipment and storage medium
CN113570029A (en) Method for obtaining neural network model, image processing method and device
CN116842447A (en) Post-processing method, device and system for classified data and electronic device
US10997528B2 (en) Unsupervised model evaluation method, apparatus, server, and computer-readable storage medium
CN114698395A (en) Quantification method and device of neural network model, and data processing method and device
CN114078195A (en) Training method of classification model, search method and device of hyper-parameters
GB2599137A (en) Method and apparatus for neural architecture search
CN115496144A (en) Power distribution network operation scene determining method and device, computer equipment and storage medium
JP7214863B2 (en) Computer architecture for artificial image generation
CN116188878A (en) Image classification method, device and storage medium based on neural network structure fine adjustment
CN113240194A (en) Energy storage battery capacity prediction method, server and computer readable storage medium
CN113065593A (en) Model training method and device, computer equipment and storage medium
CN112906883A (en) Hybrid precision quantization strategy determination method and system for deep neural network
US20200279148A1 (en) Material structure analysis method and material structure analyzer
CN116258923A (en) Image recognition model training method, device, computer equipment and storage medium
Joshi et al. Area efficient VLSI ASIC implementation of multilayer perceptrons
CN116245142A (en) System and method for hybrid precision quantization of deep neural networks
CN113822371A (en) Training packet model, and method and device for grouping time sequence data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination