CN115577245B - Data distribution balancing method and system for RUL prediction of rotating assembly - Google Patents
Data distribution balancing method and system for RUL prediction of rotating assembly
- Publication number: CN115577245B (application CN202211547913.6A)
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption by Google Patents and is not a legal conclusion)
Classifications
- Y02P90/30 — Computing systems specially adapted for manufacturing (Y02P: climate change mitigation technologies in the production or processing of goods)
Abstract
The invention discloses a data distribution balancing method and system for RUL prediction of a rotating assembly. The method comprises: performing feature transformation on the original signals of the source domain and target domain datasets; applying an Autoencoder feature extraction network for feature dimension reduction; introducing a domain classifier; retraining the feature extraction network in the RUL prediction model according to the commonality, self-association and correspondence requirements on the target features, combining the labeled data of the source domain with the unlabeled data of the target domain; and achieving data distribution balance between the target domain and the source domain through iterative updating of the domain classifier and the feature extractor. By adding self-association and correspondence constraints to the training process, the invention improves the extraction of common features, strengthens the model's ability to balance the sample distribution of data from complex working conditions, and enables transfer of the model across different working conditions.
Description
Technical Field
The invention relates to the technical field of information, in particular to a data distribution balancing method and system for RUL prediction of a rotating assembly.
Background
The rotating component is an important part of high-end precision electronic manufacturing equipment and is critical to its normal operation; its working condition has a great influence on equipment performance, and once fatigue failure occurs it easily causes secondary damage to associated components, bringing the whole system to a standstill. It is therefore of great practical importance to predict the remaining useful life of a rotating multi-component system.
Currently, most deep-learning remaining-useful-life (RUL) prediction models rely on a large number of data samples and assume that the training set and the test set follow the same data distribution. In actual operating scenarios, working conditions change frequently with load, rotating speed, temperature and other environmental factors, so the data distributions of the training data and the prediction data are unbalanced. Labeled data are also difficult to acquire in such scenarios and sample sizes are small, so a sufficient amount of labeled data for retraining is unavailable; directly applying a feature extractor and RUL predictor trained on a source domain to a target domain prediction task therefore yields low RUL prediction accuracy. Strengthening the model's ability to balance the sample distribution of data from complex working conditions, and thereby improving RUL prediction accuracy under complex working conditions, is an urgent problem to be solved.
Disclosure of Invention
In order to solve the technical problems, the invention provides a data distribution balancing method and system for RUL prediction of a rotating assembly.
The first aspect of the present invention provides a data distribution balancing method for RUL prediction of a rotating assembly, comprising:
dividing the original signal of the rotating component into a source domain dataset and a target domain dataset, and performing feature transformation through the short-time Fourier transform;
introducing a domain classifier network into the Autoencoder feature extraction network, and constructing an adversarial transfer model with balanced data distribution in combination with a recurrent neural network based on attention mechanism intervention;
performing feature dimension reduction through the adversarial transfer model, carrying out adversarial transfer training on the domain classifier and the feature extractor, and retraining the feature extraction network using the labeled data of the source domain dataset and the unlabeled data of the target domain dataset;
achieving data distribution balance between the target domain and the source domain through iterative updating of the domain classifier and the feature extractor, and predicting the remaining useful life of the rotating assembly from the data after distribution balancing.
In this scheme, the adversarial transfer model is composed of a feature extractor, a decoder, a domain classifier and an RUL predictor. Specifically:
the feature extractor F_e is the encoder part of the Autoencoder feature extraction network, comprising 3 convolution layers and 2 fully connected layers; it is the training object of transfer learning and extracts the common features of the source domain and the target domain;
the decoder D_e is the decoder part of the Autoencoder feature extraction network, comprising 3 deconvolution layers and 2 fully connected layers; it reconstructs the features extracted by the encoder into time-frequency features and, by comparing them with the original input features, preserves the self-association of the features during transfer;
the domain classifier D_c is a classification network comprising 2 fully connected layers and 1 classification layer; it distinguishes whether the extracted features belong to the source domain or the target domain and is trained adversarially against the feature extractor F_e, the two being updated alternately and iteratively; the adversarial training reduces the data distribution difference between the source domain and the target domain;
the RUL predictor R_p is an Attention-GRU-based RUL prediction network comprising an SVM classification layer, 2 A-GRU layers and 2 fully connected layers; it is the backbone functional network for remaining useful life prediction, and its role is to preserve the distinguishability of the common features extracted by the model.
In this scheme, the adversarial transfer training process is divided, according to the data source, into target domain training and source domain training. Specifically:
the target domain training involves three main modules, the feature extractor, the decoder and the domain classifier; the feature extractor adjusts its network parameters according to the errors of the decoder and the domain classifier, and when the feature extractor and the decoder finish updating, the domain classifier, the other party of the adversarial training, is updated in turn;
the source domain training comprises two steps, health data training and degradation data training; during source domain training the feature extractor receives errors from the RUL predictor and the domain classifier respectively to adjust its network parameters, while the RUL predictor adjusts its own network parameters;
after the feature extractor and the RUL predictor finish optimization, the domain classifier continues to be updated and adjusted, and through iterative training the three networks gradually reach equilibrium.
In this scheme, the target domain training specifically includes:
the target domain training involves three main modules: the feature extractor, the decoder and the domain classifier; in target domain training the data of the target domain are projected into a new feature space and aligned there with the data of the source domain, and the common features of the source domain and the target domain are learned through this data alignment;
the feature extractor adjusts its network parameters according to the errors of the decoder and the domain classifier, and the decoder adjusts itself using its own error; when the feature extractor and the decoder finish updating, the domain classifier, the other party of the adversarial training, is updated in turn, and it improves its classification accuracy through the domain-labeled sample data and the number of training iterations;
the early-stage operating data of the rotating assembly system and the health data of the source domain are selected for data alignment, and the loss function for target domain training is:
L_t(θ_Fe, θ_De, θ_Dc) = L_de(X_t; θ_Fe, θ_De) + L_dc(X_s, X_t, d; θ_Fe, θ_Dc)

where L_t is the loss function for target domain training; θ_Fe, θ_De and θ_Dc are the network parameters of the feature extractor F_e, the decoder D_e and the domain classifier D_c respectively; L_de is the loss function of the decoder and L_dc is the loss function of the domain classifier; the domain label d is defined with the source domain labeled 0 and the target domain labeled 1; the source domain labeled dataset is X_s and the target domain unlabeled dataset is X_t.
In this scheme, the objective of the domain classifier is to minimize the classification error, and it is trained alternately with the feature extractor. Specifically:
in the training of the domain classifier, MSE is adopted as the loss function and the training process is optimized with the Adam algorithm; a gradient reversal layer is introduced in back-propagation, and the network parameters are optimized as follows:
θ_Fe ← θ_Fe − μ · Adam_Fe(∂(L_de + L_dc^R)/∂θ_Fe)
θ_De ← θ_De − μ · Adam_De(∂L_de/∂θ_De)
θ_Dc ← θ_Dc − μ · Adam_Dc(∂L_dc/∂θ_Dc)

where μ is the learning rate of the gradient descent algorithm; Adam_Fe, Adam_De and Adam_Dc are the Adam optimizers of the feature extractor F_e, the decoder D_e and the domain classifier D_c respectively; L_de is the loss function of the decoder, L_dc is the loss function of the domain classifier, and L_dc^R is L_dc after conversion by the gradient reversal layer.
In this scheme, the source domain training specifically includes:
the source domain training involves three main modules, the feature extractor, the RUL predictor and the domain classifier; its purpose is to improve the RUL prediction ability of the common features, and it is divided into health data training and degradation data training;
during source domain training the feature extractor receives errors from the RUL predictor and the domain classifier respectively to adjust its network parameters, while the RUL predictor also adjusts its own network parameters, searching for the mapping relation between the new common features and the RUL labels;
after the feature extractor and the RUL predictor finish optimization, the domain classifier continues to be updated and adjusted, and through iterative training the three networks gradually reach equilibrium; the overall loss function of source domain training and the optimal parameter values are:
L_s(θ_Fe, θ_Rp, θ_Dc) = L_rp(X_s^h ∪ X_s^d; θ_Fe, θ_Rp) + L_dc^R(X_s, X_t, d; θ_Fe, θ_Dc)

(θ_Fe*, θ_Rp*) = argmin over (θ_Fe, θ_Rp) of L_s(θ_Fe, θ_Rp, θ_Dc*)
θ_Dc* = argmax over θ_Dc of L_s(θ_Fe*, θ_Rp*, θ_Dc)

where L_s is the loss function for source domain training; θ_Fe, θ_Rp and θ_Dc are the network parameters of the feature extractor F_e, the RUL predictor R_p and the domain classifier D_c respectively; L_rp is the loss function of the RUL predictor, L_dc is the loss function of the domain classifier and L_dc^R denotes its form after the gradient reversal layer; X_s^h and X_s^d are the health dataset and the degradation dataset of the source domain labeled dataset respectively; θ_Fe*, θ_Rp* and θ_Dc* are the optimal network parameter values of the three networks.
the parameter updating and optimizing method comprises the following steps:
θ_Fe ← θ_Fe − μ · Adam_Fe(∂(L_rp + L_dc^R)/∂θ_Fe)
θ_Rp ← θ_Rp − μ · Adam_Rp(∂L_rp/∂θ_Rp)
θ_Dc ← θ_Dc − μ · Adam_Dc(∂L_dc/∂θ_Dc)

where μ is the learning rate of the gradient descent algorithm; Adam_Fe, Adam_Rp and Adam_Dc are the Adam optimizers of the feature extractor F_e, the RUL predictor R_p and the domain classifier D_c respectively; L_rp is the loss function of the RUL predictor and L_dc is the loss function of the domain classifier.
The second aspect of the present invention also provides a data distribution balancing system for RUL prediction of a rotating assembly. The system comprises a memory and a processor; the memory stores a program of the data distribution balancing method for rotating assembly RUL prediction, and when executed by the processor the program implements the following steps:
dividing the original signal of the rotating component into a source domain dataset and a target domain dataset, and performing feature transformation through the short-time Fourier transform;
introducing a domain classifier network into the Autoencoder feature extraction network, and constructing an adversarial transfer model with balanced data distribution in combination with a recurrent neural network based on attention mechanism intervention;
performing feature dimension reduction through the adversarial transfer model, carrying out adversarial transfer training on the domain classifier and the feature extractor, and retraining the feature extraction network using the labeled data of the source domain dataset and the unlabeled data of the target domain dataset;
achieving data distribution balance between the target domain and the source domain through iterative updating of the domain classifier and the feature extractor, and predicting the remaining useful life of the rotating assembly from the data after distribution balancing.
The invention discloses a data distribution balancing method and system for RUL prediction of a rotating assembly. The method comprises: performing feature transformation on the original signals of the source domain and target domain datasets; applying an Autoencoder feature extraction network for feature dimension reduction; introducing a domain classifier; retraining the feature extraction network in the RUL prediction model according to the commonality, self-association and correspondence requirements on the target features, combining the labeled data of the source domain with the unlabeled data of the target domain; and achieving data distribution balance between the target domain and the source domain through iterative updating of the domain classifier and the feature extractor. By adding self-association and correspondence constraints to the training process, the invention improves the extraction of common features, strengthens the model's ability to balance the sample distribution of data from complex working conditions, and enables transfer of the model across different working conditions.
Drawings
FIG. 1 is a flow chart illustrating a method of data distribution balancing for rotating assembly RUL prediction in accordance with the present invention;
FIG. 2 illustrates a block diagram of a data distribution balancing method of rotating assembly RUL prediction of the present invention;
FIG. 3 is a schematic diagram of a migration training process in accordance with the present invention;
FIG. 4 illustrates a block diagram of a data distribution balancing system of the rotating assembly RUL prediction of the present invention.
Detailed Description
In order that the above-recited objects, features and advantages of the present invention will be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description. It should be noted that, in the case of no conflict, the embodiments of the present application and the features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced in other ways than those described herein, and therefore the scope of the present invention is not limited to the specific embodiments disclosed below.
FIGS. 1 and 2 are a flow chart and block diagram illustrating a data distribution balancing method for rotating assembly RUL prediction according to the present invention.
As shown in fig. 1 and 2, a first aspect of the present invention provides a data distribution balancing method for RUL prediction of a rotating assembly, including:
S102, dividing the original signal of the rotating assembly into a source domain dataset and a target domain dataset, and performing feature transformation through the short-time Fourier transform;
S104, introducing a domain classifier network into the Autoencoder feature extraction network, and constructing an adversarial transfer model with balanced data distribution in combination with a recurrent neural network based on attention mechanism intervention;
S106, performing feature dimension reduction through the adversarial transfer model, carrying out adversarial transfer training on the domain classifier and the feature extractor, and retraining the feature extraction network using the labeled data of the source domain dataset and the unlabeled data of the target domain dataset;
S108, achieving data distribution balance between the target domain and the source domain through iterative updating of the domain classifier and the feature extractor, and predicting the remaining useful life of the rotating assembly from the data after distribution balancing.
It should be noted that, firstly, feature transformation is performed on the original signals (the source domain dataset and the target domain dataset) through the short-time Fourier transform (STFT), and the Autoencoder feature extraction network is then applied to further reduce the feature dimension, yielding the health index of the rotating multi-component system. The evolution of the rotating components' performance state is divided into stages by an SVM, and then A-GRU, a variant of the recurrent neural network (GRU, Gated Recurrent Unit) with Attention mechanism intervention, assigns different weights to each part of the input data, so that more critical and more important information is extracted for the remaining useful life prediction of the multiple components.
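The STFT step above can be sketched as follows. This is a minimal illustration rather than the patented implementation; the window length, hop size and test signal are assumptions chosen for demonstration.

```python
import numpy as np

def stft_features(signal, win_len=256, hop=128):
    """Magnitude short-time Fourier features of a 1-D vibration signal.

    Slices the signal into Hann-windowed frames and returns a
    (frames, win_len // 2 + 1) time-frequency matrix, the kind of
    representation fed to the Autoencoder feature-extraction network.
    """
    window = np.hanning(win_len)
    n_frames = 1 + (len(signal) - win_len) // hop
    frames = np.stack([signal[i * hop:i * hop + win_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))

# A 50 Hz sinusoid standing in for a raw rotating-component signal.
sig = np.sin(2 * np.pi * 50 * np.linspace(0, 1, 2048))
tf = stft_features(sig)  # shape (15, 129)
```

In practice each time-frequency matrix would then be passed to the encoder for dimension reduction.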
While the Autoencoder feature extraction network reduces the feature dimension, a domain classifier network (Domain Classifier) D_c is introduced, and the feature extraction network in the RUL prediction model is retrained according to the commonality, self-association and correspondence requirements on the target features, combining the labeled data of the source domain with the unlabeled data of the target domain; data distribution balance between the target domain and the source domain is achieved through iterative updating of the domain classifier and the feature extractor.
The adversarial transfer model is composed of a feature extractor, a decoder, a domain classifier and an RUL predictor. Specifically: the feature extractor F_e is the encoder part of the Autoencoder feature extraction network, comprising 3 convolution layers and 2 fully connected layers; it is the training object of transfer learning and extracts the common features of the source domain and the target domain. The decoder D_e is the decoder part of the Autoencoder feature extraction network, comprising 3 deconvolution layers and 2 fully connected layers; it reconstructs the features extracted by the encoder into time-frequency features and, by comparing them with the original input features, preserves the self-association of the features during transfer. The domain classifier D_c is a classification network comprising 2 fully connected layers and 1 classification layer; it distinguishes whether the extracted features belong to the source domain or the target domain and is trained adversarially against the feature extractor F_e, the two being updated alternately and iteratively; the adversarial training reduces the data distribution difference between the source domain and the target domain. The RUL predictor R_p is an Attention-GRU-based RUL prediction network comprising an SVM classification layer, 2 A-GRU layers and 2 fully connected layers; it is the backbone functional network for remaining useful life prediction, its purpose is to give the network a prediction ability in the target domain scenario similar to that in the source domain, and its role in the transfer process is to preserve the distinguishability of the common features extracted by the model.
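The attention weighting inside the A-GRU predictor can be illustrated with a small sketch. The softmax scoring and the toy feature matrix below are assumptions; they show only how per-time-step weights emphasise more degradation-relevant inputs before they enter the GRU layers.

```python
import numpy as np

def attention_weights(scores):
    """Softmax over per-time-step relevance scores (the 'A' in A-GRU)."""
    e = np.exp(scores - np.max(scores))  # shift for numerical stability
    return e / e.sum()

scores = np.array([0.1, 0.5, 2.0])            # illustrative relevance scores
w = attention_weights(scores)                 # weights sum to 1
features = np.array([[1.0, 0.0],
                     [0.0, 1.0],
                     [1.0, 1.0]])             # 3 time steps x 2 feature dims
context = w @ features                        # weighted input for the GRU
```

The third time step, having the highest score, dominates the weighted input, which is the intended effect of the attention mechanism.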
The feature extractor F_e is adjusted through adversarial training with the three networks, the decoder D_e, the domain classifier D_c and the RUL predictor R_p, with three aims: firstly, accurate classification of the source domain dataset, guaranteeing the distinguishability of the features and minimizing the classification error; secondly, extraction of the common features of the source domain and target domain data, maximizing the domain classification error; thirdly, guaranteeing the correspondence between the extracted features and the original data, minimizing the reconstruction error and ensuring the self-association of the features.
Self-association of the features means that the features extracted by the network can be reconstructed back into, and correspond to, the original data; since the data of the target domain are unlabeled, it is difficult to judge whether the extracted features correspond to the original data, and the self-association constraint preserves the structural information of the features and prevents the feature extractor from compressing and extracting the data indiscriminately. Distinguishability of the features means that the extracted features do not lose the ability to support classification and prediction and still retain their association with the sample labels. Commonality of the features means that the features extracted from the source domain and the target domain are free of domain-bias information and share the same marginal probability distribution.
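The three constraints can be combined into a single objective for the feature extractor, sketched below with toy numbers. The MSE losses and the trade-off weight `lam` are illustrative assumptions, not values from the patent.

```python
import numpy as np

def mse(pred, target):
    return float(np.mean((np.asarray(pred) - np.asarray(target)) ** 2))

# distinguishability -> minimise the RUL prediction error (source data)
rul_loss = mse([0.8, 0.5], [0.9, 0.4])
# self-association   -> minimise the decoder's reconstruction error
rec_loss = mse([1.0, 2.0], [1.1, 1.9])
# commonality        -> maximise the domain-classification error, i.e. a
# fully confused classifier outputs ~0.5 for true domain labels 0 and 1
dom_loss = mse([0.5, 0.5], [0.0, 1.0])

lam = 1.0  # adversarial trade-off weight (assumed)
extractor_objective = rul_loss + rec_loss - lam * dom_loss
```

The minus sign on the domain loss is what makes the extractor's objective adversarial: lowering the objective raises the classifier's error.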
Fig. 3 shows a schematic diagram of a migration training procedure in the present invention.
It should be noted that the adversarial transfer training process is divided, according to the data source, into target domain training and source domain training. Target domain training involves three main modules, the feature extractor, the decoder and the domain classifier; the feature extractor adjusts its network parameters according to the errors of the decoder and the domain classifier, and when the feature extractor and the decoder finish updating, the domain classifier, the other party of the adversarial training, is updated in turn;
the source domain training comprises two steps, health data training and degradation data training; during source domain training the feature extractor receives errors from the RUL predictor and the domain classifier respectively to adjust its network parameters, while the RUL predictor adjusts its own network parameters;
after the feature extractor and the RUL predictor finish optimization, the domain classifier continues to be updated and adjusted, and through iterative training the three networks gradually reach equilibrium.
The target domain training involves three main modules: the feature extractor, the decoder and the domain classifier; in target domain training the data of the target domain are projected into a new feature space and aligned there with the data of the source domain, and the common features of the source domain and the target domain are learned through this data alignment;
the feature extractor adjusting its network parameters according to the errors of the decoder and the domain classifier means reducing the data distribution difference between the target domain and the source domain in the new feature mapping space while maintaining the recognizability of the data; the decoder adjusting itself using its own error means adapting, through its network parameters, to the reconstruction relation between the new common features and the original data; after the feature extractor and the decoder finish updating, the domain classifier, the other party of the adversarial training, is updated in turn, and it improves its classification accuracy through the domain-labeled sample data and the number of training iterations;
in the transfer scenario the data of the target domain are entirely unlabeled; to guarantee that the features obtained by the feature extractor still correspond to the original data, the features are reconstructed through the decoder, which forms an encoder-decoder structure with the feature extractor and realizes a self-supervised constraint. If the full life cycle data of the rotating multi-component system were used to extract the common features, the missing labels would cause a mismatch with the source domain data and the phenomenon of negative transfer would occur. Since the rotating multi-component system is in a healthy state in the initial period of operation, the early-stage operating data of the rotating assembly system and the health data of the source domain are selected for data alignment, which makes the extraction of common features more accurate and accelerates the convergence of training; the loss function for target domain training is:
L_t(θ_Fe, θ_De, θ_Dc) = L_de(X_t; θ_Fe, θ_De) + L_dc(X_s, X_t, d; θ_Fe, θ_Dc)

where L_t is the loss function for target domain training; θ_Fe, θ_De and θ_Dc are the network parameters of the feature extractor F_e, the decoder D_e and the domain classifier D_c respectively; L_de is the loss function of the decoder and L_dc is the loss function of the domain classifier; the domain label d is defined with the source domain labeled 0 and the target domain labeled 1; the source domain labeled dataset is X_s and the target domain unlabeled dataset is X_t.
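A toy computation of this target-domain loss follows; the batch values and the loss names are assumptions for demonstration. The decoder's reconstruction loss and the domain classifier's MSE loss are simply summed.

```python
import numpy as np

# Domain labels as defined above: source = 0, target = 1 (toy batch).
d_true = np.array([0.0, 0.0, 1.0, 1.0])
d_pred = np.array([0.1, 0.2, 0.8, 0.7])        # domain-classifier outputs
rec_err = np.array([0.05, 0.02, 0.04, 0.03])   # per-sample decoder errors

L_dc = float(np.mean((d_pred - d_true) ** 2))  # domain-classifier loss (MSE)
L_de = float(np.mean(rec_err))                 # decoder reconstruction loss
L_t = L_de + L_dc                              # target-domain training loss
```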
It should be noted that the objective of the domain classifier is to minimize the classification error, while the feature extractor tries to confuse the domain classifier and thus maximize it; the two networks optimize the classification error in opposite directions, so normally one network is fixed while the other is trained. In the training of the domain classifier, MSE is adopted as the loss function and the training process is optimized with the Adam algorithm; a gradient reversal layer is introduced in back-propagation, and the network parameters are optimized as follows:
θ_Fe ← θ_Fe − μ · Adam_Fe(∂(L_de + L_dc^R)/∂θ_Fe)
θ_De ← θ_De − μ · Adam_De(∂L_de/∂θ_De)
θ_Dc ← θ_Dc − μ · Adam_Dc(∂L_dc/∂θ_Dc)

where μ is the learning rate of the gradient descent algorithm; Adam_Fe, Adam_De and Adam_Dc are the Adam optimizers of the feature extractor F_e, the decoder D_e and the domain classifier D_c respectively; L_de is the loss function of the decoder, L_dc is the loss function of the domain classifier, and L_dc^R is L_dc after conversion by the gradient reversal layer.
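The effect of the gradient reversal layer can be sketched as follows. For brevity this uses a plain gradient-descent step rather than an Adam update, and all parameter and gradient values are illustrative.

```python
import numpy as np

def grad_reversal(grad, lam=1.0):
    """Gradient reversal layer: identity in the forward pass; in the
    backward pass the incoming gradient is multiplied by -lam, so the
    feature extractor ascends the domain loss the classifier descends."""
    return -lam * np.asarray(grad)

mu = 0.01                              # learning rate (assumed)
g_dc = np.array([0.2, -0.1])           # gradient of the domain loss

# The domain classifier minimises its loss with the raw gradient ...
theta_dc = np.array([0.5, -0.3]) - mu * g_dc

# ... while the feature extractor receives the reversed gradient and is
# thereby pushed to maximise the domain-classification error.
theta_fe = np.array([1.0, 2.0]) - mu * grad_reversal(g_dc)
```

This trick lets one backward pass train both sides of the adversarial game without explicitly alternating the sign of the loss.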
The source domain training involves three main modules, the feature extractor, the RUL predictor and the domain classifier; its purpose is to improve the RUL prediction ability of the common features, and it is divided into health data training and degradation data training. Because labeled data are used for training, the network model does not need to distinguish the health state of the data, so the SVM classifier is skipped during source domain training and its network parameters are updated after the training of the feature extractor and the RUL predictor is completed. In the actual scenario the target domain only provides the early-stage operating data of the rotating multi-component system for transfer learning, and these early data are health data, so source domain training likewise selects the health-labeled data for data alignment in order to extract effective common features.
During source domain training the feature extractor receives errors from the RUL predictor and the domain classifier respectively to adjust its network parameters, while the RUL predictor also adjusts its own network parameters, searching for the mapping relation between the new common features and the RUL labels; after the feature extractor and the RUL predictor finish optimization, the domain classifier continues to be updated and adjusted, and through iterative training the three networks gradually reach equilibrium. The overall loss function of source domain training is:

L_s(θ_Fe, θ_Rp, θ_Dc) = L_rp(X_s^h ∪ X_s^d; θ_Fe, θ_Rp) + L_dc^R(X_s, X_t, d; θ_Fe, θ_Dc)

According to the objective functions of the network parameter optimization of the feature extractor, the RUL predictor and the domain classifier, the optimal network parameter values are obtained:

(θ_Fe*, θ_Rp*) = argmin over (θ_Fe, θ_Rp) of L_s(θ_Fe, θ_Rp, θ_Dc*)
θ_Dc* = argmax over θ_Dc of L_s(θ_Fe*, θ_Rp*, θ_Dc)

where L_s is the loss function for source domain training; θ_Fe, θ_Rp and θ_Dc are the network parameters of the feature extractor F_e, the RUL predictor R_p and the domain classifier D_c respectively; L_rp is the loss function of the RUL predictor, L_dc is the loss function of the domain classifier and L_dc^R denotes its form after the gradient reversal layer; X_s^h and X_s^d are the health dataset and the degradation dataset of the source domain labeled dataset respectively; θ_Fe*, θ_Rp* and θ_Dc* are the optimal network parameter values of the three networks.
The parameters are updated and optimized as follows:

$$\theta_F \leftarrow \theta_F - \mu\,\mathrm{Adam}_F\!\left(\frac{\partial \mathcal{L}_R}{\partial \theta_F} + \frac{\partial \hat{\mathcal{L}}_C}{\partial \theta_F}\right),\qquad \theta_R \leftarrow \theta_R - \mu\,\mathrm{Adam}_R\!\left(\frac{\partial \mathcal{L}_R}{\partial \theta_R}\right),\qquad \theta_C \leftarrow \theta_C - \mu\,\mathrm{Adam}_C\!\left(\frac{\partial \mathcal{L}_C}{\partial \theta_C}\right)$$

wherein $\mu$ is the learning rate of the gradient descent algorithm; $\mathrm{Adam}_F$, $\mathrm{Adam}_R$, $\mathrm{Adam}_C$ are the Adam optimizers of the feature extractor $F$, the RUL predictor $R$, and the domain classifier $C$, respectively; $\mathcal{L}_R$ is the loss function of the RUL predictor; and $\mathcal{L}_C$ is the loss function of the domain classifier.
FIG. 4 is a block diagram of the data distribution balancing system for rotating-assembly RUL prediction according to the present invention.
In a second aspect, the present invention also provides a data distribution balancing system for RUL prediction of a rotating assembly, the system comprising a memory and a processor, wherein the memory stores a program of the data distribution balancing method for rotating-assembly RUL prediction, and when executed by the processor, the program implements the following steps:
dividing an original signal of the rotating assembly into a source-domain dataset and a target-domain dataset, and performing feature transformation by short-time Fourier transform;
introducing a domain classifier network into an Autoencoder feature extraction network, and constructing an adversarial migration model for data distribution balancing in combination with a recurrent neural network based on attention-mechanism intervention;
performing feature dimension reduction through the adversarial migration model, performing adversarial migration training on the domain classifier and the feature extractor, and retraining the feature extraction network by combining the labeled data of the source-domain dataset with the unlabeled data of the target-domain dataset;
achieving data distribution balancing between the target domain and the source domain through iterative updating of the domain classifier and the feature extractor, and predicting the remaining useful life of the rotating assembly from the balanced data.
It should be noted that the original signals (the source-domain and target-domain datasets) first undergo feature transformation by short-time Fourier transform (STFT), and the Autoencoder feature extraction network then further reduces the dimensionality of the features to obtain the health index of the rotating multi-component system. The performance-state evolution stages of the rotating assembly are divided by an SVM, and an Attention-intervened variant of the recurrent neural network, the A-GRU (Attention Gated Recurrent Unit), assigns different weights to each part of the input data, so that more critical and more important information is extracted for remaining-useful-life prediction of the multiple components.
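To make the first step concrete, here is a minimal NumPy sketch of an STFT feature transformation. The function name `stft_features`, the window and hop sizes, and the toy signal are illustrative choices for demonstration, not the patent's implementation.

```python
import numpy as np

def stft_features(signal, frame_len=64, hop=32):
    """Magnitude spectrogram of a 1-D vibration signal via a
    Hann-windowed short-time Fourier transform."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # One-sided FFT gives frame_len // 2 + 1 frequency bins per frame
    return np.abs(np.fft.rfft(frames, axis=1))

# A toy stand-in for a rotating-component signal: 50 Hz tone plus noise
fs = 1000
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
spec = stft_features(x)
print(spec.shape)  # (30, 33): 30 frames x 33 frequency bins
```

Each row of `spec` is one time-frequency feature vector of the kind fed to the Autoencoder for dimension reduction.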
In the process of feature dimension reduction by the Autoencoder feature extraction network, a domain classifier network (Domain Classifier) is introduced, and the feature extraction network in the RUL prediction model is retrained according to the commonality, self-association, and correspondence requirements of the target features by combining the labeled data of the source domain with the unlabeled data of the target domain; data distribution balancing between the target domain and the source domain is achieved through iterative updating of the domain classifier and the feature extractor.
The adversarial migration model is composed of a feature extractor, a decoder, a domain classifier, and an RUL predictor, specifically: the feature extractor $F$ is the encoder part of the Autoencoder feature extraction network, comprising 3 convolution layers and 2 fully connected layers; it is the training object of transfer learning and is used to extract common features of the source domain and the target domain. The decoder $D$ is the decoder part of the Autoencoder feature extraction network, comprising 3 deconvolution layers and 2 fully connected layers; it reconstructs the features extracted by the encoder into time-frequency features and, by comparing them with the original input features, preserves the self-association of the features during migration. The domain classifier $C$ is a classification network comprising 2 fully connected layers and 1 classification layer; it distinguishes whether the extracted features belong to the source domain or the target domain, is trained adversarially against the feature extractor $F$ with alternate iterative updating, and serves to reduce the data distribution difference between the source domain and the target domain. The RUL predictor $R$ is an Attention-GRU-based RUL prediction network comprising an SVM classification layer, 2 A-GRU layers, and 2 fully connected layers; it is the main functional network for predicting the remaining useful life, its goal is to give the network a prediction capability in the target-domain scenario similar to that in the source domain, and its role during migration is to keep the common features extracted by the model discriminative.
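The composition of the four modules can be sketched as a single forward pass. Plain dense matrices below stand in for the conv/deconv and A-GRU stacks described above, so all shapes, weight names, and activations are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

# Stand-in dense layers for the four modules (the model described above uses
# conv/deconv and A-GRU stacks; single matrices keep the data flow visible).
W_F = rng.normal(scale=0.1, size=(32, 8))   # feature extractor F: input -> common features
W_D = rng.normal(scale=0.1, size=(8, 32))   # decoder D: features -> reconstruction
W_C = rng.normal(scale=0.1, size=(8, 1))    # domain classifier C: features -> domain score
W_R = rng.normal(scale=0.1, size=(8, 1))    # RUL predictor R: features -> RUL estimate

x = rng.normal(size=(4, 32))                # a batch of time-frequency feature vectors
z = relu(x @ W_F)                           # common features
x_rec = z @ W_D                             # self-association: reconstruction of x
d_hat = 1 / (1 + np.exp(-(z @ W_C)))        # domain probability (0 = source, 1 = target)
rul_hat = z @ W_R                           # remaining-useful-life estimate

print(z.shape, x_rec.shape, d_hat.shape, rul_hat.shape)
```

The three heads (`x_rec`, `d_hat`, `rul_hat`) all branch from the shared features `z`, which is exactly why errors from the decoder, domain classifier, and RUL predictor can all adjust the feature extractor.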
The feature extractor $F$ is adjusted through adversarial training with the three networks, the decoder $D$, the domain classifier $C$, and the RUL predictor $R$. The adjustment has three purposes: first, accurate classification of the source-domain dataset, ensuring the discriminability of the features and minimizing the classification error; second, extraction of the common features of the source-domain and target-domain data, maximizing the domain classification error; third, preserving the correspondence between the extracted features and the original data, minimizing the reconstruction error and ensuring the self-association of the features.
Self-association of the features means that the features extracted by the network can be reconstructed to correspond to the original data. Because the target-domain data has no labels, it is difficult to judge whether the extracted features correspond to the original data; the self-association constraint preserves the structural information of the features and prevents the feature extractor from compressing the data indiscriminately. Discriminability of the features means that the extracted features do not lose their capability for classification and prediction and still retain their association with the sample labels. Commonality of the features means that the features extracted from the source and target domains remove domain-bias information and share the same marginal probability distribution.
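The self-association constraint amounts to minimizing a reconstruction error. A toy linear encoder-decoder trained by plain gradient descent (a simplified stand-in for the Adam-trained Autoencoder, with all sizes and the learning rate chosen only for illustration) shows the idea:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                 # unlabeled "target domain" samples
W_enc = rng.normal(scale=0.1, size=(8, 3))    # encoder: 8 -> 3 features
W_dec = rng.normal(scale=0.1, size=(3, 8))    # decoder: 3 -> 8 reconstruction
mu = 0.05                                     # learning rate

def recon_loss(X, W_enc, W_dec):
    return np.mean((X @ W_enc @ W_dec - X) ** 2)

loss_before = recon_loss(X, W_enc, W_dec)
for _ in range(200):
    Z = X @ W_enc                      # extracted features
    G = 2 * (Z @ W_dec - X) / X.size   # d(MSE)/d(reconstruction)
    grad_dec = Z.T @ G                 # gradient w.r.t. decoder weights
    grad_enc = X.T @ (G @ W_dec.T)     # gradient w.r.t. encoder weights
    W_dec -= mu * grad_dec
    W_enc -= mu * grad_enc
loss_after = recon_loss(X, W_enc, W_dec)
print(loss_after < loss_before)  # reconstruction error has decreased
```

Driving this reconstruction error down is what forces the extracted features to keep a correspondence with the original inputs even when no labels are available.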
It should be noted that the adversarial migration training process is divided, according to the data source, into target-domain training and source-domain training. Target-domain training comprises three main modules, the feature extractor, the decoder, and the domain classifier; the feature extractor adjusts its network parameters according to the errors of the decoder and the domain classifier, and when the feature extractor and the decoder complete their updates, the other party in the adversarial training, the domain classifier, is updated accordingly.
Source-domain training comprises two steps, health-data training and degradation-data training; during source-domain training, the feature extractor receives errors from the RUL predictor and the domain classifier, respectively, to adjust its network parameters, and the RUL predictor adjusts its own network parameters.
After the feature extractor and the RUL predictor complete their optimization, the domain classifier continues to be updated and adjusted, and the three networks gradually reach equilibrium through iterative training.
The target-domain training comprises three main modules, the feature extractor, the decoder, and the domain classifier. In target-domain training, the data of the target domain is projected into a new feature space, where it is aligned with the data of the source domain; the common features of the source and target domains are learned through this data alignment.
That the feature extractor adjusts its network parameters according to the errors of the decoder and the domain classifier means that the data distribution difference between the target domain and the source domain in the new feature-mapping space is reduced while the recognizability of the data is maintained. The decoder adjusts itself according to its own error, i.e., by adjusting its network parameters it adapts to the reconstruction relation between the new common features and the original data. After the feature extractor and the decoder complete their updates, the domain classifier, the other party in the adversarial training, is updated accordingly; it improves its classification accuracy through domain-labeled sample data and repeated training.
In the migration scenario, the target-domain data is entirely unlabeled. To guarantee the correspondence between the features obtained by the feature extractor and the original data, the features are reconstructed by a decoder, which forms an encoder-decoder structure with the feature extractor to realize a self-supervised constraint. If the full life-cycle data of the rotating multi-component system were used to extract the common features, the missing labels would cause a mismatch with the source-domain data and give rise to negative migration. Since the rotating multi-component system is in a healthy state at the initial stage of operation, the early-stage operating data of the rotating assembly system and the health data of the source domain are selected for data alignment, which makes the extraction of common features more accurate and accelerates the convergence of training. The loss function for training with the target domain is:

$$\mathcal{L}_{T}(\theta_F,\theta_D,\theta_C)=\mathcal{L}_{D}(\theta_F,\theta_D)+\hat{\mathcal{L}}_{C}(\theta_F,\theta_C)$$

wherein $\mathcal{L}_{T}$ is the loss function for training with the target domain; $\theta_F$, $\theta_D$, $\theta_C$ are the network parameters of the feature extractor $F$, the decoder $D$, and the domain classifier $C$, respectively; $\mathcal{L}_D$ is the loss function of the decoder; $\hat{\mathcal{L}}_{C}$ is the loss function of the domain classifier after the gradient reversal layer; the domain label is defined as $d$, with 0 for the source domain and 1 for the target domain; the source-domain labeled dataset is $D_s=\{(x_i^{s},y_i^{s})\}_{i=1}^{n_s}$ and the target-domain unlabeled dataset is $D_t=\{x_j^{t}\}_{j=1}^{n_t}$.
It should be noted that the objective of the domain classifier is to minimize the classification error, while the feature extractor tries to confuse the domain classifier and thereby maximize that error; the two networks therefore optimize the classification error in opposite directions, so one network is usually fixed while the other is trained. In the training of the domain classifier, MSE is adopted as the loss function, the training process is optimized by the Adam algorithm, and a gradient reversal layer is introduced in back propagation. The network parameters are optimized as follows:

$$\theta_F \leftarrow \theta_F - \mu\,\mathrm{Adam}_F\!\left(\frac{\partial \mathcal{L}_D}{\partial \theta_F} + \frac{\partial \hat{\mathcal{L}}_C}{\partial \theta_F}\right),\qquad \theta_D \leftarrow \theta_D - \mu\,\mathrm{Adam}_D\!\left(\frac{\partial \mathcal{L}_D}{\partial \theta_D}\right),\qquad \theta_C \leftarrow \theta_C - \mu\,\mathrm{Adam}_C\!\left(\frac{\partial \mathcal{L}_C}{\partial \theta_C}\right)$$

wherein $\mu$ is the learning rate of the gradient descent algorithm; $\mathrm{Adam}_F$, $\mathrm{Adam}_D$, $\mathrm{Adam}_C$ are the Adam optimizers of the feature extractor $F$, the decoder $D$, and the domain classifier $C$, respectively; $\mathcal{L}_D$ is the loss function of the decoder; $\mathcal{L}_C$ is the loss function of the domain classifier; and $\hat{\mathcal{L}}_C$ is $\mathcal{L}_C$ after conversion by the gradient reversal layer, i.e. $\partial\hat{\mathcal{L}}_C/\partial\theta_F=-\lambda\,\partial\mathcal{L}_C/\partial\theta_F$.
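The effect of the gradient reversal layer can be checked on a scalar toy problem; every weight, sample value, and hyperparameter below is illustrative. The domain classifier descends its MSE loss, while the feature extractor's gradient is sign-flipped, so it ascends the same loss and thereby confuses the classifier:

```python
# One scalar weight per network: w_f for the feature extractor, w_c for the
# domain classifier.  The classifier predicts w_c * (w_f * x); MSE vs. label d.
def domain_mse(w_f, w_c, x, d):
    return (w_c * w_f * x - d) ** 2

w_f, w_c = 0.5, 0.8
x, d = 1.0, 1.0          # one target-domain sample (domain label 1)
mu, lam = 0.1, 1.0       # learning rate and reversal strength

err = w_c * w_f * x - d
g_wc = 2 * err * w_f * x            # true gradient dL/dw_c
g_wf = 2 * err * w_c * x            # true gradient dL/dw_f

w_c_new = w_c - mu * g_wc           # classifier DESCENDS its loss
w_f_new = w_f - mu * (-lam * g_wf)  # GRL flips the sign: extractor ASCENDS it

print(domain_mse(w_f, w_c_new, x, d) < domain_mse(w_f, w_c, x, d))  # True
print(domain_mse(w_f_new, w_c, x, d) > domain_mse(w_f, w_c, x, d))  # True
```

This single sign flip is what lets both players be updated with ordinary gradient descent instead of explicitly alternating a min and a max problem at every step.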
The source-domain training comprises three main modules, the feature extractor, the RUL predictor, and the domain classifier; its purpose is to improve the RUL prediction capability of the common features, and it is divided into health-data training and degradation-data training. Because labeled data is used for training, the network model does not need to distinguish the health state of the data, so the SVM classifier is skipped during source-domain training, and its parameters are updated after the training of the feature extractor and the RUL predictor is completed. In a practical scenario, the target domain uses only the early-stage operating data of the rotating multi-component system for transfer learning, and this early-stage data is health data; therefore, data with health labels is likewise selected for data alignment in source-domain training, so that effective common features are extracted.
During source-domain training, the feature extractor receives errors from the RUL predictor and the domain classifier, respectively, to adjust its network parameters; the RUL predictor also adjusts its own network parameters, seeking a mapping between the new common features and the RUL labels. After the feature extractor and the RUL predictor complete their optimization, the domain classifier continues to be updated and adjusted, and the three networks gradually reach equilibrium through iterative training. The overall loss function of source-domain training is:

$$\mathcal{L}_{S}(\theta_F,\theta_R,\theta_C)=\mathcal{L}_{R}(\theta_F,\theta_R)+\hat{\mathcal{L}}_{C}(\theta_F,\theta_C)$$

The optimal network parameter values are obtained from the objective function of the network-parameter optimization of the feature extractor and the RUL predictor:

$$(\theta_F^{*},\theta_R^{*})=\arg\min_{\theta_F,\theta_R}\mathcal{L}_{S}(\theta_F,\theta_R,\theta_C),\qquad \theta_C^{*}=\arg\min_{\theta_C}\mathcal{L}_{C}(\theta_F,\theta_C)$$

wherein $\mathcal{L}_{S}$ is the loss function for training with the source domain; $\theta_F$, $\theta_R$, $\theta_C$ are the network parameters of the feature extractor $F$, the RUL predictor $R$, and the domain classifier $C$, respectively; $\mathcal{L}_R$ is the loss function of the RUL predictor; $\mathcal{L}_C$ is the loss function of the domain classifier; $\hat{\mathcal{L}}_{C}$ denotes $\mathcal{L}_C$ after the gradient reversal layer; $D_s^{h}$ and $D_s^{d}$ are the health dataset and the degradation dataset of the source-domain labeled dataset, respectively; and $\theta_F^{*}$, $\theta_R^{*}$, $\theta_C^{*}$ are the optimal values of the network parameters of the feature extractor, the RUL predictor, and the domain classifier.
The parameters are updated and optimized as follows:

$$\theta_F \leftarrow \theta_F - \mu\,\mathrm{Adam}_F\!\left(\frac{\partial \mathcal{L}_R}{\partial \theta_F} + \frac{\partial \hat{\mathcal{L}}_C}{\partial \theta_F}\right),\qquad \theta_R \leftarrow \theta_R - \mu\,\mathrm{Adam}_R\!\left(\frac{\partial \mathcal{L}_R}{\partial \theta_R}\right),\qquad \theta_C \leftarrow \theta_C - \mu\,\mathrm{Adam}_C\!\left(\frac{\partial \mathcal{L}_C}{\partial \theta_C}\right)$$

wherein $\mu$ is the learning rate of the gradient descent algorithm; $\mathrm{Adam}_F$, $\mathrm{Adam}_R$, $\mathrm{Adam}_C$ are the Adam optimizers of the feature extractor $F$, the RUL predictor $R$, and the domain classifier $C$, respectively; $\mathcal{L}_R$ is the loss function of the RUL predictor; and $\mathcal{L}_C$ is the loss function of the domain classifier.
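The Adam updates referenced throughout can be sketched directly. `adam_step` below is an illustrative helper implementing the standard Adam update rules (first/second moment estimates with bias correction), demonstrated on a simple quadratic whose gradient is known in closed form:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, mu=0.05, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for parameter vector theta given its gradient."""
    m = b1 * m + (1 - b1) * grad       # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2  # second-moment (uncentered variance) estimate
    m_hat = m / (1 - b1 ** t)          # bias corrections
    v_hat = v / (1 - b2 ** t)
    theta = theta - mu * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Demonstration: minimize f(theta) = ||theta||^2, whose gradient is 2 * theta
theta = np.array([1.0, -2.0])
m, v = np.zeros_like(theta), np.zeros_like(theta)
for t in range(1, 501):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t)
print(np.linalg.norm(theta))  # close to 0
```

In the model above, the same step would be applied per network, with `grad` taken from the (possibly gradient-reversed) back-propagated loss of that network.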
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division of the units is only a logical functional division, and other divisions are possible in practice, such as: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling, or communicative connection between the components shown or discussed may be through interfaces, and the indirect coupling or communicative connection between devices or units may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units; can be located in one place or distributed to a plurality of network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present invention may be integrated in one processing unit, or each unit may be separately used as one unit, or two or more units may be integrated in one unit; the integrated units may be implemented in hardware or in hardware plus software functional units.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be implemented by a program instructing relevant hardware; the foregoing program may be stored in a computer-readable storage medium, and when executed, the program performs the steps of the above method embodiments; the aforementioned storage medium includes any medium that can store program code, such as a removable storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Alternatively, the above-described integrated units of the present invention, if implemented in the form of software functional modules and sold or used as independent products, may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the methods described in the embodiments of the present invention. The aforementioned storage medium includes any medium that can store program code, such as a removable storage device, a ROM, a RAM, a magnetic disk, or an optical disk.
The foregoing is merely illustrative of the present invention and does not limit it; any person skilled in the art can readily conceive of variations or substitutions within the disclosed technical scope, and such variations or substitutions fall within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (3)
1. A method for predicting the remaining useful life of a rotating assembly, comprising the steps of:
dividing an original signal of the rotating assembly into a source-domain dataset and a target-domain dataset, and performing feature transformation by short-time Fourier transform;
introducing a domain classifier network into an Autoencoder feature extraction network, and constructing an adversarial migration model for data distribution balancing in combination with a recurrent neural network based on attention-mechanism intervention;
performing feature dimension reduction through the adversarial migration model, performing adversarial migration training on the domain classifier and the feature extractor, and retraining the feature extraction network by combining the labeled data of the source-domain dataset with the unlabeled data of the target-domain dataset;
achieving data distribution balancing between the target domain and the source domain through iterative updating of the domain classifier and the feature extractor, and predicting the remaining useful life of the rotating assembly from the balanced data;
the adversarial migration training process is divided, according to the data source, into target-domain training and source-domain training, specifically:
the target-domain training comprises three main modules, a feature extractor, a decoder, and a domain classifier; the feature extractor adjusts its network parameters according to the errors of the decoder and the domain classifier, and when the feature extractor and the decoder complete their updates, the other party in the adversarial training, the domain classifier, is updated accordingly;
the source-domain training comprises two steps, health-data training and degradation-data training; during source-domain training, the feature extractor receives errors from the RUL predictor and the domain classifier, respectively, to adjust its network parameters, and the RUL predictor adjusts its own network parameters;
after the feature extractor and the RUL predictor complete their optimization, the domain classifier continues to be updated and adjusted, and the three networks gradually reach equilibrium through iterative training;
the target-domain training is specifically as follows:
the target-domain training comprises three main modules, a feature extractor, a decoder, and a domain classifier; in target-domain training, the data of the target domain is projected into a new feature space and aligned there with the data of the source domain, and the common features of the source and target domains are learned through the data alignment;
the feature extractor adjusts its network parameters according to the errors of the decoder and the domain classifier; the decoder adjusts itself according to its own error; when the feature extractor and the decoder complete their updates, the other party in the adversarial training, the domain classifier, is updated accordingly, improving its classification accuracy through domain-labeled sample data and repeated training;
the early-stage operating data of the rotating assembly system and the health data of the source domain are selected for data alignment, and the loss function for training with the target domain is:

$$\mathcal{L}_{T}(\theta_F,\theta_D,\theta_C)=\mathcal{L}_{D}(\theta_F,\theta_D)+\hat{\mathcal{L}}_{C}(\theta_F,\theta_C)$$

wherein $\mathcal{L}_{T}$ is the loss function for training with the target domain; $\theta_F$, $\theta_D$, $\theta_C$ are the network parameters of the feature extractor $F$, the decoder $D$, and the domain classifier $C$, respectively; $\mathcal{L}_D$ is the loss function of the decoder; $\hat{\mathcal{L}}_{C}$ is the loss function of the domain classifier after the gradient reversal layer; the domain label is defined as $d$, with 0 for the source domain and 1 for the target domain; and the target-domain unlabeled dataset is $D_t=\{x_j^{t}\}_{j=1}^{n_t}$;
the objective of the domain classifier is to minimize the classification error, and it is trained alternately with the feature extractor, specifically:
in the training of the domain classifier, MSE is adopted as the loss function, the training process is optimized by the Adam algorithm, and a gradient reversal layer is introduced in back propagation; the network parameters are optimized as follows:

$$\theta_F \leftarrow \theta_F - \mu\,\mathrm{Adam}_F\!\left(\frac{\partial \mathcal{L}_D}{\partial \theta_F} + \frac{\partial \hat{\mathcal{L}}_C}{\partial \theta_F}\right),\qquad \theta_D \leftarrow \theta_D - \mu\,\mathrm{Adam}_D\!\left(\frac{\partial \mathcal{L}_D}{\partial \theta_D}\right),\qquad \theta_C \leftarrow \theta_C - \mu\,\mathrm{Adam}_C\!\left(\frac{\partial \mathcal{L}_C}{\partial \theta_C}\right)$$

wherein $\mu$ is the learning rate of the gradient descent algorithm; $\mathrm{Adam}_F$, $\mathrm{Adam}_D$, $\mathrm{Adam}_C$ are the Adam optimizers of the feature extractor $F$, the decoder $D$, and the domain classifier $C$, respectively; $\mathcal{L}_D$ is the loss function of the decoder; $\mathcal{L}_C$ is the loss function of the domain classifier; and $\hat{\mathcal{L}}_C$ is $\mathcal{L}_C$ after conversion by the gradient reversal layer;
the source-domain training is specifically as follows:
the source-domain training comprises three main modules, a feature extractor, an RUL predictor, and a domain classifier; its purpose is to improve the RUL prediction capability of the common features, and it is divided into health-data training and degradation-data training;
during source-domain training, the feature extractor receives errors from the RUL predictor and the domain classifier, respectively, to adjust its network parameters, and the RUL predictor also adjusts its own network parameters, seeking a mapping between the new common features and the RUL labels;
after the feature extractor and the RUL predictor complete their optimization, the domain classifier continues to be updated and adjusted, the three networks gradually reach equilibrium through iterative training, and the overall loss function of source-domain training and the optimal parameter values are:

$$\mathcal{L}_{S}(\theta_F,\theta_R,\theta_C)=\mathcal{L}_{R}(\theta_F,\theta_R)+\hat{\mathcal{L}}_{C}(\theta_F,\theta_C),\qquad (\theta_F^{*},\theta_R^{*})=\arg\min_{\theta_F,\theta_R}\mathcal{L}_{S},\qquad \theta_C^{*}=\arg\min_{\theta_C}\mathcal{L}_{C}$$

wherein $\mathcal{L}_{S}$ is the loss function for training with the source domain; $\theta_F$, $\theta_R$, $\theta_C$ are the network parameters of the feature extractor $F$, the RUL predictor $R$, and the domain classifier $C$, respectively; $\mathcal{L}_R$ is the loss function of the RUL predictor; $\mathcal{L}_C$ is the loss function of the domain classifier; $\hat{\mathcal{L}}_{C}$ denotes $\mathcal{L}_C$ after the gradient reversal layer; $D_s^{h}$ and $D_s^{d}$ are the health dataset and the degradation dataset of the source-domain labeled dataset, respectively; $\theta_F^{*}$, $\theta_R^{*}$, $\theta_C^{*}$ are the optimal values of the network parameters; and the domain label is defined as $d$;
the parameters are updated and optimized as follows:

$$\theta_F \leftarrow \theta_F - \mu\,\mathrm{Adam}_F\!\left(\frac{\partial \mathcal{L}_R}{\partial \theta_F} + \frac{\partial \hat{\mathcal{L}}_C}{\partial \theta_F}\right),\qquad \theta_R \leftarrow \theta_R - \mu\,\mathrm{Adam}_R\!\left(\frac{\partial \mathcal{L}_R}{\partial \theta_R}\right),\qquad \theta_C \leftarrow \theta_C - \mu\,\mathrm{Adam}_C\!\left(\frac{\partial \mathcal{L}_C}{\partial \theta_C}\right)$$
2. The method for predicting the remaining useful life of a rotating assembly according to claim 1, wherein the adversarial migration model is composed of a feature extractor, a decoder, a domain classifier, and an RUL predictor, specifically:
the feature extractor $F$ is the encoder part of the Autoencoder feature extraction network, comprising 3 convolution layers and 2 fully connected layers; it is the training object of transfer learning and is used to extract common features of the source domain and the target domain;
the decoder $D$ is the decoder part of the Autoencoder feature extraction network, comprising 3 deconvolution layers and 2 fully connected layers; it reconstructs the features extracted by the encoder into time-frequency features and, by comparing them with the original input features, preserves the self-association of the features during migration;
the domain classifier $C$ is a classification network comprising 2 fully connected layers and 1 classification layer; it distinguishes whether the extracted features belong to the source domain or the target domain, is trained adversarially against the feature extractor $F$ with alternate iterative updating, and serves to reduce the data distribution difference between the source domain and the target domain;
the RUL predictor $R$ is an RUL prediction network based on a recurrent neural network with attention-mechanism intervention, comprising an SVM classification layer, 2 A-GRU layers, and 2 fully connected layers; it is the backbone functional network for predicting the remaining useful life and is used to keep the common features extracted by the model discriminative.
3. A system for predicting the remaining useful life of a rotating assembly, the system comprising a memory and a processor, wherein the memory stores a program of a data distribution balancing method for rotating-assembly RUL prediction, and when executed by the processor, the program implements the following steps:
dividing an original signal of the rotating assembly into a source-domain dataset and a target-domain dataset, and performing feature transformation by short-time Fourier transform;
introducing a domain classifier network into an Autoencoder feature extraction network, and constructing an adversarial migration model for data distribution balancing in combination with a recurrent neural network based on attention-mechanism intervention;
performing feature dimension reduction through the adversarial migration model, performing adversarial migration training on the domain classifier and the feature extractor, and retraining the feature extraction network by combining the labeled data of the source-domain dataset with the unlabeled data of the target-domain dataset;
achieving data distribution balancing between the target domain and the source domain through iterative updating of the domain classifier and the feature extractor, and predicting the remaining useful life of the rotating assembly from the balanced data;
the adversarial migration training process is divided, according to the data source, into target-domain training and source-domain training, specifically:
the target-domain training comprises three main modules, a feature extractor, a decoder, and a domain classifier; the feature extractor adjusts its network parameters according to the errors of the decoder and the domain classifier, and when the feature extractor and the decoder complete their updates, the other party in the adversarial training, the domain classifier, is updated accordingly;
the source-domain training comprises two steps, health-data training and degradation-data training; during source-domain training, the feature extractor receives errors from the RUL predictor and the domain classifier, respectively, to adjust its network parameters, and the RUL predictor adjusts its own network parameters;
after the feature extractor and the RUL predictor complete their optimization, the domain classifier continues to be updated and adjusted, and the three networks gradually reach equilibrium through iterative training;
the target-domain training is specifically as follows:
the target-domain training comprises three main modules, a feature extractor, a decoder, and a domain classifier; in target-domain training, the data of the target domain is projected into a new feature space and aligned there with the data of the source domain, and the common features of the source and target domains are learned through the data alignment;
the feature extractor adjusts its network parameters according to the errors of the decoder and the domain classifier; the decoder adjusts itself according to its own error; when the feature extractor and the decoder complete their updates, the other party in the adversarial training, the domain classifier, is updated accordingly, improving its classification accuracy through domain-labeled sample data and repeated training;
the early-stage operating data of the rotating assembly system and the health data of the source domain are selected for data alignment, and the loss function for training with the target domain is:

$$\mathcal{L}_{T}(\theta_F,\theta_D,\theta_C)=\mathcal{L}_{D}(\theta_F,\theta_D)+\hat{\mathcal{L}}_{C}(\theta_F,\theta_C)$$

wherein $\mathcal{L}_{T}$ is the loss function for training with the target domain; $\theta_F$, $\theta_D$, $\theta_C$ are the network parameters of the feature extractor $F$, the decoder $D$, and the domain classifier $C$, respectively; $\mathcal{L}_D$ is the loss function of the decoder; $\hat{\mathcal{L}}_{C}$ is the loss function of the domain classifier after the gradient reversal layer; the domain label is defined as $d$, with 0 for the source domain and 1 for the target domain; and the target-domain unlabeled dataset is $D_t=\{x_j^{t}\}_{j=1}^{n_t}$;
the objective of the domain classifier is to minimize the classification error, and it is trained alternately with the feature extractor, specifically:
in the training of the domain classifier, MSE is adopted as the loss function, the training process is optimized by the Adam algorithm, and a gradient reversal layer is introduced in back propagation; the network parameters are optimized as follows:

$$\theta_F \leftarrow \theta_F - \mu\,\mathrm{Adam}_F\!\left(\frac{\partial \mathcal{L}_D}{\partial \theta_F} + \frac{\partial \hat{\mathcal{L}}_C}{\partial \theta_F}\right),\qquad \theta_D \leftarrow \theta_D - \mu\,\mathrm{Adam}_D\!\left(\frac{\partial \mathcal{L}_D}{\partial \theta_D}\right),\qquad \theta_C \leftarrow \theta_C - \mu\,\mathrm{Adam}_C\!\left(\frac{\partial \mathcal{L}_C}{\partial \theta_C}\right)$$

wherein $\mu$ is the learning rate of the gradient descent algorithm; $\mathrm{Adam}_F$, $\mathrm{Adam}_D$, $\mathrm{Adam}_C$ are the Adam optimizers of the feature extractor $F$, the decoder $D$, and the domain classifier $C$, respectively; $\mathcal{L}_D$ is the loss function of the decoder; $\mathcal{L}_C$ is the loss function of the domain classifier; and $\hat{\mathcal{L}}_C$ is $\mathcal{L}_C$ after conversion by the gradient reversal layer;
the source domain training is specifically as follows:
the source-domain training comprises three main modules, the feature extractor, the RUL predictor and the domain classifier; its goal is to improve the RUL prediction capability of the common features, and it is divided into health-data training and degradation-data training;

during source-domain training, the feature extractor receives errors from the RUL predictor and from the domain classifier, respectively, to adjust its network parameters, while the RUL predictor also adjusts its own network parameters, searching for a mapping relation between the new common features and the RUL labels;

after the feature extractor and the RUL predictor finish their optimization, the domain classifier continues to be updated and adjusted; through iterative training the three networks gradually reach an equilibrium, and the overall loss function of the source-domain training together with the optimal parameter values are:
$$L_S(\theta_f,\theta_r,\theta_c)=L_r(\theta_f,\theta_r)+L_c\big(R(\theta_f),\theta_c\big)$$

$$(\hat{\theta}_f,\hat{\theta}_r)=\arg\min_{\theta_f,\theta_r} L_S(\theta_f,\theta_r,\hat{\theta}_c),\qquad
\hat{\theta}_c=\arg\min_{\theta_c} L_c(\hat{\theta}_f,\theta_c)$$

where $L_S$ is the loss function for training with the source domain; $\theta_f$, $\theta_r$ and $\theta_c$ are the network parameters of the feature extractor, the RUL predictor and the domain classifier, respectively; $L_r$ is the loss function of the RUL predictor; $L_c$ is the loss function of the domain classifier; $R(\cdot)$ denotes the gradient reversal layer; $X_h$ and $X_d$ are the health data set and the degradation data set of the source-domain labeled data set, respectively; $\hat{\theta}_f$, $\hat{\theta}_r$ and $\hat{\theta}_c$ are the optimized values of the network parameters; and the domain label is defined as $d$, with the source-domain label being 0 and the target-domain label being 1;
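As a toy illustration of evaluating the combined source-domain objective above, the sketch below sums an MSE RUL-prediction loss and an MSE domain-classification loss over one random batch of already-extracted features (the linear "predictor" and "classifier", the array shapes, and the unweighted sum of the two terms are illustrative assumptions, not taken from the patent):

```python
import numpy as np

rng = np.random.default_rng(1)

z = rng.normal(size=(8, 4))                   # toy batch of extracted features
rul_true = rng.uniform(0.0, 1.0, size=8)      # normalized RUL labels
w_r = rng.normal(size=4) * 0.1                # toy linear RUL predictor weights
w_c = rng.normal(size=4) * 0.1                # toy linear domain classifier weights

rul_pred = z @ w_r
dom_pred = z @ w_c
dom_label = np.zeros(8)                       # source-domain label is 0

L_r = np.mean((rul_pred - rul_true) ** 2)     # RUL prediction loss (MSE)
L_c = np.mean((dom_pred - dom_label) ** 2)    # domain classification loss (MSE)
L_S = L_r + L_c                               # combined source-domain objective
print(L_S)
```

During training, the gradient of $L_c$ would reach the feature extractor through the reversal layer, while $L_r$ flows back unchanged.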
The parameter updating and optimizing method comprises the following steps:
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211547913.6A CN115577245B (en) | 2022-12-05 | 2022-12-05 | Data distribution balancing method and system for RUL prediction of rotating assembly |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115577245A CN115577245A (en) | 2023-01-06 |
CN115577245B true CN115577245B (en) | 2023-05-16 |
Family
ID=84590320
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211547913.6A Active CN115577245B (en) | 2022-12-05 | 2022-12-05 | Data distribution balancing method and system for RUL prediction of rotating assembly |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115577245B (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111860677A (en) * | 2020-07-29 | 2020-10-30 | Hunan University of Science and Technology | Rolling bearing transfer learning fault diagnosis method based on partial domain adversarial training
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110555273B (en) * | 2019-09-05 | 2023-03-24 | 苏州大学 | Bearing life prediction method based on hidden Markov model and transfer learning |
US10839269B1 (en) * | 2020-03-20 | 2020-11-17 | King Abdulaziz University | System for fast and accurate visual domain adaptation |
CN114332090B (en) * | 2022-03-16 | 2022-05-10 | 中南大学 | Multi-source domain self-adaptive brain network classification method, system, equipment and storage medium |
CN114970715A (en) * | 2022-05-26 | 2022-08-30 | 山东大学 | Variable working condition fault diagnosis method and system under small sample and unbalanced data constraint |
CN115374711B (en) * | 2022-10-24 | 2022-12-27 | 广东工业大学 | Service life prediction method of rotating multi-component system and related device |
Non-Patent Citations (1)
Title |
---|
Sample balancing of breast cancer histopathology images based on generative adversarial networks; Yang Junhao; Li Dongsheng; Chen Chunxiao; Yan Qiang; Lu Xiong; Biomedical Engineering Research (Issue 02); pp. 59-64 *
Also Published As
Publication number | Publication date |
---|---|
CN115577245A (en) | 2023-01-06 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||