CN111738455A - Fault diagnosis method and system based on integrated domain adaptation - Google Patents

Fault diagnosis method and system based on integrated domain adaptation

Info

Publication number
CN111738455A
CN111738455A (application CN202010490493.7A)
Authority
CN
China
Prior art keywords
domain
data
fault diagnosis
feature
feature extractor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010490493.7A
Other languages
Chinese (zh)
Other versions
CN111738455B (en)
Inventor
宋艳
李沂滨
贾磊
王代超
郭庆稳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN202010490493.7A priority Critical patent/CN111738455B/en
Publication of CN111738455A publication Critical patent/CN111738455A/en
Application granted granted Critical
Publication of CN111738455B publication Critical patent/CN111738455B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/20Ensemble learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The present disclosure provides a fault diagnosis method and system based on integrated domain adaptation. Two feature extractors project the source-domain and target-domain data into different feature spaces: one feature extractor learns features through domain adversarial learning, while the other learns with the maximum mean discrepancy as its loss function. The different feature extractors thus use different loss functions and yield different classification results, which are integrated into the domain adaptation to output the fault diagnosis result. When there is a large difference between the two domains, the method effectively extracts the feature representations in the data and greatly improves fault diagnosis performance.

Description

Fault diagnosis method and system based on integrated domain adaptation
Technical Field
The disclosure belongs to the technical field of fault diagnosis, and relates to a fault diagnosis method and system based on integrated domain adaptation.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
Fault diagnosis is key to ensuring the safety of industrial production. In recent years, with the development of machine learning and big data, data-driven intelligent fault diagnosis methods have advanced rapidly. However, although conventional machine learning methods have achieved some success in fault diagnosis, they suffer from the following problems in use: 1) They must be used in conjunction with hand-crafted feature extraction methods, and the choice of features largely determines the final classification result. Moreover, because the feature extraction stage and the classifier are designed separately, the process consumes a lot of time and cannot be optimized globally. 2) Most of these methods are shallow models, which struggle to learn effective feature representations and the nonlinear mappings of complex systems.
Deep learning algorithms can adaptively extract fault features through their complex network structures and are widely applied to mechanical fault diagnosis. Algorithms such as Deep Belief Networks (DBN) and Convolutional Neural Networks (CNN) are prominent in fault diagnosis and health monitoring. However, owing to the complexity of industrial field environments, the operating conditions (such as mechanical load and speed) or the device under test may vary. In practical applications, because the training data set and the test data set come from different domains, a well-trained fault diagnosis model achieves low diagnostic accuracy on the test data set.
Although domain-adaptive fault diagnosis methods can alleviate the mismatch between the domains of the training and test data sets to some extent, when the source-domain and target-domain data differ greatly, projecting the data into a single shared feature space loses some important feature representations, so the trained fault diagnosis model performs poorly in practical applications.
Disclosure of Invention
To solve the above problems, the invention provides a fault diagnosis method and system based on integrated domain adaptation, which effectively extract the feature representations in the data even when there is a large difference between the two domains, thereby greatly improving fault diagnosis performance.
According to some embodiments, the following technical scheme is adopted in the disclosure:
an integrated domain-adaptive fault diagnosis method comprises the following steps:
using two feature extractors to project source-domain and target-domain data into different feature spaces, wherein one feature extractor learns features based on domain adversarial learning and the other learns with the maximum mean discrepancy as its loss function; the different feature extractors use different loss functions and obtain different classification results, which are integrated into the domain adaptation to output the fault diagnosis result.
As an alternative embodiment, data preprocessing is performed on the source-domain and target-domain data, specifically including dividing the data into segments of a set length and normalizing the divided data.
As an alternative embodiment, different domain labels are defined for the preprocessed source-domain and target-domain data, and the first feature extractor and the first classification network are trained together with the domain prediction network on the basis of domain adversarial training, so that the first feature extractor and the domain prediction network obtain optimal parameters.
As an alternative embodiment, a second feature extractor and a second classification network are trained based on the maximum mean discrepancy with a radial basis function kernel.
As an alternative embodiment, in the integration step, the two classification results are averaged to obtain the final prediction result.
An integrated domain adaptation-based fault diagnosis system comprising:
a data preprocessing module configured to segment the acquired source-domain and target-domain data and normalize the data;
a feature processing module comprising two feature extractors that project the source-domain and target-domain data into different feature spaces, wherein one feature extractor learns features based on domain adversarial learning and the other learns with the maximum mean discrepancy as its loss function, the different feature extractors using different loss functions to obtain different classification results;
and an integration module configured to integrate the classification results into the domain adaptation and output the resulting fault diagnosis.
A computer-readable storage medium having stored therein a plurality of instructions adapted to be loaded by a processor of a terminal device to execute the fault diagnosis method based on integrated domain adaptation as described above.
A terminal device comprising a processor and a computer-readable storage medium, the processor being configured to implement instructions; the computer-readable storage medium is used to store a plurality of instructions adapted to be loaded by the processor to execute the fault diagnosis method based on integrated domain adaptation as described above.
Compared with the prior art, the beneficial effect of this disclosure is:
the present disclosure introduces an efficient structure to model and integrate different feature representations between source domain and target domain data into domain adaptation. When a large difference exists between the two domains, the feature expression in the data is effectively extracted, and the performance of fault diagnosis is greatly improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure and are not to limit the disclosure.
FIG. 1(a) is a prior-art domain adversarial training network framework;
FIG. 1(b) is a schematic diagram of the process of predicting test data using a trained model;
FIG. 2 is a network architecture diagram of the present disclosure;
FIG. 3 is a flow chart of fault diagnosis performed by the trained network of the present disclosure.
Detailed Description:
the present disclosure is further described with reference to the following drawings and examples.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present disclosure. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
In domain adversarial training, the source-domain and target-domain data are projected into the same subspace, and the feature extractors output domain-invariant features. In most cases, domain adversarial training is effective and applicable. However, if there is a large difference between the two domains, some representative features may be lost when only one feature model is considered. Here an effective structure is introduced to model the different feature representations of the source-domain and target-domain data and integrate them into the domain adaptation. Specifically, two feature extractors project the data into different feature spaces: one learns features based on domain adversarial learning, while the other learns with the Maximum Mean Discrepancy (MMD) as its loss function, so the two feature extractors use different loss functions. In addition, since the two feature extractors use separate classifiers, ensemble learning is used to obtain the final result.
Specifically, conventional Domain Adversarial Training (DAT) plays an important role in domain adaptation; its framework is shown in FIG. 1(a). Let the training data be $X_s$ with labels $Y_s$, the test data be $X_t$, the feature extractors of the training and test data be $M_s$ and $M_t$ respectively, the classification network be $C_s$, and the domain prediction network be $D$. During training, the training and test data are first input into the feature extractors, and the features $M_s(X_s)$ of the training data serve as input to the classification network. The parameters of $M_s$ and $C_s$ are trained with a back-propagation algorithm to minimize the loss function shown in equation (1):

$$L_{C_s} = -\mathbb{E}_{(x_s, y_s)} \sum_{k=1}^{N} \mathbb{1}_{[k=y_s]} \log\big[C_s(M_s(x_s))\big]_k \qquad (1)$$

where $N$ is the number of classes, $\mathbb{1}_{[k=y_s]}$ is the indicator function that equals 1 when $k = y_s$ and 0 otherwise, and $[C_s(M_s(x_s))]_k$ is the predicted probability of the correct class. $M_t$ is then initialized with the parameters of $M_s$, and the domain prediction network $D$ and the feature extractors $M_s$, $M_t$ are trained adversarially. First, $D$ aims to distinguish target-domain features from source-domain features, treating $M_s(X_s)$ and $M_t(X_t)$ as two classes. With the source-domain features labeled 1 and the target-domain features labeled 0, the loss function of the domain prediction network $D$ is defined as:

$$L_D = -\mathbb{E}_{x_s \sim X_s} \log D(M_s(x_s)) - \mathbb{E}_{x_t \sim X_t} \log\big(1 - D(M_t(x_t))\big) \qquad (2)$$

In the adversarial training, $M_t$ is viewed as a generator that takes the target-domain data as input, and the features $M_t(X_t)$ are expected to fool the domain prediction network. In other words, $D$ should mistake $M_t(X_t)$ for source-domain features and label them 1, so that the source-domain and target-domain data are projected into the same feature space. $M_t$ is therefore trained by minimizing the loss function:

$$L_{M_t} = -\mathbb{E}_{x_t \sim X_t} \log D(M_t(x_t)) \qquad (3)$$

After domain adversarial training, the prediction for the test data is obtained through the feature extractor $M_t$ and the classification network $C_s$, as shown in FIG. 1(b).
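The three losses of conventional domain adversarial training above can be sketched numerically. This is a minimal NumPy sketch: the function names and the small epsilon added for numerical stability are illustrative, not part of the patent.

```python
import numpy as np

def cross_entropy(probs, labels):
    # Eq. (1): -E[ sum_k 1[k=y] log C_s(M_s(x_s))_k ] over a batch,
    # where probs[i, k] is the predicted probability of class k.
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def domain_loss(d_src, d_tgt):
    # Eq. (2): D should output 1 on source features, 0 on target features.
    return (-np.mean(np.log(d_src + 1e-12))
            - np.mean(np.log(1.0 - d_tgt + 1e-12)))

def adversarial_loss(d_tgt):
    # Eq. (3): M_t is trained so that D outputs 1 on target features.
    return -np.mean(np.log(d_tgt + 1e-12))
```

For instance, if the target features already fool the discriminator perfectly (`d_tgt` all 1), the adversarial loss of equation (3) is zero.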
The scheme of the disclosure, as shown in FIG. 2, includes:

(1) Data preprocessing

Data preprocessing includes data segmentation and data normalization. Segmentation divides the vibration signal into segments of equal length. For example, a complete vibration signal containing 48000 data points may be divided into segments of length 1024, 2048, or 4096. Splitting the high-frequency vibration signal into equal-length segments facilitates data analysis by the subsequent machine learning algorithm and is a standard preprocessing step in machine learning.

After segmentation, the training data set and the test data set are represented as

$$X_s = \{x_{s,i}\}_{i=1}^{n_1} \qquad \text{and} \qquad X_t = \{x_{t,j}\}_{j=1}^{n_2}$$

where $n_1$ and $n_2$ denote the numbers of samples in $X_s$ and $X_t$, respectively. Data normalization can be expressed as:

$$x_{s,i} \leftarrow \frac{x_{s,i} - \mathrm{mean}(x_{s,i})}{\mathrm{std}(x_{s,i})}, \qquad x_{t,j} \leftarrow \frac{x_{t,j} - \mathrm{mean}(x_{t,j})}{\mathrm{std}(x_{t,j})}$$

where $\mathrm{mean}(x_{s,i})$ denotes the mean of $x_{s,i}$ and $\mathrm{std}(x_{s,i})$ its standard deviation. Data normalization is likewise a common preprocessing method in machine learning.
(2) Training a feature representation based on domain adversarial training

In this step, as shown in FIG. 2, a feature extractor (feature extractor I) and a classifier (classification network I) are trained together with the domain prediction network using domain adversarial training. Let the training data set be $X_s = \{x_{s,i}\}_{i=1}^{n_1}$ with labels $Y_s$, and the test data set be $X_t = \{x_{t,j}\}_{j=1}^{n_2}$. Denote feature extractor I by $M_I$, classification network I by $C_I$, and the domain prediction network by $D$; $M_I(X_s)$ and $M_I(X_t)$ are the feature representations of $X_s$ and $X_t$, respectively. To train the domain prediction network, the source-domain features $M_I(X_s)$ are labeled 1 and the target-domain features $M_I(X_t)$ are labeled 0, and the loss function of the domain prediction network is defined as:

$$L_D = -\mathbb{E}_{x_s \sim X_s} \log D(M_I(x_s)) - \mathbb{E}_{x_t \sim X_t} \log\big(1 - D(M_I(x_t))\big)$$

Adversarial training drives $M_I$ and $D$ to optimal parameters; $M_I$ is trained by minimizing the loss function:

$$L_{M_I} = -\mathbb{E}_{x_t \sim X_t} \log D(M_I(x_t))$$

The parameters of $M_I$ and $C_I$ are then trained with the following classification loss function:

$$L_{C_I} = -\mathbb{E}_{(x_s, y_s)} \sum_{k=1}^{N} \mathbb{1}_{[k=y_s]} \log\big[C_I(M_I(x_s))\big]_k$$

where $N$ is the number of fault classes, $\mathbb{1}_{[k=y_s]}$ equals 1 when $k = y_s$, and $[C_I(M_I(x_s))]_k$ is the predicted probability of the correct source-domain class. After training, predictions on the test data are obtained through feature extractor $M_I$ and classification network $C_I$.
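The alternating update of the domain prediction network and of feature extractor I / classification network I can be sketched in PyTorch. The toy layer sizes, batch size, class count, and optimizer settings below are assumptions for illustration, not the architecture of Table 1.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-ins for feature extractor I (M_I), classification network I (C_I),
# and the domain prediction network D.
M_I = nn.Sequential(nn.Linear(32, 16), nn.ReLU())
C_I = nn.Linear(16, 4)                              # 4 fault classes (assumed)
D = nn.Sequential(nn.Linear(16, 1), nn.Sigmoid())

x_s, y_s = torch.randn(8, 32), torch.randint(0, 4, (8,))  # labeled source batch
x_t = torch.randn(8, 32)                                  # unlabeled target batch
bce, ce = nn.BCELoss(), nn.CrossEntropyLoss()

# Step 1: train D to label source features 1 and target features 0 (loss L_D).
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
opt_D.zero_grad()
loss_D = (bce(D(M_I(x_s).detach()), torch.ones(8, 1))
          + bce(D(M_I(x_t).detach()), torch.zeros(8, 1)))
loss_D.backward()
opt_D.step()

# Step 2: train M_I so D mistakes target features for source (loss L_{M_I}),
# and train M_I, C_I with the source classification loss L_{C_I}.
opt_M = torch.optim.Adam(list(M_I.parameters()) + list(C_I.parameters()), lr=1e-3)
opt_M.zero_grad()
loss_adv = bce(D(M_I(x_t)), torch.ones(8, 1))
loss_cls = ce(C_I(M_I(x_s)), y_s)
(loss_adv + loss_cls).backward()
opt_M.step()
```

In practice the two steps are repeated over mini-batches; only the feature extractor's parameters (not D's) are stepped when minimizing the adversarial loss.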
(3) Training a feature representation based on Maximum Mean Discrepancy (MMD)

Next, an MMD-based feature representation and its classifier are trained. As shown in FIG. 2, denote feature extractor II by $M_{II}$ and classifier II by $C_{II}$; $M_{II}(X_s)$ and $M_{II}(X_t)$ are the feature representations of $X_s$ and $X_t$, respectively. Using an MMD based on a Radial Basis Function (RBF) kernel, the loss can be written as:

$$L_{MMD} = \frac{1}{n_1^2} \sum_{i=1}^{n_1}\sum_{j=1}^{n_1} G\big(M_{II}(x_{s,i}), M_{II}(x_{s,j})\big) - \frac{2}{n_1 n_2} \sum_{i=1}^{n_1}\sum_{j=1}^{n_2} G\big(M_{II}(x_{s,i}), M_{II}(x_{t,j})\big) + \frac{1}{n_2^2} \sum_{i=1}^{n_2}\sum_{j=1}^{n_2} G\big(M_{II}(x_{t,i}), M_{II}(x_{t,j})\big)$$

where $G(\cdot,\cdot)$ denotes the RBF kernel, $G(x_1, x_2) = \exp(-\|x_1 - x_2\|^2 / \tau)$. Because $M_{II}$ can learn features adapted to the bandwidth $\tau$, $\tau$ can be set to an arbitrary value; here it is set to 1.

Classifier $C_{II}$ is trained simultaneously with $M_{II}$ using the following loss function:

$$L_{C_{II}} = -\mathbb{E}_{(x_s, y_s)} \sum_{k=1}^{N} \mathbb{1}_{[k=y_s]} \log\big[C_{II}(M_{II}(x_s))\big]_k$$

The method thus employs two classifiers, each paired with its own feature extractor.
(4) Obtaining the final prediction result by ensemble learning

The proposed method uses two classification networks, so the two classification results are averaged to obtain the final prediction, following an ensemble learning approach. Let the predicted output of classification network I be $Y_I$ and that of classification network II be $Y_{II}$; the final prediction $\hat{Y}$ is

$$\hat{Y} = \frac{Y_I + Y_{II}}{2}$$
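The ensemble step amounts to averaging the two classifiers' class-probability outputs and taking the arg-max; a minimal sketch (names illustrative):

```python
import numpy as np

def ensemble_predict(probs_I, probs_II):
    """Average the class-probability outputs of classification networks
    I and II and return the arg-max class as the final diagnosis."""
    avg = (probs_I + probs_II) / 2.0
    return avg.argmax(axis=1)
```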
(5) Example network architecture implementation

The proposed network architecture is shown in Table 1. Batch Normalization (BN) and a Leaky Rectified Linear Unit (Leaky ReLU) are added after each convolutional layer, and BN, Leaky ReLU, and dropout are added after each fully-connected layer.
Table 1: Network architecture of the proposed method (the layer-by-layer configuration is provided as images in the original publication).
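A hypothetical PyTorch sketch of the Conv → BN → Leaky ReLU and FC → BN → Leaky ReLU → dropout pattern described above; since Table 1 survives only as an image, every layer size here is an invented placeholder, not the patent's configuration.

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Illustrative feature extractor: BN + Leaky ReLU after each
    convolutional layer; BN + Leaky ReLU + dropout after each FC layer.
    All layer sizes are assumptions."""
    def __init__(self, seg_len=1024):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, stride=8, padding=28),
            nn.BatchNorm1d(16), nn.LeakyReLU(),
            nn.Conv1d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm1d(32), nn.LeakyReLU(),
            nn.Flatten())
        with torch.no_grad():
            n_flat = self.body(torch.zeros(1, 1, seg_len)).shape[1]
        self.head = nn.Sequential(
            nn.Linear(n_flat, 256), nn.BatchNorm1d(256),
            nn.LeakyReLU(), nn.Dropout(0.5))

    def forward(self, x):  # x: (batch, 1, seg_len)
        return self.head(self.body(x))
```

The dummy forward pass in `__init__` infers the flattened size so the sketch works for any of the segment lengths (1024, 2048, 4096) mentioned in the preprocessing step.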
The integrated domain-adaptive fault diagnosis method introduces an effective structure to model different feature representations of the source-domain and target-domain data and integrates these representations into the domain adaptation. When there is a large difference between the two domains, the feature representations in the data are effectively extracted, greatly improving fault diagnosis performance.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present disclosure and is not intended to limit the present disclosure, and various modifications and changes may be made to the present disclosure by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.
Although the present disclosure has been described with reference to specific embodiments, it should be understood that the scope of the present disclosure is not limited thereto, and those skilled in the art will appreciate that various modifications and changes can be made without departing from the spirit and scope of the present disclosure.

Claims (10)

1. A fault diagnosis method based on integrated domain adaptation, characterized in that the method comprises the following steps:
using two feature extractors to project source-domain and target-domain data into different feature spaces, wherein one feature extractor learns features based on domain adversarial learning and the other learns with the maximum mean discrepancy as its loss function; the different feature extractors use different loss functions to obtain different classification results, which are integrated into the domain adaptation to output the fault diagnosis result.
2. The fault diagnosis method based on integrated domain adaptation according to claim 1, characterized in that: data preprocessing is performed on the source-domain and target-domain data, specifically including dividing the data into segments of a set length and normalizing the divided data.
3. The fault diagnosis method based on integrated domain adaptation according to claim 1, characterized in that: different domain labels are defined for the preprocessed source-domain and target-domain data, and the first feature extractor and the first classification network are trained together with the domain prediction network on the basis of domain adversarial training, so that the first feature extractor and the domain prediction network obtain optimal parameters.
4. The fault diagnosis method based on integrated domain adaptation according to claim 1, characterized in that: a second feature extractor and a second classification network are trained based on the maximum mean discrepancy with a radial basis function kernel.
5. The fault diagnosis method based on integrated domain adaptation according to claim 1, characterized in that: in the integration step, the two classification results are averaged to obtain the final prediction result.
6. A fault diagnosis system based on integrated domain adaptation, characterized in that it comprises:
a data preprocessing module configured to segment the acquired source-domain and target-domain data and normalize the data;
a feature processing module comprising two feature extractors that project the source-domain and target-domain data into different feature spaces, wherein one feature extractor learns features based on domain adversarial learning and the other learns with the maximum mean discrepancy as its loss function, the different feature extractors using different loss functions to obtain different classification results;
and an integration module configured to integrate the classification results into the domain adaptation and output the resulting fault diagnosis.
7. The fault diagnosis system based on integrated domain adaptation according to claim 6, characterized in that: different domain labels are defined for the preprocessed source-domain and target-domain data, and the first feature extractor and the first classification network are trained together with the domain prediction network on the basis of domain adversarial training, so that the first feature extractor and the domain prediction network obtain optimal parameters.
8. The fault diagnosis system based on integrated domain adaptation according to claim 6, characterized in that: a second feature extractor and a second classification network are trained based on the maximum mean discrepancy with a radial basis function kernel.
9. A computer-readable storage medium, characterized in that: a plurality of instructions are stored therein, the instructions being adapted to be loaded by a processor of a terminal device to execute the fault diagnosis method based on integrated domain adaptation according to any one of claims 1-5.
10. A terminal device, characterized in that: it comprises a processor and a computer-readable storage medium, the processor being configured to implement instructions; the computer-readable storage medium is used to store a plurality of instructions adapted to be loaded by the processor to execute the fault diagnosis method based on integrated domain adaptation according to any one of claims 1-5.
CN202010490493.7A 2020-06-02 2020-06-02 Fault diagnosis method and system based on integrated domain adaptation Active CN111738455B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010490493.7A CN111738455B (en) 2020-06-02 2020-06-02 Fault diagnosis method and system based on integrated domain adaptation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010490493.7A CN111738455B (en) 2020-06-02 2020-06-02 Fault diagnosis method and system based on integrated domain adaptation

Publications (2)

Publication Number Publication Date
CN111738455A true CN111738455A (en) 2020-10-02
CN111738455B CN111738455B (en) 2021-05-11

Family

ID=72648227

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010490493.7A Active CN111738455B (en) 2020-06-02 2020-06-02 Fault diagnosis method and system based on integrated domain adaptation

Country Status (1)

Country Link
CN (1) CN111738455B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112683532A (en) * 2020-11-25 2021-04-20 西安交通大学 Cross-working condition countermeasure diagnostic method for bearing

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109308318A (en) * 2018-08-14 2019-02-05 深圳大学 Training method, device, equipment and the medium of cross-domain texts sentiment classification model
CN109492099A (en) * 2018-10-28 2019-03-19 北京工业大学 It is a kind of based on field to the cross-domain texts sensibility classification method of anti-adaptive
CN110210371A (en) * 2019-05-29 2019-09-06 华南理工大学 A kind of aerial hand-written inertia sensing signal creating method based on depth confrontation study
CN110309798A (en) * 2019-07-05 2019-10-08 中新国际联合研究院 A kind of face cheat detecting method extensive based on domain adaptive learning and domain
US20190325299A1 (en) * 2018-04-18 2019-10-24 Element Ai Inc. Unsupervised domain adaptation with similarity learning for images
CN110728377A (en) * 2019-10-21 2020-01-24 山东大学 Intelligent fault diagnosis method and system for electromechanical equipment
CN110837850A (en) * 2019-10-23 2020-02-25 浙江大学 Unsupervised domain adaptation method based on counterstudy loss function

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190325299A1 (en) * 2018-04-18 2019-10-24 Element Ai Inc. Unsupervised domain adaptation with similarity learning for images
CN109308318A (en) * 2018-08-14 2019-02-05 深圳大学 Training method, device, equipment and the medium of cross-domain texts sentiment classification model
CN109492099A (en) * 2018-10-28 2019-03-19 北京工业大学 It is a kind of based on field to the cross-domain texts sensibility classification method of anti-adaptive
CN110210371A (en) * 2019-05-29 2019-09-06 华南理工大学 A kind of aerial hand-written inertia sensing signal creating method based on depth confrontation study
CN110309798A (en) * 2019-07-05 2019-10-08 中新国际联合研究院 A kind of face cheat detecting method extensive based on domain adaptive learning and domain
CN110728377A (en) * 2019-10-21 2020-01-24 山东大学 Intelligent fault diagnosis method and system for electromechanical equipment
CN110837850A (en) * 2019-10-23 2020-02-25 浙江大学 Unsupervised domain adaptation method based on counterstudy loss function

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
QIN WANG ET AL: "Domain Adaptive Transfer Learning for Fault Diagnosis", 2019 Prognostics and System Health Management Conference *
WANG CHUNFENG: "Research and Implementation of Fault Diagnosis Methods for Industrial Processes Based on Transfer Learning", China Master's Theses Full-text Database, Basic Sciences *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112683532A (en) * 2020-11-25 2021-04-20 西安交通大学 Cross-working condition countermeasure diagnostic method for bearing

Also Published As

Publication number Publication date
CN111738455B (en) 2021-05-11

Similar Documents

Publication Publication Date Title
US10187415B2 (en) Cognitive information security using a behavioral recognition system
US20240071037A1 (en) Mapper component for a neuro-linguistic behavior recognition system
US10409910B2 (en) Perceptual associative memory for a neuro-linguistic behavior recognition system
Mao et al. Deep Learning of Segment-Level Feature Representation with Multiple Instance Learning for Utterance-Level Speech Emotion Recognition.
CN112784929B (en) Small sample image classification method and device based on double-element group expansion
US20220012422A1 (en) Lexical analyzer for a neuro-linguistic behavior recognition system
CN113220865B (en) Text similar vocabulary retrieval method, system, medium and electronic equipment
US20230237306A1 (en) Anomaly score adjustment across anomaly generators
Kwon et al. Multi-scale speaker embedding-based graph attention networks for speaker diarisation
CN113849653A (en) Text classification method and device
CN114048729A (en) Medical document evaluation method, electronic device, storage medium, and program product
Wang et al. Contrastive Predictive Coding of Audio with an Adversary.
CN116337448A (en) Method, device and storage medium for diagnosing faults of transfer learning bearing based on width multi-scale space-time attention
CN111738455B (en) Fault diagnosis method and system based on integrated domain adaptation
CN111782804A (en) TextCNN-based same-distribution text data selection method, system and storage medium
Fonseca et al. Model-agnostic approaches to handling noisy labels when training sound event classifiers
US12032909B2 (en) Perceptual associative memory for a neuro-linguistic behavior recognition system
Lee et al. MFRD-80K: A Dataset and Benchmark for Masked Face Recognition.
CN116245148A (en) Compression method and device for automatic driving model, electronic equipment and storage medium
CN114330320A (en) Entity extraction method, training method of first entity extraction model and related device
CN116361715A (en) Data processing method, device and computer readable storage medium
CN117831566A (en) Multilingual speech emotion recognition system based on domain countermeasure learning
CN118069867A (en) Operation and maintenance data analysis method, device and equipment
CN118113914A (en) Artificial intelligence treatment method and robot for progressively identifying sensitive language

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant