CN115408190A - Fault diagnosis method and device
- Publication number: CN115408190A (application CN202211063252.XA)
- Authority: CN (China)
- Prior art keywords: feature, fault, network, data, feature data
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F11/079: Root cause analysis, i.e. error or fault diagnosis
- G06F16/35: Clustering; Classification (information retrieval of unstructured textual data)
- G06F16/38: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F40/194: Calculation of difference between files (text processing)
- G06F40/30: Semantic analysis (handling natural language data)
- G06N3/084: Backpropagation, e.g. using gradient descent (neural network learning methods)
Abstract
Description
Technical Field

The present invention relates to the technical field of fault analysis, and in particular to a fault diagnosis method and device.

Background Art

The massive volumes of fault records stored in manufacturing industries such as aerospace, automotive, and the process industries have motivated the application of data mining and text mining to historical-data-driven fault diagnosis. A fault record captures the mechanism of a product failure, the parts involved, and the observed failure phenomena, and it can support product analysis and guide workers in repairing the fault. Industrial fault records fall into structured data (for example, part model and serial numbers, operating signals, and common voltage and current observations, which a computer can use directly for diagnosis) and unstructured information, which is usually embedded in free text. Structured numerical data can be used by a computer directly, whereas information retrieval and diagnosis over unstructured fault records usually rely on the experience of professional technicians, which is time-consuming and inefficient. Mining and analyzing such text records can help maintenance workers judge the fault type, analyze the fault cause, and retrieve the corresponding maintenance plan. Failures of complex products often involve different components, and each component exhibits different failure phenomena under different failure causes, corresponding to different fault topic categories and solutions. In addition, common features of textual descriptions such as domain-specific terms, polysemous words, and modal particles can bias a computer's interpretation of the text and its feature recognition. Differences in how manufacturers and maintenance personnel habitually record text also affect the quality of fault feature extraction, and the extracted features, as the input to the classifier and to similar-case retrieval, directly determine the fault diagnosis performance of the model. Therefore, for the case records of different products, there is an urgent need for a generalized method that can accurately extract fault features and perform intelligent fault diagnosis.
Summary of the Invention

The purpose of the present invention is to provide a fault diagnosis method and device, so as to solve the problems of the existing solutions that fault text feature extraction is inefficient and the extracted features are not fine-grained enough, resulting in a low degree of automation and insufficient accuracy in fault diagnosis.

To achieve the above purpose, an embodiment of the present invention provides a fault diagnosis method, including:

constructing a fault diagnosis network model, where the fault diagnosis network model includes a feature extraction network, a feature interaction network, and a feature classification network;

determining, according to work order sample data, the feature extraction network, and a historical fault database, first feature data for characterizing text semantics and second feature data for characterizing text fault topics;

determining a trained fault diagnosis model according to the first feature data, the second feature data, the feature interaction network, and the feature classification network; and

performing a similarity calculation against the historical fault database based on the fault diagnosis model, and determining the corresponding work order processing information in the historical fault database.
Optionally, determining the trained fault diagnosis model according to the first feature data, the second feature data, the feature interaction network, and the feature classification network includes:

performing a vector-product interaction between the first feature data and the second feature data to determine an interacted weight coefficient matrix;

determining a processed first weight coefficient according to a normalization function of the feature interaction network and the weight coefficient matrix;

determining third feature data by a weighted summation of the first weight coefficient and the first feature data, where the third feature data characterizes feature data fusing text semantics and topics; and

determining the trained fault diagnosis model according to the third feature data and a target loss function of the feature classification network.

Optionally, the target loss function of the feature classification network is determined as follows:

determining a first loss function of the second feature data passed through the feature classification network, a second loss function of the third feature data passed through the feature classification network, and a third loss function for the model parameter regularization loss; and

determining the target loss function as a weighted sum of the first loss function, the second loss function, and the third loss function.

Optionally, the first loss function is determined according to a preset first cross-entropy function and the number of training samples;

the second loss function is determined according to a preset second cross-entropy function and the number of training samples;

where both the first cross-entropy function and the second cross-entropy function involve the class labels of the samples and the predicted values of the sample classification.
Optionally, performing the vector-product interaction between the first feature data and the second feature data to determine the interacted weight coefficient matrix includes:

performing the vector-product interaction between the first feature data and the second feature data to determine a semantic feature interaction matrix; and

processing the semantic feature interaction matrix with one convolutional network layer to determine the weight coefficient matrix.

Optionally, determining, according to the work order sample data, the feature extraction network, and the historical fault database, the first feature data for characterizing text semantics and the second feature data for characterizing text fault topics includes:

determining the first feature data of the work order sample data through a first preset algorithm of the feature extraction network, where the first feature data includes the semantic length and the embedding vector dimension; and

determining the second feature data of the work order sample data through a second preset algorithm of the feature extraction network according to the work order sample data and the historical fault database, where the second feature data includes the number of topics, fault phenomenon topics, fault cause topics, and fault measure topics.
Optionally, determining the trained fault diagnosis model according to the third feature data and the target loss function of the feature classification network includes:

determining a target fault classification loss value according to the third feature data and the target loss function; and

if the target fault classification loss value is lower than a threshold, optimizing the feature extraction network, the feature interaction network, and the feature classification network according to a preset function until the target fault classification loss value is greater than or equal to the threshold, at which point the trained fault diagnosis model is determined.

Optionally, the above method further includes:

obtaining N historical fault work orders and preprocessing them to determine fault feature data, where N is a positive integer; and

constructing the historical fault database based on the fault feature data, where the fault database includes correspondences between the N historical fault work orders and N pieces of third feature data, and the third feature data are stored in the historical fault database in vector form;

where the fault feature data includes one or more of a failure mode, a failure cause, a failure effect, a fault detection method, design improvement measures, and compensating measures in use; and

the preprocessing includes one or more of noise information elimination, duplicate data deletion, and sensitive word filtering.
To achieve the above purpose, an embodiment of the present invention further provides a fault diagnosis device, including:

a construction module configured to construct a fault diagnosis network model, where the fault diagnosis network model includes a feature extraction network, a feature interaction network, and a feature classification network;

a first determination module configured to determine, according to work order sample data, the feature extraction network, and a historical fault database, first feature data for characterizing text semantics and second feature data for characterizing text fault topics;

a second determination module configured to determine a trained fault diagnosis model according to the first feature data, the second feature data, the feature interaction network, and the feature classification network; and

a third determination module configured to perform a similarity calculation against the historical fault database based on the fault diagnosis model and determine the corresponding work order processing information in the historical fault database.

To achieve the above purpose, an embodiment of the present invention further provides a readable storage medium on which a program or instructions are stored, where the program or instructions, when executed by a processor, implement the steps of the fault diagnosis method described in any one of the above items.

The beneficial effects of the above technical solutions of the present invention are as follows:

The above technical solutions extract first feature data for characterizing text semantics and second feature data for characterizing text fault topics, combine the first feature data and the second feature data through the feature interaction network, and then determine a trained fault diagnosis model through the feature classification network; a similarity calculation is performed against the historical fault database based on the fault diagnosis model to determine the corresponding work order processing information in the historical fault database. This improves the accuracy of automated fault diagnosis and enhances both the interpretability of the model's feature learning and its diagnostic adaptability to different fault topics.
Brief Description of the Drawings

FIG. 1 is a first flowchart of the fault diagnosis method provided by an embodiment of the present invention;

FIG. 2 is a structural diagram of the fault diagnosis network model provided by an embodiment of the present invention;

FIG. 3 is a structural diagram of the feature interaction network provided by an embodiment of the present invention;

FIG. 4 is a second flowchart of the fault diagnosis method provided by an embodiment of the present invention;

FIG. 5 is a structural diagram of the fault diagnosis device provided by an embodiment of the present invention.
Detailed Description of the Embodiments

To make the technical problems to be solved, the technical solutions, and the advantages of the present invention clearer, a detailed description is given below with reference to the accompanying drawings and specific embodiments.

It should be understood that references throughout this specification to "one embodiment" or "an embodiment" mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of "in one embodiment" or "in an embodiment" in various places throughout the specification do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.

In the various embodiments of the present invention, it should be understood that the numbering of the following processes does not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention.

In addition, the terms "system" and "network" are often used interchangeably herein.

In the embodiments provided in this application, it should be understood that "B corresponding to A" means that B is associated with A and that B can be determined from A. It should also be understood, however, that determining B from A does not mean determining B only from A; B may also be determined from A and/or other information.
As shown in FIG. 1, an embodiment of the present invention provides a fault diagnosis method, including:

Step 101: construct a fault diagnosis network model, where the fault diagnosis network model includes a feature extraction network, a feature interaction network, and a feature classification network.

The fault diagnosis network model constructed in the present invention, based on text-mining feature extraction and interaction, is called the topic-semantic interaction feature model (Topic-context Interaction Model, TCIM). As shown in FIG. 2, according to the specific function of each part of the structure, the TCIM model can be divided into three parts: the feature extraction network (Gg), the feature interaction network (Gf), and the feature classification network (Gd). The network structure of FIG. 2 can be used in any of the following steps, and the arrows in FIG. 2 represent the flow of the raw fault data during model training.

Step 102: determine, according to work order sample data, the feature extraction network, and the historical fault database, first feature data for characterizing text semantics and second feature data for characterizing text fault topics. The work order sample data is fault case text data, that is, record data in text form such as maintenance logs and manuals.

In this embodiment, the first feature data may represent global information mined from the text at the character or word level, and may also represent local semantic features determined through an attention mechanism. The second feature data is feature data of the text fault topics and represents the key information in the text; if a topic word matches a candidate keyword, the candidate content adequately represents the gist of the text. Topic keywords are constructed using the Latent Dirichlet Allocation (LDA) topic model, and the topic feature score is based on whether a candidate keyword appears among the topic feature words: if it does, its weight is doubled; otherwise the weight is unchanged.
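As an illustration of the LDA topic-keyword step described above, the following minimal sketch uses scikit-learn's LatentDirichletAllocation; the toy corpus, the number of topics, and the weight-doubling scoring helper are assumptions made for illustration rather than the patent's exact configuration.

```python
# Sketch only: LDA topic keywords plus the candidate-keyword weight-doubling rule.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

corpus = [  # toy fault case texts (assumed)
    "engine cylinder misfire low compression replace gasket",
    "battery voltage drop charging fault replace battery cell",
    "gearbox vibration bearing wear lubricate and replace bearing",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(corpus)                    # bag-of-words counts
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)

vocab = vectorizer.get_feature_names_out()
topic_words = [
    [vocab[i] for i in comp.argsort()[::-1][:5]]        # top-5 words per topic
    for comp in lda.components_
]

def score(candidate_keywords, weights, topic_id):
    """Double a candidate keyword's weight if it appears among the topic's feature words."""
    return {
        w: (2 * weights[w] if w in topic_words[topic_id] else weights[w])
        for w in candidate_keywords
    }

print(topic_words)
print(score(["battery", "gasket"], {"battery": 1.0, "gasket": 1.0}, topic_id=1))
```

In practice the corpus would be the preprocessed fault case texts, and the number of topics would be chosen as described for the topic feature module below.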
Step 103: determine a trained fault diagnosis model according to the first feature data, the second feature data, the feature interaction network, and the feature classification network.

In this embodiment, the first feature data and the second feature data are made to interact through the feature interaction network, and the trained fault diagnosis model is then further determined through the feature classification network.

Step 104: perform a similarity calculation against the historical fault database based on the fault diagnosis model, and determine the corresponding work order processing information in the historical fault database.

In this embodiment of the present invention, after the fault diagnosis model has been determined, the work order data to be processed is input into the fault diagnosis model. The fault diagnosis model can classify the work order data to be processed and determine the classified data or the key information of the work order data to be processed; based on the classified data or the key information, a similarity calculation is performed against the historical fault database to find similar work orders and return their processing opinions.

It should be noted that the similarity process is as follows: the feature interaction network generates, from the first feature data and the second feature data, third feature data that contains the information of the first two kinds of features; a similarity calculation is then performed with the set of third feature data stored in the historical fault database to find similar fault cases that help resolve the current problem, while the fault diagnosis model is used to complete intelligent fault classification and diagnosis.

Optionally, the similarity calculation adopts cosine similarity, and the degree of correlation between two texts is judged by the cosine measure. The larger the cosine similarity, the smaller the angle between the two variables and the more similar the two texts; the smaller the cosine similarity, the larger the angle between the two variables and the less similar the two texts. Finally, after comparison with the historical fault database, the work order processing information with the highest similarity in the historical fault database is determined.
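A brief numpy sketch of the cosine-similarity retrieval just described, assuming the third feature data of each historical work order has already been pooled into a fixed-length vector; the dimensions and random data are placeholders.

```python
# Sketch only: cosine similarity between a query case vector and stored case vectors.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two feature vectors (larger = more similar)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Assumed setup: each historical work order is represented by one pooled feature vector.
rng = np.random.default_rng(0)
case_vectors = rng.normal(size=(100, 64))      # 100 historical cases, 64-dim features
query_vector = rng.normal(size=64)             # feature vector of the new work order

scores = np.array([cosine_similarity(query_vector, v) for v in case_vectors])
best = int(np.argmax(scores))                  # work order with the highest similarity
print(best, scores[best])
```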
The solution of the present invention can complete diagnosis and retrieval tasks for different cases, improves the accuracy of automated fault diagnosis, and enhances both the interpretability of the model's feature learning and its diagnostic adaptability to different fault topics.

Optionally, the above step 102 includes:

Step 104: determine the first feature data of the work order sample data through a first preset algorithm of the feature extraction network, where the first feature data includes the semantic length and the embedding vector dimension;

Step 105: determine the second feature data of the work order sample data through a second preset algorithm of the feature extraction network according to the work order sample data and the historical fault database, where the second feature data includes the number of topics, fault phenomenon topics, fault cause topics, and fault measure topics.

It should be noted that the number of topics and the prior distribution of the topics are parameters determined in advance (i.e., how many topics the field of interest is expected to be divided into), either manually or by a machine algorithm. Here, the topics of the second feature data refer to all the fault topics contained in the historical fault database; for the automotive field, for example, these include engine cylinder intake and exhaust faults, mechanical transmission faults, power battery fault types, and so on. After the topic feature module of the feature extraction network has been learned, all historical data are divided into as many classes as there are topics, and each topic has its corresponding words (i.e., the document-topic and topic-word probability distributions). The final second feature information is obtained as the weighted average of the embedding vectors of the top few high-frequency words under each topic, and it represents the main fault phenomena and causes under each topic.
In this embodiment of the present invention, the feature extraction network (Gg) is used to extract effective fault features from the raw fault data text (the work order sample data) and is divided into a semantic feature extraction module (shown in FIG. 2) and a topic feature extraction module (shown in FIG. 2). Based on the learned word vectors, the semantic feature extraction module uses a convolutional neural network to extract the semantic information of the text; after normalization, a rectified linear unit (ReLU) activation, and a pooling layer, the required text semantic feature vector, i.e., the first feature data, is obtained. The topic feature extraction module is based on the Latent Dirichlet Allocation (LDA) topic model widely used in text mining; it performs topic mining on the records in the case database to obtain a human-interpretable distribution of fault topics and the high-frequency words under each topic. A weighted sum of the high-frequency word vectors under each topic yields the topic feature vector of that fault topic, i.e., the second feature data.

In an optional embodiment, the semantic features and the topic category features are extracted by the feature extraction network (Gg). The semantic features (first feature data) are the semantic feature embedding matrix output for each case text after the Word2vec (word to vector) model and the convolutional layer, D = {d_1, d_2, ..., d_s} ∈ R^{s×e}, where s is the length of the document and e is the configured embedding vector dimension. The topic category features (second feature data) are the topic feature matrix T = {t_1, t_2, ..., t_k} ∈ R^{k×e} obtained, from the output of the topic model (the topic-word probability distribution), by averaging the embedding vectors of the five most probable words under each topic, where k is the number of topics and e is the embedding vector dimension. Here, R^{k×e} denotes the corresponding vector space, i.e., k components, each of which is an e-dimensional vector.
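The following PyTorch sketch illustrates the shapes of the two feature matrices defined above: the semantic feature matrix D (token embeddings passed through a one-dimensional convolution) and the topic feature matrix T (the mean embedding of each topic's top-5 words). The embedding layer stands in for the learned Word2vec vectors, and the vocabulary size, kernel size, and top-word lists are assumptions.

```python
# Sketch only: building D (s x e semantic features) and T (k x e topic features).
import torch
import torch.nn as nn

s, e, k, vocab_size = 20, 32, 3, 500          # doc length, embed dim, topics, vocab (assumed)

embedding = nn.Embedding(vocab_size, e)       # stand-in for learned Word2vec vectors
conv = nn.Conv1d(in_channels=e, out_channels=e, kernel_size=3, padding=1)

# Semantic features D: embed the token ids of one work order text, then convolve.
token_ids = torch.randint(0, vocab_size, (s,))
emb = embedding(token_ids)                                # (s, e)
D = torch.relu(conv(emb.T.unsqueeze(0))).squeeze(0).T     # (s, e)

# Topic features T: average the embeddings of each topic's top-5 high-frequency words.
top_word_ids = torch.randint(0, vocab_size, (k, 5))       # assumed LDA output per topic
T = embedding(top_word_ids).mean(dim=1)                   # (k, e)

print(D.shape, T.shape)                       # torch.Size([20, 32]) torch.Size([3, 32])
```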
Optionally, the above step 103 includes:

Step 104: perform a vector-product interaction between the first feature data and the second feature data, and determine the interacted weight coefficient matrix;

Step 105: determine the processed first weight coefficient according to the normalization function of the feature interaction network and the weight coefficient matrix;

Step 106: determine third feature data by a weighted summation of the first weight coefficient and the first feature data, where the third feature data characterizes feature data fusing text semantics and topics;

Step 107: determine the trained fault diagnosis model according to the third feature data and the target loss function of the feature classification network.

In this embodiment, in the feature interaction network (Gf), the vector product of the extracted topic features and semantic features is computed to obtain the topic-semantic relationship; that is, the vector-product interaction between the first feature data and the second feature data determines the interacted weight coefficient matrix. After the weight coefficient matrix passes through the normalization function (the softmax activation function), it gives the topic weight value corresponding to each text segment; weighting the semantic features accordingly yields the third feature data of the text under the guidance of the topic categories, which fuses the topic and semantic information of the case. The third feature data is in vector form, convenient for computer storage, and the similarity between the current case and the historical fault cases in the case library can be obtained by cosine calculation, completing case retrieval to provide a solution for the current case.

In step 107, the feature classification network (Gd) consists of a multi-layer fully connected neural network. Its inputs are the topic category feature vector extracted by the topic feature extraction module and the third feature data generated by the interaction network; through the target loss function of the feature classification network (Gd), it outputs the probability values corresponding to all fault categories, which are used to predict the fault class from the case features and complete intelligent fault diagnosis.
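A minimal sketch of the feature classification network Gd as a multi-layer fully connected network that outputs a probability for every fault category; the hidden width, the number of fault classes, and the pooling of the interaction features into a single vector per case are illustrative assumptions.

```python
# Sketch only: a fully connected classifier head over pooled interaction features.
import torch
import torch.nn as nn

class FaultClassifier(nn.Module):
    """Maps a pooled feature vector to probabilities over fault categories."""
    def __init__(self, feature_dim: int = 32, hidden_dim: int = 64, num_classes: int = 5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return torch.softmax(self.net(features), dim=-1)   # probability per fault class

classifier = FaultClassifier()
H_pooled = torch.randn(4, 32)        # e.g. mean-pooled interaction features for 4 cases
print(classifier(H_pooled).shape)    # torch.Size([4, 5])
```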
Optionally, the target loss function of the feature classification network is determined as follows:

Step 108: determine the first loss function of the second feature data passed through the feature classification network, the second loss function of the third feature data passed through the feature classification network, and the third loss function for the model parameter regularization loss;

Step 109: determine the target loss function as a weighted sum of the first loss function, the second loss function, and the third loss function.

In this embodiment, the target loss function mainly consists of the loss functions of the classifier. The total loss function of the fault classifier, i.e., the target loss function L_c, is obtained as the weighted sum of the prediction loss of the topic features, the prediction loss of the interaction features, and the model parameter regularization loss, and is defined as:
$$L_c = L_1 + P L_2 + \lambda R(w) \qquad \text{(Formula 1)}$$
In Formula 1, L_1 is the prediction loss obtained from the interaction features through the classification module, i.e., the second loss function of the third feature data passed through the feature classification network; L_2 is the prediction loss obtained from the topic features through the classification module, i.e., the first loss function of the second feature data passed through the feature classification network; P and λ are the weight values corresponding to the two losses, each between 0 and 1 (inclusive); and R(W) = ‖W‖^2 is the squared regularization value of all parameters to be learned in the model.
Specifically, the first loss function is determined according to a preset first cross-entropy function and the number of training samples;

the second loss function is determined according to a preset second cross-entropy function and the number of training samples;

where both the first cross-entropy function and the second cross-entropy function involve the class labels of the samples and the predicted values of the sample classification.
In this embodiment, the fault prediction loss values of the second loss function L_1 and the first loss function L_2 are measured by the cross entropy between the prediction and the true label. The prediction losses of L_1 and L_2 are:

$$L_1 = \frac{1}{n}\,\ell\big(y_1, y_h\big), \qquad L_2 = \frac{1}{n}\,\ell\big(y_1, y_t\big)$$

where n is the number of training samples, ℓ(·,·) is the cross-entropy function, y_1 is the fault type of the current fault text, y_h = f(H) is the fault type predicted by the classifier from the interaction feature H, and y_t = f(T) is the fault type predicted by the classifier from the topic feature vector T.

The cross-entropy function is defined as follows:

$$\ell(y, \hat{y}) = -\sum_{i} y^{(i)} \log \hat{y}^{(i)}$$

where y^{(i)} is the class label of the i-th sample, i.e., the true value of the sample, and \hat{y}^{(i)} is the model's prediction for the classification of the i-th sample.
In summary, the target loss function L_c can be expressed equivalently as:

$$L_c = \frac{1}{n}\,\ell\big(y_1, f(H)\big) + P\,\frac{1}{n}\,\ell\big(y_1, f(T)\big) + \lambda \lVert W \rVert^2$$
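The combined objective above can be sketched in PyTorch as follows; note that torch.nn.functional.cross_entropy expects logits rather than softmax probabilities and averages over the batch, which plays the role of the 1/n factor. The values of P and λ, the stand-in linear model, and the tensor shapes are assumptions for illustration.

```python
# Sketch only: L_c = L_1 + P * L_2 + lambda * ||W||^2 with cross-entropy terms.
import torch
import torch.nn.functional as F

def total_loss(logits_from_H, logits_from_T, labels, model, P=0.5, lam=1e-4):
    """Weighted sum of the two prediction losses and the parameter regularization loss."""
    L1 = F.cross_entropy(logits_from_H, labels)      # loss of interaction-feature predictions
    L2 = F.cross_entropy(logits_from_T, labels)      # loss of topic-feature predictions
    R = sum(p.pow(2).sum() for p in model.parameters())   # ||W||^2 over learnable parameters
    return L1 + P * L2 + lam * R

# Illustrative usage with a stand-in model and random data.
model = torch.nn.Linear(32, 5)
labels = torch.randint(0, 5, (8,))
logits_H = model(torch.randn(8, 32))
logits_T = model(torch.randn(8, 32))
print(total_loss(logits_H, logits_T, labels, model).item())
```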
Optionally, as shown in FIG. 3, the above step 104 includes:

Step 110: perform the vector-product interaction between the first feature data and the second feature data to determine the semantic feature interaction matrix.

In this embodiment, the vector-product interaction is performed between the first feature data and the second feature data extracted from the work order sample data; that is, the first feature data and the second feature data are input to the feature interaction network, whose specific structure is shown in FIG. 3. First, the vector product completes the interaction calculation and yields the category-semantic feature interaction matrix G = D T^T = {g_1, g_2, ..., g_s} ∈ R^{s×k}. The semantic feature interaction matrix G provides the model with more feature information for subsequent tasks while preserving the interpretability of the model and, by introducing topic prior knowledge, helps re-express the text.

Step 111: process the semantic feature interaction matrix with one convolutional network layer to determine the weight coefficient matrix.

To obtain the topic-word pair associations, the semantic feature interaction matrix G is further processed by one convolutional network layer to capture the nonlinear relationships between adjacent words.

For each vector of G centered at d, from d-n to d+n, a convolution with kernel size n is applied, and the ReLU activation function is used to obtain the convolved vectors U = {u_1, u_2, ..., u_s} ∈ R^{s×k}. For the d-th convolved vector u_d, Formula 5 of the convolution process is as follows:
$$u_d = \mathrm{ReLU}\big(w_d\, G_{d-n:d+n} + b_d\big) \qquad \text{(Formula 5)}$$
where w_d and b_d are the parameters of the convolutional network to be learned, and the final weight attention vector V = {v_1, v_2, ..., v_s} ∈ R^{s×1} is obtained by max pooling over u_d. V is a vector of length s that stores the attention score between each word in the current text and the topics; the Softmax function converts the scores into the topic-word weight coefficient vector β, i.e., β = SoftMax(V).

The d-th topic-word weight coefficient is computed as:

$$\beta_d = \frac{\exp(v_d)}{\sum_{j=1}^{s} \exp(v_j)}$$

After the weight coefficient vector β has been computed, weighting each word component of the original text semantic features correspondingly generates the new interaction feature H = {h_1, h_2, ..., h_s} ∈ R^{s×e}. This is the final new feature carrying the topic-word feature interaction relationship, i.e., the third feature data, which contains both the text information and the category information of each case, and is computed as:

$$h_d = \beta_d\, d_d, \quad d = 1, 2, \dots, s$$
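The interaction steps just described (the matrix G = D T^T, one convolution layer, max pooling, the softmax weights β, and the weighted semantic features H) can be sketched as follows; the document length, embedding dimension, topic count, and kernel size are illustrative assumptions.

```python
# Sketch only: topic-semantic interaction producing weights beta and features H.
import torch
import torch.nn as nn

s, e, k = 20, 32, 3                          # doc length, embed dim, topic count (assumed)
D = torch.randn(s, e)                        # semantic features from the extraction network
T = torch.randn(k, e)                        # topic features from the extraction network

G = D @ T.T                                  # interaction matrix, shape (s, k)

conv = nn.Conv1d(in_channels=k, out_channels=k, kernel_size=3, padding=1)
U = torch.relu(conv(G.T.unsqueeze(0)))       # convolve along the word axis, (1, k, s)
V = U.max(dim=1).values.squeeze(0)           # max pool over the topic channels, length s

beta = torch.softmax(V, dim=0)               # topic-word attention weights, length s
H = beta.unsqueeze(1) * D                    # weight each word's semantic vector, (s, e)

print(G.shape, V.shape, H.shape)
```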
Further, as shown in FIG. 2, step 107 in this embodiment of the present invention includes:

Step 112: determine the target fault classification loss value according to the third feature data and the target loss function;

Step 113: if the target fault classification loss value is lower than the threshold, optimize the feature extraction network, the feature interaction network, and the feature classification network according to the preset function until the target fault classification loss value is greater than or equal to the threshold, at which point the trained fault diagnosis model is determined.

In this embodiment of the present invention, determining the trained fault diagnosis model may also involve optimizing the semantic feature extraction network, the feature interaction network, and the feature classification network. The trained topic feature extraction module is fixed, the raw data is used to extract semantic features, feature interaction and classification prediction are completed, and the target fault classification loss value after one round of training is obtained. If the target fault classification loss value is lower than the threshold, the feature extraction network, the feature interaction network, and the feature classification network are optimized according to the preset function, i.e., the gradient of the current loss is back-propagated and the optimization is repeated (repeating step 112) until the target fault classification loss value is greater than or equal to the threshold; the model loss has then converged and the trained fault diagnosis model is determined.

It should be noted that the optimization objective of the model is to minimize the overall classification loss of the fault classifier. On the one hand, the model minimizes the L_2 loss to constrain the weighted extracted topic features to fit the classification categories as closely as possible; on the other hand, after the semantic features are added, the model minimizes the L_1 loss to constrain the generated features to represent the complete information of the case, enabling the feature classification network to discriminate the fault type and giving the generated fine-grained features the ability to diagnose faults. During training, the model achieves the above optimization objectives by computing the target fault classification loss value in the feature classification network and back-propagating it to iteratively optimize the parameters of the feature extraction model and the feature interaction model in turn.
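A compressed sketch of the training phase described above, with the topic features T held fixed while the semantic embedding, the interaction convolution, and the classifier are updated by back-propagation; the mean pooling of H, the use of only the classification loss (without the P-weighted topic loss and the λ regularization term), the optimizer, and the fixed number of epochs are simplifications and assumptions.

```python
# Sketch only: one training phase with the topic features T frozen.
import torch
import torch.nn as nn
import torch.nn.functional as F

s, e, k, num_classes, n = 20, 32, 3, 5, 16        # assumed sizes
T = torch.randn(k, e)                             # frozen topic features (from the LDA stage)

embed = nn.Embedding(500, e)
conv = nn.Conv1d(k, k, kernel_size=3, padding=1)
classifier = nn.Linear(e, num_classes)
params = list(embed.parameters()) + list(conv.parameters()) + list(classifier.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)

tokens = torch.randint(0, 500, (n, s))            # toy work order token ids
labels = torch.randint(0, num_classes, (n,))      # toy fault class labels

for epoch in range(5):                            # placeholder stopping rule
    D = embed(tokens)                             # (n, s, e) semantic features
    G = D @ T.T                                   # (n, s, k) interaction matrices
    V = torch.relu(conv(G.transpose(1, 2))).max(dim=1).values   # (n, s) attention scores
    beta = torch.softmax(V, dim=1)                # topic-word weights per document
    H = (beta.unsqueeze(2) * D).mean(dim=1)       # pooled interaction features, (n, e)
    loss = F.cross_entropy(classifier(H), labels) # classification loss, back-propagated below
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(epoch, loss.item())
```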
It should also be noted that, before the above step 103, the model first needs to be initialized. The weight parameters of the feature extraction network (Gg), the feature interaction network (Gf), and the feature classification network (Gd) are initialized. Next, the parameters of the topic feature extraction module in the feature extraction network (Gg) are determined experimentally: topic features are extracted from the raw data, and suitable topic model parameters are determined through experiments using high-frequency words and visualization of the topic distributions, making full use of the experience of the relevant technical personnel so that the topic feature distribution is as close as possible to the real fault feature distribution.

Optionally, the above method further includes:

Step 114: obtain N historical fault work orders and preprocess them to determine fault feature data, where N is a positive integer;

Step 115: construct the historical fault database based on the fault feature data, where the fault database includes the correspondences between the N historical fault work orders and N pieces of third feature data, and the third feature data are stored in the historical fault database in vector form;

where the fault feature data includes one or more of a failure mode, a failure cause, a failure effect, a fault detection method, design improvement measures, and compensating measures in use;

and the preprocessing includes one or more of noise information elimination, duplicate data deletion, and sensitive word filtering.
It should be noted that the constructed historical fault database stores the historical fault cases (historical fault work orders) and their corresponding third features (generated during training and containing the information of the first and second feature data). Because the third feature data is stored in vector form, compared with traditional text storage and similar-text retrieval, this approach is easier to store and makes it more convenient to compute the similarity between cases.

In this embodiment, N historical fault work orders are obtained and preprocessed. The determined fault feature data includes manual diagnosis results and system diagnosis results, where the system diagnosis results are obtained by processing the fault feature data with a pre-stored fault diagnosis system; the N historical fault work orders include M manual diagnosis results, where M is a positive integer less than or equal to N and N is a positive integer. The fault feature data is obtained by processing the detailed fault data of the historical fault work orders, and the detailed fault data includes data within a preset time period associated with the occurrence time of the fault in the historical fault work order. The fault feature data includes product information (product name, product model, function, material, environmental load, performance parameters) and fault information (failure mode, failure cause, failure effect, fault detection method, design improvement measures, compensating measures in use, etc.).

In an optional embodiment, the historical fault database is established with three database tables for the product family/product platform, the product details, and the functional description, and three database tables for the failure mode, the failure details, and the failure mechanism. Through product families and product trees, products that implement similar functions and share the same internal interfaces are organized together, and different individualized modules are then added on the product platform to form product instances. All fault knowledge belongs to a particular product instance or platform. The database comprises two kinds of database tables: relation-layer tables and application-layer tables. The relation-layer tables store the known object relationships, and the application-layer tables store the data relationships between product functions and faults; together they constitute the historical fault database.
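The relation-layer/application-layer organization described in this embodiment could be laid out, for example, with Python's built-in sqlite3 as sketched below; the table names, columns, and the serialization of the third-feature vector as a BLOB are illustrative assumptions, since the patent does not fix a concrete schema.

```python
# Sketch only: a minimal relational layout for the historical fault database.
import sqlite3
import numpy as np

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE product_platform (id INTEGER PRIMARY KEY, family TEXT, platform TEXT);
CREATE TABLE product_detail   (id INTEGER PRIMARY KEY, platform_id INTEGER, name TEXT,
                               model TEXT, function_desc TEXT);
CREATE TABLE failure_mode     (id INTEGER PRIMARY KEY, product_id INTEGER, mode TEXT,
                               cause TEXT, effect TEXT, mechanism TEXT);
-- application-layer table: links product functions to faults and stores the case vector
CREATE TABLE fault_case       (id INTEGER PRIMARY KEY, product_id INTEGER,
                               failure_mode_id INTEGER, work_order TEXT, feature_vector BLOB);
""")

# Example: store a work order together with its third-feature vector (serialized bytes).
vec = np.random.default_rng(0).normal(size=64).astype("float32")
conn.execute(
    "INSERT INTO fault_case (product_id, failure_mode_id, work_order, feature_vector) "
    "VALUES (?, ?, ?, ?)",
    (1, 1, "engine misfire at cold start", vec.tobytes()),
)
conn.commit()
print(conn.execute("SELECT count(*) FROM fault_case").fetchone())
```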
In another specific embodiment, as shown in FIG. 4, the present invention further provides an overall flow, including:

Step 1: extract the historical fault case data (historical fault work orders) from the database and perform text preprocessing, including recognizing domain-specific words and removing stop words, to obtain the fault case corpus (historical fault database); a preprocessing sketch is given after this list.

Step 2: following the feature extraction network, extract in two stages the text semantic features of each fault case and the overall fault topic category features of the case library, select suitable network layer structure parameters to generate features through the feature interaction network, and train the model by minimizing the overall loss function with the classifier in the feature classification network.

Step 3: for a new fault case, preprocess the case test sample and input it directly into the trained model to obtain the feature representation of the current case.

Step 4: perform classification, prediction, and diagnosis on the case's representative features, compute the similarity with the case features in the historical fault database to find similar cases, and complete the diagnosis of the new fault case and the retrieval of its solution.

Step 5: after analysis and checking, add the case to the historical fault database so that the database is continuously updated.
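A small sketch of the preprocessing in Step 1 (covering the noise elimination, duplicate deletion, and stop-word/sensitive-word filtering mentioned earlier); the stop-word list, sensitive-word list, and cleaning regular expression are assumptions, and a Chinese corpus would additionally require a word segmenter for domain-word recognition, which is not shown.

```python
# Sketch only: clean raw work order texts into a fault case corpus.
import re

STOP_WORDS = {"the", "a", "of", "at", "is"}          # assumed stop-word list
SENSITIVE_WORDS = {"confidential"}                   # assumed sensitive-word list

def preprocess(records):
    """Remove noise symbols, duplicate records, stop words and sensitive words."""
    corpus, seen = [], set()
    for text in records:
        text = re.sub(r"[^0-9A-Za-z\u4e00-\u9fff\s]", " ", text.lower())   # strip noise symbols
        if text in seen:                             # drop duplicate records
            continue
        seen.add(text)
        tokens = [t for t in text.split()
                  if t not in STOP_WORDS and t not in SENSITIVE_WORDS]
        corpus.append(tokens)
    return corpus

print(preprocess(["The engine misfire at cold start!!", "The engine misfire at cold start!!"]))
```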
In summary, the solution of the present invention extracts semantic and topic features in two stages and makes them interact, obtaining fine-grained case features that take the fault topic-semantic relationship into account. It completes diagnosis and retrieval tasks for different cases, improves the accuracy of automated fault diagnosis, and enhances both the interpretability of the model's feature learning and its diagnostic adaptability to different fault topics.
As shown in FIG. 5, an embodiment of the present invention further provides a fault diagnosis device, including:

a construction module 501 configured to construct a fault diagnosis network model, where the fault diagnosis network model includes a feature extraction network, a feature interaction network, and a feature classification network;

a first determination module 502 configured to determine, according to work order sample data, the feature extraction network, and a historical fault database, first feature data for characterizing text semantics and second feature data for characterizing text fault topics;

a second determination module 503 configured to determine a trained fault diagnosis model according to the first feature data, the second feature data, the feature interaction network, and the feature classification network;

a third determination module 504 configured to perform a similarity calculation against the historical fault database based on the fault diagnosis model and determine the corresponding work order processing information in the historical fault database.
Optionally, the second determination module 503 includes:

a first determination submodule configured to perform the vector-product interaction between the first feature data and the second feature data and determine the interacted weight coefficient matrix;

a second determination submodule configured to determine the processed first weight coefficient according to the normalization function of the feature interaction network and the weight coefficient matrix;

a third determination submodule configured to determine third feature data by a weighted summation of the first weight coefficient and the first feature data, where the third feature data characterizes feature data fusing text semantics and topics;

a fourth determination submodule configured to determine the trained fault diagnosis model according to the third feature data and the target loss function of the feature classification network.

Optionally, in the construction module 501, the target loss function of the feature classification network is determined by:

a first determination unit configured to determine the first loss function of the second feature data passed through the feature classification network, the second loss function of the third feature data passed through the feature classification network, and the third loss function for the model parameter regularization loss;

a second determination unit configured to determine the target loss function as a weighted sum of the first loss function, the second loss function, and the third loss function.

Specifically, the first determination unit is configured such that the first loss function is determined according to a preset first cross-entropy function and the number of training samples;

the second determination unit is configured such that the second loss function is determined according to a preset second cross-entropy function and the number of training samples;

where both the first cross-entropy function and the second cross-entropy function involve the class labels of the samples and the predicted values of the sample classification.

Optionally, the first determination submodule includes:

a third determination unit configured to perform the vector-product interaction between the first feature data and the second feature data and determine the semantic feature interaction matrix;

a fourth determination unit configured to process the semantic feature interaction matrix with one convolutional network layer and determine the weight coefficient matrix.
可选地,所述第一确定模块502,包括:Optionally, the first determining
第五确定单元,用于通过所述特征提取网络的第一预设算法,确定工单样本数据的第一特征数据;所述第一特征数据包括语义长度和嵌入向量维度;The fifth determination unit is configured to determine the first feature data of the work order sample data through the first preset algorithm of the feature extraction network; the first feature data includes a semantic length and an embedding vector dimension;
第六确定单元,用于通过所述特征提取网络的第二预设算法,根据所述工单样本数据和历史故障数据库,确定工单样本数据的第二特征数据;所述第二特征数据包括主题数量、故障现象主题、故障原因主题、故障措施主题。The sixth determination unit is configured to determine the second feature data of the work order sample data according to the work order sample data and the historical fault database through the second preset algorithm of the feature extraction network; the second feature data includes The number of topics, the topics of fault symptoms, the topics of fault causes, and the topics of fault measures.
Optionally, the fourth determining submodule includes:
A seventh determining unit, configured to determine a target fault classification loss value according to the third feature data and the target loss function;
An eighth determining unit, configured to, when the target fault classification loss value is below the threshold, optimize the feature extraction network, the feature interaction network and the feature classification network according to a preset function until the target fault classification loss value is greater than or equal to the threshold, at which point the trained fault diagnosis model is determined.
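A minimal sketch of threshold-gated optimization of the three networks, assuming an Adam optimizer as the "preset function" and a conventional stopping rule in which training runs until the fault-classification loss satisfies the threshold; both choices are assumptions rather than requirements of this embodiment.

```python
import torch

def train_until_threshold(model, data_loader, loss_fn, threshold, max_epochs=100, lr=1e-3):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)  # assumed optimizer
    for _ in range(max_epochs):
        epoch_loss = 0.0
        for token_ids, topic_mixture, labels in data_loader:
            logits = model(token_ids, topic_mixture)
            loss = loss_fn(logits, labels)                   # target fault classification loss value
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            epoch_loss += loss.item()
        if epoch_loss / len(data_loader) < threshold:        # assumed stopping rule against the threshold
            break
    return model                                             # trained fault diagnosis model
```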
In an embodiment of the present invention, the above fault diagnosis device further includes:
An acquisition module, configured to acquire N historical fault work orders and preprocess them to determine fault feature data, where N is a positive integer;
A second building module, configured to build the historical fault database based on the fault feature data; the fault database includes the correspondences between the N historical fault work orders and N items of third feature data; the third feature data are stored in the historical fault database in vector form;
The fault feature data includes one or more of: fault mode, fault cause, fault impact, fault detection method, design improvement measures and compensation measures in use;
The preprocessing includes one or more of noise-information elimination, duplicate-data deletion and sensitive-word filtering.
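A hedged sketch of the preprocessing named above: noise removal, duplicate-data deletion and sensitive-word filtering. The whitespace-based noise rule and the simple substring replacement are assumptions; the embodiment does not define how noise or sensitive words are identified.

```python
import re

def preprocess_work_orders(work_orders, sensitive_words):
    cleaned, seen = [], set()
    for text in work_orders:
        text = re.sub(r"\s+", " ", text).strip()   # noise-information elimination (assumed: whitespace clean-up)
        for word in sensitive_words:
            text = text.replace(word, "")          # sensitive-word filtering
        if text and text not in seen:              # duplicate-data deletion
            seen.add(text)
            cleaned.append(text)
    return cleaned

# The historical fault database can then hold each work order's third feature data in vector form, e.g.:
# fault_db = {order_id: feature_vector for order_id, feature_vector in zip(order_ids, feature_vectors)}
```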
The implementation embodiments of the fault diagnosis method described above all apply to the embodiments of this fault diagnosis device and achieve the same technical effects; to avoid repetition, they are not described again here.
An embodiment of the present invention further provides a readable storage medium on which a program or instructions are stored; when the program or instructions are executed by a processor, the steps of the fault diagnosis method described above are implemented and the same technical effects are achieved; to avoid repetition, they are not described again here.
The processor is the processor used in the fault diagnosis method of the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a computer Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disc.
In the embodiments of the present invention, the modules may be implemented in software so that they can be executed by various types of processors. An identified module of executable code may, for example, comprise one or more physical or logical blocks of computer instructions, which may, for instance, be organized as an object, a procedure or a function. Nevertheless, the executable code of an identified module need not be physically located together; it may comprise separate instructions stored in different locations which, when joined logically together, constitute the module and achieve its stated purpose.
Indeed, a module of executable code may be a single instruction or many instructions, and may even be distributed over several different code segments, among different programs, and across multiple memory devices. Likewise, operational data may be identified within modules, may be embodied in any suitable form, and may be organized within any suitable type of data structure. The operational data may be collected as a single data set, or may be distributed over different locations (including over different storage devices), and may exist, at least partially, merely as electronic signals on a system or network.
When the modules can be implemented in software, considering the level of existing hardware technology, a module that can be implemented in software could also, cost aside, be implemented by those skilled in the art as a corresponding hardware circuit to achieve the corresponding function. Such hardware circuits include conventional very-large-scale integration (VLSI) circuits or gate arrays, as well as off-the-shelf semiconductors such as logic chips and transistors, or other discrete components. A module may also be implemented in programmable hardware devices such as field-programmable gate arrays, programmable array logic, programmable logic devices and the like.
The exemplary embodiments above are described with reference to the drawings. Many different forms and embodiments are possible without departing from the spirit and teachings of the present invention; therefore, the present invention should not be construed as limited to the exemplary embodiments set forth herein. Rather, these exemplary embodiments are provided so that this disclosure will be thorough and complete and will convey the scope of the invention to those skilled in the art. In the drawings, component sizes and relative sizes may be exaggerated for clarity. The terminology used herein is for the purpose of describing particular exemplary embodiments only and is not intended to be limiting. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. Unless otherwise indicated, a stated range of values includes the upper and lower limits of that range and any subranges in between.
The above is a preferred embodiment of the present invention. It should be noted that those of ordinary skill in the art may make several improvements and refinements without departing from the principles of the present invention, and such improvements and refinements shall also fall within the protection scope of the present invention.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211063252.XA CN115408190A (en) | 2022-08-31 | 2022-08-31 | Fault diagnosis method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211063252.XA CN115408190A (en) | 2022-08-31 | 2022-08-31 | Fault diagnosis method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115408190A true CN115408190A (en) | 2022-11-29 |
Family
ID=84163964
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211063252.XA Pending CN115408190A (en) | 2022-08-31 | 2022-08-31 | Fault diagnosis method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115408190A (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116542252A (en) * | 2023-07-07 | 2023-08-04 | 北京营加品牌管理有限公司 | Financial text checking method and system |
CN116542252B (en) * | 2023-07-07 | 2023-09-29 | 北京营加品牌管理有限公司 | Financial text checking method and system |
CN116738323A (en) * | 2023-08-08 | 2023-09-12 | 北京全路通信信号研究设计院集团有限公司 | Fault diagnosis method, device, equipment and medium for railway signal equipment |
CN116738323B (en) * | 2023-08-08 | 2023-10-27 | 北京全路通信信号研究设计院集团有限公司 | Fault diagnosis method, device, equipment and medium for railway signal equipment |
CN117651290A (en) * | 2023-10-30 | 2024-03-05 | 湖北省信产通信服务有限公司数字科技分公司 | Fault decision method, system, medium and electronic equipment based on immune algorithm |
CN118094185A (en) * | 2024-02-22 | 2024-05-28 | 远江盛邦(北京)网络安全科技股份有限公司 | Load feature extraction method and device, electronic equipment and storage medium |
CN118568251A (en) * | 2024-08-02 | 2024-08-30 | 山东亚微软件股份有限公司 | Similar work order recommendation system based on semantics and similarity |
CN119148608A (en) * | 2024-11-21 | 2024-12-17 | 江苏南极星新能源技术股份有限公司 | Drive control circuit with fault detection function |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN115408190A (en) | Fault diagnosis method and device | |
CN114970605B (en) | Refrigerating equipment fault diagnosis method of multi-mode feature fusion neural network | |
CN111914090B (en) | Method and device for enterprise industry classification identification and characteristic pollutant identification | |
CN110825877A (en) | A Semantic Similarity Analysis Method Based on Text Clustering | |
CN110633366B (en) | Short text classification method, device and storage medium | |
CN111291188B (en) | Intelligent information extraction method and system | |
CN110516070B (en) | Chinese question classification method based on text error correction and neural network | |
CN112147432A (en) | BiLSTM module based on attention mechanism, transformer state diagnosis method and system | |
CN110969015B (en) | A method and device for automatically identifying tags based on operation and maintenance scripts | |
CN113792594B (en) | Method and device for locating language fragments in video based on contrast learning | |
CN112559741A (en) | Nuclear power equipment defect recording text classification method, system, medium and electronic equipment | |
CN112767106A (en) | Automatic auditing method, system, computer readable storage medium and auditing equipment | |
CN117009521A (en) | Knowledge-graph-based intelligent process retrieval and matching method for engine | |
CN117574858A (en) | Automatic generation method of class case retrieval report based on large language model | |
CN117744483A (en) | Bearing fault diagnosis method based on fusion of twin information model and measured data | |
CN116306923A (en) | Evaluation weight calculation method based on knowledge graph | |
CN112069379A (en) | An efficient public opinion monitoring system based on LSTM-CNN | |
CN113837266B (en) | Software defect prediction method based on feature extraction and Stacking ensemble learning | |
CN115130601A (en) | Two-stage academic data webpage classification method and system based on multi-dimensional feature fusion | |
CN113468410A (en) | System for intelligently optimizing search results and search engine | |
CN118114658A (en) | Data retrieval intention recognition method for complex regulation and control service of power grid | |
CN108345622A (en) | Model retrieval method based on semantic model frame and device | |
CN111460817A (en) | Method and system for recommending criminal legal document related law provision | |
CN113761123B (en) | Keyword acquisition method, device, computing device and storage medium | |
Xu et al. | Based on improved CNN bearing fault detection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |