CN115983265A - A Convolutional Neural Network Based Defect Grading Method for Relay Protection - Google Patents

A Convolutional Neural Network Based Defect Grading Method for Relay Protection

Info

Publication number
CN115983265A
Authority
CN
China
Prior art keywords
defect
neural network
relay protection
convolutional neural
text
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310013026.9A
Other languages
Chinese (zh)
Inventor
郑少明
董鹏
杨心平
杜鹃
刘丹
宿洪智
陶畅
王书鸿
薛安成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
North China Grid Co Ltd
Original Assignee
North China Grid Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by North China Grid Co Ltd filed Critical North China Grid Co Ltd
Priority to CN202310013026.9A priority Critical patent/CN115983265A/en
Publication of CN115983265A publication Critical patent/CN115983265A/en
Pending legal-status Critical Current


Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04SSYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of defect analysis for relay protection devices and provides a convolutional neural network based method for grading defects in relay protection devices. The method includes: preprocessing data based on relay protection operation defect records of a regional power grid to obtain a defect grading data set; under the Markov assumption, using one-dimensional convolutional layers to extract features from the vectorized text matrix, feeding the obtained features into a fully connected neural network to obtain a prediction result for the vectorized text, and iterating the model a preset number of times to convergence through forward propagation of gradients to obtain a target convolutional neural network model; and, based on the selected prediction parameters, inputting the test set into the target convolutional neural network model to obtain defect grading recognition results. The invention improves the classification accuracy of the grading prediction results for defect record texts.


Description

A Convolutional Neural Network Based Defect Grading Method for Relay Protection

Technical Field

The invention relates to the technical field of defect analysis for relay protection devices, and in particular to a convolutional neural network based method for grading defects in relay protection devices.

Background Art

During long-term operation, relay protection equipment accumulates a large amount of defect text data recorded through inspections, tests, and other means. Once stored in the system, these texts are usually treated merely as records, and the large amount of valuable information they contain, which is key data for classifying equipment defect levels, is left unexploited.

However, a large amount of equipment defect grading work must be done manually, which is not only inefficient and labor-intensive, but also often struggles to judge highly ambiguous sub-health defects accurately, so classification accuracy suffers.

Summary of the Invention

In view of this, the present invention provides a convolutional neural network based defect grading method for relay protection devices, to solve the problems in the prior art that defect level classification must be done manually, is inefficient, and has poor classification accuracy.

The present invention provides a convolutional neural network based defect grading method for relay protection devices, including:

S1. Preprocessing data based on relay protection operation defect records of a regional power grid to obtain a defect grading data set, wherein the defect grading data set includes a training set, a test set, and a validation set;

S2. Under the Markov assumption, using a one-dimensional convolutional layer to extract features from the vectorized text matrix, feeding the obtained features into a fully connected neural network to obtain a prediction result for the vectorized text, and iterating the model a preset number of times to convergence through forward propagation of gradients, to obtain a target convolutional neural network model;

S3. Based on the selected prediction parameters, inputting the test set into the target convolutional neural network model to obtain defect grading recognition results.

Further, S1 includes:

S11. Based on relay protection operation defect records of a regional power grid, removing stop words and irrelevant words using a relay protection defect dictionary, performing vectorization to form a vectorized text matrix, and dividing the vectorized text matrix into three defect levels: critical, severe, and general;

S12. Using jieba word segmentation combined with a professional dictionary to segment the texts of the critical, severe, and general defect levels respectively, and dividing them into the training set, test set, and validation set in a 6:2:2 ratio, where jieba is a word segmentation function package based on the Python programming language.

Further, the model training in S2 uses batch_size = 128 and a preset iteration count of 10,000.

Further, S2 includes:

S21. According to the Markov assumption, constructing three one-dimensional convolution kernels of lengths 2, 3, and 4;

S22. Using the constructed one-dimensional convolution kernels to perform convolution operations on the vectorized text matrix, extracting the feature parts of the text that carry a set amount of information entropy, feeding the feature parts into the fully connected neural network, computing with sigmoid as the activation function, and then using the softmax function to output the probability distribution over the categories to which the vectorized text belongs, to obtain the prediction result for the vectorized text;

S23. Based on the prediction result for the vectorized text, using the cross-entropy loss function and forward propagation of gradients to iterate the model a preset number of times until convergence, to obtain the target convolutional neural network model.

Further, the probability distribution over the categories to which the vectorized text belongs lies in the range [0, 1] and sums to 1.

Further, the cross-entropy loss function is given by the following formula:

$$L = -\sum_{i}\sum_{c=1}^{M} y_{ic}\,\log\left(p_{ic}\right)$$

where M is the number of categories; y_ic is an indicator function that takes 1 if the true category of sample i is c and 0 otherwise; and p_ic is the probability that observed sample i belongs to category c.

Further, S3 includes:

S31. Using precision, recall, and F1 score to characterize the prediction accuracy of the target convolutional neural network model;

S32. Inputting the test set into the target convolutional neural network model to obtain defect grading recognition results.

Further, the model evaluation indices are given by the following formulas:

$$P = \frac{TP}{TP + FP}$$

$$R = \frac{TP}{TP + FN}$$

$$F = \frac{2PR}{P + R}$$

where P is precision, R is recall, F is the F1 score, TP is the number of correctly graded defect texts, FP is the number of incorrectly graded defect texts, and FN is the number of undetected defect texts.

Compared with the prior art, the present invention has the following beneficial effects:

1. The present invention uses the softmax function to output the probability distribution over the categories to which the vectorized text belongs, which improves the classification accuracy of the grading prediction results for defect record texts.

2. By extracting the feature parts of the text that carry a set amount of information entropy, the present invention makes the loss function converge to the preset value, computes quickly, and avoids getting trapped in a local optimum.

3. By adopting a series of model evaluation indices, namely precision, recall, and F1 score, the present invention guarantees the prediction accuracy of the target convolutional neural network model and ensures that defects can be handled and reported in a timely manner.

Brief Description of the Drawings

In order to explain the technical solutions of the present invention more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative work.

Fig. 1 is a flow chart of the convolutional neural network based relay protection defect grading method provided by the present invention.

Detailed Description of the Embodiments

In the following description, specific details such as particular system structures and technologies are presented for the purpose of illustration rather than limitation, so as to provide a thorough understanding of the embodiments of the present invention. However, it will be apparent to those skilled in the art that the invention may be practiced in other embodiments without these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so that unnecessary detail does not obscure the description of the present invention.

The convolutional neural network based relay protection defect grading method provided by the present invention is described in detail below with reference to the accompanying drawings.

Fig. 1 is a flow chart of the convolutional neural network based relay protection defect grading method provided by the present invention.

As shown in Fig. 1, the relay protection defect grading method includes:

S1. Preprocessing data based on relay protection operation defect records of a regional power grid to obtain a defect grading data set, wherein the defect grading data set includes a training set, a test set, and a validation set;

S1 includes:

S11. Based on relay protection operation defect records of a regional power grid, removing stop words and irrelevant words using a relay protection defect dictionary, performing vectorization to form a vectorized text matrix, and dividing the vectorized text matrix into three defect levels: critical, severe, and general;

S12. Using jieba word segmentation combined with a professional dictionary to segment the texts of the critical, severe, and general defect levels respectively, and dividing them into the training set, test set, and validation set in a 6:2:2 ratio, where jieba is a word segmentation function package based on the Python programming language.

The defect grading data set is summarized in Table 1:

Table 1. Defect grading data set

Data set              Training set   Test set   Validation set   Total
Critical (label 0)    972            301        290              1563
Severe (label 1)      705            238        188              1131
General (label 2)     723            239        242              1204
Total                 2400           778        720              3898
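For illustration only, a minimal preprocessing sketch under assumed conditions is given below: the defect records are taken to be (text, label) pairs, and the file names of the professional dictionary and the stop-word list are hypothetical, since the patent does not name them.

```python
# Minimal preprocessing sketch; file names and data layout are assumptions,
# not taken from the patent.
import random
import jieba

jieba.load_userdict("relay_protection_dict.txt")    # assumed professional dictionary file

with open("stopwords.txt", encoding="utf-8") as f:  # assumed stop-word list
    stopwords = set(line.strip() for line in f if line.strip())

def segment(text):
    """Segment one defect record with jieba and drop stop words / empty tokens."""
    return [w for w in jieba.lcut(text) if w.strip() and w not in stopwords]

def split_6_2_2(samples, seed=0):
    """Shuffle and split one defect level's samples into train/test/validation at 6:2:2."""
    rng = random.Random(seed)
    samples = list(samples)
    rng.shuffle(samples)
    n_train = int(0.6 * len(samples))
    n_test = int(0.2 * len(samples))
    return (samples[:n_train],
            samples[n_train:n_train + n_test],
            samples[n_train + n_test:])

# records: list of (defect_text, label) pairs with labels 0/1/2 for
# critical/severe/general; each level is segmented and split separately.
# tokenized = [(segment(text), label) for text, label in records]
# train_0, test_0, val_0 = split_6_2_2([s for s in tokenized if s[1] == 0])
```

Vectorizing the resulting tokens (for example with pretrained word vectors) would then yield the text matrix consumed by the convolutional layers described below.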

S2. Under the Markov assumption, using a one-dimensional convolutional layer to extract features from the vectorized text matrix, feeding the obtained features into a fully connected neural network to obtain a prediction result for the vectorized text, and iterating the model a preset number of times to convergence through forward propagation of gradients, to obtain a target convolutional neural network model;

The model training in S2 uses batch_size = 128 and a preset iteration count of 10,000.

S2 includes:

S21. According to the Markov assumption, constructing three one-dimensional convolution kernels of lengths 2, 3, and 4;

According to the Markov assumption, namely that the probability of any word appearing depends only on the one or several words immediately preceding it, three one-dimensional convolution kernels of lengths 2, 3, and 4 are constructed; that is, the influence of the 2, 3, or 4 neighboring words on a given word is considered.
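For illustration, the three kernel lengths can be realized as parallel one-dimensional convolution branches, sketched here in PyTorch; the embedding dimension, the number of filters, and the max-pooling over positions are assumptions of this sketch, since the patent fixes only the kernel lengths 2, 3, and 4.

```python
import torch
import torch.nn as nn

EMBED_DIM = 128    # assumed word-vector dimension
NUM_FILTERS = 64   # assumed number of filters per kernel length

# One Conv1d branch per kernel length 2, 3 and 4, i.e. 2-, 3- and 4-word contexts.
convs = nn.ModuleList([
    nn.Conv1d(in_channels=EMBED_DIM, out_channels=NUM_FILTERS, kernel_size=k)
    for k in (2, 3, 4)
])

# x: a batch of vectorized texts, shape (batch, seq_len, EMBED_DIM)
x = torch.randn(8, 50, EMBED_DIM)
x = x.transpose(1, 2)                                    # Conv1d expects (batch, channels, length)
features = [torch.relu(conv(x)).max(dim=2).values for conv in convs]
feature_vector = torch.cat(features, dim=1)              # shape (batch, 3 * NUM_FILTERS)
```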

S22. Using the constructed one-dimensional convolution kernels to perform convolution operations on the vectorized text matrix, extracting the feature parts of the text that carry a set amount of information entropy, feeding the feature parts into the fully connected neural network, computing with sigmoid as the activation function, and then using the softmax function to output the probability distribution over the categories to which the vectorized text belongs, to obtain the prediction result for the vectorized text;

Specifically, the constructed one-dimensional convolution kernels are used to perform convolution operations on the vectorized text matrix, the feature parts of the text that carry a set amount of information entropy are extracted, and the extracted feature parts are fed into a 5-layer fully connected neural network that uses sigmoid as the activation function, namely

$$\mathrm{sigmoid}(x) = \frac{1}{1 + e^{-x}}$$

Here, the fully connected neural network refers to the convolutional neural network model, which includes multiple one-dimensional convolutions to obtain the N-gram feature representations of the sentences.

After the fully connected neural network outputs the category scores of the vectorized text, the softmax function is applied to convert them into a probability distribution that lies in the range [0, 1] and sums to 1, where the softmax function is given by the following formula:

$$\mathrm{softmax}(Z_i) = \frac{e^{Z_i}}{\sum_{c=1}^{C} e^{Z_c}}$$

where Z_i is the output value of the i-th node and C is the number of output nodes, i.e., the number of classification categories.
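Continuing the illustrative PyTorch sketch above, a 5-layer fully connected head with sigmoid activations and a softmax output could look as follows; the hidden-layer widths are assumed, since the patent fixes only the layer count, the activation function, and the three-class output.

```python
import torch
import torch.nn as nn

NUM_FILTERS = 64          # same assumed value as in the convolution sketch above
NUM_CLASSES = 3           # critical / severe / general
IN_DIM = 3 * NUM_FILTERS  # concatenated features from the three conv branches

# Five fully connected layers with sigmoid activations; hidden widths are assumed.
fc_head = nn.Sequential(
    nn.Linear(IN_DIM, 128), nn.Sigmoid(),
    nn.Linear(128, 64),     nn.Sigmoid(),
    nn.Linear(64, 32),      nn.Sigmoid(),
    nn.Linear(32, 16),      nn.Sigmoid(),
    nn.Linear(16, NUM_CLASSES),
)

feature_vector = torch.randn(8, IN_DIM)   # stand-in for the conv features above
logits = fc_head(feature_vector)          # raw scores Z_i, shape (batch, 3)
probs = torch.softmax(logits, dim=1)      # probabilities in [0, 1] that sum to 1 per sample
```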

S23. Based on the prediction result for the vectorized text, using the cross-entropy loss function and forward propagation of gradients to iterate the model a preset number of times until convergence, to obtain the target convolutional neural network model.

The cross-entropy loss function is used to measure the gap between the predicted classification and the actual classification, where the cross-entropy loss function is given by the following formula:

$$L = -\sum_{i}\sum_{c=1}^{M} y_{ic}\,\log\left(p_{ic}\right)$$

where M is the number of categories; y_ic is an indicator function that takes 1 if the true category of sample i is c and 0 otherwise; and p_ic is the probability that observed sample i belongs to category c.

Based on the stochastic gradient descent method, the cross-entropy loss function is taken as the objective function; samples are repeatedly drawn at random to compute gradients, and gradient descent is performed with the set step size until the cross-entropy loss function converges to the set value, yielding the target convolutional neural network model. Because stochastic gradient descent computes gradients on randomly drawn samples, the method does not easily fall into a local optimum and computes quickly.
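A minimal training-loop sketch under the same assumptions is shown below: `model` stands for the combination of the convolution branches and the fully connected head sketched above, `train_loader` is an assumed DataLoader yielding mini-batches of 128 vectorized texts with labels, and the step size 0.01 is an assumed value, while the batch size and the 10,000-iteration budget follow the patent text.

```python
import torch
import torch.nn as nn

# Assumptions: `model` combines the conv branches and FC head sketched above and
# returns logits; `train_loader` yields (text_matrix, labels) batches of size 128.
criterion = nn.CrossEntropyLoss()   # applies softmax internally, so the model feeds it raw logits
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)   # assumed step size

MAX_ITERS = 10_000   # preset number of iterations
it = 0
while it < MAX_ITERS:
    for text_matrix, labels in train_loader:        # randomly shuffled mini-batches
        optimizer.zero_grad()
        loss = criterion(model(text_matrix), labels)  # gap between prediction and truth
        loss.backward()                               # propagate gradients
        optimizer.step()                              # descend with the set step size
        it += 1
        if it >= MAX_ITERS:
            break
```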

S3. Based on the selected prediction parameters, inputting the test set into the target convolutional neural network model to obtain defect grading recognition results.

S3 includes:

S31. Using precision, recall, and F1 score to characterize the prediction accuracy of the target convolutional neural network model;

S32. Inputting the test set into the target convolutional neural network model to obtain defect grading recognition results.

The target convolutional neural network model expresses the defect grading recognition results through the model evaluation indices.

The target convolutional neural network model evaluation indices are given by the following formulas:

$$P = \frac{TP}{TP + FP}$$

$$R = \frac{TP}{TP + FN}$$

$$F = \frac{2PR}{P + R}$$

where P is precision, R is recall, F is the F1 score, TP is the number of correctly graded defect texts, FP is the number of incorrectly graded defect texts, and FN is the number of undetected defect texts.
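For illustration, the three indices can be computed directly from the counts defined above, as in the sketch below; the example counts in the comment are made up and are not the patent's results.

```python
def precision_recall_f1(tp, fp, fn):
    """P, R and F1 from the counts of correctly graded (TP), incorrectly graded (FP)
    and undetected (FN) defect texts, following the formulas above."""
    p = tp / (tp + fp) if (tp + fp) else 0.0
    r = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f1

# Example with made-up counts:
# p, r, f1 = precision_recall_f1(tp=600, fp=100, fn=78)
```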

To comprehensively evaluate the accuracy of the trained model, batch_size = 128 is set during training; after each batch is trained for one round, the best model is taken as the base model for the next round of training. After 10,000 training iterations, the following results are obtained:

P=0.6833P=0.6833

R=0.7560R=0.7560

F=0.2472F=0.2472

The present invention uses the softmax function to output the probability distribution over the categories to which the vectorized text belongs, which improves the classification accuracy of the prediction results for the vectorized text; by extracting the feature parts of the text that carry a set amount of information entropy, the loss function converges to the preset value, computation is fast, and getting trapped in a local optimum is avoided; and by adopting a series of model evaluation parameters, namely precision, recall, and F1 score, the prediction accuracy of the target convolutional neural network model is guaranteed and defects can be handled and reported in a timely manner.

All of the above optional technical solutions may be combined in any manner to form optional embodiments of the present application, which are not described one by one here.

It should be understood that the sequence numbers of the steps in the above embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention.

The above embodiments are only used to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments or make equivalent replacements for some of the technical features; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and shall all be included within the protection scope of the present invention.

Claims (8)

1. A convolutional neural network based relay protection defect grading method, characterized by comprising the following steps:
S1, preprocessing data based on relay protection operation defect records of a regional power grid to obtain a defect grading data set, wherein the defect grading data set comprises a training set, a test set, and a validation set;
S2, under the Markov assumption, performing feature extraction on the vectorized text matrix using a one-dimensional convolutional layer, feeding the obtained features into a fully connected neural network to obtain a prediction result for the vectorized text, and performing a preset number of model iterations to convergence through forward propagation of gradients to obtain a target convolutional neural network model;
and S3, inputting the test set into the target convolutional neural network model based on the selected prediction parameters to obtain a defect grading recognition result.
2. The relay protection defect grading method according to claim 1, wherein S1 comprises:
S11, based on relay protection operation defect records of a regional power grid, removing stop words and irrelevant words using a relay protection defect dictionary, performing vectorization to form a vectorized text matrix, and dividing the vectorized text matrix into three defect levels of critical, severe, and general;
S12, performing word segmentation on the critical, severe, and general defect levels respectively by combining jieba word segmentation with a professional dictionary, and dividing them into the training set, the test set, and the validation set in a ratio of 6:2:2, wherein jieba is a word segmentation function package based on the Python programming language.
3. The relay protection defect grading method according to claim 1, wherein the model training in S2 comprises: batch_size = 128, and the preset number of iterations = 10000.
4. The relay protection defect grading method according to claim 1, wherein S2 comprises:
S21, constructing three one-dimensional convolution kernels of lengths 2, 3, and 4 according to the Markov assumption;
S22, performing a convolution operation on the vectorized text matrix using the constructed one-dimensional convolution kernels, extracting feature parts containing a set amount of information entropy in the text, feeding the feature parts into the fully connected neural network, calculating with sigmoid as the activation function, and outputting the probability distribution of the categories to which the vectorized text belongs using the softmax function, to obtain a prediction result for the vectorized text;
and S23, based on the prediction result for the vectorized text, performing a preset number of model iterations to convergence using a cross-entropy loss function and forward propagation of gradients, to obtain the target convolutional neural network model.
5. The relay protection defect grading method according to claim 4, wherein the probability distribution of the categories to which the vectorized text belongs lies in the range [0,1] and sums to 1.
6. The relay protection defect grading method according to claim 4, wherein the cross-entropy loss function comprises the following calculation formula:
$$L = -\sum_{i}\sum_{c=1}^{M} y_{ic}\,\log\left(p_{ic}\right)$$
where M denotes the number of categories, y_ic denotes an indicator function that takes 1 if the true class of sample i is equal to c and 0 otherwise, and p_ic denotes the probability that observed sample i belongs to class c.
7. The relay protection defect grading method according to claim 1, wherein S3 comprises:
S31, characterizing the prediction accuracy of the target convolutional neural network model using precision, recall, and F1 score;
and S32, inputting the test set into the target convolutional neural network model to obtain a defect grading recognition result.
8. The relay protection defect grading method according to claim 7, wherein the model evaluation indices are obtained by the following calculation formulas:
$$P = \frac{TP}{TP + FP}$$

$$R = \frac{TP}{TP + FN}$$

$$F = \frac{2PR}{P + R}$$
wherein P represents precision, R represents recall, F represents the F1 score, TP represents the number of correctly graded defect texts, FP represents the number of incorrectly graded defect texts, and FN represents the number of undetected defect texts.
CN202310013026.9A 2023-01-05 2023-01-05 A Convolutional Neural Network Based Defect Grading Method for Relay Protection Pending CN115983265A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310013026.9A CN115983265A (en) 2023-01-05 2023-01-05 A Convolutional Neural Network Based Defect Grading Method for Relay Protection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310013026.9A CN115983265A (en) 2023-01-05 2023-01-05 A Convolutional Neural Network Based Defect Grading Method for Relay Protection

Publications (1)

Publication Number Publication Date
CN115983265A true CN115983265A (en) 2023-04-18

Family

ID=85966533

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310013026.9A Pending CN115983265A (en) 2023-01-05 2023-01-05 A Convolutional Neural Network Based Defect Grading Method for Relay Protection

Country Status (1)

Country Link
CN (1) CN115983265A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination