CN109614488B - Text classification and image recognition-based distribution network live working condition judgment method


Info

Publication number
CN109614488B
CN109614488B (application CN201811470666.8A)
Authority
CN
China
Prior art keywords
layer
line
area
network
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811470666.8A
Other languages
Chinese (zh)
Other versions
CN109614488A (en)
Inventor
熊小萍
许爽
龙凤英
谭建成
田富瑞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangxi University
Original Assignee
Guangxi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangxi University filed Critical Guangxi University
Priority to CN201811470666.8A priority Critical patent/CN109614488B/en
Publication of CN109614488A publication Critical patent/CN109614488A/en
Application granted granted Critical
Publication of CN109614488B publication Critical patent/CN109614488B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06 Energy or water supply
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04 INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04S SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00 Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50 Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications


Abstract

The invention discloses a method for judging distribution network live working conditions based on text classification and image recognition. External condition data of a distribution line are first exported from the production management system of a power grid company to generate a text database for judging the external conditions of distribution line live working; the distribution line is then photographed on site to generate a line equipment image database, and both databases are preprocessed. A Chinese text automatic classification model and an image recognition classification model, both based on machine learning, are built; the databases are divided into a training set and a test set, the two models are trained with supervision on the training set, and the trained models are tested on the test set. Finally, newly acquired data are input into the trained models, the models identify and score each judging condition feature, and whether the distribution line meets the live working condition requirements is judged from the grade corresponding to the total score, providing workers with an intelligent and effective decision basis for judging whether live working can be carried out.

Description

Distribution network live working condition discrimination method based on text classification and image recognition
Technical Field
The invention belongs to the technical field of hot-line work of medium and low voltage power distribution network lines, and particularly relates to a method for judging hot-line work conditions of a distribution network based on text classification and image recognition.
Background
Live working is an important means of testing, overhauling, maintaining and upgrading power grid equipment, and an important technical measure for ensuring reliable and stable operation of the power system. As the power supply reliability requirements of the distribution network keep rising, live working and non-outage working methods on distribution lines are being widely applied and popularized. The application of new technologies, new equipment and new materials in the live working field has also strongly promoted progress in live working tools, equipment and standards, and points to new requirements and development directions for future live working technology. However, most power grid companies still judge live working conditions by manually checking the equipment on site: they lack a clear picture of where the equipment sits in the grid structure, intelligent analysis means are insufficient, and subjective human factors have a large influence. There is no uniform judgment standard, the internal and external factors that prevent a line from being worked on live cannot be analysed, and the influence of each condition on whether the line can be worked on live cannot be quantified.
Disclosure of Invention
Aiming at the above defects in the prior art, the invention provides a method for judging distribution network live working conditions based on text classification and image recognition, so as to solve the problem that it is currently impossible to judge intelligently, from the external condition and equipment condition data of a distribution line, whether the line can be worked on live.
The technical problem to be solved by the invention is realized by the following technical scheme:
a distribution network hot-line work condition distinguishing method based on text classification and image recognition comprises the following steps:
s1, exporting external condition data of the distribution line from a production management system of a power grid company, and generating a live-line work external condition judgment text database of the distribution line;
s2, acquiring pictures on the distribution line site, forming line equipment condition data, and generating a line equipment image database;
S3, preprocessing the text database and the image database, comprising: assigning different scores to the line external conditions and the line equipment conditions to generate a condition score table, wherein the condition score table reflects the score corresponding to each condition and the score reflects the weight of each condition; representing the text in matrix or vector form, and segmenting the images and extracting features to obtain feature representations with invariance;
s4, building a Chinese text automatic classification model based on machine learning and an image recognition classification model based on machine learning;
S5, dividing the preprocessed text database and image database into a training set and a test set, performing supervised training of the machine learning based Chinese text automatic classification model and the machine learning based image recognition classification model with the training set data, testing the recognition accuracy of the trained models with the test set data, and adjusting parameters until the models recognize the test set data with an accuracy above 90%;
and S6, grading the distribution line: the newly acquired data are imported into the trained models, the models identify and score the live working judging condition features, and whether the live working condition requirements are met is judged from the grade corresponding to the total score value.
The line external condition data of step S1 includes: power supply area, grid structure, N-N inspection, terrain, user access and distribution automation level.
The line equipment condition data of step S2 includes: overhead line, pole type, breaking equipment, transformer equipment, insulating equipment and hardware fitting.
The specific operation of building the Chinese text automatic classification model based on machine learning in the step S4 is as follows: text representation is carried out by taking words as a unit to form word vectors, the word vectors are spliced according to the sequence of the words appearing in sentences to form a matrix representing the sentences, then the matrix is sent into a convolutional neural network model based on a deep learning technology, automatic extraction and learning of sentence characteristics are realized on the basis of the word vectors, and finally automatic classification of defective texts is realized. The convolutional neural network model based on the deep learning technology is a four-layer convolutional neural network model, and the specific form is as follows:
the first layer is an input layer, which is the phrase matrix W ∈ R^(s×n) corresponding to an unclassified external condition, wherein W represents the word group corresponding to the unclassified external condition, the matrix is converted from the word group, each row of the matrix represents the vector corresponding to one word in the word group, the number of rows s is the number of words in the word group, and the number of columns n is the dimension of the vectors;
the second layer is a one-dimensional convolution layer: a convolution matrix window I ∈ R^(h×n), with the same number of columns as W and h rows, is convolved in turn, from top to bottom, with each h-row, n-column matrix block of the input layer matrix W, and each convolution window extracts one feature map feature, called a text feature, from the input matrix;
the third layer is a pooling layer: using the maximum pooling method, the largest element in the feature map vector obtained by each convolution window is taken as a feature value, so that the feature value corresponding to each convolution window is extracted, and all the feature values are spliced in turn into the one-dimensional vector of the pooling layer, namely the vector representing the global features of the sentence;
the fourth layer is the output layer: the output layer is fully connected to the pooling layer and takes the one-dimensional vector of the pooling layer as input, the result passes through an activation function, a dropout layer discards part of the data to prevent overfitting, and finally a softmax classifier classifies the one-dimensional vector and outputs the final classification result.
The specific operation of building the image recognition classification model based on machine learning in step S4 is as follows: an image recognition database is established using the database of the international large-scale visual object classes challenge as a template and the preprocessed image data are stored in it; the preprocessed image data are then fed into a convolutional neural network model based on deep learning, automatic extraction and learning of image features is realized on the basis of the image preprocessing, and finally scoring classification of the line equipment conditions in the image data is realized. The convolutional neural network model based on deep learning comprises three component network blocks:
the first block is a pre-training front-end network, with ResNet50 used as the pre-training model: first, the ResNet50 model parameters without the fully connected layer are downloaded locally; then the ResNet50 network structure is defined and the model weight parameters are loaded into the defined structure; finally, the structure of the last fully connected layer is changed and training is started at a lower learning rate, obtaining the pre-trained front-end network model;
the second block is a preselected area network, which takes the images in the training set as input and outputs a set of rectangular target preselected areas, each with a score used to judge whether the selected area is the area where a target is located; to generate the rectangular target preselected areas, a small sliding window is added after the last shared convolution layer of the pre-training front-end network, the sliding window is fully connected to a spatial window of the input convolution feature map, each sliding window is mapped to a low-dimensional vector, and the vector is output to a preselected-area regression layer and a preselected-area classification layer; the regression layer outputs the coordinate encoding of the preselected area, the classification layer outputs the score of the preselected area, from which it is judged whether the preselected area is the area where the target is located, and the resulting rectangular target preselected area set is then sent to the next-level network for classification and recognition;
the third block is a fast regionalized convolutional neural network, which shares with the preselected area network the shared feature layer initialized by the pre-training front-end network; after the pre-training front-end network extracts convolutional features from the image, rectangular target preselected areas are output through the preselected area network and a preselected-area convolution feature map is generated, the corresponding depth features on this feature map are taken out, a preselected-area pooling layer unifies all the features in a channel to the same size to generate a feature map of fixed dimension, feature vectors are finally obtained through two fully connected feature layers, and the recognition and framing of line equipment in the image are completed through two multi-task models in the respective fully connected layers; the two multi-task models are a recognition classification model based on the flexible maximum transfer function and a preselected-area window regression model.
The invention has the following beneficial effects: the invention identifies the influence factors for judging whether a distribution network line can be worked on live and provides a reasonable quantitative evaluation table that fixes the weight of each influence factor; a Chinese text automatic classification model based on machine learning is built to quantitatively score the external conditions of the line to be evaluated, and an image recognition classification model based on machine learning is built to quantitatively score the internal conditions of the line; this provides an intelligent method for judging whether the distribution network meets the live working conditions, and the models can accurately determine the interval of the live working conditions. The problem that whether a distribution line can be worked on live cannot currently be judged intelligently from its external condition and equipment condition data is thereby solved.
Drawings
FIG. 1 is a flow chart of a method for determining distribution network hot-line work conditions based on text classification and image recognition according to the present invention;
Detailed Description
The present invention is described in further detail below with reference to the attached drawings.
As shown in fig. 1, the method for distinguishing live-wire work conditions of the distribution network based on text classification and image recognition provided by the invention adopts a deep learning technology, and uses a text classification and image recognition model to distinguish and score lines needing live-wire work in the distribution network, so as to provide an intelligent and effective decision basis for live-wire workers to judge whether the lines of the distribution network can carry out live-wire work. The method comprises the following specific steps:
s1, exporting external condition data of the distribution line, and generating a live-wire work external condition judgment text database of the distribution line:
the external condition data of the distribution line, which can be used for judging the live working condition of the distribution line, is derived from a production management system of a power grid company, and a text database for judging the external condition of the live working of the distribution line is generated. The line external condition data includes: power supply area, grid structure, N-N inspection, terrain, user access and distribution automation level.
S2, acquiring pictures on the distribution line equipment site, forming line equipment condition data, and generating a line equipment image database: and taking a picture on the distribution line site, acquiring distribution line equipment condition data which can reflect the real-time state of the distribution line and can be used for judging the live working condition of the distribution line, and generating a line equipment image database. The line equipment condition data includes: overhead line (cable), pole type, breaking equipment, power transformation equipment, insulating equipment and hardware fittings.
S3, preprocessing a text database and an image database:
and respectively corresponding the external conditions of the line and the conditions of the line equipment to different scores to generate a condition score table, wherein the condition score table reflects the score corresponding to each condition, and the score reflects the proportion of each condition.
The scores corresponding to the line external conditions and the line equipment conditions are shown in table 1 below.
TABLE 1 Condition score correspondence Table
In the preprocessing of the text database, a hidden Markov model is used for paragraph extraction, sentence segmentation, word segmentation, stop-word removal and the like; the text is then converted into a form the computer can recognize and process and is represented as a matrix or vector. In the preprocessing of the image database, the statistical characteristics of the prior shapes of small power components are first mined, and the images are then segmented and features extracted using local invariant features, giving feature representations with invariance.
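The following is a minimal sketch of this text preprocessing step, assuming the jieba library (whose segmenter uses a hidden Markov model for unknown words) and a placeholder embedding table standing in for trained word vectors; the stop-word list, vector dimension and sample words are illustrative only, not taken from the patent.

```python
import numpy as np
import jieba

STOP_WORDS = {"的", "了", "在", "是"}   # illustrative stop-word list
EMB_DIM = 50                            # n: dimension of each word vector (assumed)

def sentence_to_matrix(text, embeddings):
    """Segment a condition text, drop stop words, and stack the word vectors
    into an s x n matrix (s = words kept, n = vector dimension)."""
    words = [w for w in jieba.cut(text, HMM=True) if w.strip() and w not in STOP_WORDS]
    rows = [embeddings.get(w, np.zeros(EMB_DIM)) for w in words]
    return np.vstack(rows) if rows else np.zeros((1, EMB_DIM))

# Placeholder embeddings; in practice these would come from a trained word-vector model.
emb = {w: np.random.randn(EMB_DIM) for w in ["架空", "线路", "电网", "供电", "区域"]}
matrix = sentence_to_matrix("架空线路的供电区域", emb)
print(matrix.shape)   # (words kept, 50)
```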
S4, building a Chinese text automatic classification model based on machine learning and an image recognition classification model based on machine learning:
s41, building a Chinese text automatic classification model based on machine learning:
text representation is carried out by taking words as a unit to form word vectors, then the word vectors are spliced according to the sequence of the words appearing in sentences to form a matrix representing the sentences, then the matrix is sent into a convolutional neural network model based on a deep learning technology, automatic extraction and learning of sentence characteristics are realized on the basis of the word vectors, and finally automatic classification of defective texts is realized.
The convolutional neural network model based on the deep learning technology is a four-layer convolutional neural network model, and the specific form is as follows:
the first layer is the input layer. The input layer is a word group matrix W belonging to R and corresponding to an unclassified external condition s×n W represents a word group corresponding to an unclassified external condition, R represents a matrix converted from the word group, each row of the matrix represents a vector corresponding to each word in the word group, the row number s is the word number of the word group, and the column number n is the dimension of the vector.
The second layer is a one-dimensional convolutional layer. Adopting convolution matrix window I epsilon R with the column number being same as W (being n) and the row number being h h×n And sequentially carrying out convolution operation with each h row and n column matrix block of the input layer matrix W from top to bottom, wherein each convolution window can extract a characteristic map characteristic called text characteristic from the input matrix R.
The third layer is a pooling layer. And adopting a maximum pooling method, taking the maximum element in the feature map vector obtained by convolution of each convolution window as a feature value, thereby extracting the feature value corresponding to each convolution window, and sequentially splicing all the feature values to form a one-dimensional vector of a pooling layer, namely the vector representing the global features of the sentence.
The fourth layer is the output layer. The output layer is fully connected to the pooling layer and takes the one-dimensional vector of the pooling layer as input; the result passes through an activation function, a dropout layer discards part of the data to prevent overfitting, and finally a softmax classifier classifies the one-dimensional vector and outputs the final classification result.
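A minimal Keras sketch of such a four-layer text CNN is given below, assuming a fixed sentence length s, word-vector dimension n, convolution window height h = 3 and four condition classes; all sizes are illustrative assumptions, not values from the patent.

```python
import tensorflow as tf

s, n, num_classes = 50, 128, 4                               # illustrative sizes
inputs = tf.keras.Input(shape=(s, n))                        # layer 1: phrase matrix W in R^(s x n)
conv = tf.keras.layers.Conv1D(filters=100, kernel_size=3,    # layer 2: windows of h=3 rows by n columns
                              activation='relu')(inputs)
pooled = tf.keras.layers.GlobalMaxPooling1D()(conv)          # layer 3: max pooling -> global sentence feature
dropped = tf.keras.layers.Dropout(0.5)(pooled)               # dropout layer to prevent overfitting
outputs = tf.keras.layers.Dense(num_classes,                 # layer 4: softmax classification output
                                activation='softmax')(dropped)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
```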
S42, constructing an image recognition classification model based on machine learning:
firstly, establishing an image recognition database by taking a database of a challenge match (PASCAL VOC) for classification recognition and detection of international large Visual objects as a template, storing preprocessed image data, then sending the preprocessed image data into a convolutional neural network model based on a deep Learning technology, realizing automatic extraction and Learning of image features on the basis of image preprocessing, and finally realizing scoring classification of line equipment conditions in the image data.
The convolutional neural network model based on the deep learning technology comprises three block component network models:
the first block is the pre-trained front-end network model. In the learning process of deep learning, due to the fact that computing resources are limited or a training set is small, in order to obtain a good and stable result, some trained network models are subjected to fine adjustment and then are led into the whole network recognition model. ResNet50 was used as a pre-training model. Firstly, model parameters of a full connection layer are not contained in a ResNet50 network model to be local, then a network structure of the ResNet50 is defined, model weight parameters are loaded into the defined network structure, finally the structure of the last full connection layer is changed, training is started at a low learning rate, and a pre-trained front-end network model is obtained.
The second block is the region proposal network (RPN). The core idea of the RPN is to take the images in the training set as input and output a set of rectangular target preselected regions (ROIs), each with a score used to judge whether the selected region is the region where a target is located. To generate the rectangular target preselected regions, a small sliding window is added after the last shared convolution layer of the pre-trained front-end network; the sliding window is fully connected to a spatial window of the input convolution feature map, and each sliding window is mapped to a low-dimensional vector. This vector is output to two sibling fully connected layers, a preselected-region regression layer (reg) and a preselected-region classification layer (cls): the reg layer outputs the coordinate encoding of the preselected region, and the cls layer outputs the score of the preselected region, from which it is judged whether the preselected region is the region where the target is located. The resulting rectangular target preselected region set is then sent to the next-level network for classification and recognition.
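A minimal sketch of the RPN head described here: a small convolutional sliding window over the shared feature map feeding the sibling cls and reg layers. The anchor count k and channel sizes are illustrative assumptions, not values from the patent.

```python
import tensorflow as tf

k = 9                                                      # assumed anchors per sliding-window position
feature_map = tf.keras.Input(shape=(None, None, 1024))     # shared feature layer from the front-end network
window = tf.keras.layers.Conv2D(512, 3, padding='same',    # small sliding window over the feature map
                                activation='relu')(feature_map)
scores = tf.keras.layers.Conv2D(2 * k, 1)(window)          # cls layer: object / not-object score per anchor
coords = tf.keras.layers.Conv2D(4 * k, 1)(window)          # reg layer: coordinate encoding per anchor
rpn = tf.keras.Model(feature_map, [scores, coords])
rpn.summary()
```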
The third block is the fast regionalized convolutional neural network (Fast R-CNN). The Fast R-CNN network shares with the RPN the shared feature layer initialized by the pre-trained front-end network. After the pre-trained front-end network extracts convolutional features from an image, ROIs are output through the RPN and an ROI convolution feature map is generated; the corresponding depth features on the ROI convolution feature map are taken out, an ROI pooling layer unifies all the features in a channel to the same size to generate a feature map of fixed dimension, and feature vectors are finally obtained through two fully connected feature layers. The feature vectors complete the recognition and framing of the power equipment in the image through two multi-task models in their respective fully connected layers, namely a recognition classification model based on the flexible maximum transfer function (softmax) and a preselected-region window regression (BBox) model.
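A minimal sketch of the Fast R-CNN stage after ROI pooling: a fixed-size ROI feature map passes through two fully connected feature layers and then the two multi-task heads, softmax classification and window (BBox) regression. The pool size, channel count and class count are illustrative assumptions.

```python
import tensorflow as tf

num_classes = 6                                          # illustrative number of line-equipment classes
roi = tf.keras.Input(shape=(7, 7, 1024))                 # ROI feature map unified to a fixed size by ROI pooling
x = tf.keras.layers.Flatten()(roi)
x = tf.keras.layers.Dense(1024, activation='relu')(x)    # first fully connected feature layer
x = tf.keras.layers.Dense(1024, activation='relu')(x)    # second fully connected feature layer
cls = tf.keras.layers.Dense(num_classes + 1, activation='softmax')(x)   # softmax recognition head (incl. background)
box = tf.keras.layers.Dense(4 * num_classes)(x)                          # per-class window regression head
head = tf.keras.Model(roi, [cls, box])
```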
And S5, dividing the preprocessed text database and the preprocessed image database into a training set and a test set, respectively performing supervised training on the built Chinese text automatic classification model based on machine learning and the built image recognition classification model based on machine learning by using the training set data, and then testing the accuracy of the trained model recognition by using the test set data. The accuracy of the two models for recognizing the data in the test set can reach more than 90% by adjusting the parameters, and a Chinese text automatic classification model and an image recognition classification model which have the optimal recognition effect and are based on machine learning are obtained.
And S6, grading distribution lines, importing the newly acquired data into a trained model, identifying live working judging condition characteristics and grading by the model, and judging whether the live working condition requirements are met or not according to the total grading value corresponding to the grading.
Assuming there are n conditions for judging whether a line is suitable for live working and the score of condition i is X_i, the average score of the whole line for the live working condition is:

L = (X_1 + X_2 + ... + X_n) / n

where L is the total score of the line for the live working condition, and i denotes one of the judging conditions of the line, taking values from 1 to n.
After the total score value is obtained, whether the distribution line meets the live working condition requirements is judged according to the grade corresponding to the score value in Table 2 below and its description.
TABLE 2 Score value and score grade classification table

Total score value | 70 and below | 70 to 80 | 80 to 90 | 90 to 100
Score grade | power failure maintenance line | line to be reconstructed | quasi-uninterrupted power line | uninterrupted power line
Grading grades:
(1) Uninterrupted power line: the total score of the whole line meets the uninterrupted power line requirement;
(2) Quasi-uninterrupted power line: some equipment on the line can meet the uninterrupted power line requirement after small-scale reconstruction;
(3) Line to be reconstructed: the uninterrupted power line requirement can only be met after larger-scale reconstruction of the whole line;
(4) Power failure maintenance line: the uninterrupted power line requirement can only be met after a reconstruction of the whole line that requires a full outage.
When the total score of the whole line meets the uninterrupted power line requirement of a given grade, but a few devices on the line do not meet the uninterrupted-operation requirement, the grade of the line is reduced by one level.
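A minimal sketch of the scoring and grading step in S6, assuming the total score is the average of the per-condition scores as in the formula above and using the thresholds of Table 2; the downgrade rule for lines where a few devices fail the requirement is represented by a flag. Function and variable names are illustrative.

```python
def line_grade(condition_scores, has_failing_devices=False):
    """Average the per-condition scores and map the total score to a grade."""
    total = sum(condition_scores) / len(condition_scores)
    grades = ["power failure maintenance line", "line to be reconstructed",
              "quasi-uninterrupted power line", "uninterrupted power line"]
    if total >= 90:
        level = 3
    elif total >= 80:
        level = 2
    elif total >= 70:
        level = 1
    else:
        level = 0
    if has_failing_devices and level > 0:   # a few devices fail the live-working requirement
        level -= 1                          # reduce the grade by one level
    return total, grades[level]

print(line_grade([95, 92, 88, 90, 96, 94]))   # illustrative scores for six judging conditions
```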
The construction of the network models was completed under the Windows 10 operating system on a machine with an Intel Core i7 CPU and an NVIDIA GeForce GTX 970 GPU with 4 GB of dedicated video memory, and the convolutional neural network models were implemented with the TensorFlow framework. Part of the data in the databases was first taken as the training set and imported into the models for training; after training, the test set data were imported for testing. The recognition performance of the text classification model was evaluated by the error rate and the severe deviation rate, and that of the image recognition model by the accuracy and the recall rate. The results show that the error rate and severe deviation rate of the model are 2.86% and 0.80% respectively, and the accuracy and recall rate are 92% and 86% respectively, indicating that the models can to a large extent accurately determine the interval of the live working conditions and provide workers with an intelligent and effective decision basis for judging whether live working can be carried out.

Claims (5)

1. The method for distinguishing the live working conditions of the distribution network based on text classification and image recognition is characterized by comprising the following steps of:
s1, exporting external condition data of the distribution line from a production management system of a power grid company, and generating a live-line work external condition judgment text database of the distribution line;
s2, acquiring pictures on the distribution line site, forming line equipment condition data, and generating a line equipment image database;
s3, preprocessing the text database and the image database, comprising the following steps: assigning different scores to the line external conditions and the line equipment conditions to generate a condition score table, wherein the condition score table reflects the score corresponding to each condition and the score reflects the weight of each condition; representing the text in matrix or vector form, and segmenting the images and extracting features to obtain feature representations with invariance;
s4, establishing a Chinese text automatic classification model based on machine learning and an image recognition classification model based on machine learning;
s5, dividing the preprocessed text database and image database into a training set and a test set, performing supervised training of the built machine learning based Chinese text automatic classification model and machine learning based image recognition classification model with the training set data, testing the recognition accuracy of the trained models with the test set data, and adjusting parameters until the models recognize the test set data with an accuracy above 90%;
s6, grading the distribution line: importing newly acquired data into the trained models, identifying and scoring the live working judging condition features by the models, and judging whether the live working condition requirements are met according to the grade corresponding to the total score value;
the step S4 is specifically to set up a machine learning-based automatic chinese text classification model: text representation is carried out by taking words as a unit to form word vectors, then the word vectors are spliced according to the sequence of the words appearing in sentences to form a matrix representing the sentences, then the matrix is sent into a convolutional neural network model based on a deep learning technology, automatic extraction and learning of sentence characteristics are realized on the basis of the word vectors, and finally automatic classification of defective texts is realized;
the specific operation of building the image recognition classification model based on machine learning in the step S4 is as follows: firstly establishing an image recognition database by taking a database for recognizing and detecting international large visual objects as a template, storing preprocessed image data, then sending the preprocessed image data into a convolutional neural network model based on a deep learning technology, realizing automatic extraction and learning of image characteristics on the basis of image preprocessing, and finally realizing scoring classification of line equipment conditions in the image data.
2. The method according to claim 1, wherein the line external condition data of step S1 comprises: power supply area, grid structure, N-N inspection, terrain, user access and distribution automation level.
3. The method of claim 1, wherein the line equipment condition data of step S2 comprises: overhead line, pole type, breaking equipment, transformer equipment, insulating equipment and hardware fitting.
4. The method according to claim 1, wherein the convolutional neural network model based on the deep learning technique is a four-layer convolutional neural network model in the following specific form:
the first layer is an input layer, and the input layer is the phrase matrix W ∈ R^(s×n) corresponding to an unclassified external condition, wherein W represents the word group corresponding to the unclassified external condition, the matrix is converted from the word group, each row of the matrix represents the vector corresponding to one word in the word group, the number of rows s is the number of words in the word group, and the number of columns n is the dimension of the vectors;
the second layer is a one-dimensional convolution layer, in which a convolution matrix window I ∈ R^(h×n), with the same number of columns as W and h rows, is convolved in turn, from top to bottom, with each h-row, n-column matrix block of the input layer matrix W, wherein each convolution window extracts one feature map feature, called a text feature, from the input matrix;
the third layer is a pooling layer, and a maximum pooling method is adopted, the maximum element in the feature map vector obtained by convolution of each convolution window is taken as a feature value, so that the feature value corresponding to each convolution window is extracted, and all the feature values are sequentially spliced to form a one-dimensional vector of the pooling layer, namely the vector representing the global feature of the sentence;
the fourth layer is an output layer, the output layer is fully connected with the pooling layer and takes the one-dimensional vector of the pooling layer as input, the output passes through an activation function, a dropout layer discards part of the data to prevent overfitting, and finally a softmax classifier is adopted to classify the one-dimensional vector and output the final classification result.
5. The method of claim 1, wherein the deep learning technique based convolutional neural network model comprises a three-block component network model:
the first block is a pre-training front-end network, with ResNet50 used as the pre-training model: first, the ResNet50 model parameters without the fully connected layer are downloaded locally; then the ResNet50 network structure is defined and the model weight parameters are loaded into the defined structure; finally, the structure of the last fully connected layer is changed and training is started at a lower learning rate, obtaining the pre-trained front-end network model;
the second block is a preselected area network, the preselected area network takes the images in the training set as input and outputs a set of rectangular target preselected areas, each preselected area has a score, and the score judges whether the selected area is the area where the target is located; in order to generate a rectangular target preselection area, a small sliding window is added behind the last shared convolution layer of the pre-training front-end network, the sliding window is fully connected to a space window of input convolution feature mapping, each sliding window is mapped to a low-dimensional vector, the vector is output to a preselection area regression layer and a preselection area classification layer, the preselection area regression layer finally outputs coordinate codes of a preselection area, the preselection area classification layer finally outputs scores of the preselection area, whether the preselection area is the area where the target is located or not is judged according to the scores, and then the true rectangular target preselection area set is sent to the next-level network for classification and identification;
the third block is a fast regional convolutional neural network, the fast regional convolutional neural network shares a shared characteristic layer initialized by a pre-training front-end network with a pre-selection area network, after the pre-training front-end network extracts the characteristics of the convolutional network of the image, a rectangular target pre-selection area is output through the pre-selection area network to generate a rectangular target pre-selection area convolutional characteristic diagram, corresponding depth characteristics on the rectangular target pre-selection area convolutional characteristic diagram are taken out, all the characteristics in a channel are unified into the same size by using a rectangular target pre-selection area pooling layer to generate a characteristic diagram with fixed dimensionality, finally characteristic vectors are obtained through two fully-connected characteristic layers, and the identification and the frame selection of line equipment in the image are completed through two multi-task models in respective fully-connected layers; the two multitask models are an identification classification model based on a flexible maximum transfer function and a preselected region window regression model.
CN201811470666.8A 2018-12-04 2018-12-04 Text classification and image recognition-based distribution network live working condition judgment method Active CN109614488B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811470666.8A CN109614488B (en) 2018-12-04 2018-12-04 Text classification and image recognition-based distribution network live working condition judgment method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811470666.8A CN109614488B (en) 2018-12-04 2018-12-04 Text classification and image recognition-based distribution network live working condition judgment method

Publications (2)

Publication Number Publication Date
CN109614488A CN109614488A (en) 2019-04-12
CN109614488B true CN109614488B (en) 2022-12-02

Family

ID=66006906

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811470666.8A Active CN109614488B (en) 2018-12-04 2018-12-04 Text classification and image recognition-based distribution network live working condition judgment method

Country Status (1)

Country Link
CN (1) CN109614488B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3948590A4 (en) 2019-05-01 2022-11-16 Microsoft Technology Licensing, LLC Method and system of utilizing unsupervised learning to improve text to content suggestions
WO2020220369A1 (en) 2019-05-01 2020-11-05 Microsoft Technology Licensing, Llc Method and system of utilizing unsupervised learning to improve text to content suggestions
CN110490105A (en) * 2019-08-06 2019-11-22 南京大国科技有限公司 Distribute-electricity transformer district acceptance method, device and computer storage medium based on image recognition
CN110908901B (en) * 2019-11-11 2023-05-02 福建天晴数码有限公司 Automatic verification method and system for image recognition capability
CN112784652A (en) * 2019-11-11 2021-05-11 中强光电股份有限公司 Image recognition method and device
CN111026870A (en) * 2019-12-11 2020-04-17 华北电力大学 ICT system fault analysis method integrating text classification and image recognition
CN113742508B (en) * 2021-07-30 2023-09-08 国网河南省电力公司信息通信公司 Graphic data mining method for monitoring mass information of power equipment on line
CN116416247A (en) * 2023-06-08 2023-07-11 常州微亿智造科技有限公司 Pre-training-based defect detection method and device
CN116701303B (en) * 2023-07-06 2024-03-12 浙江档科信息技术有限公司 Electronic file classification method, system and readable storage medium based on deep learning

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107038480A (en) * 2017-05-12 2017-08-11 东华大学 A kind of text sentiment classification method based on convolutional neural networks
CN107918772A (en) * 2017-12-10 2018-04-17 北京工业大学 Method for tracking target based on compressive sensing theory and gcForest
CN108898138A (en) * 2018-05-30 2018-11-27 西安理工大学 Scene text recognition methods based on deep learning

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106156766B (en) * 2015-03-25 2020-02-18 阿里巴巴集团控股有限公司 Method and device for generating text line classifier
CN106569059A (en) * 2016-11-01 2017-04-19 广西电网有限责任公司电力科学研究院 Production service system with function of converting test data to structured storage
CN107392901A (en) * 2017-07-24 2017-11-24 国网山东省电力公司信息通信公司 A kind of method for transmission line part intelligence automatic identification
CN107491435B (en) * 2017-08-14 2021-02-26 苏州狗尾草智能科技有限公司 Method and device for automatically identifying user emotion based on computer
CN107808132A (en) * 2017-10-23 2018-03-16 重庆邮电大学 A kind of scene image classification method for merging topic model
CN108563731A (en) * 2018-04-08 2018-09-21 北京奇艺世纪科技有限公司 A kind of sensibility classification method and device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107038480A (en) * 2017-05-12 2017-08-11 东华大学 A kind of text sentiment classification method based on convolutional neural networks
CN107918772A (en) * 2017-12-10 2018-04-17 北京工业大学 Method for tracking target based on compressive sensing theory and gcForest
CN108898138A (en) * 2018-05-30 2018-11-27 西安理工大学 Scene text recognition methods based on deep learning

Also Published As

Publication number Publication date
CN109614488A (en) 2019-04-12

Similar Documents

Publication Publication Date Title
CN109614488B (en) Text classification and image recognition-based distribution network live working condition judgment method
CN107122375B (en) Image subject identification method based on image features
CN110399905B (en) Method for detecting and describing wearing condition of safety helmet in construction scene
CN110532900B (en) Facial expression recognition method based on U-Net and LS-CNN
CN109118479B (en) Capsule network-based insulator defect identification and positioning device and method
CN111126482B (en) Remote sensing image automatic classification method based on multi-classifier cascade model
CN103605972A (en) Non-restricted environment face verification method based on block depth neural network
CN111027631B (en) X-ray image classification and identification method for judging crimping defects of high-voltage strain clamp
CN113205063A (en) Visual identification and positioning method for defects of power transmission conductor
CN108710894A (en) A kind of Active Learning mask method and device based on cluster representative point
CN111813997A (en) Intrusion analysis method, device, equipment and storage medium
CN111402224A (en) Target identification method for power equipment
CN108898623A (en) Method for tracking target and equipment
CN111126820A (en) Electricity stealing prevention method and system
CN111209832A (en) Auxiliary obstacle avoidance training method, equipment and medium for transformer substation inspection robot
CN112036384B (en) Sperm head shape recognition method, device and equipment
CN114283285A (en) Cross consistency self-training remote sensing image semantic segmentation network training method and device
CN110414431B (en) Face recognition method and system based on elastic context relation loss function
CN113421223B (en) Industrial product surface defect detection method based on deep learning and Gaussian mixture
CN107633527A (en) Target tracking method and device based on full convolutional neural networks
CN111191027B (en) Generalized zero sample identification method based on Gaussian mixture distribution (VAE)
CN107944453A (en) Based on Hu not bushing detection methods of bending moment and support vector machines
CN108898157B (en) Classification method for radar chart representation of numerical data based on convolutional neural network
CN111160756A (en) Scenic spot assessment method and model based on secondary artificial intelligence algorithm
CN111353538B (en) Similar image matching method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant