CN113379313A - Intelligent preventive test operation management and control system

Info

Publication number: CN113379313A (application CN202110747608.0A); granted as CN113379313B
Authority: CN (China)
Prior art keywords: test, defect, data, equipment, intelligent
Legal status: Granted; Active
Other languages: Chinese (zh)
Inventors: 赵超, 文屹, 吕黔苏, 张迅, 王冕, 黄军凯, 范强, 陈沛龙, 李欣, 吴建蓉, 丁江桥
Original and current assignee: Guizhou Power Grid Co Ltd
Application filed by Guizhou Power Grid Co Ltd; priority to CN202110747608.0A

Classifications

    • G06Q10/0635: Risk analysis of enterprise or organisation activities
    • G06F16/215: Improving data quality; data cleansing, e.g. de-duplication, removing invalid entries or correcting typographical errors
    • G06F18/23: Clustering techniques
    • G06F40/279: Recognition of textual entities
    • G06F40/284: Lexical analysis, e.g. tokenisation or collocates
    • G06F40/44: Statistical methods, e.g. probability models
    • G06N3/045: Neural networks; combinations of networks
    • G06N3/08: Neural networks; learning methods
    • G06N5/042: Knowledge-based models; backward inferencing
    • G06N5/046: Knowledge-based models; forward inferencing; production systems
    • G06Q10/0639: Performance analysis of employees; performance analysis of enterprise or organisation operations
    • G06Q50/06: Energy or water supply


Abstract

The invention discloses an intelligent preventive test operation management and control system comprising a standard test database module, an annual production plan intelligent supervision module, a test report intelligent diagnosis and analysis module and a test data intelligent analysis module. The standard test database module establishes a standard data structure model, forms a data standard system and builds a new standard test database. The annual production plan intelligent supervision module performs intelligent analysis and accurate matching of production plan arrangement data. The test report intelligent diagnosis and analysis module reviews the normativeness of the text content of test reports. The test data intelligent analysis module cross-compares and cluster-analyses the test report data of the test equipment, analyses problems in plans and test reports globally, and studies common rules in the test data by city, unit, equipment and test type. The system integrates planning, management, supervision and analysis, changes the traditional working mode, makes full use of existing technical means, greatly improves the management efficiency and quality of test work, brings an essential change to maintenance and test work, and comprehensively raises its degree of informatization.

Description

Intelligent preventive test operation management and control system
Technical Field
The invention relates to the technical field of equipment risk assessment, in particular to an intelligent preventive test operation management and control system.
Background
Equipment defect diagnosis: in recent years a large amount of research on power grid equipment defect diagnosis has been carried out at home and abroad. Some domestic scholars focus on intelligent defect diagnosis based on structured data such as equipment test data and operation data. For example, State Grid, in cooperation with a Jiaotong university, carried out research in 2019 on a GIS switch defect diagnosis method based on a support vector machine using radiated electric field characteristic parameters. The method comprises: 1. preprocessing of experimental data; 2. construction of a signal case knowledge base; 3. training of an SVM defect diagnosis model; 4. the SVM defect diagnosis process. The study collects the transient radiated electric field generated during operation of a GIS disconnector, processes the collected signals to obtain signal feature vectors, inputs them into the SVM defect diagnosis model selected for optimal recognition accuracy, obtains a classification result for the GIS disconnector, and thus judges the operating condition of the GIS equipment and safeguards the safe operation of the power grid.
The main problem with this support vector machine based research on GIS equipment defect diagnosis is that the selected data source is single; the method may produce good research conclusions but is difficult to put into engineering practice.
Abroad, defect analysis research and practical application based on big data mining technology are more advanced, with applications reported in the United States, Japan, the United Kingdom, Germany and other countries. Japan began predictive maintenance based on condition monitoring in the 1980s. The Japanese power generation equipment maintenance association has studied data mining rule patterns intensively and, during maintenance, uses techniques such as association analysis, cluster analysis and time series analysis for defect analysis and life evaluation of equipment. A research and development center of the American Electric Power Research Institute proposed a reliability-centered maintenance strategy together with a series of technical schemes and related systems for maintenance optimization based on big data mining, which have been popularized in many power stations with good results. Germany also actively employs data mining techniques to improve maintenance efficiency; in recent years it has studied power plant maintenance and, in addition to developing equipment monitoring and diagnosis technology, has pursued condition-based maintenance using data mining, with further potential for big data mining in equipment inspection.
In view of the above problems and research situation, the present work fuses data from multiple business fields to carry out intelligent comprehensive diagnosis of primary equipment defects, performs deeper analysis on the basis of existing research, provides the severity of primary equipment defects, supports the actual work of business personnel, and improves their ability to resolve defects.
Equipment risk assessment: equipment risk assessment analyses and judges equipment risk according to the characteristics and changes of the risk influencing factors, accurately assesses the risk level of the equipment, reasonably predicts the development trend of defects or risks, and provides a basis for reducing equipment risk. At present, research institutions, equipment operation units and manufacturers at home and abroad have carried out a great deal of work in related fields and obtained abundant results in evaluation methods, system construction and other aspects. In terms of evaluation methods, intelligent methods such as fuzzy comprehensive evaluation, rough set theory, neural networks, support vector machines, evidence theory and expert systems have been applied. In terms of system construction, since 2008 the State Grid and China Southern Power Grid companies have issued a series of state evaluation and risk assessment guidelines for grid equipment. These research results and systems effectively ensure the safe and reliable operation of primary grid equipment.
However, primary grid equipment has a complex structure and a high degree of integration, operates in a complex and changeable environment, and is often affected by adverse external working conditions and changes in the system dispatching mode, which greatly increases the difficulty of equipment risk assessment. This is mainly reflected in the following three aspects:
1) Most existing risk assessment methods are built on a single or limited set of equipment test data, cannot comprehensively consider the combined influence of the equipment's internal factors on its risk, and the accuracy and pertinence of the assessment results need to be improved.
2) Because defects and faults are small-probability events, the available defect and fault sample data cannot meet the requirements that intelligent evaluation methods place on modelling samples, and the correlation and evolution rules between state parameters and equipment risk are difficult to obtain. Key parameters of the evaluation model are therefore mainly chosen by experience, which severely restricts the accuracy of the evaluation results and the practicability of the evaluation method.
3) Existing equipment risk assessment methods rely on manual judgement; their accuracy and efficiency urgently need to be improved, which severely limits the accuracy of equipment risk assessment.
In view of these problems, a new risk assessment method urgently needs to be explored and a risk assessment model established, so as to improve the accuracy of the assessment results and achieve fine-grained assessment of equipment risk.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: to provide an intelligent preventive test operation management and control system that solves the technical problems existing in the prior art.
The technical scheme adopted by the invention is as follows: an intelligent preventive test operation management and control system comprises a standard test database module, an annual production plan intelligent supervision module, a test report intelligent diagnosis and analysis module and a test data intelligent analysis module. The standard test database module is used for establishing a standard data structure model, forming a data standard system and building a new standard test database; the annual production plan intelligent supervision module is used for carrying out intelligent analysis and accurate matching of production plan arrangement data; the test report intelligent diagnosis and analysis module is used for reviewing the normativeness of the text content of test reports; the test data intelligent analysis module is used for cross-comparing and cluster-analysing the test report data of the test equipment, globally analysing problems in plans and test reports, and studying common rules in the test data by city, unit, equipment and test type.
The standard test database construction method of the standard test database module comprises the following steps:
Step 1: analyse the defect data: the characteristics of the equipment defect data are understood through defect data analysis;
Step 2: construct an equipment defect standard library according to the defect data characteristics from step 1, completing the standardized storage of the defect data;
Step 3: construct an intelligent defect diagnosis model, and identify the defect causes and defect locations of the equipment through the model, realizing intelligent diagnosis of equipment defects and classification of defect severity;
Step 4: analyse the defect diagnosis results and recommend defect management measures;
Step 5: construct an intelligent equipment risk evaluation model based on the results of the defect diagnosis analysis, and identify the degree to which the defects influence the equipment risk;
Step 6: classify the risk grade according to the degree of influence on the equipment risk.
Defect data analysis: the numbers of equipment defects in different years, of different defect types and from different manufacturers are analysed and sorted, so as to obtain the years with the most failures, the most frequent failure types and the manufacturers with the most failures.
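By way of illustration only, a minimal Python sketch of this counting-and-sorting step is given below; the column names (year, defect_type, manufacturer) and the sample records are assumptions for demonstration, not the actual schema of the defect records.

```python
# Count defects by year, defect type and manufacturer, then sort to find
# the largest groups. Column names and data are illustrative assumptions.
import pandas as pd

defects = pd.DataFrame({
    "year":         [2018, 2018, 2019, 2020, 2020, 2020],
    "defect_type":  ["oil leak", "overheating", "oil leak",
                     "oil leak", "partial discharge", "overheating"],
    "manufacturer": ["A", "B", "A", "C", "A", "B"],
})

by_year = defects.groupby("year").size().sort_values(ascending=False)
by_type = defects.groupby("defect_type").size().sort_values(ascending=False)
by_mfr  = defects.groupby("manufacturer").size().sort_values(ascending=False)

print("year with most defects:", by_year.index[0])
print("most frequent defect type:", by_type.index[0])
print("manufacturer with most defects:", by_mfr.index[0])
```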
The method for constructing the equipment defect standard library in step 2 comprises the following steps:
a) Collect defect data: the data sources for defect data include historical defect reports, defect record data, equipment operation data, equipment test data and equipment online monitoring data; the field names and field contents of the defect record data table of the defect classification standard library are obtained by analysing these data sources;
b) Clean and de-duplicate the defect data: the collected data are cleaned and de-duplicated to handle duplicate defect records, missing values, garbled characters, stray spaces, full-width characters that need converting to half-width, and inconsistent upper and lower case in English text;
c) Manual labelling: text analysis and manual labelling of the defect appearance, defect location, defect cause and treatment measures are performed according to the historical defect reports, finally yielding the equipment defect standard library.
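The cleaning and de-duplication of step b) can be illustrated with the following hedged Python sketch; the field names and the full-width/half-width handling shown here are illustrative assumptions rather than the system's actual implementation.

```python
# Cleaning and de-duplicating defect records: duplicates, missing values,
# stray whitespace, full-width characters and inconsistent letter case.
import pandas as pd

def to_half_width(text: str) -> str:
    """Convert full-width characters (e.g. '１１０ｋＶ') to half-width."""
    out = []
    for ch in text:
        code = ord(ch)
        if code == 0x3000:                 # full-width space
            code = 0x20
        elif 0xFF01 <= code <= 0xFF5E:     # full-width ASCII range
            code -= 0xFEE0
        out.append(chr(code))
    return "".join(out)

def clean_defects(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    text_cols = df.select_dtypes(include="object").columns
    df = df.dropna(subset=list(text_cols))             # drop records with missing fields
    for col in text_cols:
        df[col] = (df[col].astype(str)
                          .map(to_half_width)
                          .str.strip()                  # remove surrounding blanks
                          .str.replace(r"\s+", " ", regex=True)
                          .str.lower())                 # unify English upper/lower case
    return df.drop_duplicates()                         # remove repeated defect records

raw = pd.DataFrame({
    "equipment_name":     ["１１０ｋＶ breaker ", "110kV breaker", "main transformer"],
    "defect_description": ["SF6 Pressure LOW", "sf6 pressure low", "Oil leakage at valve"],
})
print(clean_defects(raw))
```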
The defect entry data includes fields: unit, voltage grade, defect grade, place, equipment name, defect type, defect description, professional category, manufacturer, factory year and month, equipment model, commissioning date, defect cause category, defect cause, defect representation, discovery time, defect part and treatment measure;
the device operation data contains the fields: voltage, three-phase unbalanced current, voltage class;
equipment online monitoring data: dielectric loss, equivalent capacitance, reference voltage alarm, three-phase unbalanced current alarm, dielectric loss alarm, full current alarm, equivalent capacitance alarm, monitoring equipment communication state, monitoring equipment running state, equipment self-checking abnormity, partial discharge and iron core current;
the equipment test data contains the fields: infrared imaging temperature measurement, gas in a gas chamber, contact loop resistance, pressure resistance of an outer insulating surface and gas decomposition product test values.
The method for constructing the intelligent defect diagnosis model in step 3 is as follows. (1) Defect diagnosis system: the equipment types, the defects of the corresponding equipment and the defective parts corresponding to each defect are summarized to form a defect diagnosis system table. (2) Defect diagnosis model: a) according to the defect data record table, an index of equipment defect diagnosis data is established, comprising the index names and index descriptions; b) text preprocessing: the defect description text is segmented into words, using a power-domain dictionary to obtain segmentation results suited to the power field; c) distributed text representation: based on the principle that the semantics of a word are described by its neighbouring words, a language model is trained on a large corpus of preprocessed power equipment defect texts so that each word is expressed as a word vector, each dimension of which represents a semantic feature learned by the model; d) convolutional neural network: intelligent diagnosis of equipment defects mainly adopts a convolutional neural network algorithm; the processed defect index data serve as the input layer, the defect texts represented by the word vectors from step c) are classified by the network's classifier, and the corresponding classification results are output; e) model training: the model input variables are the fields defect representation, defect description, defect cause, equipment type, defect type and defect location; these fields are learned with the convolutional neural network algorithm to form the final equipment defect diagnosis model.
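A minimal sketch of such a convolutional text classifier is shown below in PyTorch; the tokenisation (simple whitespace splitting), the toy vocabulary and the training samples are assumptions for demonstration, whereas the patent itself segments Chinese defect descriptions with a power-domain dictionary.

```python
# Toy TextCNN: word indices -> embedding (word vectors) -> 1-D convolution ->
# max pooling -> linear classifier over defect classes.
import torch
import torch.nn as nn

samples = [
    ("sf6 pressure low alarm",        0),   # class 0: gas leakage
    ("oil leakage at radiator valve", 1),   # class 1: oil leakage
    ("contact overheating found",     2),   # class 2: overheating
    ("sf6 density relay alarm",       0),
]
vocab = {w: i + 1 for i, w in enumerate(sorted({w for t, _ in samples for w in t.split()}))}

def encode(text, max_len=8):
    ids = [vocab.get(w, 0) for w in text.split()][:max_len]
    return ids + [0] * (max_len - len(ids))           # pad with index 0

class TextCNN(nn.Module):
    def __init__(self, vocab_size, embed_dim=16, num_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.conv = nn.Conv1d(embed_dim, 32, kernel_size=2)
        self.fc = nn.Linear(32, num_classes)

    def forward(self, x):                             # x: (batch, seq_len)
        e = self.embed(x).transpose(1, 2)             # (batch, embed_dim, seq_len)
        h = torch.relu(self.conv(e))                  # convolution over word vectors
        h = torch.max(h, dim=2).values                # global max pooling
        return self.fc(h)                             # class scores

X = torch.tensor([encode(t) for t, _ in samples])
y = torch.tensor([c for _, c in samples])
model = TextCNN(vocab_size=len(vocab) + 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
for _ in range(50):                                   # tiny training loop
    optimizer.zero_grad()
    loss_fn(model(X), y).backward()
    optimizer.step()
print(model(X).argmax(dim=1))                         # predicted defect classes
```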
The evaluation method of the intelligent equipment risk evaluation model comprises the following steps:
(1) Risk factor analysis: the equipment risk factors are obtained by dividing the equipment's influencing factors into aging factors, defect factors, state factors, main-transformer alarm factors, thermal aging factors and fusion factors;
(2) Correlation analysis of the defect influencing factors: correlation analysis is performed by calculating correlation coefficients for the equipment risk factors;
(3) Construction of the equipment defect deduction rule base: 1) a defect severity deduction rule base is established and a score T1 is given according to it; 2) a defect frequency deduction rule is formulated, the frequency with which defects occur typically, in batches and repeatedly is counted, and a score T2 is given according to the rule range; 3) an equipment importance rule is formulated and, according to the equipment on which the defect occurs, a score T3 is given by the equipment importance deduction rule; 4) a defect grade deduction rule is formulated and a corresponding score T4 is given according to the defect grade; 5) a voltage grade deduction rule is formulated and a corresponding score T5 is given according to the voltage grade of the equipment on which the defect occurs; 6) an equipment type deduction rule is formulated and a corresponding score T6 is given according to the importance of the different equipment types; 7) according to the final defect evaluation score, the risk grade of the equipment is given; the risk grades are divided into four levels: normal, general, urgent and major;
(4) Intelligent risk assessment: when evaluating the defect risk of the equipment, the deduction indicators and the equipment risk factors are first made consistent in trend direction; the processed data then serve as input parameters of the entropy (entropy-weight) method, an intelligent defect-based equipment risk evaluation model is constructed, the evaluation of the degree to which equipment defects influence equipment risk is completed, and an intelligent risk assessment result is obtained.
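As an illustration of step (4), the following Python sketch applies an entropy-weight calculation to same-trended deduction indicators T1 to T6; the example scores, the normalisation and the grade thresholds are assumptions for demonstration and are not taken from the patent.

```python
# Entropy-weight sketch: normalise indicators, derive entropy weights,
# combine into a risk score and map it to a grade.
import numpy as np

# rows = devices, columns = deduction indicators T1..T6 (larger = riskier)
scores = np.array([
    [10.0,  5.0, 20.0, 10.0,  5.0,  5.0],
    [40.0, 20.0, 30.0, 30.0, 10.0, 20.0],
    [80.0, 60.0, 70.0, 60.0, 40.0, 50.0],
])

rng = scores.max(axis=0) - scores.min(axis=0)
norm = (scores - scores.min(axis=0)) / (rng + 1e-12)   # min-max normalisation

p = (norm + 1e-12) / (norm + 1e-12).sum(axis=0)        # share of each device per indicator
entropy = -(p * np.log(p)).sum(axis=0) / np.log(scores.shape[0])
weights = (1.0 - entropy) / (1.0 - entropy).sum()      # entropy weights

risk = norm @ weights                                  # composite risk score in [0, 1]
grade = np.select([risk < 0.25, risk < 0.5, risk < 0.75],
                  ["normal", "general", "urgent"], default="major")
print("weights:", weights.round(3))
print("risk scores:", risk.round(3), "grades:", grade)
```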
The implementation method of the annual production plan intelligent supervision module is as follows: the arrangement information of test plans and work orders is extracted from the "6+1" production management system to supervise the annual production plan; the methods of cross exploration, dimension integration and dimension splitting are applied, combined with an intelligent reasoning algorithm, to carry out intelligent analysis and accurate matching of the production plan arrangement data and realize plan supervision. Plan supervision covers three aspects: 1) by associating production plan work orders, work tickets and equipment test period times, the consistency of the pre-test plans in the system is supervised; 2) according to the equipment test period, whether a compiled test plan exceeds its period, and by how much, is supervised; 3) according to the equipment ledger and the equipment test period, the compiled test plans and test objects are supervised for omissions.
The implementation method of the test report intelligent diagnosis and analysis module is as follows: the test reports are intelligently paired with the test management regulations by strong features, extracted and analysed; combined with vocabulary standardization, named entity recognition and a standardized data dictionary from natural language processing, keyword extraction, hierarchical classification and accurate reasoning are carried out, so that the normativeness of the text content of the test report is reviewed and it is judged whether items are missing and whether the reviewed values satisfy the interval criteria for qualification.
The implementation method of the test data intelligent analysis module comprises the following specific steps:
Step 1), determine the test report versions corresponding to the test equipment: taking the test equipment as the dimension, find the test reports corresponding to each piece of test equipment, analyse the versions of these reports, and finally determine how many test versions each piece of equipment has;
Step 2), determine the test items in the test reports: after the test report versions corresponding to the test equipment have been determined, analyse the specific test items in each test report and obtain the intersection of the test items through intelligent analysis;
Step 3), determine the test parameters in the test items: according to the test items determined in step 2), obtain the intersection of the test parameters in each test item through intelligent analysis;
Step 4), merge and configure the test parameters in the test items: merge and configure the test parameters determined in step 3), and cross-compare and cluster-analyse the merged and configured parameters;
Step 5), analyse the merged test parameters: according to the merged configuration parameters determined in step 4), and starting from the two dimensions of qualified test reports and unqualified test reports, cross-compare and cluster-analyse the configuration parameters through intelligent algorithms of regression analysis, clustering and association analysis, and display the comparison data visually (a small clustering sketch follows these steps);
Step 6), global analysis and display of test plans and test reports: analyse the problems of the test plans and test reports globally, study the common rules of the test data by city, unit, equipment and test type, and display them visually;
Step 7), analysis and display of online monitoring data: display the online monitoring data, per piece of equipment, as a list or a trend graph.
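The clustering sketch referred to in step 5) might, under illustrative assumptions about the merged parameters, look as follows; the parameter names and values are invented for demonstration.

```python
# Cross-compare merged test parameters from many reports by clustering,
# so that outlying reports stand out.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# rows = test reports, columns = merged test parameters
# (e.g. insulation resistance in megohm, dielectric loss in %, loop resistance in micro-ohm)
reports = np.array([
    [12000, 0.30,  45],
    [11500, 0.32,  47],
    [11800, 0.31,  46],
    [ 4000, 0.95, 120],   # an abnormal-looking report
])

X = StandardScaler().fit_transform(reports)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster label per report:", labels)
```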
The invention has the beneficial effects that: compared with the prior art, the intelligent operation management and control system for the test data integrates planning, management, supervision and analysis, changes the traditional working mode, fully uses the existing scientific and technological means, greatly improves the management efficiency and quality of test operation, brings essential change to maintenance test work, and comprehensively improves the informatization degree of the maintenance test work.
1) And a test data structure system based on a unified standard provides sufficient data support for subsequent data analysis and diagnosis.
2) The annual production plan is intelligently supervised based on the reasoning algorithm, the correlation analysis of the production plan and the equipment test period is realized, and the supervision efficiency of the annual production plan is improved.
3) The intelligent diagnosis and analysis of the test report based on natural language processing realizes the comparison relationship between the test report and the regulation standard, and accurately judges the normalization and the qualification of the test report.
4) The intelligent analysis of test data based on data mining realizes deep analysis of the test result data of all test equipment in the province, lets the relevant personnel know the data situation and trends in time, and provides auxiliary support for subsequent decision analysis.
Drawings
FIG. 1 is a schematic diagram of a management and control system of the present invention;
FIG. 2 is a flow chart of the construction of a standard test database;
FIG. 3 is a flowchart of an intelligent supervision of an annual production plan;
FIG. 4 is a flow chart of a test report intelligent diagnostic analysis;
FIG. 5 is a flow chart of the intelligent analysis of test data.
Detailed Description
The invention is further described below with reference to specific examples.
Example 1: an intelligent preventive test operation management and control system comprises a standard test database module, an annual production plan intelligent supervision module, a test report intelligent diagnosis and analysis module and a test data intelligent analysis module. The standard test database module is used for establishing a standard data structure model, forming a data standard system and building a new standard test database; the annual production plan intelligent supervision module is used for carrying out intelligent analysis and accurate matching of production plan arrangement data; the test report intelligent diagnosis and analysis module is used for reviewing the normativeness of the text content of test reports; the test data intelligent analysis module is used for cross-comparing and cluster-analysing the test report data of the test equipment, globally analysing problems in plans and test reports, and studying common rules in the test data by city, unit, equipment and test type.
The standard test database module is realized by the following steps: extracting the data characteristics of the text elements, establishing a standard data structure model based on various devices by combining a parallel computing technology, forming a data standard system, and constructing a new standard test database.
The construction method of the unified-standard test data structure system comprises the following specific steps:
Step 1: obtain the test data structure system model from the production management system: sort out the operation instruction booklets related to preventive tests for all equipment, and obtain the sorted operation instruction templates and preventive test data from the production system;
Step 2: construct a test data structure system model based on unified standards: form operation instruction templates from the operation instruction templates and preventive test data acquired from the production system; analyse the operation instruction templates obtained from the production system and refine them according to actual needs to form unified standard templates; at the same time, for the test data templates used at equipment delivery and handover, obtain the test data template from the manufacturer (at the factory handover test the manufacturer has a delivery handover test document template into which the handover test data are filled) and generate a handover test version template in the system; finally, the operation instruction templates, the unified standard templates and the test data templates together constitute the test data structure system model based on unified standards.
Step 3: supplementary entry of test data: there are two sources of test data in the test data mining intelligent operation management and control system:
1) existing test data of external systems are obtained directly from those systems through interfaces, mainly by importing historical test data from the previous legacy system once and obtaining real-time test data from the production system every day;
2) for test data missing from the external systems, supplementary entry is carried out in the test data mining intelligent operation management and control system; supplementary entry means selecting the corresponding operation instruction template from the unified-standard test data structure system model and realizing the test data entry function in the system according to the customized template.
The text data feature extraction method is as follows: text data are obtained from the unified-standard test data structure system model through a data interface, and a document frequency feature selection algorithm is used to find the fields that occur frequently, forming the data standard system.
Document frequency (DF) is the simplest feature selection algorithm; the document frequency of a term is the number of texts in the whole data set that contain that term. For each feature in the training corpus its document frequency is computed, and features with particularly low or particularly high document frequency are removed according to preset thresholds. Document frequency is a common method of feature dimensionality reduction: its computational complexity is approximately linear in the number of training documents, so it scales to huge document collections and can be applied to any corpus.
The document frequency of each feature in the training text set is calculated; a term is deleted if its DF value is below a certain threshold and also deleted if its DF value is above a certain threshold, since these represent the two extreme cases of "not representative" and "not discriminative" respectively. The assumption behind DF selection is that rare words either carry no useful information, occur too rarely to affect classification, or are noise, and so can be eliminated. DF has the advantage of low computational complexity and nevertheless works well in practice. Its drawback is that a word that is rare overall may not be rare within a particular class and may carry important discriminative information; simply discarding it can hurt the precision of the classifier.
The greatest advantage of document frequency is speed: its time complexity is linear in the number of texts, so it is well suited to feature selection on very large text data sets. It is also effective; in supervised feature selection applications its performance is comparable to information gain and the chi-squared statistic even when 90% of the words are deleted. DF is the simplest feature selection method, has low computational complexity and can handle large-scale classification tasks.
However, if a rare term appears mainly in the training set of one particular class, it may reflect the features of that class well; filtering it out because it falls below the set threshold therefore affects classification precision to a certain extent.
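A minimal sketch of document-frequency feature selection as described above is given below; the corpus and the low/high thresholds are illustrative assumptions.

```python
# Count how many texts contain each term and keep terms whose DF lies
# between a low and a high threshold.
from collections import Counter

corpus = [
    "sf6 pressure low alarm",
    "oil leakage at valve",
    "sf6 density relay alarm",
    "contact overheating during test",
    "oil level low during test",
]

df = Counter()
for doc in corpus:
    for term in set(doc.split()):     # each document counts a term at most once
        df[term] += 1

low, high = 2, 4                      # preset DF thresholds
selected = [t for t, n in df.items() if low <= n <= high]
print(sorted(selected))
```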
Parallel Computing (Parallel Computing) refers to a process of solving a Computing problem by simultaneously using multiple Computing resources, and is an effective means for improving the Computing speed and the processing capacity of a computer system. The basic idea is to solve the same problem by using multiple processors, i.e. the problem to be solved is decomposed into several parts, each part is calculated in parallel by an independent processor. A parallel computing system may be either a specially designed supercomputer with multiple processors or a cluster of several separate computers interconnected in some fashion. And finishing the data processing through the parallel computing cluster, and returning the processing result to the user.
Parallel computing can be divided into temporal parallel and spatial parallel.
Parallelism in time refers to pipelining. For example, when food is produced in a factory the steps are:
(1) Cleaning: the food is washed clean.
(2) Disinfection: the food is sterilized.
(3) Cutting: the food is cut into small pieces.
(4) Packaging: the food is packed into packaging bags.
Without a pipeline, the next item of food is processed only after one item has finished all four steps, which is time-consuming and hurts efficiency; with pipelining, four items of food can be in process at the same time. This is the time parallelism of a parallel algorithm: two or more operations are started at the same time, which greatly improves computing performance.
Spatial parallelism refers to multiple processors executing computations concurrently: two or more processors connected through a network compute different parts of the same task at the same time, or jointly solve large-scale problems that a single processor cannot solve.
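A small, purely illustrative sketch of spatial parallelism using Python's multiprocessing module is given below: the same task is split into parts that independent worker processes compute concurrently, and the partial results are then merged.

```python
# Split a problem into chunks, process the chunks in parallel worker
# processes, then merge the partial results.
from multiprocessing import Pool

def analyse_chunk(chunk):
    # stand-in for a per-chunk computation, e.g. statistics over test records
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]          # split the problem into 4 parts
    with Pool(processes=4) as pool:
        partial = pool.map(analyse_chunk, chunks)    # computed in parallel
    print(sum(partial))                              # merge the partial results
```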
The implementation method of the annual production plan intelligent supervision module is as follows: the arrangement information of test plans and work orders is extracted from the "6+1" production management system to supervise the annual production plan; the methods of cross exploration, dimension integration and dimension splitting are applied, combined with an intelligent reasoning algorithm, to carry out intelligent analysis and accurate matching of the production plan arrangement data and realize plan supervision.
Plan supervision covers three aspects: 1) by associating production plan work orders, work tickets and equipment test period times, the consistency of the pre-test plans in the system is supervised; 2) according to the equipment test period, whether a compiled test plan exceeds its period, and by how much, is supervised; 3) according to the equipment ledger and the equipment test period, the compiled test plans and test objects are supervised for omissions.
An intelligent annual production plan supervision method based on a reasoning algorithm comprises the following specific steps:
Step 1: obtain the preventive test work plan and work ticket data from the production system: sort out the data related to the preventive test plans of the main equipment (pre-test plans of the high-voltage, chemical and electrical measurement specialities), the work ticket information and the defect data, and obtain the required data from the production system according to the sorted results;
Step 2: test plan management: equipment information from the different specialities is distinguished and extracted from the equipment operation and maintenance cycle of the maintenance module in the production system, mainly to check whether the equipment has been compiled into the system production plan and, conversely, to supervise whether the equipment ledger is complete;
Step 3: plan execution management: association of the power outage application form: for a production plan that requires a power outage, the power outage application form is associated and its details can be viewed; supervision of inconsistency between work tickets and the system pre-test plan: work tickets are associated with the production plan; for example, for a production plan with a monthly period, a corresponding work ticket should exist in the system every month up to the current day; through statistical analysis, if a work ticket is found to be missing, it is considered inconsistent with the system pre-test plan; supervision of inconsistency between test reports and the system pre-test plan: test reports are associated with the production plan; for example, for a production plan with a monthly period, a corresponding test report should exist in the system for each month up to the current date (the test report must be uploaded within 5 working days after the test; considering actual conditions this can be relaxed to 1 month); through statistical analysis, if a report is found to be missing, it is considered inconsistent with the system pre-test plan; plan overrun: the planned start and end times of the production plan are compared with the actual start and end times in different dimensions (city bureau and substation) to judge whether the production plan has run over and by how much;
Step 4: equipment expiration reminder: equipment overdue: overdue reminders are given according to the equipment test period in different dimensions (city bureau and substation); for example, if the last test date of a piece of equipment is 2019-10-14 and the period is 1 year, then if the test has not started by 2020-10-14 the equipment is flagged as overdue until it is tested; at the same time, overdue equipment is divided into early-warning grades according to its condition (equipment importance, equipment health, risk evaluation algorithm, etc.); major/emergency defect display: the number of major and emergency defects of the equipment is shown in the equipment dimension, and the details of each major/emergency defect can be viewed; latest patrol plan display: the latest patrol plan of the equipment can be displayed; manual entry function: the reason for the equipment overrun, the control measures and the planned time of the next power outage are filled in manually by the client.
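The equipment expiration reminder of step 4 can be illustrated with the following hedged sketch, which mirrors the example in the text (last test on 2019-10-14 with a one-year period); the device list and the evaluation date are assumptions for demonstration.

```python
# Flag devices whose next test is overdue, given last test date and period.
from datetime import date

devices = [
    {"name": "breaker #1",       "last_test": date(2019, 10, 14), "period_years": 1},
    {"name": "main transformer", "last_test": date(2020, 3, 1),   "period_years": 3},
]

today = date(2020, 10, 15)                         # evaluation date; in practice date.today()
for dev in devices:
    last = dev["last_test"]
    due = last.replace(year=last.year + dev["period_years"])   # simplification: ignores Feb 29
    if today > due:
        print(f"{dev['name']}: overdue since {due}, {(today - due).days} day(s) late")
    else:
        print(f"{dev['name']}: next test due by {due}")
```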
The cross exploration (Drill Across) method is as follows:
In a dimensionally modelled data warehouse there is an operation called Drill Across, usually translated into Chinese as "cross exploration".
In dimensional modelling based on a bus architecture, most dimension tables are shared by the fact tables. For example, a sales transaction fact table and an inventory snapshot fact table may share the same date, product and store dimensions. If there is a requirement to compare sales and inventory facts along a common dimension, two SQL queries are issued to retrieve the sales data and the inventory data aggregated by that dimension; an outer join is then performed on the common dimension and the data are merged. This operation of issuing multiple SQL queries and then merging the results is cross exploration.
When such cross-exploration requirements are common, there is a modelling method that avoids them: consolidated fact tables. A consolidated fact table combines facts that are at the same granularity from different fact tables; that is, a new fact table is created whose dimensions are the set of dimensions shared by two or more fact tables and whose facts are the facts of interest from those tables. The data for this fact table come from the staging area, just like the data for the other fact tables.
Merging fact tables is better than cross-explore in both performance and ease of use, but the combined fact tables must be at the same granularity and dimension level.
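The drill-across operation described above, issuing separate aggregations and then joining them on the shared dimensions, can be illustrated with the following pandas sketch; the table and column names are assumptions for demonstration.

```python
# Aggregate two "fact tables" separately, then outer-join on the conformed
# dimensions (date, product): the multi-query-then-merge of drill across.
import pandas as pd

sales = pd.DataFrame({
    "date":    ["2021-06", "2021-06", "2021-07"],
    "product": ["CT",      "PT",      "CT"],
    "sold":    [10,        4,         7],
})
inventory = pd.DataFrame({
    "date":    ["2021-06", "2021-07", "2021-07"],
    "product": ["CT",      "CT",      "PT"],
    "on_hand": [25,        18,        9],
})

sales_agg = sales.groupby(["date", "product"], as_index=False)["sold"].sum()
inv_agg = inventory.groupby(["date", "product"], as_index=False)["on_hand"].sum()

merged = sales_agg.merge(inv_agg, on=["date", "product"], how="outer")
print(merged)
```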
Reasoning modes and classification of intelligent reasoning algorithms
1) Classification by the logical basis of reasoning
Deductive reasoning: deductive reasoning draws, from known general knowledge, conclusions that apply to a particular individual case and are already contained in that knowledge. It is a general-to-individual reasoning method whose core is the syllogism.
Inductive reasoning: an individual-to-general reasoning method in which a general conclusion is generalized from a sufficient number of cases.
Default reasoning: default reasoning is reasoning carried out under incomplete knowledge by assuming that certain conditions are already satisfied.
Classification by the certainty of the knowledge used in reasoning
Certain reasoning: the knowledge used in the reasoning is precise and the conclusions drawn are definite; the truth value is either true or false, with no third possibility.
Uncertain reasoning: the knowledge used in the reasoning is not entirely precise, and the conclusions drawn are not completely certain; the truth value lies between true and false.
Classification by the monotonicity of the reasoning process
Monotonic reasoning: the conclusions drawn increase monotonically and move ever closer to the final goal.
Non-monotonic reasoning: the addition of new knowledge may not strengthen the conclusions already drawn but instead negate them.
2) Control strategies of reasoning
Reasoning direction: forward and backward.
Solving strategy: one solution, all solutions, or the optimal solution.
Conflict resolution: for example, ordering by object or ordering by degree of match.
Limiting strategy: limits on depth, width, time and space.
The implementation method of the test report intelligent diagnosis and analysis module is as follows: through the test report intelligent diagnosis and analysis component, an intelligent diagnosis model is established that supports strong-feature intelligent pairing, extraction and analysis of test reports against the test management regulations; combined with methods such as vocabulary standardization, named entity recognition and a standardized data dictionary from natural language processing, keyword extraction, hierarchical classification and accurate reasoning are carried out, mainly to review the test reports of main transformers, circuit breakers and GIS main equipment, checking the normativeness of the report text and judging whether items are missing and whether the reviewed values satisfy the interval criteria for qualification. The intelligent diagnosis and analysis component supports routine maintenance, such as revision of the rule specifications and the diagnosis model, through a software interface or file import.
The specific steps are as follows:
Step 1: establish a test procedure library model: according to the power equipment maintenance and test regulations, a test procedure library for main transformers, circuit breakers and GIS main equipment is established, with support for version maintenance; its content includes maintenance categories, items, specialities, work requirements and review rules;
Step 2: strong-feature intelligent pairing, extraction and analysis of the test procedure library model: according to the work requirements in the test procedure library model, strong-feature intelligent pairing and extraction analysis of the work requirements are carried out, review rules are generated and quantified into the corresponding test procedure library model, and the rules are compared with the values filled in during operation in the test report;
Step 3: review of test report normativeness: according to the review rules in the test procedure library model, the normativeness of the text content of the test report is reviewed, for example where the content should be a number but a character string has been filled in;
Step 4: review of missing report items: according to the review rules in the test procedure library model, it is judged whether any items are missing from the test report;
Step 5: review of value-interval qualification: according to the review rules in the test procedure library model, it is checked whether the values satisfy the interval criteria for qualification.
At the same time, the value is compared with the result of the previous test (recorded, for example, in the insulation resistance test report); if it exceeds or falls below a set threshold relative to that value, the test report data interval is judged unqualified (a code sketch of these checks follows step 6).
Step 6: display of the intelligent analysis result: the normativeness review results, the missing-item review results and the interval-qualification review results of the test report are combined to generate an intelligent analysis result report.
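The checks of steps 3 to 5 (numeric normativeness, missing items, interval criteria and drift relative to the previous test) can be illustrated with the following hedged sketch; the rule values and field names are invented for demonstration and are not taken from any actual test procedure.

```python
# Review one report item against illustrative rules: presence, numeric type,
# interval criterion, and relative change versus the previous test result.
def review_item(report, item, lo, hi, last_value=None, drift_limit=0.30):
    findings = []
    if item not in report:
        findings.append(f"missing item: {item}")
        return findings
    value = report[item]
    if not isinstance(value, (int, float)):
        findings.append(f"{item}: value should be numeric, got {value!r}")
        return findings
    if not (lo <= value <= hi):
        findings.append(f"{item}: {value} outside interval criterion [{lo}, {hi}]")
    if last_value is not None and abs(value - last_value) / abs(last_value) > drift_limit:
        findings.append(f"{item}: changed more than {drift_limit:.0%} vs last test ({last_value})")
    return findings or [f"{item}: qualified"]

report = {"insulation_resistance": 8000}
print(review_item(report, "insulation_resistance", lo=10000, hi=1e6, last_value=12000))
```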
Preferably, the analysis method used for the intelligent analysis result is exploratory data analysis, qualitative data analysis, offline data analysis or online data analysis.
Data analysis means analysing a large amount of collected data with appropriate statistical and analytical methods, and summarizing, understanding and digesting the data, so as to develop the function of the data to the maximum and bring its role into play. Data analysis is the process of studying and summarizing data in detail in order to extract useful information and form conclusions.
Data, also called observations, are the results of experiments, measurements, observations, surveys and so on. The data processed in data analysis are divided into qualitative data and quantitative data. Data that only fall into categories and cannot be measured numerically are called qualitative data. Qualitative data expressed as categories without an order are categorical data, such as gender or brand; qualitative data expressed as categories with an order are ordinal data, such as educational attainment or the quality grade of goods.
1) Type of data analysis
(1) Exploratory data analysis: exploratory data analysis refers to methods of analysing data in order to form hypotheses worth testing; it complements the traditional approach of statistical hypothesis testing. The term was coined by the famous American statistician John Tukey.
(2) Qualitative data analysis: qualitative data analysis, also known as "qualitative research" or "qualitative research data analysis", refers to the analysis of non-numerical data (materials) such as words, photographs and observation results.
(3) And (3) offline data analysis: offline data analysis is used for more complex and time-consuming data analysis and processing, and is generally built on a cloud computing platform, such as an open-source HDFS file system and a MapReduce operation framework. The Hadoop cluster comprises hundreds or even thousands of servers, stores PB or even tens of PB data, runs thousands of offline data analysis jobs every day, processes hundreds of MB to hundreds of TB or even more data for each job, and has a running time of several minutes, hours, days or even longer.
(4) And (3) online data analysis: online data analysis, also known as online analytical processing, is used to process a user's online requests and has a relatively high demand for response time (typically no more than a few seconds). In contrast to offline data analysis, online data analysis can process a user's request in real time, allowing the user to change the constraints and limitations of the analysis at any time. Online data analysis can handle much smaller amounts of data than offline data analysis, but with advances in technology, current online analysis systems have been able to handle tens of millions or even hundreds of millions of records in real time. The traditional online data analysis system is built on a data warehouse taking a relational database as a core, and the online big data analysis system is built on a NoSQL system of a cloud computing platform. If online analysis and processing of big data are not available, huge internet web pages cannot be stored and indexed, so that an existing efficient search engine cannot be provided, and the vigorous development of microblogs, blogs, social networks and the like built on the basis of big data processing cannot be realized.
2) Step of data analysis
The data analysis has an extremely wide application range. A typical data analysis may comprise the following three steps:
1) exploratory data analysis: when the data is just obtained, the data may be disordered and the regularity cannot be seen, and possible forms of regularity are explored by means of drawing, tabulation, equation fitting with various forms, calculation of certain characteristic quantities and the like, namely in what direction and in what way to search and reveal the regularity hidden in the data.
2) And (3) model selection and analysis, wherein one or more types of possible models are proposed on the basis of exploratory analysis, and then certain models are selected through further analysis.
3) And (3) inference analysis: inferences are typically made regarding the degree of reliability and accuracy of a determined model or estimate using mathematical statistical methods.
The main activities of the data analysis process consist of identifying information requirements, collecting data, analyzing data, evaluating and improving the effectiveness of the data analysis.
Identifying requirements: identifying information requirements is the primary condition for ensuring the effectiveness of the data analysis process and provides clear targets for collecting and analysing data. Identifying information requirements is the responsibility of the manager, who should state the demand for information based on the needs of decision making and process control. In terms of process control, the manager should identify the information needed to review process inputs, process outputs, the rationality of resource allocation, optimization schemes for process activities, and the discovery of abnormal process variations.
Collecting data: purposeful data collection is the basis for ensuring that the data analysis process is effective. Organizations need to plan the content, channels, methods of collecting data. The planning should consider:
First, convert the identified requirements into specific ones; when evaluating a supplier, for example, the data to be collected may include process capability, the uncertainty of the measurement system and other related data;
Second, make clear who collects the data, where, and through which channels and methods;
Third, design record forms that are easy to use; fourth, take effective measures to prevent data loss and the interference of false data with the system.
Preferably, the intelligent strong-feature matching method adopts structure matching and semantic matching, exact matching and approximate matching, static graph matching and dynamic graph matching, and optimal algorithms and approximation algorithms.
1) Structure matching and semantic matching
The graph matching problem is classified into semantic matching and structure matching according to whether graph data contains semantic information on nodes and edges.
Structure matching mainly ensures that the matched nodes have the same connection structure. Representative algorithms include the Ullman algorithm, first proposed in 1976, and later improvements on it such as VF2, QuickSI, GraphQL and SPath.
In semantic matching, the nodes and edges of the data graph carry rich semantic information, and the matching result must be consistent with the pattern graph in both structure and semantics. Current research mainly targets such matching problems, for example the classical GraphGrep algorithm.
A semantic matching algorithm can be formed by introducing semantic constraints on nodes and edges into an existing structure matching algorithm; alternatively, algorithms such as GraphGrep achieve fast matching of semantic graphs by designing index features based on semantic information.
2) Exact and approximate match
Exact matching means that the matching result is completely consistent with the pattern graph in both structure and attributes; this mode is mainly applied in fields with high requirements on the accuracy of the matching result (both the structure matching and the semantic matching described above fall within this category).
Approximate matching is a matching algorithm that tolerates noise and errors in the results. Representative approximate matching algorithms include SUBDUE and LAW, which measure the similarity of two graphs mainly by defining an edit distance, a maximum common subgraph, a minimum common supergraph, and the like. (A small sketch contrasting exact and approximate matching is given after item 4) below.)
3) Static graph matching and dynamic graph matching
Static graph matching requires that the data graphs do not change over time. A matching algorithm generally analyzes and mines all data graphs, extracts effective features according to the data characteristics and builds indexes on them, thereby improving matching efficiency. Representative algorithms include GIndex, Tree+Delta and FG-Index.
Dynamic graph matching mainly adopts incremental processing: only the updated part of the data graph is analyzed, simple and discriminative features are selected to build indexes, and approximation algorithms are used to improve matching speed. Dynamic graph matching is still at an early stage.
4) Optimization algorithm and approximation algorithm
An optimal algorithm guarantees that the matching result is completely accurate.
Approximation algorithms (distinct from approximate matching) are usually based on mathematical models such as probability and statistics. Their advantage is polynomial time complexity, which makes them well suited to matching problems such as dynamic graph matching, where real-time performance matters and only a certain level of accuracy is required.
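The patent does not prescribe a particular toolkit; purely as an illustration, the following minimal sketch contrasts exact matching with semantic (attribute) constraints and approximate matching via edit distance, using the networkx library, whose GraphMatcher implements the VF2 algorithm named above. The equipment labels are hypothetical examples.

```python
# A minimal sketch (not from the patent) contrasting exact and approximate
# graph matching with networkx; node labels stand in for semantic information.
import networkx as nx
from networkx.algorithms import isomorphism

# Data graph: nodes carry semantic labels (illustrative equipment types).
data_graph = nx.Graph()
data_graph.add_nodes_from([
    (1, {"label": "transformer"}),
    (2, {"label": "bushing"}),
    (3, {"label": "breaker"}),
])
data_graph.add_edges_from([(1, 2), (1, 3)])

# Pattern graph to be located inside the data graph.
pattern = nx.Graph()
pattern.add_nodes_from([("a", {"label": "transformer"}), ("b", {"label": "bushing"})])
pattern.add_edge("a", "b")

# Exact matching: structure and node semantics must both agree (VF2-based).
matcher = isomorphism.GraphMatcher(
    data_graph, pattern,
    node_match=isomorphism.categorical_node_match("label", None),
)
print("exact subgraph match:", matcher.subgraph_is_isomorphic())

# Approximate matching: graph edit distance tolerates noise and differences.
print("edit distance:", nx.graph_edit_distance(data_graph, pattern))
```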
Preferably, the natural language processing method draws on computer science, artificial intelligence and linguistics, and concerns the interaction between computers and human (natural) language. Natural Language Processing (NLP) is a sub-field of Artificial Intelligence (AI). The main research directions of NLP include: information extraction, text generation, question answering systems, dialogue systems, text mining, speech recognition, speech synthesis, public opinion analysis, machine translation, and the like. The general NLP processing flow mainly includes:
1) obtaining corpora
A corpus is the raw material of an NLP task; a text collection is usually used as the corpus (Corpus), obtained from existing data, public data sets, crawler capture and other means.
2) Data pre-processing
The corpus preprocessing mainly comprises the following steps:
(1) Corpus cleaning: keep useful data and delete noise data; common cleaning operations include manual deduplication, alignment, deletion, labeling, and the like.
(2) Word segmentation: text is segmented into words, such as by rule-based, statistical-based segmentation methods.
(3) Part-of-speech tagging: words are labeled with word-class tags such as noun, verb or adjective; common part-of-speech tagging methods are rule-based and statistics-based algorithms, for example maximum entropy part-of-speech tagging and HMM part-of-speech tagging.
(4) Stop-word removal: words that contribute nothing to the text features are removed, for example punctuation marks, modal particles, "is", and the like.
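As an illustration only, a minimal preprocessing sketch for steps (2)-(4) might use the jieba segmenter (an assumed choice, not named in the patent); the power-domain dictionary path, the stopword list and the example sentence are hypothetical.

```python
# A minimal corpus-preprocessing sketch: segmentation plus stop-word removal.
import jieba

# jieba.load_userdict("power_dict.txt")  # optional domain dictionary (hypothetical path)

STOPWORDS = {"的", "了", "是", "，", "。"}   # toy stopword list for illustration

def preprocess(text):
    """Segment text into words and drop stopwords and empty tokens."""
    tokens = jieba.lcut(text)
    return [t for t in tokens if t.strip() and t not in STOPWORDS]

print(preprocess("主变压器套管介质损耗超标，存在放电缺陷。"))
```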
3) Feature engineering
The main work of this step is to represent the segmented words in a form a computer can compute on, generally vectors. Commonly used representation models are: the bag-of-words model (Bag of Words, BOW), for example the TF-IDF algorithm; and word vectors, for example one-hot encoding and the word2vec algorithm.
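The following sketch shows, under assumed tooling (scikit-learn for TF-IDF, gensim for word2vec), the two representation families just listed; the toy documents are hypothetical.

```python
# TF-IDF bag-of-words vs. word2vec word vectors on a tiny illustrative corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from gensim.models import Word2Vec

docs = ["bushing dielectric loss high", "breaker contact resistance normal"]

# Bag-of-words / TF-IDF: each document becomes a sparse weight vector.
tfidf = TfidfVectorizer()
X = tfidf.fit_transform(docs)
print(X.shape, tfidf.get_feature_names_out())

# word2vec: each word becomes a dense vector learned from co-occurrence.
w2v = Word2Vec([d.split() for d in docs], vector_size=50, window=2, min_count=1)
print(w2v.wv["bushing"][:5])
```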
4) Feature selection
Feature selection starts from the features obtained in the feature engineering step and selects suitable features with strong expressive power. Common feature selection methods include DF, MI, IG, WFO, and the like.
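As one possible illustration of filter-style feature selection, the sketch below keeps the k features with the highest mutual information (the MI criterion mentioned above); the synthetic data are hypothetical.

```python
# Keep the two features most informative about the label (mutual information).
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # only the first two features matter

selector = SelectKBest(mutual_info_classif, k=2).fit(X, y)
print("selected feature indices:", selector.get_support(indices=True))
```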
5) Model selection
After the features are selected, a model must be chosen for training. Common machine learning models include KNN, SVM, Naive Bayes, decision trees and K-means; common deep learning models include RNN, CNN, LSTM, Seq2Seq, FastText and TextCNN.
6) Model training
After the model is selected, model training is carried out, including model fine-tuning. During training, attention should be paid to underfitting, where the model cannot fit the data well, and overfitting, where the model fits the training data well but generalizes poorly to new data. The problems of vanishing gradients and exploding gradients should also be prevented.
7) Model evaluation
The main evaluation indexes of the model include: error rate, accuracy, recall, F1 score, ROC curve, AUC, and the like.
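A compact sketch of steps 5)-7) under assumed tooling (scikit-learn) follows: choose a model, train it, and report the usual metrics. The labelled texts are toy placeholders; in the system they would be defect or test-report sentences.

```python
# Model selection, training and evaluation on a tiny illustrative text dataset.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

texts = ["oil leakage at valve", "dielectric loss exceeds limit",
         "reading normal", "no abnormality found"] * 10
labels = ["defect", "defect", "normal", "normal"] * 10

X_tr, X_te, y_tr, y_te = train_test_split(texts, labels, test_size=0.25, random_state=0)

model = make_pipeline(TfidfVectorizer(), LinearSVC())    # model selection
model.fit(X_tr, y_tr)                                    # model training
print(classification_report(y_te, model.predict(X_te)))  # model evaluation
```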
8) Put into production and come on line
There are two main ways to put a model into production: one is to train the model offline and then deploy it online to provide a service; the other is to train the model online and persist it (for example via pickle) after training completes, so that it can serve external requests.
The implementation method of the test data intelligent analysis module comprises the following steps: based on the constructed standard test database, and combining regression analysis, clustering and association analysis algorithms, the test report data of all test equipment across the provincial power grid company are compared with each other and subjected to cluster analysis, the problems of plans and test reports are analyzed globally, and common patterns in the test data are studied by city, unit, equipment and test type.
The intelligent analysis method for the test data based on the data mining comprises the following specific steps:
step 1), determining a test report version corresponding to the test equipment: finding out a test report corresponding to each test device by taking the test devices as dimensions, analyzing versions of the test reports, and finally determining that the test devices have a plurality of test versions;
step 2), determining the test items in the test report: after the test report versions corresponding to the test equipment are determined (for example, the main transformer has 3 preventive test reports in total), the specific test items in each test report are analyzed, and the intersection of the test items is obtained through intelligent analysis; for example, suppose 6 common items exist in all the preventive test reports corresponding to 500 kV main transformers;
step 3), determining the test parameters in the test items: according to the test items determined in step 2), the intersection of the test parameters in each test item is obtained through intelligent analysis; for example, assuming that in all the preventive test reports corresponding to 500 kV main transformers the test parameters such as the capacitance and tan δ of the capacitive bushing are the same, the intersection can be determined to be the 60 test parameters of the preventive test (electrical part) items of the 500 kV oil-immersed power transformer;
step 4), merging and configuring test parameters in the test items: according to the test parameters determined in the step 3), the test parameters can be combined and configured, and only the parameters subjected to combined configuration can be subjected to mutual comparison and cluster analysis;
step 5), analyzing the merged test parameters: according to the merged configuration parameters determined in step 4), the configuration parameters are compared with each other and cluster-analysed through intelligent algorithms from two dimensions (qualified test reports and unqualified test reports), and the comparison data are displayed visually (see the sketch after step 7 below);
step 6), global analysis and display of the test plan and the test report: the problems of the test plan and the test report are analysed globally, common patterns in the test data are studied by city, unit, equipment and test type, and the common patterns are displayed visually;
step 7), analyzing and displaying online monitoring data: the online monitoring data are displayed per equipment in the form of a list or a trend graph.
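Purely as an illustration of steps 4)-5), the sketch below cluster-analyses merged test parameters across devices so that devices falling into a minority cluster can be flagged for review; the column names, device identifiers and values are hypothetical.

```python
# Cross-comparison and cluster analysis of merged test parameters (illustrative).
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

reports = pd.DataFrame({
    "device":      ["T1", "T2", "T3", "T4"],
    "capacitance": [512.0, 508.0, 515.0, 740.0],   # pF (toy values)
    "tan_delta":   [0.25, 0.27, 0.24, 0.80],       # %  (toy values)
})

X = StandardScaler().fit_transform(reports[["capacitance", "tan_delta"]])
reports["cluster"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Devices in the minority cluster are candidates for closer review.
print(reports.sort_values("cluster"))
```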
The regression analysis algorithm technology comprises the following steps:
regression analysis is a statistical analysis method for determining the quantitative relationship of interdependence between two or more variables. In big data analysis, it is a predictive modeling technique that studies a regression model between a dependent variable y (target) and an independent variable x (predictor) that affects it, thereby predicting the development trend of the dependent variable y. When there are a plurality of independent variables, the influence strength of each independent variable x on the dependent variable y can be studied.
1) Linear Regression
Linear regression, also known as least-squares regression, is usually one of the first techniques chosen when learning predictive modeling. In this technique the dependent variable is continuous, the independent variables may be continuous or discrete, and the regression line is linear in nature.
2) Polynomial Regression
Different data distributions are encountered when analyzing data. When the data points lie roughly in a band, a linear regression method can be chosen for fitting; but when the data points follow a curve, linear regression fits poorly, and a polynomial regression method can be used instead. A polynomial regression model is a regression model obtained by fitting the data with a polynomial.
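A minimal sketch of this idea, assuming scikit-learn and synthetic data: a polynomial feature expansion lets a linear model fit a curved relationship.

```python
# Polynomial regression: degree-2 feature expansion followed by linear regression.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

x = np.linspace(-3, 3, 50).reshape(-1, 1)
y = 0.5 * x.ravel() ** 2 - x.ravel() + np.random.default_rng(0).normal(0, 0.2, 50)

model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(x, y)
print("R^2:", model.score(x, y))
```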
3) Stepwise Regression
We can use this form of regression when dealing with multiple independent variables. The goal of this modeling technique is to maximize predictive power with the fewest predictor variables. Variable selection in stepwise regression involves two basic steps: first, removing from the regression model variables that tests show to be insignificant, and second, introducing new variables into the regression model. Common stepwise regression methods include the forward method and the backward method.
4) Ridge Regression
Ridge regression is an important improvement of linear regression that increases tolerance to error. If the data matrix exhibits multicollinearity (mathematically, an ill-conditioned matrix), ordinary linear regression is very sensitive to noise in the input variables: a slight change in the input variable x produces a very large change in the output, and the solution is very unstable. Ridge regression is an optimization approach to this problem: it alleviates these issues by imposing a penalty on the size of the coefficients.
5) Lasso Regression
Lasso regression is similar to ridge regression but penalizes the absolute values of the regression coefficients; it can reduce variability and improve the accuracy of the linear regression model. Unlike ridge regression, it uses absolute values rather than squares in the penalty term. The penalty (the sum of absolute values used to constrain the estimates) can drive some parameter estimates exactly to zero; the larger the penalty, the closer the estimates are shrunk towards zero.
6) ElasticNet Regression
ElasticNet is a hybrid of the Lasso and Ridge regression techniques. Ridge regression applies a bias to the cost function using the L2 norm (a squared term), while Lasso regression uses the L1 norm (an absolute-value term). ElasticNet combines the two, using both a squared term and an absolute-value term.
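The sketch below compares the three penalized variants on the same synthetic data (scikit-learn assumed; alpha and l1_ratio values are illustrative, not tuned); it also shows Lasso driving uninformative coefficients to zero, as described above.

```python
# Ridge (L2), Lasso (L1) and ElasticNet (L1+L2) on data with two informative features.
import numpy as np
from sklearn.linear_model import Ridge, Lasso, ElasticNet

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(0, 0.1, 100)   # only 2 features matter

for model in (Ridge(alpha=1.0), Lasso(alpha=0.1), ElasticNet(alpha=0.1, l1_ratio=0.5)):
    model.fit(X, y)
    print(type(model).__name__, np.round(model.coef_, 2))
```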
7) Bayesian Regression
Bayesian regression can perform parameter regularization during the estimation phase: the regularization parameters are not chosen by hand but are adapted to the data during fitting.
8) Robust Regression
When the least-squares method encounters data containing outliers, Robust regression can be used in its place. Robust regression can also be used for outlier detection, or to find the sample points that have the greatest influence on the model.
9) Random forest regression
Random forests can be applied to both classification and regression problems, depending on whether each CART tree in the forest is a classification tree or a regression tree. For regression, each CART tree is a regression tree, and the splitting criterion used is the minimum mean squared error.
10) SVR support vector regression
SVR seeks a regression hyperplane such that all the data in a set lie as close to the plane as possible. Since the data cannot all lie exactly on the regression plane, the sum of distances would still be large, so a tolerance value can be allowed for the distance of each data point to the plane, which also helps prevent overfitting. This tolerance is an empirical parameter and must be set manually.
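A minimal SVR sketch under assumed tooling (scikit-learn) follows; epsilon is the manually chosen tolerance band just described, and the sine-shaped data are synthetic.

```python
# Support vector regression with an explicit epsilon tolerance band.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(0, 5, size=(80, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.05, 80)

svr = SVR(kernel="rbf", C=10.0, epsilon=0.1)   # epsilon: empirical tolerance value
svr.fit(X, y)
print("support vectors:", len(svr.support_), "R^2:", round(svr.score(X, y), 3))
```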
11) Decision Tree regression
A decision tree model is a tree structure applied to classification and regression. A decision tree is composed of nodes and directed edges; in general it contains one root node, several internal nodes and several leaf nodes. The decision process starts from the root node, compares the data to be tested with the feature node, chooses the next branch according to the comparison result, and continues until a leaf node gives the final decision result.
12) Poisson Regression
Poisson regression is used to describe the frequency with which an event occurs per unit of time, area or volume, and is typically used to describe the distribution of counts of rare (i.e. small-probability) events.
The clustering analysis algorithm technology comprises the following steps:
There is a popular metaphor for cluster analysis: "like objects cluster together, and people divide into groups." Given several specific business indicators, the population of observed objects can be divided into different groups according to their similarity and dissimilarity. After the division, the similarity between objects within each group is high, while objects in different groups are highly dissimilar to each other.
On one hand, clustering is a modeling technique in its own right, and the results of effective clustering can often directly guide practical applications; on the other hand, clustering is often used as a tool for background exploration of the data, data cleaning and data preparation (data transformation) in the early stages of data analysis, and in practice it is applied in many diverse ways.
1) Typical application scenario of cluster analysis
Typical application scenarios of cluster analysis are very common, and business teams encounter them almost daily. For example, paying users can be cluster-analysed according to several specific characteristics, such as profit contribution, user age and number of top-ups, to obtain groups with different characteristics.
For example: after cluster analysis of paying users, one group accounts for 40% of the paying population and is characterized by users around 25 years old who contribute little profit but top up frequently; another group accounts for 15% of the total amount paid and is characterized by users over 40 years old who contribute a large share of profit but do not top up often.
2) Primary clustering algorithm classification
partitioning methods (Partitioning Method);
hierarchical methods (Hierarchical methods);
density-based methods (Density-based methods);
grid-based methods;
model-based Method (Model-based Method)
(1) Method of Partitioning (Partitioning Method)
Given a data set of m objects and the desired number of groups K, a partitioning method divides the objects into K groups (with K not exceeding m) such that the objects within each group are as similar as possible while objects in different groups are clearly distinct. The most commonly used method is K-Means, whose principle is:
Step 1: randomly select K objects, each of which initially represents the mean or center of one group;
Step 2: assign each of the remaining objects to the closest (most similar) group according to its distance to each group's mean or center value;
Step 3: recalculate the new mean of each group;
Step 4: repeat steps 2 and 3 until all objects are assigned to their closest group in the K-group partition.
(2) Hierarchical Method (Hierarchical Method)
The most similar data objects are merged pairwise in turn; through repeated merging, a hierarchy of clusters is finally formed.
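As an illustration of the two families just described, the sketch below runs a partitioning method (K-Means) and a hierarchical method (agglomerative clustering) on the same synthetic user data; the feature meanings echo the paying-user example above and are hypothetical.

```python
# Partitioning (K-Means) vs. hierarchical (agglomerative) clustering on toy data.
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering

rng = np.random.default_rng(0)
# Two synthetic user groups, features: (age, top-up count, profit contribution).
X = np.vstack([rng.normal([25, 20, 1], 1, (50, 3)),
               rng.normal([42, 5, 8], 1, (50, 3))])

print("k-means group sizes:",
      np.bincount(KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)))
print("agglomerative group sizes:",
      np.bincount(AgglomerativeClustering(n_clusters=2).fit_predict(X)))
```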
Association analysis algorithm technology:
association analysis is a simple and practical analysis technique that finds associations or correlations hidden in large data sets, describing the laws and patterns by which certain attributes appear together in an object.
Association analysis discovers interesting associations and connections between item sets in large amounts of data. A typical example is shopping-basket analysis, which analyzes customers' buying habits by discovering connections between the different items customers place in their shopping baskets. Discovering such associations helps retailers formulate marketing strategies by knowing which items are frequently purchased together. Other applications include price-list design, merchandise promotion, shelf arrangement and customer segmentation based on purchasing patterns.
Association analysis over a database yields rules of the form "the occurrence of some events leads to the occurrence of others". For example, if 67% of customers who buy beer also buy diapers at the same time, the supermarket can improve its service quality and profit through reasonable shelf placement or bundled sales of beer and diapers. Likewise, if students who do well in the "C language" course have an 88% likelihood of doing well in "data structures", teaching results can be improved by strengthening the teaching of "C language".
1) Apriori algorithm:
The Apriori algorithm is a basic algorithm for mining the frequent itemsets needed to generate Boolean association rules, and is one of the best-known association rule mining algorithms. The algorithm is named after the fact that it uses prior knowledge about the properties of frequent itemsets. It uses an iterative, level-wise search in which frequent k-itemsets are used to explore frequent (k+1)-itemsets. First, the set of frequent 1-itemsets, denoted L1, is found; L1 is used to find the set of frequent 2-itemsets L2, which is then used to find L3, and so on until no further frequent k-itemsets can be found. Finding each Lk requires one scan of the database.
To improve the efficiency of this level-wise generation of frequent itemsets, the Apriori algorithm exploits an important property, the Apriori property, to effectively reduce the search space of frequent itemsets.
Apriori property: every subset of a frequent itemset must also be frequent. By definition, if an itemset I does not meet the minimum support threshold min_sup, then I is not frequent, i.e. P(I) < min_sup. If an item a is added to the itemset I, the resulting itemset (I ∪ a) cannot occur more often in the transaction database than the original itemset I, so P(I ∪ a) < min_sup and (I ∪ a) is likewise not frequent. The Apriori property therefore holds by contraposition.
To address the shortcomings of the Apriori algorithm, the following optimizations have been proposed:
(1) Partition-based methods. The algorithm first divides the database logically into several mutually disjoint blocks, considers one block at a time and generates all frequent itemsets for that block, then merges the generated frequent itemsets to produce all candidate frequent itemsets, and finally computes the support of these itemsets. The block size is chosen so that each block fits into main memory, and each block needs to be scanned only once per phase. Correctness is guaranteed because every itemset that is frequent globally must be frequent in at least one block.
This approach is highly parallelizable: each block can be assigned to a processor to generate frequent itemsets, and after each round of frequent-itemset generation the processors communicate to produce the global candidate itemsets. Usually this communication is the main bottleneck in execution time; the time each independent processor spends generating frequent itemsets can also be a bottleneck. Other approaches share a hash tree among multiple processors to produce frequent itemsets, and further parallel methods for generating frequent itemsets can be found in the literature.
(2) Hash-based methods. Park et al. proposed a hash-based algorithm that generates frequent itemsets efficiently. Experiments show that the main computational cost of finding frequent itemsets lies in generating the frequent 2-itemsets L2; exploiting this observation, Park et al. introduced a hashing technique to improve the generation of frequent 2-itemsets.
(3) Sampling-based methods. Based on the information obtained in a previous scan, a detailed combinatorial analysis can be performed to obtain an improved algorithm. The basic idea is to derive rules that are probably valid for the entire database from a sample drawn from it, and then verify the results against the remainder of the database. The algorithm is quite simple and significantly reduces I/O cost, but a major drawback is that the results may be inaccurate because of so-called data skew: data stored on the same page are often highly correlated and do not represent the pattern distribution of the whole database, with the result that sampling 5% of the transaction data can cost almost as much as scanning the whole database.
(4) Reducing the number of transactions. The basic principle is that a transaction that does not contain any frequent itemset of length k cannot contain any frequent itemset of length k+1. Such transactions can therefore be deleted, reducing the number of transactions to be scanned in the next pass. This is the basic idea of AprioriTid.
2) FP-growth algorithm:
Even with these optimizations, the efficiency is still unsatisfactory because of the inherent shortcomings of the Apriori approach. In 2000, Jiawei Han et al. proposed FP-growth, an algorithm for finding frequent patterns based on a Frequent Pattern Tree (FP-Tree). In the FP-growth algorithm, the transaction database is scanned twice and the frequent items contained in each transaction are stored in the FP-Tree in compressed form, in descending order of support. In the subsequent search for frequent patterns the transaction database need not be scanned again: frequent patterns are generated directly by searching the FP-Tree and recursively calling the FP-growth procedure, so no candidate patterns need to be generated at all. The algorithm overcomes the problems of the Apriori algorithm and is also markedly better in execution efficiency.
The above description is only an embodiment of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present invention, and therefore, the scope of the present invention should be determined by the scope of the claims.

Claims (10)

1. An intelligent preventive test operation management and control system, characterized in that: the system comprises a standard test database module, an annual production plan intelligent supervision module, a test report intelligent diagnosis analysis module and a test data intelligent analysis module, wherein the standard test database module is used for establishing a standard data structure model, forming a data standard system and establishing a new standard test database; the annual production plan intelligent supervision module is used for carrying out intelligent analysis and accurate matching on the arrangement data of the production plan; the test report intelligent diagnosis analysis module is used for examining the normalization of the text content of the test report; and the test data intelligent analysis module is used for comparing test report data of the test equipment with each other and performing cluster analysis, analyzing problems of the plan and the test report globally, and studying common patterns in the test data by city, unit, equipment and test type.
2. The intelligent management and control system for preventive test work according to claim 1, wherein: the standard test database construction method of the standard test database module comprises the following steps:
step1: and (3) analyzing defect data: the defect data characteristics of the equipment are known through defect data analysis;
step2: constructing an equipment defect standard library according to the equipment defect data characteristics in the step1, and finishing standardized storage of defect data;
and step3: constructing a defect intelligent diagnosis model, and identifying the defect reasons and defect parts of the equipment through the defect intelligent diagnosis model to realize intelligent diagnosis of the equipment defects and classification of defect severity;
and 4, step 4: analyzing the defect diagnosis result, and recommending defect management measures;
and 5: constructing an equipment risk intelligent evaluation model based on the result obtained by analyzing the defect diagnosis result, and identifying the influence degree of the defect on the equipment risk;
step 6: and classifying the risk grade according to the influence degree of the equipment risk.
3. The intelligent preventive test operation management and control system according to claim 2, wherein the defect data analysis comprises: respectively analyzing the number of equipment defects in different years, the numbers of different defect types and the numbers of defects per manufacturer, sorting these counts, and obtaining the years with the most failures, the most frequent failure types and the manufacturers with the most failures.
4. The intelligent preventive test operation management and control system according to claim 2, wherein: the method for constructing the equipment defect standard library in the step2 comprises the following steps:
a) collecting defect data, wherein data sources for collecting the defect data comprise historical defect reports, defect record data, equipment operation data, equipment test data and equipment online monitoring data, and field names and field contents of a defect record data table of a defect classification standard library are obtained by analyzing the data sources;
b) cleaning and de-duplicating the defect data: removing duplicates where two or more identical defect records exist, and cleaning missing defect data, garbled characters, spaces in the defect data, full-width versus half-width characters and inconsistent letter case in the collected data;
c) manual labeling: performing text analysis and manual labeling of the defect appearance, defect part, defect cause and treatment measures according to the historical defect reports, finally obtaining the equipment defect standard library.
5. The intelligent preventive test operation management and control system according to claim 4, wherein: the defect entry data includes fields: unit, voltage grade, defect grade, place, equipment name, defect type, defect description, professional category, manufacturer, factory year and month, equipment model, commissioning date, defect cause category, defect cause, defect representation, discovery time, defect part and treatment measure;
the device operation data contains the fields: voltage, three-phase unbalanced current, voltage class;
equipment online monitoring data: dielectric loss, equivalent capacitance, reference voltage alarm, three-phase unbalanced current alarm, dielectric loss alarm, full current alarm, equivalent capacitance alarm, monitoring equipment communication state, monitoring equipment running state, equipment self-checking abnormity, partial discharge and iron core current;
the equipment test data contains the fields: infrared imaging temperature measurement, gas in a gas chamber, contact loop resistance, pressure resistance of an outer insulating surface and gas decomposition product test values.
6. The intelligent preventive test operation management and control system according to claim 2, wherein: the method for constructing the intelligent defect diagnosis model in the step3 comprises the following steps: (1) defect diagnosis system: summarizing the device type, the defects and the parts of the corresponding devices and the defective parts corresponding to the defects to form a defect diagnosis system table; (2) and (3) a defect diagnosis model: a) according to the defect data record table, establishing an equipment defect diagnosis data index: including index name and index description content; b) text preprocessing: performing word segmentation processing on the defect description content, and obtaining word segmentation results of the electric power field according to the electric power field dictionary; c) text distributed representation: the text distributed expression method is based on the principle that the semanteme of a word is described by adjacent words, namely, a language model expressed by a word vector of each word is trained by taking a large number of preprocessed power equipment defects as a corpus, and each dimension of the word vector represents the semantic features of the word learned through the model; d) and (3) establishing a convolutional neural network: the intelligent diagnosis of the equipment defects mainly adopts a convolutional neural network algorithm, the processed defect index data is used as an input layer of the convolutional neural network, the defect texts of the vectorized word vectors in the step c) are classified through a classifier of the convolutional neural network, and corresponding classification results are output; e) model training: the model input variables are fields of defect representation, defect description, defect reason, equipment type, defect type and defect part, and the fields are learned by using a convolutional neural network algorithm to form a final equipment defect diagnosis model.
7. The intelligent preventive test operation management and control system according to claim 2, wherein: the evaluation method of the equipment risk intelligent evaluation model comprises the following steps:
(1) Risk factor analysis: equipment risk factors are obtained according to the influence factors of the equipment, namely aging factors, defect factors, state factors, main transformer alarm factors, thermal aging factors and fusion factors;
(2) analyzing the relevance of defect influence factors: correlation analysis is performed by calculating correlation coefficients of the equipment risk factors;
(3) constructing an equipment defect deduction rule base: 1) establishing a defect severity deduction rule base and giving a score T1 according to it; 2) formulating a defect frequency deduction rule, counting how often defects occur as typical, batch or repeated defects, and giving a score T2 according to the rule range; 3) formulating an equipment importance rule and, according to the equipment on which the defect occurs, giving a score T3 using the equipment importance deduction rule; 4) formulating a defect grade deduction rule and giving a corresponding score T4 according to the defect grade; 5) formulating a voltage grade deduction rule and giving a corresponding score T5 according to the voltage grade of the equipment on which the defect occurs; 6) formulating an equipment type deduction rule and giving a corresponding score T6 according to the importance of different equipment types; 7) according to the final defect evaluation score, giving the risk grade of the equipment, the risk grades being divided into four levels: normal, general, urgent and major;
(4) intelligent risk assessment: when evaluating the defect risk of the equipment, the deduction score indexes and the equipment risk factors are normalized to the same trend direction; after this processing the data are used as input parameters of the entropy method, a defect-based intelligent equipment risk evaluation model is constructed, the evaluation of the degree to which equipment defects affect equipment risk is completed, and an intelligent risk evaluation result is obtained.
8. The intelligent preventive test operation management and control system according to claim 1, wherein the implementation method of the annual production plan intelligent supervision module comprises: extracting the arrangement information of test plans and work orders from the 6+1 production management system, supervising annual production plans, and applying cross exploration, dimension integration and splitting methods, combined with intelligent reasoning algorithms, to intelligently analyze and accurately match the production plan arrangement data so as to realize plan supervision; the plan supervision covers three aspects: 1) supervising the consistency of the preventive test plan in advance by associating the production plan work order, the work ticket and the equipment test period; 2) supervising, according to the equipment test period, whether the compiled test plan exceeds the test period and the number of items falling due; 3) supervising, according to the equipment ledger and the equipment test period, whether the compiled test plan and the test objects have any omissions.
9. The intelligent preventive test operation management and control system according to claim 1, wherein the implementation method of the test report intelligent diagnosis analysis module comprises: performing strong-feature intelligent pairing, extraction and analysis between the test report and the test management regulations, and combining vocabulary normalization, named entity recognition and a standardized data dictionary from natural language processing to perform keyword extraction, hierarchical classification and accurate reasoning, thereby examining the normalization of the text content of the test report and judging whether items are missing and whether the examined values meet the qualification interval criteria.
10. The intelligent management and control system for preventive test work according to claim 1, wherein: the implementation method of the test data intelligent analysis module comprises the following specific steps:
step 1), determining a test report version corresponding to the test equipment: finding out a test report corresponding to each test device by taking the test devices as dimensions, analyzing versions of the test reports, and finally determining that the test devices have a plurality of test versions;
step 2), determining the test items in the test report: after the test report version corresponding to the test equipment is determined, analyzing specific test items in the test reports according to each test report, and obtaining intersection of the test items through intelligent analysis;
step 3), determining test parameters in the test items: obtaining intersection of test parameters in each test item through intelligent analysis according to the test items determined in the step 2);
step 4), merging and configuring test parameters in the test items: merging and configuring the test parameters according to the test parameters determined in the step 3), and performing mutual comparison and cluster analysis on the merged and configured parameters;
step 5), analyzing the merged test parameters: according to the merged configuration parameters determined in step 4), starting from the two dimensions of qualified test reports and unqualified test reports, the configuration parameters are compared with each other and cluster-analysed through intelligent regression analysis, clustering and association analysis algorithms, and the comparison data are displayed visually;
step 6), global analysis and display of the test plan and the test report: the problems of the test plan and the test report are analysed globally, common patterns in the test data are studied by city, unit, equipment and test type, and the common patterns are displayed visually;
step 7), analyzing and displaying online monitoring data: the online monitoring data are displayed per equipment in the form of a list or a trend graph.
CN202110747608.0A 2021-07-02 2021-07-02 Intelligent preventive test operation management and control system Active CN113379313B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110747608.0A CN113379313B (en) 2021-07-02 2021-07-02 Intelligent preventive test operation management and control system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110747608.0A CN113379313B (en) 2021-07-02 2021-07-02 Intelligent preventive test operation management and control system

Publications (2)

Publication Number Publication Date
CN113379313A true CN113379313A (en) 2021-09-10
CN113379313B CN113379313B (en) 2023-06-20

Family

ID=77580745

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110747608.0A Active CN113379313B (en) 2021-07-02 2021-07-02 Intelligent preventive test operation management and control system

Country Status (1)

Country Link
CN (1) CN113379313B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113934172A (en) * 2021-10-20 2022-01-14 四川汉唐江电力有限公司 Intelligent information management system for preventive test of power equipment facing mobile terminal
CN113947377A (en) * 2021-10-22 2022-01-18 浙江正泰仪器仪表有限责任公司 Laboratory management system
CN114722973A (en) * 2022-06-07 2022-07-08 江苏华程工业制管股份有限公司 Defect detection method and system for steel pipe heat treatment


Patent Citations (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040023689A1 (en) * 2002-08-02 2004-02-05 Nokia Corporation Method for arranging SIM facility to digital wireless terminal equipment and corresponding terminal equipment and server
CN101859409A (en) * 2010-05-25 2010-10-13 广西电网公司电力科学研究院 Power transmission and transformation equipment state overhauling system based on risk evaluation
US20150240986A1 (en) * 2012-10-05 2015-08-27 Lono Manfrotto + Co. S.P.A. Tripod for supporting video/photographic equipment
US20170209180A1 (en) * 2014-10-15 2017-07-27 Medicrea International Vertebral osteosynthesis equipment
CN104933477A (en) * 2015-06-05 2015-09-23 国网电力科学研究院武汉南瑞有限责任公司 Method for optimizing maintenance strategy by using risk assessment of power transmission and transformation equipment
CN105389302A (en) * 2015-10-19 2016-03-09 广东电网有限责任公司电网规划研究中心 Power grid design review index structure information identification method
CN106199305A (en) * 2016-07-01 2016-12-07 太原理工大学 Underground coal mine electric power system dry-type transformer insulation health state evaluation method
CN107180267A (en) * 2017-06-01 2017-09-19 国家电网公司 A kind of familial defect diagnostic method of secondary operation management system
CN107491381A (en) * 2017-07-04 2017-12-19 广西电网有限责任公司电力科学研究院 A kind of equipment condition monitoring quality of data evaluating system
CN108051711A (en) * 2017-12-05 2018-05-18 国网浙江省电力公司检修分公司 Solid insulation surface defect diagnostic method based on state Feature Mapping
CN108037133A (en) * 2017-12-27 2018-05-15 武汉市智勤创亿信息技术股份有限公司 A kind of power equipments defect intelligent identification Method and its system based on unmanned plane inspection image
CN108767851A (en) * 2018-06-14 2018-11-06 深圳供电局有限公司 A kind of substation's O&M intelligent operation command methods and system
CN108920609A (en) * 2018-06-28 2018-11-30 南方电网科学研究院有限责任公司 Electric power experimental data method for digging based on multi dimensional analysis
CN109490713A (en) * 2018-12-13 2019-03-19 中国电力科学研究院有限公司 A kind of method and system moving inspection and interactive diagnosis for cable run
CN110058103A (en) * 2019-05-23 2019-07-26 国电南京自动化股份有限公司 Intelligent transformer fault diagnosis system based on Vxworks platform
CN110837866A (en) * 2019-11-08 2020-02-25 国网新疆电力有限公司电力科学研究院 XGboost-based electric power secondary equipment defect degree evaluation method
CN111508603A (en) * 2019-11-26 2020-08-07 中国科学院苏州生物医学工程技术研究所 Birth defect prediction and risk assessment method and system based on machine learning and electronic equipment
CN111797146A (en) * 2020-07-20 2020-10-20 贵州电网有限责任公司电力科学研究院 Big data-based equipment defect correlation analysis method
CN112070720A (en) * 2020-08-11 2020-12-11 国网河北省电力有限公司保定供电分公司 Transformer substation equipment defect identification method based on deep learning model
CN112104083A (en) * 2020-09-17 2020-12-18 贵州电网有限责任公司 Power grid production command system based on situation awareness
CN112233193A (en) * 2020-09-30 2021-01-15 上海恒能泰企业管理有限公司 Power transformation equipment fault diagnosis method based on multispectral image processing
CN112528041A (en) * 2020-12-17 2021-03-19 贵州电网有限责任公司 Scheduling phrase specification verification method based on knowledge graph
CN112910089A (en) * 2021-01-25 2021-06-04 国网山东省电力公司青岛供电公司 Transformer substation secondary equipment fault logic visualization method and system
CN113162232A (en) * 2021-04-09 2021-07-23 北京智盟信通科技有限公司 Power transmission line equipment risk assessment and defense decision system and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang Tingyin; Lin Minggui; Chen Da; Wu Yunping: "Emergency communication method for nuclear radiation monitoring based on BeiDou RDSS" *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113934172A (en) * 2021-10-20 2022-01-14 四川汉唐江电力有限公司 Intelligent information management system for preventive test of power equipment facing mobile terminal
CN113934172B (en) * 2021-10-20 2024-05-28 四川汉唐江电力有限公司 Intelligent informatization management system for preventive test of power equipment for mobile terminal
CN113947377A (en) * 2021-10-22 2022-01-18 浙江正泰仪器仪表有限责任公司 Laboratory management system
CN114722973A (en) * 2022-06-07 2022-07-08 江苏华程工业制管股份有限公司 Defect detection method and system for steel pipe heat treatment

Also Published As

Publication number Publication date
CN113379313B (en) 2023-06-20

Similar Documents

Publication Publication Date Title
Chu et al. A global supply chain risk management framework: An application of text-mining to identify region-specific supply chain risks
Ahmed et al. Sentiment analysis of online food reviews using big data analytics
Wu et al. Effective crude oil price forecasting using new text-based and big-data-driven model
CN113379313B (en) Intelligent preventive test operation management and control system
Bhardwaj et al. Review of text mining techniques
CN112966259A (en) Power monitoring system operation and maintenance behavior security threat assessment method and equipment
Xu et al. Data-driven causal knowledge graph construction for root cause analysis in quality problem solving
Dong et al. Exploring the linear and nonlinear causality between internet big data and stock markets
Karaoğlu et al. Applications of machine learning in aircraft maintenance
ARMEL et al. Fraud detection using apache spark
Koh Design change prediction based on social media sentiment analysis
Al-Ghalibi et al. NLP based sentiment analysis for Twitter's opinion mining and visualization
Edris Abadi et al. A clustering approach for data quality results of research information systems
Hu et al. A classification model of power operation inspection defect texts based on graph convolutional network
Muir et al. Using Machine Learning to Improve Public Reporting on US Government Contracts
CN115034762A (en) Post recommendation method and device, storage medium, electronic equipment and product
CN113377746B (en) Test report database construction and intelligent diagnosis analysis system
Wang et al. Improving failures prediction by exploring weighted shape‐based time‐series clustering
Chang [Retracted] Evaluation Model of Enterprise Lean Management Effect Based on Data Mining
Zhu et al. A Text Classification Algorithm for Power Equipment Defects Based on Random Forest
Liang et al. Automatic Database Alignment Method to Improve Failure Data Quality
CN113378560B (en) Test report intelligent diagnosis analysis method based on natural language processing
Chigarev Why IEEE xplore matters for research trend analysis in the energy sector
Song et al. A new approach to risk assessment in failure mode and effect analysis based on engineering textual data
CN113378978A (en) Test data intelligent analysis method based on data mining

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant