CN113379313B - Intelligent preventive test operation management and control system - Google Patents

Intelligent preventive test operation management and control system

Info

Publication number
CN113379313B
CN113379313B (application CN202110747608.0A)
Authority
CN
China
Prior art keywords
test
equipment
defect
data
intelligent
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110747608.0A
Other languages
Chinese (zh)
Other versions
CN113379313A (en
Inventor
赵超
文屹
吕黔苏
张迅
王冕
黄军凯
范强
陈沛龙
李欣
吴建蓉
丁江桥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guizhou Power Grid Co Ltd
Original Assignee
Guizhou Power Grid Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guizhou Power Grid Co Ltd
Priority claimed from CN202110747608.0A
Publication of CN113379313A
Application granted
Publication of CN113379313B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval of structured data, e.g. relational data
    • G06F16/21 Design, administration or maintenance of databases
    • G06F16/215 Improving data quality; Data cleansing, e.g. de-duplication, removing invalid entries or correcting typographical errors
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/279 Recognition of textual entities
    • G06F40/284 Lexical analysis, e.g. tokenisation or collocates
    • G06F40/40 Processing or translation of natural language
    • G06F40/42 Data-driven translation
    • G06F40/44 Statistical methods, e.g. probability models
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06N5/00 Computing arrangements using knowledge-based models
    • G06N5/04 Inference or reasoning models
    • G06N5/042 Backward inferencing
    • G06N5/046 Forward inferencing; Production systems
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES
    • G06Q10/00 Administration; Management
    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 Operations research, analysis or management
    • G06Q10/0635 Risk analysis of enterprise or organisation activities
    • G06Q10/0639 Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/06 Electricity, gas or water supply

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Human Resources & Organizations (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Economics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Strategic Management (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Development Economics (AREA)
  • Quality & Reliability (AREA)
  • General Business, Economics & Management (AREA)
  • Tourism & Hospitality (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Marketing (AREA)
  • Educational Administration (AREA)
  • Operations Research (AREA)
  • Game Theory and Decision Science (AREA)
  • Databases & Information Systems (AREA)
  • Biophysics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Probability & Statistics with Applications (AREA)
  • Primary Health Care (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Water Supply & Treatment (AREA)
  • Public Health (AREA)
  • Bioinformatics & Computational Biology (AREA)

Abstract

The invention discloses an intelligent preventive test operation management and control system comprising a standard test database module, an annual production plan intelligent supervision module, a test report intelligent diagnosis and analysis module, and a test data intelligent analysis module. The standard test database module establishes a standard data structure model to form a data standard system and build a new standard test database; the annual production plan intelligent supervision module intelligently analyzes and accurately matches the scheduling data of the production plan; the test report intelligent diagnosis and analysis module checks the normativity of the text of test reports; the test data intelligent analysis module cross-compares and cluster-analyzes the test report data of the test equipment, analyzes the problems of plans and test reports globally, and studies common patterns in the test data by location, unit, equipment, and test type. This test data intelligent operation management and control system integrates planning, management, supervision, and analysis; it changes the traditional working mode, makes full use of existing technical means, greatly improves the efficiency and quality of test operation management, brings substantial change to overhaul test work, and comprehensively raises its level of informatization.

Description

Intelligent preventive test operation management and control system
Technical Field
The invention relates to the technical field of equipment risk assessment, in particular to an intelligent preventive test operation management and control system.
Background
Device defect diagnosis: in recent years, considerable research on power grid equipment defect diagnosis has been carried out at home and abroad. Some domestic scholars have focused on intelligent diagnosis of equipment defects based on structured data such as test data and operation data. For example, in 2019 the State Grid, in cooperation with a jiaotong university, developed a GIS switch defect diagnosis method based on a support vector machine (SVM) over radiated electric field characteristic parameters. The method comprises: 1. preprocessing the experimental data; 2. constructing a signal case knowledge base; 3. obtaining an SVM defect diagnosis model; 4. running the SVM defect diagnosis process. The transient radiated electric field produced during operation of a GIS disconnector is collected and processed to obtain the signal feature vector for which the SVM defect diagnosis model achieves the best recognition accuracy; that feature vector is fed into the model to obtain a classification result for the GIS disconnector, enabling judgment of the operating condition of GIS equipment and helping to ensure safe operation of the power grid.
The main limitation of this SVM-based GIS equipment defect diagnosis research is that the selected data source is single; although this approach can yield good research conclusions, it cannot be put into practical application.
At present, defect analysis research and practice based on big data mining has been applied in many countries, including the United States, Japan, the United Kingdom, and Germany, where reports on applications of the technology are available. Japan began condition-monitoring-based predictive maintenance in the 1980s; its power generation equipment maintenance institutes have focused on mining rule patterns from data, applying association analysis, cluster analysis, time series analysis, and similar techniques to defect analysis and life assessment of equipment during overhaul. The Electric Power Research Institute in the United States proposed reliability-centered maintenance together with a series of technical schemes and systems for maintenance optimization based on big data mining; these have been popularized and applied in multiple power plants with good results. Germany likewise actively adopts data mining technology to improve maintenance efficiency: in recent years it has studied power plant maintenance, carrying out condition-based maintenance with data mining techniques on top of its plant monitoring and diagnosis technology, and exploiting the potential of big data mining in equipment monitoring.
In view of these problems and the state of research, integrated data from multiple business domains are used for intelligent comprehensive diagnosis of primary-equipment defects, deepening analysis on the basis of existing research, grading the severity of primary-equipment defects, supporting the day-to-day work of business personnel, and improving their ability to resolve defects.
Device risk assessment: equipment risk assessment analyzes and judges equipment risk from the characteristics and changes of its influencing factors, accurately rates the risk level, reasonably predicts the development trend of defects or risks, and provides a basis for reducing equipment risk. Scientific research institutions, equipment operators, and manufacturers at home and abroad have carried out a great deal of related research and obtained rich results in evaluation methods and system construction. Intelligent evaluation methods include fuzzy comprehensive evaluation, rough set theory, neural networks, support vector machines, evidence theory, and expert systems. As for system construction, the State Grid Corporation and the Southern Power Grid successively issued a series of state evaluation and risk evaluation guides for power grid equipment starting in 2008. These research results and systems have strongly guaranteed the safe and reliable operation of primary power grid equipment.
However, because primary power grid equipment has a complex structure, a high level of integration, and a complex and changeable operating environment, and is often affected by adverse external working conditions and changes in the system dispatching mode, the difficulty of equipment risk assessment is greatly increased, mainly in the following respects:
1) Most existing risk assessment methods based on equipment test data draw on a single or limited data source, cannot comprehensively account for the combined influence of the equipment's internal factors on its risk, and leave the accuracy and pertinence of the assessment result to be improved.
2) Because defects and faults are low-probability events, the available defect and fault sample data cannot meet the modeling-sample requirements of intelligent evaluation methods, and the associations and evolution rules between state parameters and equipment risk are hard to obtain; key parameters of evaluation models are therefore mostly chosen by experience, severely restricting the accuracy of the results and the practicality of the methods.
3) Existing equipment risk assessment methods rely on manual judgment; their accuracy and efficiency urgently need improvement, which severely restricts the accuracy of equipment risk assessment.
Given these problems, a new risk assessment method needs to be explored and a risk assessment model established, so as to improve the accuracy of assessment results and achieve fine-grained assessment of equipment risk.
Disclosure of Invention
The invention aims to solve the technical problems that: the intelligent preventive test operation management and control system is provided to solve the technical problems in the prior art.
The technical scheme adopted by the invention is as follows: the intelligent preventive test operation management and control system comprises a standard test database module, an annual production plan intelligent supervision module, a test report intelligent diagnosis and analysis module, and a test data intelligent analysis module. The standard test database module establishes a standard data structure model to form a data standard system and build a new standard test database; the annual production plan intelligent supervision module intelligently analyzes and accurately matches the scheduling data of the production plan; the test report intelligent diagnosis and analysis module checks the normativity of the text of test reports; the test data intelligent analysis module cross-compares and cluster-analyzes the test report data of the test equipment, analyzes the problems of plans and test reports globally, and studies common patterns in the test data by location, unit, equipment, and test type.
The standard test database construction method of the standard test database module comprises the following steps:
Step 1: defect data analysis: analyze the defect data to understand the characteristics of the equipment defect data;
Step 2: construct an equipment defect standard library from the defect data characteristics of step 1, completing the standardized storage of defect data;
Step 3: construct an intelligent defect diagnosis model that identifies the defect cause and defect location, realizing intelligent diagnosis of equipment defects and grading of their severity;
Step 4: analyze the defect diagnosis result and recommend defect management measures;
Step 5: construct an intelligent equipment risk evaluation model based on the result of step 4, identifying the degree to which the defect affects equipment risk;
Step 6: classify the risk grade according to the degree of influence on equipment risk.
Defect data analysis: the numbers of equipment defects per year, per defect type, and per manufacturer are tallied separately; sorting these counts yields the year with the most faults, the most common fault type, and the manufacturer with the most faults.
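The tallying and sorting described above can be sketched in a few lines of Python; the record fields and values here are illustrative, not taken from the patent's data.

```python
from collections import Counter

def defect_statistics(records):
    """Count defects per year, type, and manufacturer, and return
    the top entry of each ranking (the 'defect data analysis' step)."""
    by_year = Counter(r["year"] for r in records)
    by_type = Counter(r["defect_type"] for r in records)
    by_maker = Counter(r["manufacturer"] for r in records)
    return {
        "worst_year": by_year.most_common(1)[0],
        "worst_type": by_type.most_common(1)[0],
        "worst_maker": by_maker.most_common(1)[0],
    }

# Invented sample records with the three fields the step needs.
records = [
    {"year": 2019, "defect_type": "oil leak", "manufacturer": "A"},
    {"year": 2019, "defect_type": "overheating", "manufacturer": "B"},
    {"year": 2020, "defect_type": "oil leak", "manufacturer": "A"},
    {"year": 2019, "defect_type": "oil leak", "manufacturer": "A"},
]
stats = defect_statistics(records)
```

Each entry of the result is a `(value, count)` pair, so the ranking and the count survive together for later reporting.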
The construction method of the equipment defect standard library in the step 2 comprises the following steps:
a) Defect data collection: the data sources comprise historical defect reports, defect record data, equipment operation data, equipment test data, and equipment on-line monitoring data; analyzing these sources yields the field names and field contents of the defect record table of the defect classification standard library;
b) Defect data cleaning and de-duplication: duplicate records (two or more entries for the same defect), missing defect data, garbled characters, stray whitespace, full-width/half-width character variants, and inconsistent English letter case are cleaned up;
c) Manual labeling: defect phenomena, defect locations, defect causes, and treatment measures are manually annotated from the historical defect reports, finally yielding the equipment defect standard library.
The defect record data contains fields: units, voltage levels, defect levels, places, equipment names, defect types, defect descriptions, major classes, manufacturers, factory years and months, equipment models, commissioning dates, defect cause types, defect causes, defect appearances, discovery time, defect parts and treatment measures;
the device operation data contains fields: voltage, three-phase unbalanced current, voltage class;
The equipment on-line monitoring data contains fields: dielectric loss, equivalent capacitance, reference voltage alarm, three-phase unbalanced current alarm, dielectric loss alarm, full current alarm, equivalent capacitance alarm, monitoring equipment communication state, monitoring equipment operation state, equipment self-checking abnormality, partial discharge and iron core current;
The device test data contains fields: infrared imaging temperature measurement, gas in a gas chamber, contact loop resistance, external insulation surface withstand voltage and gas decomposition product test value.
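A minimal sketch of the cleaning and de-duplication of step b), assuming Unicode NFKC normalization is an acceptable way to fold full-width characters to half-width; the sample rows are invented.

```python
import unicodedata

def clean_record(text):
    """Normalize one defect-description field: full-width to half-width
    (NFKC), collapse stray whitespace, and lower-case letters."""
    text = unicodedata.normalize("NFKC", text)
    text = " ".join(text.split())
    return text.lower()

def deduplicate(records):
    """Drop records whose cleaned description is already present."""
    seen, out = set(), []
    for r in records:
        key = clean_record(r)
        if key not in seen:
            seen.add(key)
            out.append(key)
    return out

# Two rows describe the same defect, one with full-width letters.
rows = ["Ｏｉｌ  Leak at bushing", "oil leak at bushing", "SF6 pressure low"]
cleaned = deduplicate(rows)
```

NFKC handles the full-width/half-width conversion named in step b) in one call; a production cleaner would also handle missing values and garbled encodings per field.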
The method for constructing the intelligent defect diagnosis model in step 3 is as follows:
(1) Defect diagnosis system: the equipment types, the defects corresponding to each equipment type, and the parts at which each defect occurs are summarized to form a defect diagnosis system table.
(2) Defect diagnosis model:
a) Equipment defect diagnosis data indexes are established from the defect data record table, comprising index names and index descriptions;
b) Text preprocessing: the defect description content is word-segmented, using a power-domain dictionary to obtain segmentation results suited to the electric power field;
c) Distributed text representation: based on the principle that the semantics of a word are characterized by its neighboring words, the large body of preprocessed power equipment defect records is used as a corpus to train a language model in which each word is represented by a word vector; each dimension of the vector expresses a semantic feature of the word learned by the model;
d) Convolutional neural network construction: intelligent diagnosis of equipment defects mainly uses a convolutional neural network algorithm; the processed defect index data serve as the input layer, the defect text vectorized in step c) is classified by the network's classifier, and the corresponding classification result is output;
e) Model training: the model input variables are the defect phenomenon, defect description, defect cause, equipment category, defect type, and defect location fields; learning with the convolutional neural network algorithm yields the final equipment defect diagnosis model.
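As a rough illustration of steps c) and d), the following runs a single untrained forward pass of a one-dimensional text CNN over toy word vectors; the vocabulary, dimensions, and random weights are stand-ins, not the patent's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and 8-dimensional word vectors (stand-ins for the
# distributed representation trained in step c).
vocab = {"bushing": 0, "oil": 1, "leak": 2, "discharge": 3, "partial": 4}
embedding = rng.normal(size=(len(vocab), 8))

def text_cnn_forward(tokens, n_classes=3, kernel_width=2, n_filters=4):
    """Minimal forward pass of a 1-D text CNN: embed the tokens,
    convolve over adjacent word pairs, max-pool over time, classify."""
    x = embedding[[vocab[t] for t in tokens]]          # (seq_len, 8)
    W = rng.normal(size=(n_filters, kernel_width, 8))  # conv filters
    conv = np.array([
        [max((x[i:i + kernel_width] * W[f]).sum(), 0.0)  # ReLU
         for i in range(len(tokens) - kernel_width + 1)]
        for f in range(n_filters)
    ])                                                  # (filters, time)
    pooled = conv.max(axis=1)                           # max over time
    U = rng.normal(size=(n_classes, n_filters))         # linear classifier
    logits = U @ pooled
    return int(np.argmax(logits))                       # class index

pred = text_cnn_forward(["oil", "leak", "bushing"])
```

In the patent's model the filters and classifier would be trained on the labeled defect fields of step e) rather than drawn at random, and the class index would map to a defect cause or location.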
The assessment method of the equipment risk intelligent assessment model comprises the following steps:
(1) Risk factor analysis: equipment risk factors are obtained by dividing the influencing factors of the equipment: aging factors, defect factors, status factors, main transformer alarm factors, thermal aging factors, and fusion factors.
(2) Defect influence factor correlation analysis: correlation analysis is performed between the equipment risk factors by computing correlation coefficients.
(3) Construction of the equipment defect deduction rule base: 1) a defect-severity deduction rule base is established, and a score T1 is given according to it; 2) a defect-quantity deduction rule is set, the numbers of typical, batch, and repeatedly occurring defects are counted, and a score T2 is given according to the rule range; 3) an equipment-importance rule is formulated, and a score T3 is given by the equipment-importance deduction rule for the equipment on which the defect occurs; 4) a defect-level deduction rule gives a corresponding score T4 according to the defect level; 5) a voltage-class deduction rule gives a corresponding score T5 according to the voltage class of the defective equipment; 6) a device-type deduction rule gives a corresponding score T6 according to the importance of the device type; 7) from the final defect evaluation score, the risk grade of the equipment is given on four levels: normal, general, urgent, and major.
(4) Intelligent risk assessment: when evaluating the defect risk of equipment, the score indexes and the equipment risk factors are first made co-trending; once this data processing is complete they can be used as input parameters of the entropy weight method, a defect-based intelligent equipment risk evaluation model is constructed, the evaluation of the influence of equipment defects on equipment risk is completed, and an intelligent risk assessment result is obtained.
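The entropy weight method named in step (4) can be sketched as follows; the score matrix and its values are illustrative, not taken from the patent.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: X is an (m samples, n indicators) matrix
    of non-negative, co-trending scores (e.g. T1..T6 after alignment).
    Returns one objective weight per indicator."""
    X = np.asarray(X, dtype=float)
    m, n = X.shape
    P = X / X.sum(axis=0)                      # column-wise proportions
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log(P), 0.0)
    e = -(P * logP).sum(axis=0) / np.log(m)    # entropy per indicator
    d = 1.0 - e                                # degree of divergence
    return d / d.sum()

# Hypothetical scores for three indicators over four devices; the
# third column is constant, so it carries no information.
scores = [[80, 10, 5],
          [60, 30, 5],
          [90,  5, 5],
          [70, 20, 5]]
w = entropy_weights(scores)
risk = np.asarray(scores) @ w                  # weighted risk score per device
```

A constant indicator receives weight near zero, which is the point of the method: indicators that discriminate between devices dominate the final risk score, which is then mapped to the four grades by thresholds.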
The implementation method of the annual production plan intelligent supervision module is as follows: the annual production plan is supervised by extracting test plan and work ticket scheduling information from the "6+1" production management system; through cross-exploration, dimension integration, and splitting, combined with an intelligent reasoning algorithm, the production plan scheduling data are intelligently analyzed and accurately matched to realize plan supervision. Plan supervision covers three aspects: 1) checking the consistency of the preventive test plan by associating production plan work orders, work tickets, and the equipment test cycle; 2) monitoring, against the equipment test cycle, whether a compiled test plan is overdue and by how much; 3) checking, against the equipment ledgers and the equipment test cycle, whether the compiled test plan omits any test object.
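The second and third checks amount to a date comparison against each device's test cycle; the device names, cycle lengths, and dictionary layout below are assumptions for illustration.

```python
from datetime import date, timedelta

def supervise_plan(devices, planned):
    """Check a compiled test plan against each device's test cycle:
    report plans past the due date (and by how many days), and devices
    missing from the plan entirely."""
    overdue, missing = [], []
    for dev, (last_test, cycle_days) in devices.items():
        due = last_test + timedelta(days=cycle_days)
        if dev not in planned:
            missing.append(dev)            # omitted test object (check 3)
        elif planned[dev] > due:
            overdue.append((dev, (planned[dev] - due).days))  # check 2
    return overdue, missing

# Hypothetical ledger: device -> (last test date, test cycle in days).
devices = {"T1": (date(2020, 5, 1), 365), "T2": (date(2020, 6, 1), 365)}
planned = {"T1": date(2021, 6, 1)}         # T2 omitted from the plan
overdue, missing = supervise_plan(devices, planned)
```

In the described system the `devices` ledger and `planned` dates would come from the production management system interface rather than literals.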
The implementation method of the test report intelligent diagnosis and analysis module is as follows: strong-feature intelligent pairing and extraction analysis is performed between the test reports and the test management rules and specifications; combined with vocabulary standardization, named entity recognition, and standardized data dictionary methods from natural language processing, keyword extraction, hierarchical classification, and accurate reasoning are carried out to check the normativity of the report text and to judge whether a defect exists and whether the checked value meets the interval criterion.
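A toy version of the normativity and interval check, using a regex in place of the patent's named entity recognition and data dictionary; the parameter names and limits are invented.

```python
import re

# Illustrative interval criteria, keyed by test parameter name.
CRITERIA = {
    "winding insulation resistance (MΩ)": lambda v: v >= 1000,
    "dielectric loss tan δ (%)": lambda v: v <= 0.8,
}

def check_report(text):
    """Extract 'name: value' pairs with a regex and test each value
    against its interval criterion; absent parameters are flagged as a
    normativity problem."""
    findings = {}
    for name, ok in CRITERIA.items():
        m = re.search(re.escape(name) + r"\s*:\s*([0-9.]+)", text)
        if not m:
            findings[name] = "missing"
        else:
            findings[name] = "pass" if ok(float(m.group(1))) else "fail"
    return findings

report = ("winding insulation resistance (MΩ): 2500\n"
          "dielectric loss tan δ (%): 1.2\n")
result = check_report(report)
```

The described module would derive `CRITERIA` from the standardized data dictionary and locate values via entity recognition rather than a fixed regex, but the pass/fail/missing verdict structure is the same.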
The implementation method of the intelligent analysis module for the test data comprises the following specific steps:
Step 1) Determine the test report versions for the test equipment: taking test equipment as the dimension, find the test reports corresponding to each device, analyze their versions, and finally determine how many report versions the equipment shares;
Step 2) Determine the test items in the report: once the report version is determined, analyze the specific test items in each report and take the intersection of the test items through intelligent analysis;
Step 3) Determine the test parameters within test items: from the test items determined in step 2), take the intersection of the test parameters of each item through intelligent analysis;
Step 4) Combine and configure the test parameters within test items: combine and configure the parameters determined in step 3), and subject the combined configuration to cross-comparison and cluster analysis;
Step 5) Analyze the combined test parameters: starting from the two dimensions of qualified and disqualified test reports, apply regression, clustering, and association analysis algorithms to the configured parameters determined in step 4), cross-compare them, and visualize the comparison data;
Step 6) Global analysis and display of test plans and reports: analyze the problems of test plans and test reports globally, study test data commonalities by location, unit, equipment, and test type, and visualize the commonalities;
Step 7) On-line monitoring data analysis and display: display the on-line monitoring data per device as a list or a trend chart.
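Steps 2), 3), and 5) above amount to set intersection followed by a grouping of parameter values; the sketch below uses a crude median-distance rule as a stand-in for the clustering and association algorithms, and all report contents are invented.

```python
from functools import reduce

def common_items(reports):
    """Intersection of test items across a device's reports (steps 2-3)."""
    return reduce(set.intersection, (set(r) for r in reports))

# Illustrative reports: test item name -> measured parameter value.
reports = [
    {"insulation resistance": 2400, "loop resistance": 45, "tan δ": 0.3},
    {"insulation resistance": 2200, "loop resistance": 47},
    {"insulation resistance": 2100, "loop resistance": 90, "tan δ": 0.4},
]
items = common_items(reports)

def split_outliers(values, factor=0.5):
    """Crude one-dimensional grouping for the cross-comparison of step 5:
    values further than `factor` times the median from it are outliers."""
    ordered = sorted(values)
    median = ordered[len(ordered) // 2]
    near = [v for v in values if abs(v - median) <= factor * median]
    far = [v for v in values if abs(v - median) > factor * median]
    return near, far

normal, outliers = split_outliers([r["loop resistance"] for r in reports])
```

The 90 μΩ loop resistance stands apart from the 45-47 μΩ group, which is the kind of cross-comparison signal the module would surface for visualization.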
The beneficial effects of the invention: compared with the prior art, this test data intelligent operation management and control system integrates planning, management, supervision, and analysis; it changes the traditional working mode, makes full use of existing technical means, greatly improves the efficiency and quality of test operation management, brings substantial change to overhaul test work, and comprehensively raises its level of informatization.
1) A unified test data structure system provides sufficient data support for subsequent data analysis and diagnosis.
2) Intelligent supervision of the annual production plan based on a reasoning algorithm realizes association analysis between the production plan and the equipment test cycle and improves supervision efficiency.
3) Intelligent diagnosis and analysis of test reports based on natural language processing relates each report to the rules and specifications and accurately judges its normativity and qualification.
4) Intelligent analysis of test data based on data mining realizes deep analysis of the test result data of all test equipment across the province, lets the responsible personnel learn of data conditions and trends in time, and supports subsequent decision analysis.
Drawings
FIG. 1 is a schematic diagram of a management and control system of the present invention;
FIG. 2 is a flow chart of the construction of a standard laboratory database;
FIG. 3 is an annual production schedule intelligent supervision flow chart;
FIG. 4 is a flow chart of a test report intelligent diagnostic analysis;
FIG. 5 is a flow chart of intelligent analysis of test data.
Detailed Description
The invention will be further described with reference to specific examples.
Example 1: the intelligent preventive test operation management and control system comprises a standard test database module, an annual production plan intelligent supervision module, a test report intelligent diagnosis and analysis module, and a test data intelligent analysis module. The standard test database module establishes a standard data structure model to form a data standard system and build a new standard test database; the annual production plan intelligent supervision module intelligently analyzes and accurately matches the scheduling data of the production plan; the test report intelligent diagnosis and analysis module checks the normativity of the text of test reports; the test data intelligent analysis module cross-compares and cluster-analyzes the test report data of the test equipment, analyzes the problems of plans and test reports globally, and studies common patterns in the test data by location, unit, equipment, and test type.
The implementation method of the standard test database module is as follows: text element data features are extracted and, combined with parallel computing technology, a standard data structure model is established for each kind of equipment, forming a data standard system and building a new standard test database.
The construction method of the unified test data structure system comprises the following specific steps:
step 1: obtaining a test data structure architecture model from a production management system: carding all the equipment to make operation instruction books related to preventive tests, and acquiring carded operation instruction book templates and preventive test data from a production system;
step 2: constructing a test data structure system model based on unified standards: the method comprises the steps of forming an operation instruction book template by an operation instruction book template and preventive test data acquired from a production system; analyzing an operation instruction template obtained from a production system, perfecting the template according to actual needs, and forming a unified standard template; meanwhile, for the test data template for the factory delivery of the equipment, a factory delivery test word template is obtained from a manufacturer (when the factory delivery test is made, the manufacturer has the factory delivery test word template, meanwhile, the factory delivery test data of the equipment is filled in the template), a delivery test version template is generated in the system, and finally, a test data structure system model based on the unified standard is constructed by the operation instruction book template, the unified standard template and the test data template.
Step 3: test data complement: the test data sources in the test data mining intelligent operation management and control system are two:
1) The existing test data of the external system is directly obtained from the external system through an interface, and mainly comprises the steps of taking historical test data from a previous old system at one time and obtaining real-time test data from a production system every day.
2) And the test data missing from the external system is required to be subjected to the supplementary recording in the test data mining intelligent operation management and control system, wherein the supplementary recording is to select a corresponding operation instruction book template in a test data structure system model based on a unified standard, and realize the function of supplementary recording of the test data in the system according to the customized template.
The text data feature extraction method comprises the following steps: text data are obtained from the unified-standard test data structure system model through a data interface, and a document frequency feature selection algorithm is adopted to find the fields that occur most often, forming the data standard system.
Document frequency (Document Frequency, DF) is the simplest feature selection algorithm; the DF of a term is the number of texts in the whole dataset that contain it. A document frequency is calculated for each feature in the training text set, and features whose document frequency is particularly low or particularly high are removed according to predetermined threshold values. Because its computational complexity is linear in the number of training documents, document frequency scales to very large document collections and can be applied to any corpus, so it is a common method for feature dimension reduction.
For each feature in the training text set, its document frequency is calculated; the item is deleted if its DF value is below a low threshold, and also removed if its DF value exceeds a high threshold, since the two cases represent the "no representation" and "no differentiation" extremes respectively. DF feature extraction assumes that rare words either carry no class information, occur too seldom to affect classification, or are noise, and therefore can be eliminated. The advantage of DF is its small computational cost, and it works well in practical applications. The disadvantage is that a word rare in the corpus may not be rare within a particular class and may carry important judgment information; simply discarding it may harm classifier accuracy.
The greatest advantage of document frequency is speed: its time complexity is linear in the number of texts, making it very suitable for feature selection on very large-scale text datasets. It is also effective; in supervised feature selection, even with 90% of the words deleted its performance is comparable to that of information gain and χ2 statistics. DF is the simplest feature selection method, has low computational complexity and can be used for large-scale classification tasks.
However, if a rare term occurs mainly within one class of the training set, it may reflect the characteristics of that class well; filtering it out because its frequency falls below the set threshold therefore affects classification accuracy to a certain extent.
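A minimal sketch of document-frequency feature selection as described above; the corpus, thresholds and field names are illustrative assumptions, not data from the patent:

```python
from collections import Counter

def df_feature_select(docs, low=2, high_ratio=0.9):
    """Select features by document frequency (DF).

    Terms appearing in fewer than `low` documents ("no representation")
    or in more than `high_ratio` of all documents ("no differentiation")
    are discarded. Both thresholds are illustrative assumptions.
    """
    n_docs = len(docs)
    df = Counter()
    for doc in docs:
        for term in set(doc.split()):  # count each term once per document
            df[term] += 1
    high = high_ratio * n_docs
    return {t for t, c in df.items() if low <= c <= high}

docs = [
    "insulation resistance test passed",
    "insulation resistance test failed",
    "dielectric loss test passed",
    "oil chromatography analysis normal",
]
selected = df_feature_select(docs, low=2, high_ratio=0.9)
```

Terms that appear only once ("failed", "dielectric", …) are dropped as rare, illustrating exactly the weakness noted above: "failed" is rare yet highly informative for classification.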
Parallel computing (Parallel Computing) refers to solving a computing problem by using multiple computing resources simultaneously, and is an effective means of improving the computing speed and processing capacity of a computer system. The basic idea is to solve the same problem cooperatively with multiple processors: the problem to be solved is decomposed into several parts, and each part is computed in parallel by an independent processor. A parallel computing system may be a specially designed supercomputer containing multiple processors, or a cluster of individual computers interconnected in some fashion. Data processing is completed by the parallel computing cluster, and the processed result is returned to the user.
Parallel computing can be divided into temporal parallelism and spatial parallelism.
Time parallelism: this refers to pipelining. For example, when a factory produces food, the steps are as follows:
(1) Cleaning: wash the food clean.
(2) Disinfection: sterilize the food.
(3) Cutting: cut the food into small pieces.
(4) Packaging: pack the food into packaging bags.
Without a pipeline, one food item must complete all four steps before the next is processed, which is time-consuming and hurts efficiency. With pipelining, however, four food items can be in process simultaneously. This is time parallelism in a parallel algorithm: starting two or more operations at the same time greatly improves computing performance.
Spatial parallelism: this refers to concurrent execution on multiple processors, i.e., two or more processors connected through a network compute different parts of the same task simultaneously, or jointly solve a large problem that a single processor cannot handle.
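The decomposition idea behind spatial parallelism can be sketched as follows; a thread pool is used purely for illustration (for CPU-bound work in CPython a process pool would give true parallelism), and the worker count and data are assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each worker computes one part of the decomposed problem.
    return sum(chunk)

def parallel_sum(data, workers=4):
    """Decompose a problem into parts and compute each part with an
    independent worker, then combine the partial results."""
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

total = parallel_sum(list(range(1, 101)))  # sum of 1..100
```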
The annual production plan intelligent supervision module is implemented as follows: the annual production plan is supervised by extracting test plan and work ticket arrangement information from the 6+1 production management system; through cross-probing, dimension integration and splitting methods, combined with an intelligent reasoning algorithm, intelligent analysis and accurate matching are carried out on the production plan scheduling data, thereby realizing plan supervision.
Plan supervision includes three aspects: 1) supervising the consistency of the pre-test plan by associating production plan work orders, work tickets and equipment test cycle times; 2) monitoring, according to the equipment test period, whether the compiled test plan exceeds its time limit and by how much; 3) supervising, according to the equipment ledgers and the equipment test period, whether the compiled test plan omits any test object.
The annual production plan intelligent supervision method based on the reasoning algorithm comprises the following specific steps:
step 1: acquire preventive test work plans and work ticket data from the production system: sort out the data related to the main equipment preventive test plan (pre-test plans of the high-voltage, chemical and electrical test professions), work ticket information and defect data, and acquire the required data from the production system according to the sorted result;
step 2: test plan management: distinguish equipment information by profession and extract it from the equipment operation and maintenance periods of the maintenance and modification module in the production system; the check is mainly whether equipment has entered the system production plan, and conversely whether the equipment ledger is complete;
step 3: plan execution management. Power failure application form association: for a production plan requiring a power failure, the power failure application form is associated and its details can be checked. Work ticket versus system pre-test plan consistency supervision: the associated work tickets of a production plan are examined; for example, for a production plan with a one-month period, up to the current day there should be a corresponding work ticket in the system every month, and if statistical analysis finds a work ticket absent, the plan is considered inconsistent with the system pre-test plan. Test report versus system pre-test plan consistency supervision: the associated test reports of a production plan are examined; for example, for a production plan with a one-month period, up to the current day a test report should have been uploaded for each month starting from the previous month (the report must be uploaded within 5 working days after the test is completed, which may in practice be relaxed to 1 month), and if statistical analysis finds a report absent, the plan is considered inconsistent with the system pre-test plan. Plan overdue supervision: in different dimensions (prefecture bureau, transformer substation), the planned start and end times of the production plan are compared with its actual start and end times to judge whether, and by how much, the plan is overdue;
Step 4: and (5) reminding the equipment of exceeding the period: the equipment is out of date: in different dimensions (ground city bureau, transformer substation), according to the equipment test period, reminding the equipment of the over-period, for example, the last test day of a certain equipment is 2019-10-14, the period is 1 year, if the test is not started by 2020-10-14 days, reminding the equipment of the over-period until the equipment is tested. Meanwhile, early warning grades are classified according to the conditions of the equipment (equipment importance, equipment health, risk assessment algorithm and the like); major/emergency defect display: displaying the number of major emergency defects of the equipment in the dimension of the equipment, and individually checking the details of each major/emergency defect; the latest patrol plan shows: the latest patrol plan of the equipment can be displayed; developing hand filling functions: the reasons for the over-period of the equipment, the management and control measures and the next power failure planning time are manually filled by the clients.
Cross-probe method:
in dimensionally modeled data warehouses, there is an operation called Drill Across, translated into Chinese as "cross-probing".
In Bus Architecture based dimensional modeling, most dimension tables are shared by fact tables. For example, a "marketing transaction fact table" and an "inventory snapshot fact table" would share the dimension tables "date dimension", "product dimension" and "store dimension". If one then wishes to compare sales and inventory in a common dimension, two SQL queries must be issued to retrieve the sales data and the inventory data aggregated in that dimension, and the data are then merged with an outer join on the common dimension. This operation of issuing multiple SQL queries and then merging the results is cross-probing.
While the need for such cross-probing is common, there is a modeling approach that avoids it: the merged fact table (Consolidated Fact Table). Merging fact tables is a modeling method that combines facts of the same granularity from different fact tables: a new fact table is built whose dimensions are the shared dimensions of two or more fact tables and whose facts are the facts of interest from those tables. The data of this fact table come from the same Staging Area as the data of the other fact tables.
A merged fact table is better than cross-probing in both performance and ease of use, but the fact tables being combined must have the same granularity and dimension level.
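A toy illustration of the Drill Across operation described above: one aggregation per fact table, then an outer join on the shared dimension. The table contents and field names are hypothetical:

```python
def aggregate(rows, dim, measure):
    # One "query" against one fact table: sum a measure by a dimension.
    out = {}
    for r in rows:
        out[r[dim]] = out.get(r[dim], 0) + r[measure]
    return out

def drill_across(sales_rows, stock_rows):
    """Issue one aggregation per fact table, then outer-join the results
    on the shared "product" dimension -- the essence of Drill Across."""
    sales = aggregate(sales_rows, "product", "amount")
    stock = aggregate(stock_rows, "product", "qty")
    keys = set(sales) | set(stock)
    return {k: (sales.get(k), stock.get(k)) for k in keys}

sales = [{"product": "A", "amount": 10}, {"product": "A", "amount": 5},
         {"product": "B", "amount": 7}]
stock = [{"product": "A", "qty": 3}, {"product": "C", "qty": 9}]
result = drill_across(sales, stock)
```

A consolidated fact table would instead store the `(sales, stock)` pair per product in one table up front, trading storage for simpler, faster queries.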
Reasoning modes and classification of intelligent reasoning algorithms
1) Classification by the logical basis of reasoning
Deductive reasoning: deductive reasoning starts from known general knowledge and draws conclusions appropriate to a particular situation. It is a general-to-individual reasoning method whose core is the syllogism.
Inductive reasoning: an individual-to-general reasoning method; the reasoning process that draws general conclusions from a sufficient number of cases.
Default reasoning: reasoning performed under incomplete knowledge by assuming that certain conditions hold.
Classification by the certainty of the knowledge used in reasoning
Deterministic reasoning: the knowledge used in reasoning is accurate and the conclusions drawn are definite; truth values are true or false, with no third case.
Uncertainty reasoning: the knowledge used in reasoning is not all accurate, nor are the conclusions drawn entirely certain; the truth value lies between true and false.
Classification by the monotonicity of the reasoning process
Monotonic reasoning: the set of conclusions drawn grows monotonically, approaching the final goal.
Non-monotonic reasoning: owing to the addition of new knowledge, a conclusion already drawn is not strengthened but may even be negated.
2) Inference control strategies
Inference direction: forward and backward.
Solving strategy: one solution, all solutions, or the optimal solution.
Conflict resolution: ordering by matched objects and ordering by matching degree.
Restriction strategy: limits on depth, width, time and space.
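As one concrete instance of the forward (data-driven) inference direction with rule-order conflict resolution, a minimal forward-chaining sketch; the facts and rules about plan supervision are invented for illustration:

```python
def forward_chain(facts, rules):
    """Minimal forward (data-driven) inference: repeatedly fire any rule
    whose premises are all known facts until no new fact is derived.
    Rules are (premises, conclusion) pairs; conflict resolution here is
    simple rule order, a stand-in for matching-degree ordering."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and set(premises) <= facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical supervision rules in the spirit of the plan checks above.
rules = [
    (["test_overdue", "no_work_ticket"], "plan_inconsistent"),
    (["plan_inconsistent"], "raise_warning"),
]
derived = forward_chain(["test_overdue", "no_work_ticket"], rules)
```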
The test report intelligent diagnosis and analysis module is implemented as follows: through a test report intelligent diagnosis and analysis component, an intelligent diagnosis model is established that supports strong-feature intelligent pairing and extraction analysis between test reports and the test management regulations; keyword extraction, hierarchical classification and accurate reasoning are performed by combining vocabulary standardization, named entity recognition, standardized data dictionaries and other natural language processing methods; the examination focuses on the test reports of main transformers, breakers and GIS main equipment, checking the normalization of the test report text, whether any item is missing, and whether the examined values meet the interval criteria; the intelligent diagnosis and analysis component supports routine maintenance of the regulation specifications, diagnosis models and the like through software interfaces or file import;
The method comprises the following specific steps:
step 1: establish a test procedure library model: according to the power equipment overhaul test regulations, a test procedure library for main transformers, breakers and GIS main equipment is established, with version maintenance supported; its content comprises maintenance category, project, profession, job requirements and examination rules;
step 2: strong-feature intelligent pairing and extraction analysis on the test procedure library model: according to the job requirements in the test procedure library model, strong-feature intelligent pairing and extraction analysis are carried out to generate examination rules, which are quantified into the corresponding test procedure library model and compared with the values filled in during the operation process in the test report;
step 3: test report normalization review: the normalization of the test report text is examined according to the examination rules in the test procedure library model, for example whether a field that should contain a number has been filled with string text;
step 4: test report missing-item review: whether any item is missing from the test report is judged according to the examination rules in the test procedure library model;
step 5: test report value interval qualification screening: whether the values meet the interval criteria is checked according to the examination rules in the test procedure library model.
Meanwhile, a comparison is made with the result of the previous test (present, for example, in an insulation resistance test report); if the current value rises above or falls below the set threshold relative to the previous result, the test report data interval is judged unqualified.
Step 6: intelligent analysis result display: the results of the test report normalization review, missing-item review and value interval qualification screening are merged, and an intelligent analysis result report is generated.
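Steps 3 to 5 above can be sketched as a single rule-driven review pass; the rule table, item names and intervals below are hypothetical stand-ins for the test procedure library model:

```python
def review_report(report, rules):
    """Apply review rules from a (hypothetical) test procedure library:
    missing-item review, normalization review (numeric fields must hold
    numbers) and interval qualification, merged into one findings list."""
    findings = []
    for item, (lo, hi) in rules.items():
        value = report.get(item)
        if value is None:
            findings.append((item, "missing"))          # step 4
        elif not isinstance(value, (int, float)):
            findings.append((item, "not numeric"))      # step 3
        elif not lo <= value <= hi:
            findings.append((item, "out of interval"))  # step 5
    return findings

# Hypothetical examination rules: (lower bound, upper bound) per item.
rules = {"insulation_resistance_MOhm": (1000, float("inf")),
         "dielectric_loss_pct": (0, 0.8)}
report = {"insulation_resistance_MOhm": 2500, "dielectric_loss_pct": 1.2}
findings = review_report(report, rules)
```

A report missing an item entirely would be flagged `"missing"`, merging the three reviews into one result list as in step 6.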
Preferably, the intelligent analysis result analysis method adopts exploratory data analysis, qualitative data analysis, offline data analysis or online data analysis;
data analysis means analyzing a large amount of collected data with suitable statistical and analytical methods, and summarizing, understanding and digesting the data so as to maximize its function and exert its value. Data analysis is the process of studying data in detail and summarizing it in order to extract useful information and form conclusions.
Data, also called observations, are the results of experiments, measurements, observations, surveys and the like. The data processed in data analysis are divided into qualitative data and quantitative data. Data that can only be sorted into categories and cannot be measured numerically are called qualitative data. Qualitative data represented as unordered categories are categorical data, such as gender and brand; qualitative data represented as ordered categories are ordinal data, such as academic degree and commodity quality grade.
1) Type of data analysis
(1) Exploratory data analysis: a method of analyzing data to form hypotheses worth testing, complementary to traditional statistical hypothesis testing. The method was named by the famous American statistician John Tukey.
(2) Qualitative data analysis: also referred to as "qualitative research data analysis", this refers to the analysis of non-numerical data such as words, photographs and observations.
(3) Offline data analysis: used for more complex and time-consuming data analysis and processing, and typically built on top of a cloud computing platform, such as the open-source HDFS file system and the MapReduce computation framework. A Hadoop cluster contains hundreds or even thousands of servers, stores several PB or even tens of PB of data, and runs thousands of offline data analysis jobs each day; each job processes from several hundred MB to several hundred TB of data or more and runs for minutes, hours, days or even longer.
(4) Online data analysis: also known as online analytical processing, this is used to handle users' online requests and has relatively demanding response time requirements (typically no more than a few seconds). In contrast to offline data analysis, online data analysis processes a user's request in real time and allows the user to change the constraints of the analysis at any time. The amount of data online analysis can handle is much smaller than for offline analysis, but as technology advances, current online analysis systems can already process tens of millions or even hundreds of millions of records in real time. Traditional online data analysis systems are built on top of relational-database-centered data warehouses, while online big data analysis systems are built on top of NoSQL systems on cloud computing platforms. Without online big data analysis and processing, there would be no storage and indexing of the vast number of internet web pages, no efficient search engines, and no vigorous development of microblogs, blogs, social networks and the like that rest on big data processing.
2) Data analysis step
Data analysis has a very wide range of applications. A typical data analysis may involve the following three steps:
1) Exploratory data analysis: when data are newly acquired, they may be disordered and irregular, with no visible rules; by drawing plots, tabulating, fitting equations of various forms and calculating certain characteristic quantities, the possible forms of the rules are explored, i.e., in what direction and in what way to search for and reveal the rules implicit in the data.
2) Model selection analysis: on the basis of the exploratory analysis, one or more types of candidate models are proposed, and a particular model is then selected through further analysis.
3) Inference analysis: mathematical statistical methods are typically used to infer the reliability and accuracy of the model or estimate.
The primary activities of the data analysis process consist of identifying information needs, collecting data, analyzing the data, evaluating and improving the effectiveness of the data analysis.
Identifying needs: identifying information needs is the primary condition for ensuring an effective data analysis process, and provides clear targets for collecting and analyzing data. Identifying information needs is a requirement that the responsible manager should raise based on decision-making and process-control needs. For process control, the manager should identify the information required to support the review of process inputs, process outputs, the rationality of resource configuration, the optimization of process activities and the discovery of process anomalies.
Collecting data: purposeful data collection is the basis of an effective data analysis process. The organization needs to plan the content, channels and methods of data collection. The planning should consider:
(1) converting the identified needs into specific requirements; for example, when evaluating a supplier, the data to be collected may include its process capability and the uncertainty of its measurement system;
(2) specifying who collects the data, when and where, and through what channels and methods;
(3) making the record forms convenient to use; (4) taking effective measures to prevent data loss and the interference of false data with the system.
Preferably, the strong-feature intelligent pairing method adopts structure matching and semantic matching, exact matching and approximate matching, static graph matching and dynamic graph matching, and optimal algorithms and approximation algorithms.
1) Structure matching and semantic matching
The graph matching problem is classified into semantic matching and structural matching according to whether the graph data contains semantic information on nodes and edges.
Structural matching mainly ensures that the matched nodes have the same connectivity structure; representative algorithms include the Ullmann algorithm, first proposed in 1976, and algorithms improved on its basis such as VF2, QuickSI, GraphQL and SPath.
In semantic matching, the nodes and edges of the data carry rich semantic information, and the matching result must be consistent with the pattern graph in both structure and semantics. Current research mainly addresses such matching problems, the GraphGrep algorithm being typical.
A semantic matching algorithm can be formed by introducing semantic constraints on nodes and edges into an existing structure matching algorithm; alternatively, as in algorithms such as GraphGrep, rapid matching of semantic graphs can be achieved by designing index features based on semantic information.
2) Exact match and approximate match
Exact matching means the matching result is completely consistent with the pattern graph in structure and attributes; this matching mode is mainly applied in fields with high accuracy requirements on the result (both the structural and semantic matching above belong to this class).
Approximate matching is a matching algorithm that tolerates noise and errors in the results. Representative approximate matching algorithms include SUBDUE and LAW; the similarity of two graphs is mainly measured by methods such as edit distance, maximum common subgraph and minimum common supergraph.
3) Static graph matching and dynamic graph matching
Static graph matching assumes that the data graphs do not change over time; a matching algorithm generally analyzes and mines all data graphs, extracts effective features according to the data characteristics and builds indexes, thereby improving matching efficiency. Typical algorithms include GIndex, Tree+Delta and FG-Index.
Dynamic graph matching mainly adopts incremental processing: only the updated part of the data graph is analyzed, simple and discriminative features are selected to build indexes, and approximation algorithms are adopted to improve matching speed; research in this area is still at an early stage.
4) Optimization algorithm and approximation algorithm
The optimal algorithm ensures that the matching result is completely accurate.
An approximation algorithm, unlike approximate matching, is generally based on mathematical models such as probability and statistics. Its advantage is polynomial time complexity, which makes it very suitable for matching problems with high real-time requirements that only need to meet a certain accuracy rate, such as dynamic graph matching.
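A brute-force sketch of exact structural matching as classified above: search for an injective node mapping under which every pattern edge exists in the data graph. Practical algorithms such as Ullmann and VF2 prune this exponential search heavily; the toy graphs here are assumptions:

```python
from itertools import permutations

def exact_match(pattern_edges, data_edges, data_nodes):
    """Find an injective node mapping that preserves every pattern edge
    (undirected), or None. Exponential brute force, for illustration:
    Ullmann/VF2 reach the same answers with aggressive pruning."""
    pat_nodes = sorted({n for e in pattern_edges for n in e})
    data = {frozenset(e) for e in data_edges}
    for perm in permutations(data_nodes, len(pat_nodes)):
        m = dict(zip(pat_nodes, perm))
        if all(frozenset((m[a], m[b])) in data for a, b in pattern_edges):
            return m
    return None

# Triangle pattern inside a four-node data graph.
pattern = [("x", "y"), ("y", "z"), ("x", "z")]
data_edges = [(1, 2), (2, 3), (1, 3), (3, 4)]
mapping = exact_match(pattern, data_edges, [1, 2, 3, 4])
```

Semantic matching would add label checks on `m[a]` and `m[b]`; approximate matching would instead score near-misses, e.g. by edit distance.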
Preferably, the above-described natural language processing method belongs to the fields of computer science, artificial intelligence and linguistics concerned with the interactions between computers and human (natural) language. Natural language processing (Natural Language Processing, NLP) is a sub-domain of Artificial Intelligence (AI). The main research directions of NLP include: information extraction, text generation, question answering systems, dialogue systems, text mining, speech recognition, speech synthesis, public opinion analysis, machine translation, etc. The general NLP processing flow mainly comprises:
1) Obtaining corpus
The corpus is the object of an NLP task; a text collection is usually used as the corpus (Corpus), which can be obtained from existing data, public datasets, crawler crawling and the like.
2) Data preprocessing
The corpus preprocessing mainly comprises the following steps:
(1) Corpus cleaning: retain the useful data and delete the noise data; common cleaning operations include manual deduplication, alignment, deletion and labeling.
(2) Word segmentation: divide the text into words, e.g. by rule-based or statistics-based word segmentation methods.
(3) Part-of-speech tagging: tag words with part-of-speech labels, such as noun, verb and adjective; common part-of-speech tagging methods include rule-based and statistics-based algorithms, e.g. maximum entropy part-of-speech tagging and HMM part-of-speech tagging.
(4) Stop-word removal: remove words that contribute nothing to the text features, such as punctuation marks and modal particles.
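Steps (1), (2) and (4) of the preprocessing flow can be sketched as follows; an English sentence and a tiny stop list are assumed for illustration, while real Chinese test reports would need a dictionary- or statistics-based segmenter in place of the whitespace split:

```python
import re

STOPWORDS = {"the", "is", "a", "of", "and"}  # illustrative stop list

def preprocess(text):
    """Corpus cleaning, word segmentation and stop-word removal
    for a whitespace-delimited language."""
    text = re.sub(r"[^\w\s]", " ", text.lower())      # (1) cleaning
    tokens = text.split()                             # (2) segmentation
    return [t for t in tokens if t not in STOPWORDS]  # (4) stop words

tokens = preprocess(
    "The insulation resistance of the main transformer is 2500 MOhm!")
```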
3) Feature engineering
The main work of this step is to represent words in a form the computer can calculate with, typically vectors. Common representation models are: the bag-of-words model (BoW), e.g. the TF-IDF algorithm; and word vectors, e.g. the one-hot and word2vec algorithms.
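A minimal TF-IDF weighting over a bag-of-words representation, as named above; the three-document corpus is invented, tf is the raw in-document count, and idf = log(N / df):

```python
import math
from collections import Counter

def tf_idf(docs):
    """TF-IDF weights for a tiny corpus: one {term: weight} dict per
    document. A term in every document gets idf = log(N/N) = 0."""
    n = len(docs)
    tokenized = [d.split() for d in docs]
    df = Counter()                      # document frequency per term
    for toks in tokenized:
        for t in set(toks):
            df[t] += 1
    return [{t: c * math.log(n / df[t]) for t, c in Counter(toks).items()}
            for toks in tokenized]

docs = ["breaker test passed", "breaker test failed",
        "transformer test passed"]
weights = tf_idf(docs)
```

"test" occurs in all three documents, so its weight is zero everywhere, while the discriminative "failed" receives the highest weight.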
4) Feature selection
Feature selection chooses suitable features with strong expressive power from the features produced by the feature engineering step. Common feature selection methods include DF, MI, IG and WFO.
5) Model selection
After the features are selected, a model must be chosen for training. Common machine learning models include: KNN, SVM, Naive Bayes, decision trees, K-means, etc.; common deep learning models include: RNN, CNN, LSTM, Seq2Seq, FastText, TextCNN, etc.
6) Model training
Once the model is chosen, model training is performed, including fine-tuning of the model. During training, attention should be paid to the over-fitting problem (good performance on the training set but poor performance on the test set) and the under-fitting problem (the model does not fit the data well). The problems of gradient vanishing and gradient explosion should also be prevented.
7) Model evaluation
The evaluation indexes of the model mainly comprise: error rate, accuracy, precision, recall, F1 value, ROC curve, AUC, etc.
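The precision, recall and F1 value listed above can be computed as follows for a binary task; the label vectors are illustrative:

```python
def prf1(y_true, y_pred):
    """Precision, recall and F1 for binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

p, r, f = prf1([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
```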
8) Model deployment
There are mainly two ways to put a model online: one is to train the model offline and then deploy it online to provide service; the other is online training, after which the trained model is persisted and then provides external service.
The test data intelligent analysis module is implemented as follows: based on the constructed standard test database, and combining regression analysis, clustering and association analysis algorithms, the test report data of all test equipment of the provincial power grid company are compared with each other and subjected to cluster analysis, so that the problems of the plans and test reports are analyzed globally and the common rules of the test data are studied by place, unit, equipment and test type.
The intelligent analysis method for the test data based on the data mining comprises the following specific steps:
step 1), determining the test report versions corresponding to the test equipment: taking the test equipment as the dimension, find the test reports corresponding to each piece of test equipment, analyze their versions, and finally determine how many test report versions each piece of equipment has;
step 2), test item determination in a test report: after the report versions for a device are determined (for example, the main transformer has 3 preventive test reports in total), the specific test items in each report are analyzed and their intersection is obtained through intelligent analysis; assume, for instance, that all preventive test reports corresponding to 500 kV main transformers share 6 common items;
Step 3), determining test parameters in test items: based on the test items determined in step 2), the intersection of the test parameters within each item is obtained through intelligent analysis; assume, for instance, that the item "measure the capacitance and tanδ of capacitive bushings" in the 500 kV oil-immersed power transformer preventive test (electrical part) yields an intersection of 60 test parameters across all corresponding reports;
step 4), combining and configuring test parameters in test items: based on the test parameters determined in step 3), the parameters are combined and configured; only parameters that have been combined and configured can be cross-compared and cluster-analyzed;
step 5), analyzing the combined test parameters: based on the combined configuration parameters determined in step 4), and proceeding from two dimensions (qualified test reports and unqualified test reports), the configuration parameters are cross-compared and cluster-analyzed by intelligent algorithms, and the cross-compared data are visually displayed;
Step 6), global analysis and display of test plans and test reports: globally analyzing the problems of a test plan and a test report, researching test data commonalities according to the places, units, equipment and test types, and visually displaying the commonalities;
step 7), on-line monitoring data analysis and display: and displaying the online monitoring data in the form of a display list or a trend chart by taking the equipment as a unit.
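Steps 2 to 4 above hinge on taking intersections across report versions; a minimal Python sketch with invented report names and test items:

```python
# Hypothetical test reports for one device type: each maps to its set of test items.
reports = {
    "report_v1": {"winding resistance", "dielectric loss", "insulation resistance"},
    "report_v2": {"winding resistance", "dielectric loss", "oil chromatography"},
    "report_v3": {"winding resistance", "dielectric loss", "insulation resistance"},
}

# Step 2: only items present in every report version are comparable across reports.
common_items = set.intersection(*reports.values())

# Step 4: only the combined/configured (common) parameters enter cross-comparison.
comparable = sorted(common_items)
```

The same intersection logic applies one level down, from test items to the test parameters inside each item.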
Regression analysis algorithm technique:
regression analysis is a statistical method for determining the quantitative interdependence between two or more variables. In big data analysis it is a predictive modeling technique that studies the regression model between the dependent variable y (target) and the independent variables x (predictors) that affect it, in order to predict the development trend of y. When there are several independent variables, the strength of each variable's influence on y can also be studied.
1) Linear Regression
Linear regression, also known as least-squares regression, is often the first technique one reaches for when learning predictive models. In this technique the dependent variable is continuous, the independent variables may be continuous or discrete, and the regression line is linear in nature.
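For a single predictor, the least-squares line has a closed form; a small sketch (the data points are invented and lie exactly on y = 2x + 1, so the fit recovers them):

```python
def least_squares_fit(xs, ys):
    """Closed-form least-squares line y = a*x + b for one predictor."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Noise-free data on the line y = 2x + 1 is recovered exactly.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
slope, intercept = least_squares_fit(xs, ys)
```

With noisy data the same formula gives the line minimizing the sum of squared residuals.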
2) Polynomial Regression
When analyzing data we encounter different data distributions. When the data points lie in a band, a linear regression fits well; but when they follow a curve, a straight-line fit performs poorly, and polynomial regression can be used instead. A polynomial regression model is a regression model obtained by fitting the data with a polynomial.
3) Stepwise Regression
When handling multiple independent variables, this form of regression can be used. The goal of this modeling technique is to maximize predictive power with the minimum number of predictor variables. Stepwise variable selection involves two basic operations: removing non-significant variables from the regression model, and introducing new variables into it. Common stepwise methods include the forward method and the backward method.
4) Ridge Regression
Ridge regression is an important improvement on linear regression that increases tolerance to error. If the data matrix exhibits multicollinearity (mathematically, an ill-conditioned matrix), linear regression is very sensitive to noise in the input variables: a small change in x produces a large change in the output, and the solution is very unstable. Ridge regression solves this problem by imposing a penalty on the magnitude of the coefficients.
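The coefficient penalty is easiest to see in the one-dimensional, no-intercept case, where the ridge estimate has a closed form; a minimal sketch on invented data:

```python
def ridge_slope(xs, ys, alpha):
    """1-D ridge estimate (no intercept): argmin over a of sum((y - a*x)^2) + alpha*a^2."""
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + alpha)

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]                  # true slope is 2 when alpha = 0
ols = ridge_slope(xs, ys, 0.0)        # ordinary least squares
shrunk = ridge_slope(xs, ys, 10.0)    # penalty pulls the coefficient toward zero
```

The larger `alpha` is, the more the coefficient shrinks, which is exactly what stabilizes the solution on ill-conditioned data.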
5) Lasso Regression
Lasso regression is similar to ridge regression, adding a penalty on the regression coefficients; it can reduce variability and improve the accuracy of the linear regression model. Unlike ridge regression, its penalty uses absolute values rather than squares. This causes the penalty (the sum of absolute values constraining the estimates) to drive some parameter estimates exactly to zero: the larger the penalty used, the more the estimates shrink toward zero.
6) ElasticNet Regression
Elastic Net is a hybrid of the Lasso and ridge regression techniques. Ridge regression biases the cost function with an L2 norm (squared term); Lasso regression biases it with an L1 norm (absolute term). Elastic Net combines the two, using both squared and absolute terms.
7) Bayesian Regression
Bayesian regression can regularize parameters at the estimation stage: the regularization parameter is not chosen by hand but is inferred from the data itself.
8) Robust Regression
When the least-squares method encounters outliers among the data sample points, robust regression may be used in its place. Robust regression can also be used for outlier detection, or to find the sample points that have the greatest influence on the model.
9) Random Forest Regression (RandomForestRegressor)
Random forests can be applied to both classification and regression problems, depending on whether each CART tree in the forest is a classification tree or a regression tree. For regression, the CART trees are regression trees and the splitting criterion is the minimum mean squared error.
10) Support Vector Regression (SVR)
SVR seeks a regression plane to which all data points in the set are as close as possible. Since the data cannot all lie on the plane, the sum of distances remains large, so all points are given a tolerance ε around the regression plane to prevent overfitting. This ε is an empirical parameter that must be supplied manually.
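The tolerance described above is SVR's ε-insensitive loss: deviations inside the ε tube cost nothing. A minimal sketch with invented readings and predictions:

```python
def epsilon_insensitive_loss(y_true, y_pred, eps):
    """SVR loss: residuals within the epsilon tolerance tube cost nothing."""
    return sum(max(0.0, abs(t - p) - eps) for t, p in zip(y_true, y_pred))

y_true = [1.0, 2.0, 3.0]
y_pred = [1.05, 1.80, 3.40]
inside = epsilon_insensitive_loss(y_true, y_pred, eps=0.5)   # every residual fits the tube
outside = epsilon_insensitive_loss(y_true, y_pred, eps=0.1)  # two residuals exceed the tube
```

Choosing ε trades accuracy against tolerance: a wide tube (ε = 0.5 here) ignores all three residuals, a narrow one penalizes them.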
11) Decision Tree Regression (DecisionTreeRegressor)
The decision tree model is a tree structure applicable to classification and regression. A decision tree consists of nodes and directed edges; typically it contains a root node, internal nodes and leaf nodes. The decision process starts from the root node: the data to be tested is compared with the feature at each node, and the next branch is chosen according to the comparison result, until a leaf node gives the final decision.
12) Poisson Regression
Poisson regression describes the frequency distribution of an event per unit time, unit area or unit volume, and is generally used to describe the distribution of the number of occurrences of rare (i.e., low-probability) events.
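The rare-event distribution underlying Poisson regression is the Poisson probability mass function; a small sketch for a hypothetical defect rate of 0.5 occurrences per unit time:

```python
import math

def poisson_pmf(k, lam):
    """P(X = k) for a Poisson-distributed count with mean rate lam."""
    return lam ** k * math.exp(-lam) / math.factorial(k)

# A rare defect occurring on average 0.5 times per unit time:
lam = 0.5
p_zero = poisson_pmf(0, lam)                                  # most units see no defect
p_two_or_more = 1 - poisson_pmf(0, lam) - poisson_pmf(1, lam) # multiple defects are rare
```

Poisson regression models `lam` as a function of the predictors; the PMF above is what that model evaluates.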
Cluster analysis algorithm technique:
clustering (Clustering) analysis has a popular metaphor: "birds of a feather flock together." With respect to several specific business indicators, the observed objects can be divided into different groups according to their similarity and dissimilarity. After division, the similarity between objects within each group is high, while objects in different groups are highly dissimilar to one another.
On one hand, clustering is a modeling technique in its own right: an effective clustering result can often directly guide practical application. On the other hand, clustering is often used early in the data analysis process as a tool for data exploration, data cleaning and data transformation, and in practical use it is notable for its versatility.
1) Typical application scenario of cluster analysis
Typical application scenarios of cluster analysis are very common; business teams encounter them almost daily. For example, paying users can be cluster-analyzed along several specific dimensions, such as profit contribution rate, user age and number of renewals, to obtain groups with different characteristics.
For example: after clustering the paying users, one group may account for 40% of paying users, be around 25 years old, contribute little profit, but renew frequently; another group may account for 15% of paying users, be over 40 years old, contribute relatively large profit, but renew infrequently.
2) Primary clustering algorithm classification
A method of partitioning (Partitioning Method);
a hierarchical method (Hierarchical Method);
density-based methods (Density-based methods);
grid-based methods (Grid-based methods);
A model-based method (Model-based Method).
(1) Method of partitioning (Partitioning Method)
Given a data set of m objects and a desired number of sub-groups K, the objects can be divided into K groups (K must not exceed m) in such a way that objects within each group are similar to one another while the groups themselves are clearly distinct. The most commonly used method is K-Means, whose principle is:
step1, randomly selecting K objects, wherein each selected object represents an initial average value of a group or an initial group center value;
step2, assign each of the remaining objects to the nearest (most similar) group, based on its distance to each group's mean or centre value;
step3, recalculating a new mean value of each group;
step4: repeat steps 2 and 3 until the assignments no longer change, i.e., every object has found its nearest group among the K groups.
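Steps 1 to 4 above can be sketched in plain Python on one-dimensional toy data (the points and initial centres are invented for the example):

```python
def k_means(points, centers, iterations=10):
    """Plain 1-D K-Means following steps 1-4: assign, re-average, repeat."""
    for _ in range(iterations):
        groups = [[] for _ in centers]
        for p in points:                            # step 2: nearest-centre assignment
            idx = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            groups[idx].append(p)
        centers = [sum(g) / len(g) if g else c      # step 3: recompute each group mean
                   for g, c in zip(groups, centers)]
    return centers, groups

# Two well-separated value clusters, with K = 2 initial centres (step 1).
points = [1.0, 1.2, 0.8, 10.0, 10.4, 9.6]
centers, groups = k_means(points, centers=[0.0, 5.0])
```

On this data the centres converge to roughly 1.0 and 10.0 after the first pass; a production implementation would stop once assignments stabilize rather than run a fixed iteration count.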
(2) Hierarchical method (Hierarchical Method)
This method merges the two most similar data objects (or clusters) pairwise, merging repeatedly until the desired number of clusters is formed.
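A minimal single-linkage sketch of this pairwise merging, on invented one-dimensional data:

```python
def agglomerate(points, target_clusters):
    """Single-linkage agglomeration on 1-D data: repeatedly merge the two
    closest clusters until only target_clusters remain."""
    clusters = [[p] for p in points]          # start: every point is its own cluster
    def gap(a, b):                            # single linkage: closest-pair distance
        return min(abs(x - y) for x in a for y in b)
    while len(clusters) > target_clusters:
        i, j = min(
            ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
            key=lambda ij: gap(clusters[ij[0]], clusters[ij[1]]),
        )
        clusters[i] = clusters[i] + clusters[j]   # merge the two most similar clusters
        del clusters[j]
    return clusters

clusters = agglomerate([1.0, 1.2, 8.0, 8.3], target_clusters=2)
```

Stopping at different `target_clusters` values yields the different levels of the merge hierarchy.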
Association analysis algorithm technique:
association analysis is a simple and practical analysis technique for discovering correlations or connections hidden in large data sets, describing the laws and patterns by which certain attributes appear together in things.
Association analysis discovers interesting associations and connections between itemsets in large amounts of data. A typical example is shopping basket analysis, which infers customers' purchasing habits from the combinations of items they place in their baskets. Knowing which items customers frequently buy together helps retailers devise marketing strategies. Other applications include tariff design, commodity promotion, shelf placement, and customer segmentation based on purchasing patterns.
Association analysis can extract from a database rules of the form "the occurrence of some events leads to the occurrence of others." For example, "67% of customers who buy beer also buy diapers," so a supermarket can improve service quality and profit through sensible shelf placement or bundled sales of beer and diapers. Similarly, "88% of students who do well in the C language course also do well in Data Structures," so reinforcing C language study can improve teaching results.
1) Apriori algorithm:
the Apriori algorithm is a basic algorithm for mining the frequent itemsets needed to generate Boolean association rules, and one of the best-known association rule mining algorithms. It is named for its use of prior knowledge about the properties of frequent itemsets. It uses an iterative, level-wise search in which k-itemsets are used to explore (k+1)-itemsets: first the set of frequent 1-itemsets, denoted L1, is found; L1 is used to find the set of frequent 2-itemsets L2, which is used to find L3, and so on until no frequent k-itemset can be found. Finding each Lk requires one scan of the database.
To improve the efficiency of the level-wise search and of generating the corresponding frequent itemsets, the Apriori algorithm exploits an important property to effectively shrink the search space of frequent itemsets.
Apriori property: every subset of a frequent itemset must also be frequent. Proof: by definition, if an itemset I does not meet the minimum support threshold min_sup, then I is not frequent, i.e., P(I) < min_sup. If an item A is added to I, the new itemset (I ∪ A) cannot occur in the transaction database more often than the original itemset I does, so P(I ∪ A) < min_sup and (I ∪ A) is not frequent either. Hence the Apriori property holds by contraposition.
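The level-wise search and the pruning justified by the Apriori property can be sketched in a few lines of Python; the transactions below are invented to echo the beer/diapers example above:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Level-wise frequent-itemset mining: k-itemsets seed (k+1)-candidates,
    and the Apriori property prunes any candidate with an infrequent subset."""
    def support(itemset):
        return sum(1 for t in transactions if itemset <= t)
    items = {frozenset([i]) for t in transactions for i in t}
    frequent = {}
    level = {s for s in items if support(s) >= min_support}        # L1
    while level:
        frequent.update({s: support(s) for s in level})
        k = len(next(iter(level)))
        candidates = {a | b for a in level for b in level if len(a | b) == k + 1}
        # Apriori pruning: every k-subset of a surviving candidate must be frequent.
        level = {c for c in candidates
                 if all(frozenset(sub) in frequent for sub in combinations(c, k))
                 and support(c) >= min_support}
    return frequent

transactions = [frozenset(t) for t in
                [{"beer", "diapers"}, {"beer", "diapers", "milk"},
                 {"beer", "bread"}, {"diapers", "bread"}]]
freq = apriori(transactions, min_support=2)
```

Each `while` pass corresponds to one scan-and-generate phase (L1 → L2 → …); `{beer, diapers}` survives to L2 while `{beer, bread}` is dropped for lacking support.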
The shortcomings of the Apriori algorithm can be addressed with the following optimizations:
(1) Partition-based methods. The algorithm first logically divides the database into several mutually exclusive blocks; each block is considered separately and all frequent itemsets local to it are generated, then the local frequent itemsets are merged into the set of all possible frequent itemsets, and finally the support of these itemsets is computed. The block size is chosen so that each block fits in main memory, and each block is read only once per phase. The algorithm's correctness is guaranteed because every possible frequent itemset must be frequent in at least one partition.
The algorithms discussed above are highly parallelizable: each partition can be assigned to a processor that generates its frequent itemsets, and after each cycle of frequent-itemset generation the processors communicate to produce the global candidate itemsets. This communication is usually the main bottleneck in execution time; the time each individual processor takes to generate its frequent itemsets is another. Other approaches share a hash tree among multiple processors to generate frequent itemsets, and further parallelization schemes build on this.
(2) Hash-based methods. Park et al. proposed a hash-based algorithm that generates frequent itemsets efficiently. Experiments show that the dominant cost in finding frequent itemsets is generating the frequent 2-itemsets L2; Park et al. exploit this property by introducing hashing techniques to improve the generation of frequent 2-itemsets.
(3) Sampling-based methods. A detailed combinatorial analysis of the information obtained from a previous scan leads to an improved algorithm whose basic idea is: derive rules from a sample extracted from the database that are likely to hold over the whole database, then validate the results against the remainder of the database. The algorithm is quite simple and greatly reduces I/O cost, but a notable drawback is that the results may be inaccurate due to data skew: data on the same page is often highly correlated and may not represent the pattern distribution of the whole database, with the result that sampling 5% of the transaction data can cost nearly as much as a full database scan.
(4) Reducing the number of transactions. The basic principle for shrinking the transaction set for future scans is that a transaction that does not contain any frequent itemset of length k cannot contain any frequent itemset of length k+1. Such transactions can therefore be deleted, reducing the number of transactions scanned in the next pass. This is the basic idea of AprioriTid.
2) FP-growth algorithm:
even with these optimizations, the efficiency of the Apriori method can remain unsatisfactory. In 2000, Jiawei Han et al. proposed FP-growth, an algorithm that finds frequent patterns using a frequent pattern tree (Frequent Pattern Tree, abbreviated FP-tree). In the FP-growth algorithm, the transaction database is scanned twice, and the frequent items contained in each transaction are stored in the FP-tree in descending order of their support. Subsequent frequent-pattern discovery never rescans the transaction database but searches only the FP-tree: frequent patterns are generated directly by recursively calling the FP-growth procedure, so no candidate patterns are generated at any point. This algorithm overcomes the problems of the Apriori algorithm and is significantly better in execution efficiency.
The foregoing is merely illustrative of the present invention, and the scope of the present invention is not limited thereto, and any person skilled in the art can easily think about variations or substitutions within the scope of the present invention, and therefore, the scope of the present invention shall be defined by the scope of the appended claims.

Claims (6)

1. An intelligent preventive test operation management and control system is characterized in that: the system comprises a standard test database module, an annual production plan intelligent supervision module, a test report intelligent diagnosis analysis module and a test data intelligent analysis module, wherein the standard test database module is used for establishing a standard data structure model to form a data standard system and constructing a new standard test database; the annual production plan intelligent supervision module is used for carrying out intelligent analysis and accurate matching on the scheduling data of the production plan; the test report intelligent diagnosis analysis module is used for checking normalization of test report text content; the intelligent test data analysis module is used for carrying out mutual comparison and cluster analysis on test report data of test equipment, globally analyzing the problems of a plan and a test report, and researching test data commonality rules according to the places, units, equipment and test types;
The implementation method of the annual production plan intelligent supervision module comprises the following specific steps:
step 1: acquiring preventive test work plans and work ticket data from a production system: data related to a preventive test plan of the carding master device, work ticket information and defect data, and acquiring required data from a production system according to a carding result;
step 2: test plan management: distinguish equipment information from different professions and extract it from the equipment operation and maintenance periods of the maintenance and modification module in the production system, mainly checking whether the equipment has entered the system production plan, and conversely checking whether the equipment ledger is complete;
step 3: plan execution management: power failure application form association: for a production plan requiring power failure, associate the power failure application form and allow checking its details; work ticket and system pre-test plan inconsistency supervision: use the production plan to associate work tickets; through statistical analysis, if no work ticket is found, the plan is considered inconsistent with the system pre-test plan; test report and system pre-test plan inconsistency supervision: use the production plan to associate test reports; through statistical analysis, if no test report is found, the plan is considered inconsistent with the system pre-test plan; plan overrun: in the dimensions of different municipalities and substations, compare the planned start and end times of the production plan with the actual start and end times to judge whether, and by how much, the plans are overdue;
Step 4: equipment overdue reminder: equipment overdue: in the dimensions of different municipalities and substations, remind of overdue equipment according to the equipment test period, and classify the early-warning grades of overdue equipment according to equipment importance and condition via an equipment health degree and risk assessment algorithm; major/emergency defect display: display the number of major and emergency defects in the equipment dimension, with the details of each major/emergency defect viewable individually; latest patrol plan display: the latest patrol plan of the equipment can be displayed;
the standard test database construction method of the standard test database module comprises the following steps:
step 1: defect data analysis: analyzing and knowing the defect data characteristics of the equipment through defect data;
step 2: constructing an equipment defect standard library according to the equipment defect data characteristics of the step 1, and finishing the standardized storage of the defect data;
step 3: constructing a defect intelligent diagnosis model, and identifying the defect reasons and defect positions of the equipment through the defect intelligent diagnosis model to realize intelligent diagnosis of the equipment defects and division of the severity of the defects;
step 4: analyzing a defect diagnosis result, and recommending defect management measures;
Step 5: constructing an equipment risk intelligent evaluation model based on a result obtained by analyzing the defect diagnosis result, and identifying the influence degree of the defect on the equipment risk;
step 6: dividing risk grades according to the influence degree of equipment risks;
the assessment method of the equipment risk intelligent assessment model comprises the following steps:
(1) Risk factor analysis: obtain the equipment risk factors according to the division of the equipment's influencing factors: aging factors, defect factors, status factors, main transformer alarm factors, thermal aging factors and fusion factors;
(2) And (3) defect influence factor correlation analysis: performing correlation analysis by calculating correlation coefficients according to the equipment risk factors:
(3) Constructing a defect deduction rule base of equipment: 1) Establishing a defect severity deduction rule base, and giving a score T1 according to the defect severity deduction rule base; 2) Setting a defect number deduction rule, counting the number of typical, batch and repeated occurrence defects, and giving a score T2 according to a rule range; 3) Formulating an equipment importance rule, and giving a score T3 by utilizing the equipment importance deduction rule according to the equipment where the defect occurs; 4) Setting a defect level deduction rule, and giving a corresponding score T4 according to the defect level; 5) Setting a voltage class deduction rule, and giving a corresponding score T5 according to the voltage class of the defect generating equipment; 6) Formulating a device type deduction rule, and giving corresponding scores T6 according to the importance degrees of different device types; 7) And according to the final defect evaluation score, giving the risk grade of the equipment, wherein the risk grade of the equipment is as follows: normal, general, emergency, major four grades;
(4) Risk intelligent assessment: when the defect risk of the equipment is evaluated, the score index and the equipment risk factor are subjected to co-trend treatment, the data processing is completed and then can be used as input parameters of an entropy method, an intelligent equipment risk evaluation model based on the defect is constructed, the influence degree evaluation of the equipment defect on the equipment risk is completed, and an intelligent risk evaluation result is obtained;
the implementation method of the intelligent analysis module for the test data comprises the following specific steps:
step 1), determining a corresponding test report version of test equipment: taking test equipment as dimensions, finding out test reports corresponding to each test equipment, analyzing versions of the test reports, and finally determining that the test equipment shares a plurality of test versions;
step 2), test item determination in a test report: after the corresponding test report version of the test equipment is determined, analyzing specific test items in the test report according to each test report, and obtaining intersection of the test items through intelligent analysis;
step 3), determining test parameters in test items: obtaining intersection of test parameters in each test item through intelligent analysis according to the test items determined in the step 2);
Step 4), combining and configuring test parameters in test items: combining and configuring the test parameters according to the test parameters determined in the step 3), and performing mutual comparison and cluster analysis on the parameters of the combining and configuring;
step 5), analyzing the combined test parameters: according to the combined configuration parameters determined in the step 4), proceeding from two dimensions of a qualified test report and a disqualified test report, performing mutual comparison and cluster analysis on the configuration parameters through intelligent algorithms of regression analysis, clustering and association analysis, and visually displaying the mutual comparison data;
step 6), global analysis and display of test plans and test reports: globally analyzing the problems of a test plan and a test report, researching test data commonalities according to the places, units, equipment and test types, and visually displaying the commonalities;
step 7), on-line monitoring data analysis and display: and displaying the online monitoring data in the form of a display list or a trend chart by taking the equipment as a unit.
2. The intelligent preventive test operation management and control system according to claim 1, wherein: defect data analysis: the numbers of equipment defects in different years, the distribution of equipment defect types and the distribution of equipment defect manufacturers are analyzed respectively, and the counts by year, defect type and manufacturer are sorted to obtain the year with the most faults, the most frequent fault type and the manufacturer with the most faults.
3. A preventive test operation management system with intelligence as claimed in claim 1, wherein: the construction method of the equipment defect standard library in the step 2 comprises the following steps:
a) Collecting defect data, wherein the data sources of the defect data collection comprise historical defect reports, defect record data, equipment operation data, equipment test data and equipment on-line monitoring data, and obtaining field names and field contents of a defect record data table of a defect classification standard library by analyzing the data sources;
b) Cleaning and de-duplicating the defect data: handling duplicate defect records (two or more identical entries), missing defect data, garbled defect data, whitespace within defect data, full-width/half-width character conversion, and letter case in English defect data;
c) And (3) manually marking, namely manually marking text analysis on defect images, defect positions, defect reasons and treatment measures according to the historical defect report, and finally obtaining an equipment defect standard library.
4. A preventive test operation management system with intelligence according to claim 3, characterized in that: the defect record data contains fields: units, voltage levels, defect levels, places, equipment names, defect types, defect descriptions, major classes, manufacturers, factory years and months, equipment models, commissioning dates, defect cause types, defect causes, defect appearances, discovery time, defect parts and treatment measures;
The device operation data contains fields: voltage, three-phase unbalanced current, voltage class;
on-line monitoring data of equipment: dielectric loss, equivalent capacitance, reference voltage alarm, three-phase unbalanced current alarm, dielectric loss alarm, full current alarm, equivalent capacitance alarm, monitoring equipment communication state, monitoring equipment operation state, equipment self-checking abnormality, partial discharge and iron core current;
the device test data contains fields: infrared imaging temperature measurement, gas in a gas chamber, contact loop resistance, external insulation surface withstand voltage and gas decomposition product test value.
5. The intelligent preventive test operation management system as claimed in claim 1, wherein the method for constructing the intelligent defect diagnosis model in step 3 comprises: (1) defect diagnosis system: the equipment types, the defects corresponding to each equipment type, and the parts corresponding to each defect are summarized to form a defect diagnosis system table; (2) defect diagnosis model: a) equipment defect diagnosis data indexes are established from the defect data record table, including index names and index description contents; b) text preprocessing: the defect description content is segmented into words, and the power-domain segmentation result is obtained using a power-domain dictionary; c) distributed text representation: based on the principle that a word's semantics are characterized by its neighboring words, a large number of preprocessed power equipment defect records are used as a corpus to train a language model that represents each word as a word vector, each dimension of which encodes a semantic feature of the word learned by the model; d) convolutional neural network construction: intelligent equipment defect diagnosis mainly adopts a convolutional neural network algorithm; the processed defect index data serve as the input layer of the network, the defect text vectorized as word vectors in step c) is classified by the network's classifier, and the corresponding classification result is output; e) model training: the model input variables are the defect appearance, defect description, defect cause, equipment category, defect type and defect position fields, and the final equipment defect diagnosis model is obtained by training with the convolutional neural network algorithm.
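The word-segmentation step b) above can be sketched with a simple forward-maximum-matching segmenter driven by a domain dictionary. This is a minimal illustration, not the patent's method: it uses whitespace-tokenized English as a stand-in for Chinese text, and the dictionary and sample sentence are invented for the example.

```python
# Minimal sketch of dictionary-driven word segmentation (step b of claim 5).
# The power-domain dictionary and the sample sentence are illustrative assumptions.
POWER_DICT = {"circuit breaker", "gas chamber", "oil leak", "bushing"}

def segment(text: str, dictionary: set, max_len: int = 3) -> list:
    """Forward maximum matching over whitespace tokens: greedily merge up to
    max_len consecutive tokens whenever the merged phrase is in the dictionary."""
    tokens = text.lower().split()
    out, i = [], 0
    while i < len(tokens):
        for n in range(min(max_len, len(tokens) - i), 0, -1):
            cand = " ".join(tokens[i:i + n])
            if n == 1 or cand in dictionary:  # single tokens always pass through
                out.append(cand)
                i += n
                break
    return out

words = segment("Circuit breaker gas chamber SF6 pressure low", POWER_DICT)
print(words)  # ['circuit breaker', 'gas chamber', 'sf6', 'pressure', 'low']
```

In a real pipeline the segmented words would then be fed to a word-vector model (step c) and the resulting vectors to a convolutional text classifier (step d); a production system for Chinese text would use a proper segmenter with a custom power-domain lexicon rather than this toy matcher.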
6. The intelligent preventive test operation management system as claimed in claim 1, wherein the intelligent test report diagnosis and analysis module is implemented as follows: the test report is intelligently paired with the test management rules and specifications through strong-feature matching and extraction analysis; keyword extraction, hierarchical classification and precise reasoning are performed in combination with the vocabulary standardization, named entity recognition and standardized data dictionary methods of natural language processing, so as to examine the normativity of the test report text, judge whether a defect exists, and judge whether the examined values conform to the specified interval criteria.
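The interval-criterion check at the end of claim 6 can be pictured as a lookup against a standardized data dictionary that maps each test item to an allowed range. The sketch below is a minimal assumption-laden illustration: the item names and limit values are invented, not taken from any actual test regulation.

```python
# Minimal sketch of the interval-criterion check in claim 6.
# The rule dictionary (item -> allowed interval) and its limits are assumptions.
RULE_DICT = {
    "contact_loop_resistance_uohm": (0.0, 60.0),   # hypothetical limit
    "infrared_temperature_c": (-40.0, 80.0),       # hypothetical limit
}

def check_report(values: dict, rules: dict) -> dict:
    """Return a pass/fail verdict for every report value that has a rule."""
    verdicts = {}
    for item, value in values.items():
        if item in rules:
            lo, hi = rules[item]
            verdicts[item] = lo <= value <= hi
    return verdicts

report = {"contact_loop_resistance_uohm": 72.3, "infrared_temperature_c": 35.0}
print(check_report(report, RULE_DICT))
# {'contact_loop_resistance_uohm': False, 'infrared_temperature_c': True}
```

A failed verdict would then feed the module's defect judgment, alongside the NLP-based normativity checks on the report text.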
CN202110747608.0A 2021-07-02 2021-07-02 Intelligent preventive test operation management and control system Active CN113379313B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110747608.0A CN113379313B (en) 2021-07-02 2021-07-02 Intelligent preventive test operation management and control system

Publications (2)

Publication Number Publication Date
CN113379313A CN113379313A (en) 2021-09-10
CN113379313B true CN113379313B (en) 2023-06-20

Family

ID=77580745

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110747608.0A Active CN113379313B (en) 2021-07-02 2021-07-02 Intelligent preventive test operation management and control system

Country Status (1)

Country Link
CN (1) CN113379313B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113934172A (en) * 2021-10-20 2022-01-14 四川汉唐江电力有限公司 Intelligent information management system for preventive test of power equipment facing mobile terminal
CN113947377B (en) * 2021-10-22 2023-05-30 浙江正泰仪器仪表有限责任公司 Laboratory management system
CN114722973B (en) * 2022-06-07 2022-08-26 江苏华程工业制管股份有限公司 Defect detection method and system for steel pipe heat treatment

Citations (11)

Publication number Priority date Publication date Assignee Title
CN101859409A (en) * 2010-05-25 2010-10-13 Electric Power Research Institute of Guangxi Power Grid Co., Ltd. Power transmission and transformation equipment condition maintenance system based on risk assessment
CN106199305A (en) * 2016-07-01 2016-12-07 Taiyuan University of Technology Insulation health state evaluation method for dry-type transformers in underground coal mine power systems
CN107180267A (en) * 2017-06-01 2017-09-19 State Grid Corporation of China A familial defect diagnosis method for a secondary operation management system
CN108037133A (en) * 2017-12-27 2018-05-15 Wuhan Zhiqin Chuangyi Information Technology Co., Ltd. An intelligent power equipment defect identification method and system based on UAV inspection images
CN108051711A (en) * 2017-12-05 2018-05-18 Maintenance Branch of State Grid Zhejiang Electric Power Co. Solid insulation surface defect diagnosis method based on state feature mapping
CN108767851A (en) * 2018-06-14 2018-11-06 Shenzhen Power Supply Bureau Co., Ltd. A substation operation and maintenance intelligent operation command method and system
CN109490713A (en) * 2018-12-13 2019-03-19 China Electric Power Research Institute Co., Ltd. A method and system for mobile inspection and interactive diagnosis of cable lines
CN110058103A (en) * 2019-05-23 2019-07-26 Guodian Nanjing Automation Co., Ltd. Intelligent transformer fault diagnosis system based on the VxWorks platform
CN111508603A (en) * 2019-11-26 2020-08-07 Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences Machine-learning-based birth defect prediction and risk assessment method, system and electronic equipment
CN112233193A (en) * 2020-09-30 2021-01-15 Shanghai Hengnengtai Enterprise Management Co., Ltd. Power transformation equipment fault diagnosis method based on multispectral image processing
CN113162232A (en) * 2021-04-09 2021-07-23 Beijing Zhimeng Xintong Technology Co., Ltd. Power transmission line equipment risk assessment and defense decision system and method

Family Cites Families (13)

Publication number Priority date Publication date Assignee Title
FI117586B (en) * 2002-08-02 2006-11-30 Nokia Corp Method for arranging a SIM function in a digital wireless terminal device, as well as the corresponding terminal device and server
ITPD20120291A1 (en) * 2012-10-05 2014-04-06 Manfrotto Lino & C Spa Tripod support trolley with removable column, particularly for photographic equipment
FR3027208B1 (en) * 2014-10-15 2016-12-23 Medicrea Int Vertebral osteosynthesis equipment
CN104933477A (en) * 2015-06-05 2015-09-23 Wuhan NARI Co., Ltd. of State Grid Electric Power Research Institute Method for optimizing maintenance strategy using risk assessment of power transmission and transformation equipment
CN105389302B (en) * 2015-10-19 2017-11-28 Grid Planning Research Center of Guangdong Power Grid Co., Ltd. A structural information recognition method for power grid planning and design evaluation indexes
CN107491381A (en) * 2017-07-04 2017-12-19 Electric Power Research Institute of Guangxi Power Grid Co., Ltd. An equipment condition monitoring data quality evaluation system
CN108920609A (en) * 2018-06-28 2018-11-30 China Southern Power Grid Science Research Institute Co., Ltd. Power test data mining method based on multidimensional analysis
CN110837866A (en) * 2019-11-08 2020-02-25 Electric Power Research Institute of State Grid Xinjiang Electric Power Co., Ltd. XGBoost-based defect degree evaluation method for power secondary equipment
CN111797146A (en) * 2020-07-20 2020-10-20 Electric Power Research Institute of Guizhou Power Grid Co., Ltd. Big-data-based equipment defect correlation analysis method
CN112070720A (en) * 2020-08-11 2020-12-11 Baoding Power Supply Branch of State Grid Hebei Electric Power Co., Ltd. Substation equipment defect identification method based on a deep learning model
CN112104083B (en) * 2020-09-17 2022-05-03 Guizhou Power Grid Co., Ltd. Power grid production command system based on situation awareness
CN112528041B (en) * 2020-12-17 2023-05-30 Guizhou Power Grid Co., Ltd. Scheduling term specification verification method based on knowledge graph
CN112910089A (en) * 2021-01-25 2021-06-04 Qingdao Power Supply Company of State Grid Shandong Electric Power Co. Substation secondary equipment fault logic visualization method and system

Non-Patent Citations (1)

Title
Wang Tingyin; Lin Minggui; Chen Da; Wu Yunping. Emergency communication method for nuclear radiation monitoring based on BeiDou RDSS. Computer Systems & Applications. 2019, (12), pp. 252-256. *

Also Published As

Publication number Publication date
CN113379313A (en) 2021-09-10

Similar Documents

Publication Publication Date Title
Yan et al. Data mining in the construction industry: Present status, opportunities, and future trends
McArthur et al. Machine learning and BIM visualization for maintenance issue classification and enhanced data collection
CN113379313B (en) Intelligent preventive test operation management and control system
Wu et al. Effective crude oil price forecasting using new text-based and big-data-driven model
Grawe et al. Automated patent classification using word embedding
Irudeen et al. Big data solution for Sri Lankan development: A case study from travel and tourism
Gürbüz et al. Data mining and preprocessing application on component reports of an airline company in Turkey
Akerkar Advanced data analytics for business
Zhou et al. Corporate communication network and stock price movements: insights from data mining
Jonathan et al. Sentiment analysis of customer reviews in zomato bangalore restaurants using random forest classifier
Giri et al. Exploitation of social network data for forecasting garment sales
Zhang et al. Analysis and research on library user behavior based on apriori algorithm
Rožanec et al. Semantic XAI for contextualized demand forecasting explanations
ARMEL et al. Fraud detection using apache spark
Tao et al. Can online consumer reviews signal restaurant closure: A deep learning-based time-series analysis
CN117216150A (en) Data mining system based on data warehouse
Nikitin et al. Human-in-the-loop large-scale predictive maintenance of workstations
Gunawan et al. C4.5, K-Nearest Neighbor, Naïve Bayes, and Random Forest Algorithms Comparison to Predict Students' On-Time Graduation
Koh Design change prediction based on social media sentiment analysis
CN113377746B (en) Test report database construction and intelligent diagnosis analysis system
Wang et al. Improving failures prediction by exploring weighted shape‐based time‐series clustering
CN113378978B (en) Test data intelligent analysis method based on data mining
Mumtaz et al. Frequency-Based vs. Knowledge-Based Similarity Measures for Categorical Data.
Kavitha Assessing teacher’s performance evaluation and prediction model using cloud computing over multi-dimensional dataset
Yang et al. Evaluation and assessment of machine learning based user story grouping: A framework and empirical studies

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant