CN109636185A - Energy-consumption monitoring method based on neural network transparency - Google Patents

Energy-consumption monitoring method based on neural network transparency

Info

Publication number
CN109636185A
CN109636185A (application CN201811523525.8A)
Authority
CN
China
Prior art keywords
neural network
connection weight
paraphrase
weight
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811523525.8A
Other languages
Chinese (zh)
Inventor
黄开林
李太福
段棠少
李开术
尹蝶
姚立忠
黄柏凯
许霞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Yongneng Oil And Gas Technology Development Co Ltd
Chongqing University of Science and Technology
Original Assignee
Sichuan Yongneng Oil And Gas Technology Development Co Ltd
Chongqing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Yongneng Oil And Gas Technology Development Co Ltd and Chongqing University of Science and Technology
Priority to CN201811523525.8A
Publication of CN109636185A
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0639Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/082Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06Energy or water supply
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/80Management or planning
    • Y02P90/82Energy audits or management systems therefor

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Economics (AREA)
  • Strategic Management (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Development Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Marketing (AREA)
  • General Health & Medical Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • Educational Administration (AREA)
  • General Business, Economics & Management (AREA)
  • Biophysics (AREA)
  • Operations Research (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Game Theory and Decision Science (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Quality & Reliability (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Public Health (AREA)
  • Water Supply & Treatment (AREA)
  • Primary Health Care (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The present invention provides an energy-consumption monitoring method based on neural network transparency, comprising the following steps: generating a neural network model from production data; generating a neural network interpretation diagram; obtaining connection weight index data; and performing a significance test on the connection weights according to the connection weight index data and deleting insignificant connection weights from the neural network interpretation diagram. By combining three methods, the neural network interpretation diagram, the connection weight method and the improved randomization test, the present invention provides a good approach to making neural network models of complex industrial processes transparent, and provides an effective guidance basis for the energy-consumption analysis of enterprise production data.

Description

Energy-consumption monitoring method based on neural network transparency
Technical field
The present invention relates to energy-consumption analysis methods, and in particular to an energy-consumption monitoring method based on neural network transparency.
Background art
The modeling and optimization of complex industrial processes are of great significance for improving product quality, reducing production cost, and saving energy and reducing emissions. With the development of machine learning and data mining theories and methods, the artificial neural network, as one of the mainstream approaches, has shown unique advantages on many problems that conventional methods cannot handle. However, using neural networks to build industrial process models has a major defect, namely their "black box" character: the resulting model cannot convert the corresponding physical system into a meaningful knowledge representation, the parameters in the model cannot intuitively explain the real system, and it is difficult to observe the sensitivity of the model to its input variables. As the dimension of the input variables and the number of hidden-layer nodes increase, the structural complexity of the model rises sharply, and at that point the neural network interpretation diagram offers almost no interpretability for the model.
For example, the application numbered 2016211050935.6, an analysis method for the running energy consumption of a monkey car based on a neural network model, uses a BP neural network model to allocate each energy consumption that cannot be directly calculated to individual physical processes, without discussing whether this allocation is accurate. Because of the opacity of the BP neural network, the parameters in the model cannot intuitively explain the system.
Summary of the invention
The purpose of the present invention is to provide an energy-consumption monitoring method based on neural network transparency that realizes the transparency of neural network models of complex industrial processes and provides guidance for the energy-consumption analysis of enterprise production data. The method includes the following steps:
generating a neural network model from production data;
generating a neural network interpretation diagram;
obtaining connection weight index data;
performing a significance test on the connection weights according to the connection weight index data, and deleting insignificant connection weights from the neural network interpretation diagram.
Further,
The production data are acquired from measuring instruments at the industrial site and/or from the enterprise's DCS control system.
Further,
The neural network interpretation diagram includes connection weight lines. The order of magnitude of a connection weight is represented by the thickness of its line: a thick line represents a larger connection weight than a thin line.
The type of line represents the state of the connection weight: a solid line indicates an excitatory connection with a positive effect, and a dashed line indicates an inhibitory connection with a negative effect.
Further,
The connection weight index data include the input-to-hidden and hidden-to-output connection weight matrix data, the input-hidden-output connection weight contribution data, the comprehensive connection weight contribution data and the relative contribution rate data.
Further,
Performing the significance test on the connection weights according to the statistical indices and deleting the insignificant connection weights includes the following steps:
S1: construct multiple neural network models from the standardized production data samples, each neural network model being trained with small random initial weights and a training method with a momentum term and a learning rate;
S2: select the neural network model with the best prediction performance among the multiple neural network models, record the initial weights and final weights of that model, and obtain the connection weight index data;
obtaining the connection weight index data includes the following steps:
S21: compute the input-hidden-output connection weight contribution C;
S22: compute the comprehensive connection weight contribution OI of each variable;
S23: compute the relative contribution rate RI of each variable;
S3: randomly change the order of the training sample output set;
S4: retrain the neural network model with the reordered samples and the initial weights recorded in S2, and record the final weights of the model;
S5: repeat S3 and S4 several times, recording the number of repetitions as COUNT, and obtain the randomized C, OI and RI from the final weights recorded in S4;
S6: compute the significance level P of the input-hidden-output connection weight contribution C, the comprehensive connection weight contribution OI and the relative contribution rate RI, which includes the following steps:
S61: if the standard value is greater than 0, P = (N+1)/(COUNT+1), where N is the number of randomized values greater than or equal to the standard value;
S62: if the standard value is less than 0, P = (M+1)/(COUNT+1), where M is the number of randomized values less than or equal to the standard value;
S7: if the P value of a connection weight is less than a preset value, retain that connection weight line in the neural network interpretation diagram; otherwise delete that connection weight line; and generate the pruned neural network interpretation diagram.
The invention has the following advantages:
(1) By using the neural network interpretation diagram, the connection weights are given an "interpretable" quality: the connection weights are quantified and displayed through the size of the connecting lines, realizing visualization of the industrial process model.
(2) By using the connection weight method, a quantitative analysis of the importance of the decision parameters of the industrial process neural network model to the target variable is realized.
(3) By using the improved randomization test, the neural network model of the complex industrial process is pruned, redundant information is eliminated, and the degree of transparency of the neural network model is improved.
(4) The present invention combines three methods, namely the neural network interpretation diagram, the connection weight method and the improved randomization test, providing a good approach to making neural network models of complex industrial processes transparent and an effective guidance basis for the energy-consumption analysis of enterprise production data.
Brief description of the drawings
Fig. 1 is a neural network interpretation diagram according to an embodiment of the invention.
Fig. 2 is the neural network interpretation diagram before pruning according to an embodiment of the invention.
Fig. 3 is the neural network interpretation diagram after pruning according to an embodiment of the invention.
Fig. 4 is a flowchart of the energy-consumption monitoring method based on neural network transparency according to an embodiment of the invention.
Specific embodiment
One idea of the present invention for solving the problems in the background art is: use the neural network interpretation diagram to give the connection weights an "interpretable" quality and realize visualization of the industrial process model; use the connection weight method to realize a quantitative analysis of the importance of the decision parameters of the industrial process neural network model to the target variable; and use the improved randomization test to prune the neural network model of the complex industrial process, eliminate redundant information and improve the degree of transparency of the neural network model. This ultimately provides a good approach to making neural network models of complex industrial processes transparent and an effective guidance basis for the energy-consumption analysis of enterprise production data.
As shown in Fig. 4, the energy-consumption monitoring method based on neural network transparency of the present invention includes the following steps:
generating a neural network model from production data;
generating a neural network interpretation diagram;
obtaining connection weight index data;
performing a significance test on the connection weights according to the connection weight index data, and deleting insignificant connection weights from the neural network interpretation diagram.
Through data acquisition, data storage, model building and energy-consumption analysis, the present invention combines three methods, the neural network interpretation diagram, the connection weight method and the improved randomization test, to make the neural network model used for energy-consumption analysis of a complex industrial process transparent. First, the model is visualized with the neural network interpretation diagram; then the contribution rates of the decision parameters are analyzed quantitatively with the connection weight method; finally, the improved randomization test is applied to the connection weights of the model, the comprehensive contributions of the decision parameters and the relative contribution rates for a significance test, and the model is pruned accordingly. Compared with the prior art, this method obtains the internal information of the process variables, greatly improves the "understandability" of the model, and can provide an effective guidance basis for the energy-consumption analysis of enterprise production data.
The production data are acquired from measuring instruments at the industrial site and/or from the enterprise's DCS control system.
The neural network interpretation diagram includes connection weight lines. The order of magnitude of a connection weight is represented by the thickness of its line: a thick line represents a larger connection weight than a thin line.
The type of line represents the state of the connection weight: a solid line indicates an excitatory connection with a positive effect, and a dashed line indicates an inhibitory connection with a negative effect.
By tracking the magnitude and state of the connection weights, the influence of a single variable, or of several variables, on the target variable can be identified.
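As an illustration only, the line-thickness and line-style encoding described above can be rendered with a short Python/matplotlib sketch; the 3-2-1 network, the weight values and the helper name plot_interpretation_diagram below are hypothetical and serve only to show the idea.

import numpy as np
import matplotlib.pyplot as plt

def plot_interpretation_diagram(W_ih, W_ho, input_names):
    # W_ih: hidden x inputs weights, W_ho: hidden-to-output weights.
    # Line width encodes |weight|; solid = positive (excitatory),
    # dashed = negative (inhibitory), as described in the text.
    n_hidden, n_inputs = W_ih.shape
    _, ax = plt.subplots(figsize=(6, 4))
    y_in = np.linspace(0.0, 1.0, n_inputs)
    y_hid = np.linspace(0.2, 0.8, n_hidden)
    y_out = 0.5
    max_w = max(np.abs(W_ih).max(), np.abs(W_ho).max())
    for i in range(n_hidden):
        for j in range(n_inputs):
            w = W_ih[i, j]
            ax.plot([0, 1], [y_in[j], y_hid[i]], color='k',
                    linewidth=0.3 + 3.0 * abs(w) / max_w,
                    linestyle='-' if w > 0 else '--')
        w = W_ho[i]
        ax.plot([1, 2], [y_hid[i], y_out], color='k',
                linewidth=0.3 + 3.0 * abs(w) / max_w,
                linestyle='-' if w > 0 else '--')
    ax.scatter([0] * n_inputs, y_in, s=300, zorder=3)
    ax.scatter([1] * n_hidden, y_hid, s=300, zorder=3)
    ax.scatter([2], [y_out], s=300, zorder=3)
    for j, name in enumerate(input_names):
        ax.text(-0.08, y_in[j], name, ha='right', va='center')
    ax.text(2.08, y_out, 'Y', ha='left', va='center')
    ax.axis('off')
    return ax

# Hypothetical 3-2-1 weights (not taken from the patent's tables):
W_ih = np.array([[0.8147, -0.30, 0.50],    # inputs -> hidden neuron A
                 [0.20,    0.90, -0.40]])  # inputs -> hidden neuron B
W_ho = np.array([-0.6557, 0.70])           # hidden A, B -> output Y
plot_interpretation_diagram(W_ih, W_ho, ['X1', 'X2', 'X3'])
plt.show()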
The steps for obtaining the connection weight index data in one embodiment of the invention are illustrated below:
(1) Record the input-to-hidden and hidden-to-output connection weight matrices;
Table 1. Connection weight matrix
(2) Compute the input-hidden-output connection weight contribution C.
The input-hidden-output connection weight contribution characterizes the size of each variable's contribution to the output through a hidden neuron. Its value is the product of the input-to-hidden connection weight and the hidden-to-output connection weight:
C_ij = W_ij × W_Yi, i = A, B; j = 1, 2, 3;  (1)
Example: C_A1 = W_A1 × W_YA = 0.8147 × (−0.6557) = −0.5342, which shows that the contribution of decision variable X1 to the output Y through hidden neuron A is −0.5342. The input-hidden-output contributions are listed in Table 2.
Table 2. Input-hidden-output contributions
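A minimal numerical sketch of this step is given below; apart from W_A1 = 0.8147 and W_YA = −0.6557, which are taken from the example above, the weight values are hypothetical.

import numpy as np

def connection_contributions(W_ih, W_ho):
    # Equation (1): C[i, j] = W_ih[i, j] * W_ho[i], i.e. the product of the
    # input-to-hidden weight and the corresponding hidden-to-output weight.
    return W_ih * W_ho[:, None]

# Hypothetical 2-hidden-neuron (A, B), 3-input weight matrices:
W_ih = np.array([[0.8147, -0.30, 0.50],    # row for hidden neuron A
                 [0.20,    0.90, -0.40]])  # row for hidden neuron B
W_ho = np.array([-0.6557, 0.70])           # weights from A, B to output Y

C = connection_contributions(W_ih, W_ho)
print(round(C[0, 0], 4))   # -0.5342, matching C_A1 in the example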
(3) Comprehensive connection weight contribution OI.
OI characterizes each input variable's contribution to the output variable: a '+' sign indicates a positive excitatory effect, a '−' sign indicates a negative inhibitory effect, and a larger absolute value indicates a larger contribution to the output. It is obtained by summing the contributions of an input variable over all hidden neurons:
OI_j = Σ_i C_ij
Example: OI_1 = −0.6001, which shows that the comprehensive contribution of X1 to Y is −0.6001.
(4) Relative contribution rate RI.
RI indicates the overall significance of each input variable for the output variable, expressed as a percentage. If it is greater than 0, the variable has a positive effect on the output variable; if it is less than 0, the variable has a negative effect on the output; if it equals 0, the variable has no influence on the output variable. Its calculation formula is:
RI_j = OI_j / Σ_k |OI_k| × 100%
The computed comprehensive connection weight contributions OI and relative contribution rates RI are listed in Table 3.
Table 3. Comprehensive contribution OI and relative contribution rate RI
From Table 3 it can be seen that X1 and X3 have a negative inhibitory effect on the output Y, with relative contribution rates of −60.43% and −29.24% respectively, while X2 has a positive excitatory effect on Y, with a relative contribution rate of 10.33%. The connection weight method therefore compensates for the shortcoming of the neural network interpretation diagram and realizes a quantitative analysis of the contribution rates of the input variables to the target variable.
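Continuing the hypothetical weights from the previous sketch, the comprehensive contribution OI and the relative contribution rate RI can be computed as follows; the printed numbers are illustrative only and do not reproduce Table 3.

import numpy as np

def overall_contribution(C):
    # OI: sum each input's contributions over all hidden neurons.
    return C.sum(axis=0)

def relative_contribution_rate(OI):
    # RI: signed percentage of the total absolute comprehensive contribution.
    return OI / np.abs(OI).sum() * 100.0

W_ih = np.array([[0.8147, -0.30, 0.50],
                 [0.20,    0.90, -0.40]])
W_ho = np.array([-0.6557, 0.70])
C = W_ih * W_ho[:, None]
OI = overall_contribution(C)
RI = relative_contribution_rate(OI)
for j, (oi, ri) in enumerate(zip(OI, RI), start=1):
    print(f"X{j}: OI = {oi:+.4f}, RI = {ri:+.2f}%")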
The connection weight index data include the input-to-hidden and hidden-to-output connection weight matrix data, the input-hidden-output connection weight contribution data, the comprehensive connection weight contribution data and the relative contribution rate data.
Performing the significance test on the connection weights according to the statistical indices and deleting the insignificant connection weights includes the following steps:
S1: construct multiple neural network models from the standardized production data samples, each neural network model being trained with small random initial weights and a training method with a momentum term and a learning rate.
S2: select the neural network model with the best prediction performance among the multiple neural network models, record the initial weights and final weights of that model, and obtain the connection weight index data;
obtaining the connection weight index data includes the following steps:
S21: compute the input-hidden-output connection weight contribution C;
S22: compute the comprehensive connection weight contribution OI of each variable;
S23: compute the relative contribution rate RI of each variable;
S3: randomly change the order of the training sample output set;
S4: retrain the neural network model with the reordered samples and the initial weights recorded in S2, and record the final weights of the model.
S5: repeat S3 and S4 several times, recording the number of repetitions as COUNT, and obtain the randomized C, OI and RI from the final weights recorded in S4.
In an embodiment of the present invention, the number of repetitions is 999, so COUNT is 999.
S6: compute the significance level P of the input-hidden-output connection weight contribution C, the comprehensive connection weight contribution OI and the relative contribution rate RI, which includes the following steps:
S61: if the standard value is greater than 0, P = (N+1)/(COUNT+1), where N is the number of randomized values greater than or equal to the standard value;
S62: if the standard value is less than 0, P = (M+1)/(COUNT+1), where M is the number of randomized values less than or equal to the standard value.
The standard value here refers to the value of OI or RI; for a given input, the signs of the two are consistent, i.e. if the OI of a variable is positive, its RI is also positive. A '+' sign indicates a positive excitatory effect, a '−' sign indicates a negative inhibitory effect, and a larger absolute value indicates a larger contribution to the output.
S7: if the P value of a connection weight is less than a preset value, retain that connection weight line in the neural network interpretation diagram; otherwise delete that connection weight line; and generate the pruned neural network interpretation diagram.
In an embodiment of the present invention, the preset value is 0.05.
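A minimal sketch of the significance computation in steps S61, S62 and S7 is given below; it assumes the randomized contribution values have already been collected by repeating steps S3 and S4, and the array shapes, synthetic values and function names are illustrative assumptions rather than the patent's implementation.

import numpy as np

def randomization_p_value(standard_value, randomized_values):
    # Steps S61/S62: if the standard value is positive, count randomized
    # values greater than or equal to it; if negative, count randomized
    # values less than or equal to it.
    randomized_values = np.asarray(randomized_values)
    count = randomized_values.size            # COUNT repetitions
    if standard_value > 0:
        n = np.sum(randomized_values >= standard_value)
    else:
        n = np.sum(randomized_values <= standard_value)
    return (n + 1) / (count + 1)

def prune_connections(C_standard, C_randomized, preset=0.05):
    # Step S7: keep a connection line only if its P value is below the
    # preset value. C_randomized has shape (COUNT, n_hidden, n_inputs).
    p = np.empty_like(C_standard, dtype=float)
    n_hidden, n_inputs = C_standard.shape
    for i in range(n_hidden):
        for j in range(n_inputs):
            p[i, j] = randomization_p_value(C_standard[i, j],
                                            C_randomized[:, i, j])
    return p, p < preset

# Synthetic demonstration with COUNT = 999, as in the embodiment; the
# contribution values are made up for illustration.
rng = np.random.default_rng(0)
C_standard = np.array([[-0.5342, 0.10, 0.30],
                       [-0.07,   0.02, -0.25]])
C_randomized = rng.normal(scale=0.2, size=(999, 2, 3))
p_values, keep = prune_connections(C_standard, C_randomized)
print(p_values)
print(keep)   # False marks connection lines deleted from the diagram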
The present invention is described below by means of a specific embodiment.
In this embodiment, for an enterprise's hydrogen cyanide production process, the complex industrial system is modeled with a neural network. The inputs are nine decision parameters, including the compensated temperature of ammonia (°C), the compensated flow rate of ammonia (Nm³·h⁻¹), the natural gas/ammonia volume ratio, the air/ammonia volume ratio, the compensated pressure of ammonia (MPa), the compensated pressure of natural gas (MPa) and the outlet temperature of the large mixer (°C); the corresponding variables are TN, FN, CN, AN, PN, PC, PA, PP, TD. The output is the hydrogen cyanide conversion rate η(HCN). The decision parameters, the HCN conversion rate η(HCN) and the sample data of the production process are listed in Table 4. The sample data are divided into a training set and a test set, and after repeated training the final topology of the network is determined to be 9-7-1, as shown in Fig. 2.
Table 4. Process variables and data sets of HCN production
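The embodiment does not reproduce training code; purely as a sketch, a 9-7-1 network with a momentum term and learning rate can be fitted as follows, here using scikit-learn's MLPRegressor as a stand-in for the BP network and random synthetic data in place of the Table 4 samples, which are not reproduced here.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Hypothetical stand-in for the Table 4 data: 9 decision parameters
# (TN, FN, CN, AN, PN, PC, PA, PP, TD) and the HCN conversion rate.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 9))
y = rng.normal(size=200)

# Standardize the samples, then train a 9-7-1 BP network using stochastic
# gradient descent with a momentum term and a learning rate.
X_std = StandardScaler().fit_transform(X)
model = MLPRegressor(hidden_layer_sizes=(7,), activation='logistic',
                     solver='sgd', learning_rate_init=0.05, momentum=0.9,
                     max_iter=5000, random_state=1)
model.fit(X_std, y)

# Weight matrices later used for the interpretation diagram and the
# connection weight indices C, OI and RI:
W_ih = model.coefs_[0].T          # shape (7, 9): hidden x inputs
W_ho = model.coefs_[1].ravel()    # shape (7,): hidden to output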
However, with the prior-art model, no specific physical information can be obtained from the HCN production process to explain this chemical system, nor is there any way to interpret the relationships between the nine decision parameters and the hydrogen cyanide conversion rate. Therefore, the connection weight method is used to quantitatively calculate the contribution rates of the decision parameters of the model, and the improved randomization test is used to prune the interpretation diagram of the hydrogen cyanide conversion rate η(HCN) neural network, further improving the degree of transparency of the η(HCN) neural network model.
Applying the connection weight method to the hydrogen cyanide conversion rate η(HCN) neural network model yields the comprehensive contributions and relative contribution rates of the nine input decision parameters to the hydrogen cyanide conversion rate η(HCN), as shown in Table 5.
Table 5. Comprehensive contribution OI and relative contribution rate RI of the decision variables
By applying the randomization test to the hydrogen cyanide conversion rate η(HCN) neural network interpretation diagram, the randomization P values of the input-hidden-output connection weights are obtained, as shown in Table 6 (α = 0.05, i.e. the preset value is 0.05).
Table 6. P values of the randomization test (α = 0.05)
The insignificant connection weights in the hydrogen cyanide conversion rate η(HCN) model are removed according to the P values in Table 6, yielding the new η(HCN) neural network interpretation diagram shown in Fig. 3. When α = 0.1 (i.e. the preset value is 0.1), it is found that although the model removes some of the insignificant connection weights, a satisfactory neural network interpretation diagram is still not obtained. The neural network interpretation diagram at α = 0.05 (i.e. the preset value is 0.05), compared with Fig. 2, is more concise and has a higher degree of transparency, making it easier to explain the relationships between the decision parameters and between the decision parameters and the response variable. Compared with the prior art, the present invention obtains the internal information of the process variables, greatly improves the "understandability" of the model, and can provide an effective guidance basis for the energy-consumption analysis of enterprise production data.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention; they should all be covered within the scope of the claims and the description of the present invention.

Claims (5)

1. An energy-consumption monitoring method based on neural network transparency, characterized by comprising the following steps:
generating a neural network model from production data;
generating a neural network interpretation diagram;
obtaining connection weight index data;
performing a significance test on the connection weights according to the connection weight index data, and deleting insignificant connection weights from the neural network interpretation diagram.
2. The energy-consumption monitoring method based on neural network transparency according to claim 1, characterized in that the production data are acquired from measuring instruments at the industrial site and/or from the enterprise's DCS control system.
3. The energy-consumption monitoring method based on neural network transparency according to claim 1, characterized in that the neural network interpretation diagram includes connection weight lines; the order of magnitude of a connection weight is represented by the thickness of its line, a thick line representing a larger connection weight than a thin line;
the type of line represents the state of the connection weight, a solid line indicating an excitatory connection with a positive effect and a dashed line indicating an inhibitory connection with a negative effect.
4. The energy-consumption monitoring method based on neural network transparency according to claim 1, characterized in that the connection weight index data include the input-to-hidden and hidden-to-output connection weight matrix data, the input-hidden-output connection weight contribution data, the comprehensive connection weight contribution data and the relative contribution rate data.
5. The energy-consumption monitoring method based on neural network transparency according to claim 1, characterized in that performing the significance test on the connection weights according to the connection weight index data and deleting the insignificant connection weights from the neural network interpretation diagram comprises the following steps:
S1: constructing multiple neural network models from the standardized production data samples, each neural network model being trained with small random initial weights and a training method with a momentum term and a learning rate;
S2: selecting the neural network model with the best prediction performance among the multiple neural network models, recording the initial weights and final weights of that model, and obtaining the connection weight index data;
wherein obtaining the connection weight index data comprises the following steps:
S21: computing the input-hidden-output connection weight contribution C;
S22: computing the comprehensive connection weight contribution OI of each variable;
S23: computing the relative contribution rate RI of each variable;
S3: randomly changing the order of the training sample output set;
S4: retraining the neural network model with the reordered samples and the initial weights recorded in S2, and recording the final weights of the model;
S5: repeating S3 and S4 several times, recording the number of repetitions as COUNT, and obtaining the randomized C, OI and RI from the final weights recorded in S4;
S6: computing the significance level P of the input-hidden-output connection weight contribution C, the comprehensive connection weight contribution OI and the relative contribution rate RI, comprising the following steps:
S61: if the standard value is greater than 0, P = (N+1)/(COUNT+1), where N is the number of randomized values greater than or equal to the standard value;
S62: if the standard value is less than 0, P = (M+1)/(COUNT+1), where M is the number of randomized values less than or equal to the standard value;
S7: if the P value of a connection weight is less than a preset value, retaining that connection weight line in the neural network interpretation diagram, otherwise deleting that connection weight line, and generating the pruned neural network interpretation diagram.
CN201811523525.8A 2018-12-13 2018-12-13 Energy-consumption monitoring method based on neural network transparency Pending CN109636185A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811523525.8A CN109636185A (en) 2018-12-13 2018-12-13 Energy-consumption monitoring method based on neural network transparency

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811523525.8A CN109636185A (en) 2018-12-13 2018-12-13 Energy-consumption monitoring method based on neural network transparency

Publications (1)

Publication Number Publication Date
CN109636185A true CN109636185A (en) 2019-04-16

Family

ID=66073488

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811523525.8A Pending CN109636185A (en) Energy-consumption monitoring method based on neural network transparency

Country Status (1)

Country Link
CN (1) CN109636185A (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103745279A (en) * 2014-01-24 2014-04-23 广东工业大学 Method and device for monitoring energy consumption abnormity
US20180349256A1 (en) * 2017-06-01 2018-12-06 Royal Bank Of Canada System and method for test generation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Yao Lizhong et al.: "Transparentization of neural network models and input variable reduction", Computer Science *
Li Taifu et al.: "Transparentization of neural network models of complex chemical processes", Control Engineering of China *

Similar Documents

Publication Publication Date Title
CN112684379A (en) Transformer fault diagnosis system and method based on digital twinning
CN111199270B (en) Regional wave height forecasting method and terminal based on deep learning
CN110096810B (en) Industrial process soft measurement method based on layer-by-layer data expansion deep learning
CN109613898A (en) A kind of enterprise's creation data monitoring method based on industrial Internet of Things
CN109556863B (en) MSPAO-VMD-based large turntable bearing weak vibration signal acquisition and processing method
CN106599417A (en) Method for identifying urban power grid feeder load based on artificial neural network
CN107993012A (en) A kind of adaptive electric system on-line transient stability appraisal procedure of time
Khosrowshahi Simulation of expenditure patterns of construction projects
CN107545307A (en) Predicting model for dissolved gas in transformer oil method and system based on depth belief network
CN112163371A (en) Transformer bushing state evaluation method
CN112307677A (en) Power grid oscillation mode evaluation and safety active early warning method based on deep learning
Ibrahim et al. Selection criteria for oil transformer measurements to calculate the health index
CN111695452A (en) Parallel reactor internal aging degree evaluation method based on RBF neural network
CN114862035B (en) Combined bay water temperature prediction method based on transfer learning
CN115438726A (en) Device life and fault type prediction method and system based on digital twin technology
CN104834975A (en) Power network load factor prediction method based on intelligent algorithm optimization combination
Yang et al. A multi-feature weighting based K-means algorithm for MOOC learner classification
Niu et al. Application of AHP and EIE in reliability analysis of complex production lines systems
CN114595883A (en) Oil-immersed transformer residual life personalized dynamic prediction method based on meta-learning
CN109636185A (en) Energy-consumption monitoring method based on neural network transparency
CN110110784B (en) Transformer fault identification method based on transformer related operation data
CN112733340A (en) Well selection method and equipment for modifying candidate well based on data-driven reservoir
Thorve et al. Fidelity and diversity metrics for validating hierarchical synthetic data: Application to residential energy demand
CN113570165B (en) Intelligent prediction method for permeability of coal reservoir based on particle swarm optimization
CN114154686A (en) Dam deformation prediction method based on ensemble learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20190416