CN109636185A - An energy-consumption monitoring method based on neural network transparency - Google Patents
- Publication number
- CN109636185A CN109636185A CN201811523525.8A CN201811523525A CN109636185A CN 109636185 A CN109636185 A CN 109636185A CN 201811523525 A CN201811523525 A CN 201811523525A CN 109636185 A CN109636185 A CN 109636185A
- Authority
- CN
- China
- Prior art keywords
- neural network
- connection weight
- paraphrase
- weight
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0639—Performance analysis of employees; Performance analysis of enterprise or organisation operations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/082—Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/06—Energy or water supply
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/80—Management or planning
- Y02P90/82—Energy audits or management systems therefor
Abstract
The present invention provides an energy-consumption monitoring method based on neural network transparency, comprising the following steps: generating a neural network model from production data; generating a neural network interpretation diagram; obtaining connection-weight index data; and, according to the connection-weight index data, performing a significance test on the connection weights and deleting the insignificant connection weights from the interpretation diagram. By combining three techniques (the neural network interpretation diagram, the connection weight method, and an improved randomization test), the invention provides a sound approach to making neural network models of complex industrial processes transparent, and provides an effective guiding basis for the energy-consumption analysis of enterprise production data.
Description
Technical field
The present invention relates to energy-consumption analysis methods, and in particular to an energy-consumption monitoring method based on neural network transparency.
Background technique
The modeling and optimization of complex industrial processes are of great importance for improving product quality, reducing production cost, and saving energy and reducing emissions. With the development of machine learning and data mining theories and methods, the artificial neural network, as one of the mainstream approaches, has shown unique advantages on many problems that conventional methods cannot handle. However, industrial process models built with neural networks suffer from a major defect, the "black box" characteristic: the resulting model cannot be converted into a meaningful knowledge representation of the corresponding physical system, the parameters of the model cannot intuitively explain the real system, and it is difficult to observe the sensitivity of the model to its input variables. As the dimension of the input variables and the number of hidden-layer nodes increase, the structural complexity of the model rises sharply, and the neural network interpretation diagram then offers almost no interpretability for the model.
For example, the application numbered 2016211050935.6, titled an operating-energy-consumption analysis method based on a neural network model, uses a BP neural network model to allocate each energy consumption that cannot be calculated directly into each physical process, without discussing whether this allocation is accurate; owing to the opacity of the BP neural network, the parameters in the model cannot intuitively explain the system.
Summary of the invention
The purpose of the present invention is to realize the transparency of neural network models of complex industrial processes and to guide the energy-consumption analysis of enterprise production data. To this end, an energy-consumption monitoring method based on neural network transparency comprises the following steps:
generating a neural network model from production data;
generating a neural network interpretation diagram;
obtaining connection-weight index data;
performing a significance test on the connection weights according to the connection-weight index data and deleting the insignificant connection weights from the neural network interpretation diagram.
Further,
The production data are acquired from measuring instruments at the industrial site and/or from the enterprise's DCS control system.
Further,
The neural network interpretation diagram includes connection-weight lines. The order of magnitude of a connection weight is indicated by the thickness of its line: a thick line denotes a larger connection weight than a thin line. The type of line indicates the state of the connection weight: a solid line denotes an excitatory connection with a positive effect, and a dashed line denotes an inhibitory connection with a negative effect.
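The mapping from connection weights to line styles described above can be sketched as a small helper; this is a hypothetical illustration (the patent does not specify how the diagram is drawn), with made-up weight values:

```python
def weight_line_style(w, max_abs_weight):
    """Map a connection weight to (relative line width, line type).

    Thickness encodes the magnitude of the weight; a solid line marks an
    excitatory (positive) connection, a dashed line an inhibitory
    (negative) one, as in the interpretation diagram described above.
    """
    width = abs(w) / max_abs_weight          # thicker line = larger weight
    style = "solid" if w >= 0 else "dashed"  # sign decides the line type
    return width, style

# Hypothetical weights for three connections.
weights = [0.8147, -0.6557, 0.0975]
max_w = max(abs(w) for w in weights)
styles = [weight_line_style(w, max_w) for w in weights]
```

Any plotting backend could then draw each connection with the returned width and dash pattern.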
Further,
The connection-weight index data include the input-to-hidden-layer and hidden-layer-to-output connection weight matrix data, the input-hidden-output connection weight contribution degree data, the overall connection weight contribution degree data, and the relative contribution rate data.
Further,
The significance test on the connection weights and the deletion of the insignificant connection weights according to the statistical indices include the following steps:
S1: construct multiple neural network models from the standardized production-data samples; each neural network model is trained with small random initial weights and a training method with a momentum term and a learning rate.
S2: select the neural network model with the best prediction performance among the multiple models, record its initial weights and final weights, and obtain the connection-weight index data.
Obtaining the connection-weight index data includes the following steps:
S21: calculate the input-hidden-output connection weight contribution degree C;
S22: calculate the overall connection weight contribution degree OI of each variable;
S23: calculate the relative contribution rate RI of each variable.
S3: randomly permute the order of the training-sample output set.
S4: retrain the neural network model with the permuted samples and the initial weights recorded in S2, and record the final weights of the model.
S5: repeat S3 and S4 several times, recording the number of repetitions as COUNT, and obtain the randomized C, OI, and RI from the final weights recorded in S4.
S6: calculate the significance level P of the input-hidden-output connection weight contribution degree C, the overall connection weight contribution degree OI, and the relative contribution rate RI, as follows:
S61: if the standard value is greater than 0, P = (N+1)/(COUNT+1), where N is the number of randomized values greater than or equal to the standard value;
S62: if the standard value is less than 0, P = (M+1)/(COUNT+1), where M is the number of randomized values less than or equal to the standard value.
S7: if the P value of a connection weight is less than the preset value, retain the connection line of that weight in the neural network interpretation diagram; otherwise delete it, generating the pruned neural network interpretation diagram.
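The significance rule of S61/S62 and the pruning decision of S7 can be sketched as follows. This is a minimal illustration under the assumption that the retraining loop of S3-S5 has already produced the list of randomized index values; the function names are hypothetical:

```python
def significance_p(standard_value, randomized_values):
    """Improved randomization-test P value per steps S61/S62.

    standard_value: the index (C, OI, or RI) from the original model.
    randomized_values: the same index recomputed after each random
    permutation of the training outputs (S3-S5); len() == COUNT.
    """
    count = len(randomized_values)
    if standard_value > 0:
        # S61: N = number of randomized values >= the standard value
        n = sum(1 for v in randomized_values if v >= standard_value)
        return (n + 1) / (count + 1)
    # S62: M = number of randomized values <= the standard value
    m = sum(1 for v in randomized_values if v <= standard_value)
    return (m + 1) / (count + 1)

def keep_connection(p, preset=0.05):
    """S7: retain the connection line only if its P value is below the preset."""
    return p < preset

# Toy check: a standard value far above all randomized values is significant.
p = significance_p(0.9, [0.1, -0.2, 0.05, 0.3])
```

With COUNT = 999 repetitions, as in the embodiment below, the smallest attainable P value is 1/1000 = 0.001.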
The invention has the following advantages:
(1) The neural network interpretation diagram endows the connection weights with "interpretability": the connection weights are quantized and displayed by the size of the connecting lines, realizing the visualization of the industrial process model.
(2) The connection weight method realizes a quantitative analysis of the importance of the decision parameters of the industrial-process neural network model to the target variable.
(3) The improved randomization test prunes the neural network model of the complex industrial process, eliminating redundant information and improving the transparency of the model.
(4) By combining the neural network interpretation diagram, the connection weight method, and the improved randomization test, the invention provides a sound approach to the transparency of neural network models of complex industrial processes, and provides an effective guiding basis for the energy-consumption analysis of enterprise production data.
Detailed description of the invention
Fig. 1 is a neural network interpretation diagram of an embodiment of the invention.
Fig. 2 is the neural network interpretation diagram of an embodiment of the invention before pruning.
Fig. 3 is the neural network interpretation diagram of an embodiment of the invention after pruning.
Fig. 4 is the flow chart of the energy-consumption monitoring method based on neural network transparency of an embodiment of the invention.
Specific embodiment
One idea of the present invention for solving the problems in the background art is as follows: the neural network interpretation diagram endows the connection weights with "interpretability" and realizes the visualization of the industrial process model; the connection weight method realizes a quantitative analysis of the importance of the model's decision parameters to the target variable; and the improved randomization test prunes the neural network model of the complex industrial process, eliminating redundant information and improving the transparency of the model. Together they provide a sound approach to the transparency of neural network models of complex industrial processes, and an effective guiding basis for the energy-consumption analysis of enterprise production data.
As shown in Fig. 4, the energy-consumption monitoring method based on neural network transparency of the present invention comprises the following steps:
generating a neural network model from production data;
generating a neural network interpretation diagram;
obtaining connection-weight index data;
performing a significance test on the connection weights according to the connection-weight index data and deleting the insignificant connection weights from the neural network interpretation diagram.
Through data acquisition, data storage, model building, and energy-consumption analysis, and by combining the neural network interpretation diagram, the connection weight method, and the improved randomization test, the present invention studies the transparency of the neural network model for energy-consumption analysis of complex industrial processes. First the model is visualized with the neural network interpretation diagram; then the contribution rates of the decision parameters are analyzed quantitatively with the connection weight method; finally the improved randomization test is applied to the connection weights of the model and to the overall contribution degrees and relative contribution rates of the decision parameters, and the model is pruned accordingly. Compared with the prior art, this method obtains the internal information of the process variables, greatly improves the "understandability" of the model, and can provide an effective guiding basis for the energy-consumption analysis of enterprise production data.
The production data are acquired from measuring instruments at the industrial site and/or from the enterprise's DCS control system.
The neural network interpretation diagram includes connection-weight lines. The order of magnitude of a connection weight is indicated by the thickness of its line: a thick line denotes a larger connection weight than a thin line. The type of line indicates the state of the connection weight: a solid line denotes an excitatory connection with a positive effect, and a dashed line denotes an inhibitory connection with a negative effect.
By tracking the magnitude and state of the connection weights, the influence of a single variable, or of multiple variables, on the target variable can be identified.
The steps for obtaining the connection-weight index data in an embodiment of the invention are illustrated below:
(1) Record the input-to-hidden-layer and hidden-layer-to-output connection weight matrices.

Table 1. Connection weight matrix
(2) Calculate the input-hidden-output connection weight contribution degree C.

The input-hidden-output connection weight contribution degree characterizes the magnitude of the contribution of each variable to the output through the hidden neurons. Its value is the product of the input-to-hidden-layer connection weight and the hidden-layer-to-output connection weight:

C_ij = W_ij × W_Yi, i = A, B; j = 1, 2, 3; (1)

Example: C_A1 = W_A1 × W_YA = 0.8147 × (−0.6557) = −0.5342, which shows that the contribution degree of decision variable X1 to the output Y through hidden neuron A is −0.5342. The input-hidden-output contribution degrees are shown in Table 2.

Table 2. The contribution degrees of input-hidden-output
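Formula (1) can be reproduced numerically. Only W_A1 = 0.8147 and W_YA = −0.6557 are quoted in the text; the remaining weight values below are made-up placeholders for illustration:

```python
# Input-to-hidden weights W_ih[i][j]: rows = hidden neurons (A, B),
# columns = inputs (X1, X2, X3). Only W_A1 and W_YA come from the text;
# the other entries are hypothetical placeholders.
W_ih = [[0.8147, 0.1270, 0.9134],   # hidden neuron A
        [0.6324, 0.0975, 0.2785]]   # hidden neuron B
W_ho = [-0.6557, 0.0357]            # hidden-to-output weights W_YA, W_YB

# Formula (1): C_ij = W_ij * W_Yi
C = [[W_ih[i][j] * W_ho[i] for j in range(3)] for i in range(2)]
c_A1 = C[0][0]   # contribution of X1 through hidden neuron A
```

The resulting C_A1 matches the worked example, −0.5342, to four decimal places.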
(3) Overall connection weight contribution degree OI.

OI characterizes the contribution of each input variable to the output variable: '+' indicates a positive, excitatory effect, '−' indicates a negative, inhibitory effect, and the larger the absolute value, the larger the contribution degree to the output. It is the sum of the contribution degrees through the hidden neurons:

OI_j = Σ_i C_ij, i = A, B; (2)

Example: OI_1 = C_A1 + C_B1 = −0.6001, which shows that the overall contribution degree of X1 to Y is −0.6001.
(4) Relative contribution rate RI.

RI shows the overall importance level of each input variable to the output variable, expressed as a percentage. If it is greater than 0, the variable has a positive effect on the output variable; if it is less than 0, a negative effect; if it equals 0, the variable does not influence the output variable. Its calculation formula is:

RI_j = OI_j / Σ_k |OI_k| × 100%; (3)

The calculated overall connection weight contribution degrees OI and relative contribution rates RI are shown in Table 3.

Table 3. Overall contribution degree (OI) and relative contribution rate (RI)

According to Table 3, X1 and X3 have a negative, inhibitory effect on the output Y, with relative contribution rates of −60.43% and −29.24% respectively, while X2 has a positive, excitatory effect on Y, with a relative contribution rate of 10.33%. The connection weight method therefore compensates for the defect of the neural network interpretation diagram and realizes a quantitative analysis of the contribution rates of the input variables to the target variable.
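Formulas (2) and (3) can be sketched on top of the contribution matrix C. The matrix below is hypothetical (only OI_1 = −0.6001 is taken from the worked example), but the invariant that the absolute relative contribution rates sum to 100% holds by construction:

```python
def overall_contribution(C):
    """Formula (2): OI_j = sum over hidden neurons i of C_ij."""
    n_inputs = len(C[0])
    return [sum(row[j] for row in C) for j in range(n_inputs)]

def relative_contribution(OI):
    """Formula (3): RI_j = OI_j / sum_k |OI_k| * 100 (percent)."""
    total = sum(abs(oi) for oi in OI)
    return [100.0 * oi / total for oi in OI]

# Hypothetical 2-hidden-neuron, 3-input contribution matrix; the first
# column is chosen so that OI_1 reproduces the example value -0.6001.
C = [[-0.5342, 0.0045, -0.2100],
     [-0.0659, 0.0980, -0.0820]]
OI = overall_contribution(C)
RI = relative_contribution(OI)
```

The signs of RI then read off directly as excitatory (positive) or inhibitory (negative) inputs, as in Table 3.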
The connection-weight index data include the input-to-hidden-layer and hidden-layer-to-output connection weight matrix data, the input-hidden-output connection weight contribution degree data, the overall connection weight contribution degree data, and the relative contribution rate data.
The significance test on the connection weights according to the connection-weight index data and the deletion of the insignificant connection weights from the neural network interpretation diagram include the following steps:
S1: construct multiple neural network models from the standardized production-data samples; each neural network model is trained with small random initial weights and a training method with a momentum term and a learning rate.
S2: select the neural network model with the best prediction performance among the multiple models, record its initial weights and final weights, and obtain the connection-weight index data.
Obtaining the connection-weight index data includes the following steps:
S21: calculate the input-hidden-output connection weight contribution degree C;
S22: calculate the overall connection weight contribution degree OI of each variable;
S23: calculate the relative contribution rate RI of each variable.
S3: randomly permute the order of the training-sample output set.
S4: retrain the neural network model with the permuted samples and the initial weights recorded in S2, and record the final weights of the model.
S5: repeat S3 and S4 several times, recording the number of repetitions as COUNT, and obtain the randomized C, OI, and RI from the final weights recorded in S4.
In an embodiment of the present invention, the number of repetitions is 999, so COUNT = 999.
S6: calculate the significance level P of the input-hidden-output connection weight contribution degree C, the overall connection weight contribution degree OI, and the relative contribution rate RI, as follows:
S61: if the standard value is greater than 0, P = (N+1)/(COUNT+1), where N is the number of randomized values greater than or equal to the standard value;
S62: if the standard value is less than 0, P = (M+1)/(COUNT+1), where M is the number of randomized values less than or equal to the standard value.
The standard value in the present invention refers to the values of OI and RI. For a given input the signs of the two are consistent: if the OI of a variable is positive, then its RI is also positive. '+' indicates a positive, excitatory effect; '−' indicates a negative, inhibitory effect; the larger the absolute value, the larger the contribution degree to the output.
S7: if the P value of a connection weight is less than the preset value, retain the connection line of that weight in the neural network interpretation diagram; otherwise delete it, generating the pruned neural network interpretation diagram.
In an embodiment of the present invention, the preset value is 0.05.
The present invention is described next through a specific embodiment.

In this embodiment, the hydrogen cyanide production process of an enterprise, a complex industrial process system, is modeled with a neural network. The inputs are 9 decision parameters, including the compensated temperature of ammonia (°C), the compensated flow of ammonia (Nm³·h⁻¹), the natural gas/ammonia volume ratio, the air/ammonia volume ratio, the compensated pressure of ammonia (MPa), the compensated pressure of natural gas (MPa), and the outlet temperature of the large mixer (°C); the corresponding variables are TN, FN, CN, AN, PN, PC, PA, PP, TD. The output is the hydrogen cyanide conversion ratio η(HCN). The decision parameters, the HCN conversion ratio η(HCN), and the sample data of the production process are shown in Table 4. The sample data are divided into a training set and a test set; after repeated training, the final topology of the network is determined to be 9-7-1, as shown in Fig. 2.
Table 4. Process variables and data sets of the HCN production process
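A 9-7-1 network trained with a momentum term and a learning rate, as in steps S1-S2, can be sketched with NumPy. This is a simplified illustration on random stand-in data, not the enterprise production data; the layer sizes follow the embodiment, while the hyperparameters and everything else are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 9))             # stand-in for the 9 decision parameters
y = np.tanh(X @ rng.standard_normal((9, 1)))  # stand-in for the conversion ratio

# 9-7-1 topology with small random initial weights (step S1)
W1 = rng.standard_normal((9, 7)) * 0.1
W2 = rng.standard_normal((7, 1)) * 0.1
lr, momentum = 0.05, 0.9
v1 = np.zeros_like(W1)
v2 = np.zeros_like(W2)

def forward(X, W1, W2):
    h = np.tanh(X @ W1)   # hidden-layer activations
    return h, h @ W2      # linear output

losses = []
for _ in range(200):
    h, out = forward(X, W1, W2)
    err = out - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation with a momentum term and a learning rate
    g2 = h.T @ err / len(X)
    g1 = X.T @ ((err @ W2.T) * (1 - h ** 2)) / len(X)
    v2 = momentum * v2 - lr * g2
    v1 = momentum * v1 - lr * g1
    W2 += v2
    W1 += v1
```

The recorded `W1`/`W2` before and after training play the role of the initial and final weights of steps S1-S2.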
However, using this model as such, no specific physical information can be obtained to explain the chemical system of the HCN production process, and the relationships between the 9 decision parameters and the hydrogen cyanide conversion ratio cannot be interpreted. Therefore, the connection weight method is applied to quantitatively calculate the contribution rates of the model's decision parameters, and the improved randomization test is applied to prune the interpretation diagram of the η(HCN) neural network, further improving the transparency of the η(HCN) neural network model.
Applying the connection weight method to the η(HCN) neural network model yields the overall contribution degrees and relative contribution rates of the 9 input decision parameters to the hydrogen cyanide conversion ratio η(HCN), as shown in Table 5.

Table 5. Overall contribution degrees and relative contribution rates of the decision variables
Applying the randomization test to the η(HCN) neural network interpretation diagram yields the randomization P values of the input-hidden-output connection weights, as shown in Table 6 (α = 0.05, i.e. the preset value is 0.05).

Table 6. P values of the randomization test (α = 0.05)
Removing the insignificant connection weights of the η(HCN) model according to the P values in Table 6 yields the new η(HCN) neural network interpretation diagram shown in Fig. 3. With α = 0.1 (i.e. a preset value of 0.1), it is found that although some of the insignificant connection weights are removed, the model still does not yield a satisfactory interpretation diagram. With α = 0.05 (i.e. a preset value of 0.05), compared with Fig. 2, the η(HCN) neural network interpretation diagram is more concise and more transparent, and it is easier to explain the relationships among the decision parameters and between the decision parameters and the response variable. Compared with the prior art, the present invention obtains the internal information of the process variables, greatly improves the "understandability" of the model, and can provide an effective guiding basis for the energy-consumption analysis of enterprise production data.
Finally, it should be noted that the above embodiments merely illustrate, rather than limit, the technical solutions of the present invention. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced; such modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention, and should all be covered within the scope of the claims and the description of the invention.
Claims (5)
1. An energy-consumption monitoring method based on neural network transparency, characterized by comprising the following steps:
generating a neural network model from production data;
generating a neural network interpretation diagram;
obtaining connection-weight index data;
performing a significance test on the connection weights according to the connection-weight index data and deleting the insignificant connection weights from the neural network interpretation diagram.
2. The energy-consumption monitoring method based on neural network transparency of claim 1, characterized in that the production data are acquired from measuring instruments at the industrial site and/or from the enterprise's DCS control system.
3. The energy-consumption monitoring method based on neural network transparency of claim 1, characterized in that the neural network interpretation diagram includes connection-weight lines; the order of magnitude of a connection weight is indicated by the thickness of its line, a thick line denoting a larger connection weight than a thin line; and the type of line indicates the state of the connection weight, a solid line denoting an excitatory connection with a positive effect and a dashed line denoting an inhibitory connection with a negative effect.
4. The energy-consumption monitoring method based on neural network transparency of claim 1, characterized in that the connection-weight index data include the input-to-hidden-layer and hidden-layer-to-output connection weight matrix data, the input-hidden-output connection weight contribution degree data, the overall connection weight contribution degree data, and the relative contribution rate data.
5. The energy-consumption monitoring method based on neural network transparency as described in claim 1, wherein carrying out a significance test on the connection weights according to the connection weight indicator data and deleting insignificant connection weights in the neural network paraphrase diagram comprises the following steps:
S1: constructing multiple neural network models from the standardized production data samples, each neural network model being trained with small random numbers as initial weights and with a training method having a momentum term and a learning rate;
S2: selecting, among the multiple neural network models, the neural network model with the best prediction performance, and recording the initial weights and final weights of that model to obtain the connection weight indicator data;
obtaining the connection weight indicator data comprises the following steps:
S21: calculating the input-hidden-output connection weight contribution degree C;
S22: calculating the comprehensive connection weight contribution degree OI of each variable;
S23: calculating the relative contribution rate RI of each variable;
S3: randomly changing the order of the training sample output set;
S4: retraining the neural network model with the reordered samples and the initial weights recorded in S2, and recording the final weights of the model;
S5: repeating S3 and S4 several times, the number of repetitions being COUNT, and computing the randomized values of C, OI and RI from the final weights recorded in S4;
S6: calculating the significance degree P separately for the input-hidden-output connection weight contribution degree C, the comprehensive connection weight contribution degree OI and the relative contribution rate RI, as follows:
S61: if the standard value is greater than 0, P = (N + 1) / (COUNT + 1), where N is the number of randomized values greater than or equal to the standard value;
S62: if the standard value is less than 0, P = (M + 1) / (COUNT + 1), where M is the number of randomized values less than or equal to the standard value;
S7: if the P of a connection weight is less than a preset value, retaining the connecting line of that connection weight in the neural network paraphrase diagram, and otherwise deleting it, thereby generating the constructed neural network paraphrase diagram.
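The significance test of steps S5 to S7 can be sketched as follows. This is a minimal illustration, assuming the randomized indicator values have already been collected over COUNT repetitions; the function names and the preset value of 0.05 are hypothetical:

```python
def significance(standard_value, randomized_values):
    """Randomization significance P per S6: compare COUNT randomized
    indicator values against the standard (unrandomized) value."""
    count = len(randomized_values)
    if standard_value > 0:
        # S61: count randomized values >= standard value
        n = sum(v >= standard_value for v in randomized_values)
        return (n + 1) / (count + 1)
    # S62: standard value < 0 -> count randomized values <= standard value
    m = sum(v <= standard_value for v in randomized_values)
    return (m + 1) / (count + 1)

def prune(weights_p, preset=0.05):
    """S7: retain only connection weights whose significance P is below
    the preset value; the rest are deleted from the paraphrase diagram."""
    return {edge: p for edge, p in weights_p.items() if p < preset}
```

For example, with COUNT = 3 randomized values [0.2, 0.5, 1.5] and a positive standard value of 1.0, one randomized value meets or exceeds it, so P = (1 + 1) / (3 + 1) = 0.5 and the connection would be deleted under a 0.05 preset.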
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811523525.8A CN109636185A (en) | 2018-12-13 | 2018-12-13 | A kind of energy-consumption monitoring method based on neural network transparence |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109636185A true CN109636185A (en) | 2019-04-16 |
Family
ID=66073488
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811523525.8A Pending CN109636185A (en) | 2018-12-13 | 2018-12-13 | A kind of energy-consumption monitoring method based on neural network transparence |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109636185A (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103745279A (en) * | 2014-01-24 | 2014-04-23 | 广东工业大学 | Method and device for monitoring energy consumption abnormity |
US20180349256A1 (en) * | 2017-06-01 | 2018-12-06 | Royal Bank Of Canada | System and method for test generation |
Non-Patent Citations (2)
Title |
---|
YAO, Lizhong et al.: "Transparency of neural network models and input variable reduction", Computer Science * |
LI, Taifu et al.: "Transparency of neural network models for complex chemical processes", Control Engineering of China * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101699477B (en) | Neural network method for accurately predicting dam deformation | |
Zhang et al. | Sound quality prediction of vehicle interior noise and mathematical modeling using a back propagation neural network (BPNN) based on particle swarm optimization (PSO) | |
CN111199270B (en) | Regional wave height forecasting method and terminal based on deep learning | |
CN110096810B (en) | Industrial process soft measurement method based on layer-by-layer data expansion deep learning | |
CN109613898A (en) | A kind of enterprise's creation data monitoring method based on industrial Internet of Things | |
Gadhavi et al. | Student final grade prediction based on linear regression | |
CN109556863B (en) | MSPAO-VMD-based large turntable bearing weak vibration signal acquisition and processing method | |
CN110544051B (en) | Real-time economic evaluation method for large condensing steam turbine of thermal power plant | |
CN106599417A (en) | Method for identifying urban power grid feeder load based on artificial neural network | |
Khosrowshahi | Simulation of expenditure patterns of construction projects | |
CN115438726A (en) | Device life and fault type prediction method and system based on digital twin technology | |
CN112307677A (en) | Power grid oscillation mode evaluation and safety active early warning method based on deep learning | |
CN107545307A (en) | Predicting model for dissolved gas in transformer oil method and system based on depth belief network | |
CN114547976B (en) | Multi-sampling rate data soft measurement modeling method based on pyramid variation self-encoder | |
CN114862035B (en) | Combined bay water temperature prediction method based on transfer learning | |
CN104834975A (en) | Power network load factor prediction method based on intelligent algorithm optimization combination | |
Yang et al. | A multi-feature weighting based K-means algorithm for MOOC learner classification | |
CN117494037A (en) | Transformer fault diagnosis method based on variable-weight VAE and dual-channel feature fusion | |
CN117669388B (en) | Fault sample generation method, device and computer medium | |
CN113570165B (en) | Intelligent prediction method for permeability of coal reservoir based on particle swarm optimization | |
CN111275204A (en) | Transformer state identification method based on hybrid sampling and ensemble learning | |
CN114595883A (en) | Oil-immersed transformer residual life personalized dynamic prediction method based on meta-learning | |
CN109636185A (en) | A kind of energy-consumption monitoring method based on neural network transparence | |
CN117195647A (en) | Method, apparatus, device, medium and program product for post-earthquake evaluation of transformer bushings | |
CN110110784B (en) | Transformer fault identification method based on transformer related operation data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20190416 |