CN116703128B - Natural language processing method suitable for power dispatching - Google Patents

Natural language processing method suitable for power dispatching

Info

Publication number
CN116703128B
CN116703128B (application CN202310979904.2A)
Authority
CN
China
Prior art keywords
power generation
generation unit
entity
power
feature vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310979904.2A
Other languages
Chinese (zh)
Other versions
CN116703128A (en)
Inventor
李强
赵峰
庄莉
王秋琳
张晓东
王燕蓉
陈江海
邱镇
黄晓光
吴佩颖
丘志强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Information and Telecommunication Co Ltd
Fujian Yirong Information Technology Co Ltd
Original Assignee
State Grid Information and Telecommunication Co Ltd
Fujian Yirong Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Information and Telecommunication Co Ltd, Fujian Yirong Information Technology Co Ltd filed Critical State Grid Information and Telecommunication Co Ltd
Priority to CN202310979904.2A priority Critical patent/CN116703128B/en
Publication of CN116703128A publication Critical patent/CN116703128A/en
Application granted granted Critical
Publication of CN116703128B publication Critical patent/CN116703128B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0631Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35Clustering; Classification
    • G06F16/353Clustering; Classification into predefined classes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/36Creation of semantic tools, e.g. ontology or thesauri
    • G06F16/367Ontology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/279Recognition of textual entities
    • G06F40/289Phrasal analysis, e.g. finite state techniques or chunking
    • G06F40/295Named entity recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06Energy or water supply
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04SSYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • General Engineering & Computer Science (AREA)
  • Economics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Strategic Management (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Marketing (AREA)
  • Biomedical Technology (AREA)
  • Tourism & Hospitality (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • General Business, Economics & Management (AREA)
  • Molecular Biology (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Biophysics (AREA)
  • Primary Health Care (AREA)
  • Development Economics (AREA)
  • Educational Administration (AREA)
  • Animal Behavior & Ethology (AREA)
  • Game Theory and Decision Science (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Water Supply & Treatment (AREA)
  • Public Health (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Supply And Distribution Of Alternating Current (AREA)

Abstract

The invention relates to the technical field of power dispatching and discloses a natural language processing method suitable for power dispatching, which comprises the following steps: collecting power generation unit data and generating a feature vector for each power generation unit; constructing a knowledge graph based on the circuit layout and inputting it into a first neural network, which outputs the grid-connected point voltage at the next time point; generating a loss value, back-propagating it to update the parameter feature vector, and re-inputting the updated parameter feature vector into the fourth hidden layer. The invention generates a reactive power scheduling parameter for each power generation unit by a deep learning method, so that the reactive power generated by the power generation units can be used to adjust the power balance of the power generation system.

Description

Natural language processing method suitable for power dispatching
Technical Field
The invention relates to the technical field of power dispatching, in particular to a natural language processing method suitable for power dispatching.
Background
New energy generation makes full use of natural resources and is an indispensable part of achieving the carbon neutrality target. However, natural energy sources are inherently uncontrollable, so the active power that a power generation system outputs to the grid fluctuates over a wide range and reactive power must be drawn from the grid during grid connection, which affects the stability of the grid. The traditional approach applies centralized compensation at the grid-connected point and therefore cannot make use of the reactive power that the power generation units themselves can generate.
Disclosure of Invention
The invention provides a natural language processing method suitable for power dispatching, which solves the technical problem in the related art that centralized compensation at the grid-connected point cannot utilize the reactive power generated by the power generation units.
The invention provides a natural language processing method suitable for power dispatching, which comprises the following steps: step 101, collecting power generation unit data, and generating a power generation unit feature vector based on the power generation unit data.
Step 102, constructing a knowledge graph based on the circuit layout, wherein the knowledge graph comprises an entity of a power generation unit, an entity of a converter, an entity of an energy storage unit, an entity of an inverter and a grid-connected point entity, and the relation between the entities of the power generation unit is consistent with the relation between the power generation units on the circuit layout; and encoding the entity of the knowledge graph to obtain an entity vector.
Step 103, inputting the knowledge graph and the entity vectors into a first neural network, wherein the first neural network comprises a first hidden layer, a second hidden layer, a splicing layer, a third hidden layer and a fourth hidden layer; the first hidden layer takes the knowledge graph and the entity vectors as input and outputs an entity feature vector for each entity; the second hidden layer takes the power generation unit feature vectors as input and outputs a first intermediate vector for each power generation unit; the first intermediate vectors of the power generation units and the entity feature vectors are input into the splicing layer, which outputs the total set feature vector; the total set feature vector is input into the third hidden layer, which outputs the parameter feature vector, whose dimension is equal to the total number of power generation units; the fourth hidden layer takes the parameter feature vector as input and outputs the final feature vector to the fully connected layer, and the fully connected layer outputs a classification label representing the grid-connected point voltage at the next time point.
Step 104, generating a loss value from the difference between the grid-connected point voltage at the next time point output by the first neural network and the calibrated grid-connected point voltage, back-propagating the loss to update the parameter feature vector, and then re-inputting the updated parameter feature vector into the fourth hidden layer.
Step 105, iteratively executing step 104 until the difference between the grid-connected point voltage at the next time point output by the first neural network and the calibrated grid-connected point voltage is smaller than a preset value, then generating a scheduling parameter from each component of the parameter feature vector input in the last iteration, and taking each scheduling parameter as the reactive power to be output by the corresponding power generation unit.
Further, the power generation unit data comprise the air volume, wind pressure, rotor active power, stator active power, reactive power, power system frequency, generator stator reactance, stator-rotor mutual reactance, slip, rotor resistance, stator self-inductance, the component of the rotor current on the d axis, the component of the rotor current on the q axis, the component of the rotor self-inductance flux linkage on the d axis, the component of the rotor self-inductance flux linkage on the q axis, the stator self-inductance flux linkage, and the rotor-side converter rated current.
Further, the power generation unit feature vector of the i-th power generation unit is formed from the rotor active power, stator active power, reactive power, air volume, wind pressure, power system frequency, generator stator reactance, stator-rotor mutual reactance, slip, rotor resistance, stator self-inductance, the component of the rotor current on the d axis, the component of the rotor current on the q axis, the component of the rotor self-inductance flux linkage on the d axis, the component of the rotor self-inductance flux linkage on the q axis, the stator self-inductance flux linkage and the rotor-side converter rated current of the i-th power generation unit, together with the gravitational acceleration g.
Further, the entities in the knowledge graph are named entities.
Further, the splicing layer is used for splicing the first intermediate vector of each power generation unit with the corresponding entity characteristic vector to obtain a second intermediate vector of the power generation unit, and then splicing the second intermediate vectors of all the power generation units to obtain a total set characteristic vector.
Further, the calculation formula of the first hidden layer is H = ReLU(D̃^(-1/2) Ã D̃^(-1/2) X W), wherein H and X respectively represent the entity feature vector matrix and the entity vector matrix, Ã represents the sum of the entity adjacency matrix and the identity matrix, D̃ represents the degree matrix of Ã, W represents the weight matrix of the first hidden layer, and ReLU is the ReLU activation function; the entity vector matrix and the entity feature vector matrix are tensor representations of the entity vectors and the entity feature vectors.
Further, the scheduling parameters generated by the kth component of the parameter feature vector are distributed to the power generation units corresponding to the kth entity vector of the entity vector matrix.
Further, the value range of 0.5 p.u. to 2.0 p.u. is uniformly discretized into 100 point values, which correspond respectively to the 100 classification labels of the classification space of the fully connected layer.
Further, a speed response value and an iteration count threshold are set in addition to the preset value; if the number of iterations of step 104 exceeds the iteration count threshold, the speed response value replaces the preset value as the condition for ending the iteration of step 104, the speed response value being larger than the preset value.
Further, the second hidden layer is pre-trained; during pre-training it is connected to a first classifier whose classification labels represent the efficiency of the power generation unit.
The invention has the beneficial effects that the invention generates a reactive power scheduling parameter for each power generation unit by combining deep learning with natural language processing, so that the reactive power generated by the power generation units can be utilized to adjust the power balance of the power generation system.
Drawings
Fig. 1 is a flow chart of a natural language processing method suitable for power dispatching in accordance with the present invention.
Detailed Description
The subject matter described herein will now be discussed with reference to example embodiments. It is to be understood that these embodiments are merely discussed so that those skilled in the art may better understand and implement the subject matter described herein and that changes may be made in the function and arrangement of the elements discussed without departing from the scope of the disclosure herein. Various examples may omit, replace, or add various procedures or components as desired. In addition, features described with respect to some examples may be combined in other examples as well.
As shown in fig. 1, the invention provides a natural language processing method suitable for power dispatching, based on the scenario of power dispatching for a wind farm, which comprises the following steps: step 101, collecting power generation unit data, wherein the power generation unit data comprise the air volume, wind pressure, rotor active power, stator active power, reactive power, power system frequency, generator stator reactance, stator-rotor mutual reactance, slip, rotor resistance, stator self-inductance, the component of the rotor current on the d axis, the component of the rotor current on the q axis, the component of the rotor self-inductance flux linkage on the d axis, the component of the rotor self-inductance flux linkage on the q axis, the stator self-inductance flux linkage, and the rated current of the rotor-side converter.
The above parameters are all defined in the two-phase rotating dq coordinate system.
A wind power generator is used as the power generation unit; the rotor, stator and related structures mentioned above belong to the doubly-fed generator of the wind power generator.
A power generation unit feature vector is generated based on the power generation unit data. The feature vector of the i-th power generation unit is formed from its rotor active power, stator active power, reactive power, air volume, wind pressure, the power system frequency (the frequency of the power system to which the power generation units are grid-connected), the generator stator reactance, the stator-rotor mutual reactance, the slip, the rotor resistance, the stator self-inductance, the component of the rotor current on the d axis, the component of the rotor current on the q axis, the component of the rotor self-inductance flux linkage on the d axis, the component of the rotor self-inductance flux linkage on the q axis, the stator self-inductance flux linkage and the rotor-side converter rated current, together with the gravitational acceleration g.
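For illustration only, the following is a minimal sketch of how such a per-unit feature vector could be assembled; the dictionary keys, the numeric values and the component ordering are assumptions made for this sketch, not the patent's own data format.

```python
import numpy as np

# Hypothetical measurements for one doubly-fed wind power generation unit (values are placeholders).
unit_record = {
    "rotor_active_power": 0.42, "stator_active_power": 1.10, "reactive_power": 0.15,
    "air_volume": 980.0, "wind_pressure": 310.0, "power_system_frequency": 50.0,
    "stator_reactance": 0.12, "stator_rotor_mutual_reactance": 2.90, "slip": -0.02,
    "rotor_resistance": 0.008, "stator_self_inductance": 0.095,
    "rotor_current_d": 0.31, "rotor_current_q": -0.22,
    "rotor_flux_d": 0.97, "rotor_flux_q": 0.05, "stator_flux": 1.01,
    "rotor_converter_rated_current": 1.20,
}
GRAVITY = 9.81  # g, appended as the last component of every unit feature vector

def unit_feature_vector(record: dict) -> np.ndarray:
    """Stack the collected quantities of one power generation unit into a fixed-order vector."""
    return np.array(list(record.values()) + [GRAVITY], dtype=np.float32)

print(unit_feature_vector(unit_record).shape)  # -> (18,)
```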
For a power generation unit that uses a brushless doubly-fed asynchronous machine as the power generation device, the power generation unit data further include the component of the power-winding current on the d axis, the component of the power-winding current on the q axis, the component of the control-winding current on the d axis, the component of the control-winding current on the q axis, the mutual inductance between the power winding and the control winding, the self-inductance of the power winding, the self-inductance of the control winding, the component of the power-winding self-inductance flux linkage on the d axis and the component of the power-winding self-inductance flux linkage on the q axis, and dimensions representing these parameters are added to the power generation unit feature vector.
Step 102, constructing a knowledge graph based on the circuit layout, wherein the knowledge graph comprises an entity of a power generation unit, an entity of a converter, an entity of an energy storage unit, an entity of an inverter and a grid-connected point entity, and the relation between the entities of the power generation unit is consistent with the relation between the power generation units on the circuit layout; and encoding the entity of the knowledge graph to obtain an entity vector.
In one embodiment of the invention, the entities in the knowledge graph are named entities, and the entity vectors are obtained by encoding the named entities with a natural-language encoding scheme such as one-hot encoding.
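A minimal sketch of one-hot encoding the named entities into an entity vector matrix, as one possible natural-language encoding; the entity names below are illustrative placeholders.

```python
import numpy as np

# Hypothetical named entities extracted from a small circuit layout.
entities = ["generation_unit_1", "generation_unit_2", "converter_1",
            "energy_storage_1", "inverter_1", "grid_connection_point"]

def one_hot_entity_vectors(names: list) -> np.ndarray:
    """Return an N x N matrix whose k-th row is the one-hot entity vector of the k-th named entity."""
    return np.eye(len(names), dtype=np.float32)

X = one_hot_entity_vectors(entities)  # entity vector matrix, one row per entity
```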
In one embodiment of the present invention, if two power generation units are directly connected on the circuit layout, there is a connection between the entities of those two power generation units; likewise, if a converter is directly connected to a power generation unit, there is a connection between the entity of the converter and the entity of that power generation unit.
In one embodiment of the invention, the entity of the power generation unit, the entity of the converter, the entity of the energy storage unit and the entity of the inverter are fully connected directly, namely, any two entities are connected.
Step 103, inputting the knowledge graph and the entity vectors into a first neural network, wherein the first neural network comprises a first hidden layer, a second hidden layer, a splicing layer, a third hidden layer and a fourth hidden layer; the first hidden layer takes the knowledge graph and the entity vectors as input and outputs an entity feature vector for each entity; the second hidden layer takes the power generation unit feature vectors as input and outputs a first intermediate vector for each power generation unit; the first intermediate vectors of the power generation units and the entity feature vectors are input into the splicing layer, which outputs the total set feature vector; the total set feature vector is input into the third hidden layer, which outputs the parameter feature vector, whose dimension is equal to the total number of power generation units; the fourth hidden layer takes the parameter feature vector as input and outputs the final feature vector to the fully connected layer, and the fully connected layer outputs a classification label representing the grid-connected point voltage at the next time point.
In one embodiment of the present invention, the stitching layer is configured to stitch the first intermediate vector of each power generation unit with the corresponding entity feature vector to obtain a second intermediate vector of the power generation unit, and then stitch all the second intermediate vectors of the power generation units to obtain a total set feature vector.
In one embodiment of the present invention, the calculation formula of the first hidden layer is H = ReLU(D̃^(-1/2) Ã D̃^(-1/2) X W), wherein H and X respectively represent the entity feature vector matrix and the entity vector matrix, Ã represents the sum of the entity adjacency matrix and the identity matrix, D̃ represents the degree matrix of Ã, W represents the weight matrix of the first hidden layer, and ReLU is the ReLU activation function.
The entity vector matrix and the entity feature vector matrix are tensor representations of the entity vectors and the entity feature vectors; for example, the row vector in row 3 of the entity vector matrix represents the entity vector of the 3rd entity.
The entity adjacency matrix represents the connection relations of the entities in the knowledge graph: the element in row a and column n indicates whether the a-th entity and the n-th entity are connected, taking the value 1 if they are connected and 0 otherwise.
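The definitions above (adjacency plus identity, its degree matrix, a weight matrix and a ReLU) match a standard graph-convolution layer; the sketch below implements that reading, and the exact symmetric normalization is an assumption.

```python
import numpy as np

def first_hidden_layer(X: np.ndarray, edges: list, out_dim: int, seed: int = 0) -> np.ndarray:
    """Graph-convolution reading of the first hidden layer:
    H = ReLU(D^{-1/2} (A + I) D^{-1/2} X W)."""
    n = X.shape[0]
    A = np.zeros((n, n), dtype=np.float32)
    for a, b in edges:                                  # 1 if the a-th and b-th entities are connected
        A[a, b] = A[b, a] = 1.0
    A_tilde = A + np.eye(n, dtype=np.float32)           # adjacency matrix plus identity matrix
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_tilde.sum(axis=1)))  # from the degree matrix of A_tilde
    W = np.random.default_rng(seed).normal(scale=0.1, size=(X.shape[1], out_dim)).astype(np.float32)
    H = D_inv_sqrt @ A_tilde @ D_inv_sqrt @ X @ W
    return np.maximum(H, 0.0)                           # ReLU; row k is the entity feature vector of entity k

# Example with the six entities of the previous sketch, fully connected:
# edges = [(a, b) for a in range(6) for b in range(a + 1, 6)]
# H = first_hidden_layer(X, edges, out_dim=16)
```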
In one embodiment of the invention, the second hidden layer is pre-trained; during pre-training it is connected to a first classifier whose classification labels represent the efficiency of the power generation unit. For example, the classifier comprises 101 classification labels, corresponding respectively to 101 point values obtained by uniformly discretizing the efficiency range of 0%-100%. The first neural network is assembled after the pre-training of the second hidden layer is completed, so the initial weight parameters of the second hidden layer can be regarded as obtained through pre-training. This pre-training mainly ensures that efficiency factors of the power generation units are taken into account when the final scheduling parameters are generated.
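A sketch of one such pre-training step follows; the layer sizes (18 input features per unit, a 32-dimensional first intermediate vector, 101 efficiency bins) and the optimizer choice are assumptions.

```python
import torch
import torch.nn as nn

# Assumed sizes: 18 features per power generation unit, 32-dim first intermediate vector, 101 efficiency labels.
second_hidden = nn.Sequential(nn.Linear(18, 64), nn.ReLU(), nn.Linear(64, 32), nn.ReLU())
efficiency_classifier = nn.Linear(32, 101)   # first classifier: labels for 0%, 1%, ..., 100% efficiency

optimizer = torch.optim.Adam(
    list(second_hidden.parameters()) + list(efficiency_classifier.parameters()), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def pretrain_step(unit_features: torch.Tensor, efficiency_labels: torch.Tensor) -> float:
    """One pre-training step: the second hidden layer learns efficiency-aware unit representations."""
    optimizer.zero_grad()
    logits = efficiency_classifier(second_hidden(unit_features))
    loss = loss_fn(logits, efficiency_labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```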
In one embodiment of the invention, the second, third and fourth hidden layers each employ an MLP (multi-layer perceptron).
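The following sketch shows one way the five layers described above could be wired together in PyTorch; the class name, all layer dimensions, the use of one-hot entity vectors and the index mapping from power generation units to entities are assumptions for illustration.

```python
import torch
import torch.nn as nn

class FirstNeuralNetwork(nn.Module):
    """Assumed wiring of the first neural network of step 103."""
    def __init__(self, n_entities, n_units, feat_dim=18, ent_dim=16, mid_dim=32, n_classes=100):
        super().__init__()
        # First hidden layer weight (graph convolution over one-hot entity vectors).
        self.gcn_weight = nn.Parameter(0.1 * torch.randn(n_entities, ent_dim))
        # Second hidden layer: power generation unit feature vector -> first intermediate vector.
        self.second_hidden = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                           nn.Linear(64, mid_dim), nn.ReLU())
        splice_dim = n_units * (mid_dim + ent_dim)      # concatenation of all second intermediate vectors
        # Third hidden layer: total set feature vector -> parameter feature vector (one component per unit).
        self.third_hidden = nn.Sequential(nn.Linear(splice_dim, 128), nn.ReLU(), nn.Linear(128, n_units))
        # Fourth hidden layer and fully connected layer: parameter feature vector -> voltage class logits.
        self.fourth_hidden = nn.Sequential(nn.Linear(n_units, 64), nn.ReLU())
        self.fc = nn.Linear(64, n_classes)

    def forward(self, norm_adj, X, unit_features, unit_entity_index):
        # norm_adj is the precomputed D^{-1/2}(A + I)D^{-1/2}; X is the entity vector matrix.
        H = torch.relu(norm_adj @ X @ self.gcn_weight)      # entity feature vectors
        M1 = self.second_hidden(unit_features)              # first intermediate vectors (n_units, mid_dim)
        M2 = torch.cat([M1, H[unit_entity_index]], dim=1)   # splicing layer: second intermediate vectors
        total = M2.reshape(1, -1)                           # total set feature vector
        param_vec = self.third_hidden(total)                # parameter feature vector
        logits = self.fc(self.fourth_hidden(param_vec))     # classification of the next grid-point voltage
        return logits, param_vec
```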
Step 104, generating a loss value from the difference between the grid-connected point voltage at the next time point output by the first neural network and the calibrated grid-connected point voltage, back-propagating the loss to update the parameter feature vector, and then re-inputting the updated parameter feature vector into the fourth hidden layer.
Unlike back-propagation during training, only the parameter feature vector is updated here; the weights of the first neural network remain unchanged.
Step 105, iteratively executing step 104 until the difference between the grid-connected point voltage at the next time point output by the first neural network and the calibrated grid-connected point voltage is smaller than a preset value, then generating a scheduling parameter from each component of the parameter feature vector input in the last iteration, and taking each scheduling parameter as the reactive power to be output by the corresponding power generation unit.
In one embodiment of the present invention, the scheduling parameters generated by the kth component of the parametric feature vector are assigned to the power generation unit corresponding to the kth entity vector of the entity vector matrix.
In one embodiment of the present invention, in order to limit the number of iterations of step 104 and thereby increase the response speed, a speed response value and an iteration count threshold are set in addition to the preset value; if the number of iterations of step 104 exceeds the iteration count threshold, the speed response value replaces the preset value as the condition for ending the iteration of step 104, where the speed response value is larger than the preset value.
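A sketch of the refinement loop of steps 104 and 105 under this reading: the trained network weights stay frozen, only the parameter feature vector is optimized, and the speed response value takes over as the stopping threshold once the iteration count threshold is exceeded. It assumes the FirstNeuralNetwork sketch above, a voltage_grid tensor mapping class labels to p.u. values (for example the one built in the next sketch), and uses the squared difference between the softmax-expected voltage and the calibrated voltage as a differentiable surrogate loss; all of these are assumptions rather than the patent's prescribed loss.

```python
import torch

def refine_parameter_vector(model, param_vec, target_voltage, voltage_grid,
                            preset=0.1, speed_response=0.2, iter_threshold=50, lr=0.05, max_iters=500):
    """Steps 104/105: update only the parameter feature vector until the predicted grid-connected
    point voltage is close enough to the calibrated voltage, then return it as scheduling parameters."""
    param_vec = param_vec.detach().clone().requires_grad_(True)   # only this tensor is updated
    optimizer = torch.optim.SGD([param_vec], lr=lr)
    for it in range(1, max_iters + 1):
        logits = model.fc(model.fourth_hidden(param_vec))          # re-input into the fourth hidden layer
        predicted = voltage_grid[logits.argmax(dim=1)]             # class label -> voltage in p.u.
        threshold = preset if it <= iter_threshold else speed_response
        if (predicted - target_voltage).abs().item() < threshold:
            break
        # Differentiable surrogate: expected voltage under the softmax distribution vs. calibrated voltage.
        expected = (voltage_grid * torch.softmax(logits, dim=1)).sum()
        loss = (expected - target_voltage) ** 2
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    # Each component of the refined parameter feature vector becomes a reactive-power scheduling parameter.
    return param_vec.detach().squeeze(0)
```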
The grid-connected point voltage output by the first neural network and the calibrated grid-connected point voltage use the same unit; for example, p.u. can be adopted, and the preset value can be 0.1 p.u. The value range of 0.5 p.u. to 2.0 p.u. can be uniformly discretized into 100 point values, corresponding respectively to the 100 classification labels of the classification space of the fully connected layer. Although the classification space of the fully connected layer contains 100 classification labels, only one classification label is finally selected as the output of the first neural network: for example, the classification label corresponding to the largest of the 100 output values of the fully connected layer is selected; if a softmax classifier is connected after the fully connected layer, the softmax classifier outputs the probability of each classification label and the label with the largest probability is selected as the output of the first neural network.
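A small sketch of the assumed mapping between the 100 classification labels and the 0.5 p.u.-2.0 p.u. range, and of selecting the label with the largest softmax probability; the evenly spaced grid follows the values given above.

```python
import torch

# 100 evenly spaced point values covering 0.5 p.u. to 2.0 p.u., one per classification label.
voltage_grid = torch.linspace(0.5, 2.0, steps=100)

def predicted_voltage(logits: torch.Tensor) -> float:
    """Pick the classification label with the largest softmax probability and return its p.u. value."""
    probs = torch.softmax(logits, dim=-1)
    return voltage_grid[probs.argmax(dim=-1)].item()
```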
The first neural network is trained in the same way as an ordinary neural network; a common cross-entropy loss, or the difference between the predicted grid-connected point voltage at the next time point and the actual grid-connected point voltage at the next time point, can be adopted as the loss value during training.
In one embodiment of the invention, as a measure to improve the robustness of the system, the reactive power limit that each power generation unit can generate is calculated every time dispatching is performed; if the reactive power given by the scheduling parameter is greater than the calculated reactive power limit, the reactive power limit is used as the reactive power in the actual dispatch.
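A sketch of this robustness measure: each scheduling parameter is capped at the reactive power limit computed for its unit; how the limit itself is computed is left abstract here.

```python
def apply_reactive_limits(schedule: list, limits: list) -> list:
    """Use the reactive power limit whenever a scheduling parameter exceeds it."""
    return [min(q, q_max) for q, q_max in zip(schedule, limits)]

# Example: apply_reactive_limits([0.12, 0.30, 0.05], [0.20, 0.25, 0.10]) -> [0.12, 0.25, 0.05]
```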
The "next time point" in the above embodiments is a specific point in time; the interval between the next time point and the previous one may be 1 s and, in particular, can be adjusted according to the scheduling period.
This embodiment takes the spatial distribution of the power generation units of the wind farm into account. In particular, for wide-area wind farms, or scenarios where the power generation units are close to the ground, the operating parameters of each power generation unit are highly specific, so directly performing a complex calculation on the operating parameters of a single power generation unit to obtain its scheduling parameter is a coarse approach that easily causes frequent and wide-range fluctuations of the grid-connected point voltage.
The invention has been described above with reference to embodiments, but it is not limited to the specific implementations described, which are only illustrative and not restrictive; many variations made by those of ordinary skill in the art, given the benefit of this disclosure, fall within the scope of the invention.

Claims (8)

1. The natural language processing method suitable for power dispatching is characterized by comprising the following steps:
step 101, collecting power generation unit data, and generating a power generation unit feature vector based on the power generation unit data;
step 102, constructing a knowledge graph based on the circuit layout, wherein the knowledge graph comprises an entity of a power generation unit, an entity of a converter, an entity of an energy storage unit, an entity of an inverter and a grid-connected point entity, and the relation between the entities of the power generation unit is consistent with the relation between the power generation units on the circuit layout; encoding the entity of the knowledge graph to obtain an entity vector;
step 103, inputting the knowledge graph and the entity vectors into a first neural network, wherein the first neural network comprises a first hidden layer, a second hidden layer, a splicing layer, a third hidden layer and a fourth hidden layer; the first hidden layer takes the knowledge graph and the entity vectors as input and outputs an entity feature vector for each entity; the second hidden layer takes the power generation unit feature vectors as input and outputs a first intermediate vector for each power generation unit; the first intermediate vectors of the power generation units and the entity feature vectors are input into the splicing layer, which outputs the total set feature vector; the total set feature vector is input into the third hidden layer, which outputs the parameter feature vector, whose dimension is equal to the total number of power generation units; the fourth hidden layer takes the parameter feature vector as input and outputs the final feature vector to the fully connected layer, and the fully connected layer outputs a classification label representing the grid-connected point voltage at the next time point;
the splicing layer is used for splicing the first intermediate vector of each power generation unit with the corresponding entity characteristic vector to obtain a second intermediate vector of the power generation unit, and then splicing the second intermediate vectors of all the power generation units to obtain a total set characteristic vector;
the calculation formula of the first hidden layer is H = ReLU(D̃^(-1/2) Ã D̃^(-1/2) X W), wherein H and X respectively represent the entity feature vector matrix and the entity vector matrix, Ã represents the sum of the entity adjacency matrix and the identity matrix, D̃ represents the degree matrix of Ã, W represents the weight matrix of the first hidden layer, and ReLU is the ReLU activation function; the entity vector matrix and the entity feature vector matrix are tensor representations of the entity vectors and the entity feature vectors;
step 104, generating a loss value from the difference between the grid-connected point voltage at the next time point output by the first neural network and the calibrated grid-connected point voltage, back-propagating the loss to update the parameter feature vector, and then re-inputting the updated parameter feature vector into the fourth hidden layer;
step 105, iteratively executing step 104 until the difference between the grid-connected point voltage at the next time point output by the first neural network and the calibrated grid-connected point voltage is smaller than a preset value, then generating a scheduling parameter from each component of the parameter feature vector input in the last iteration, and taking each scheduling parameter as the reactive power to be output by the corresponding power generation unit.
2. The method of claim 1, wherein the power generation unit data comprise the air volume, wind pressure, rotor active power, stator active power, reactive power, power system frequency, generator stator reactance, stator-rotor mutual reactance, slip, rotor resistance, stator self-inductance, the component of the rotor current on the d axis, the component of the rotor current on the q axis, the component of the rotor self-inductance flux linkage on the d axis, the component of the rotor self-inductance flux linkage on the q axis, the stator self-inductance flux linkage, and the rotor-side converter rated current.
3. The method of claim 1, wherein the power generation unit feature vector of the i-th power generation unit is formed from the rotor active power, stator active power, reactive power, air volume, wind pressure, power system frequency, generator stator reactance, stator-rotor mutual reactance, slip, rotor resistance, stator self-inductance, the component of the rotor current on the d axis, the component of the rotor current on the q axis, the component of the rotor self-inductance flux linkage on the d axis, the component of the rotor self-inductance flux linkage on the q axis, the stator self-inductance flux linkage and the rotor-side converter rated current of the i-th power generation unit, together with the gravitational acceleration g.
4. The method for processing natural language suitable for power dispatching according to claim 1, wherein the entities in the knowledge graph are named entities.
5. A natural language processing method suitable for power dispatching according to claim 1, wherein the dispatching parameter generated by the kth component of the parameter feature vector is distributed to the generating unit corresponding to the kth entity vector of the entity vector matrix.
6. The natural language processing method suitable for power dispatching according to claim 1, wherein the value range of 0.5 p.u. to 2.0 p.u. is uniformly discretized into 100 point values, which correspond respectively to the 100 classification labels of the classification space of the fully connected layer.
7. The natural language processing method for power scheduling according to claim 1, wherein a speed response value and an iteration count threshold are set in addition to the preset value, and if the number of iterations of step 104 exceeds the iteration count threshold, the speed response value replaces the preset value as the condition for ending the iteration of step 104, the speed response value being greater than the preset value.
8. The method for processing natural language suitable for power dispatching according to claim 1, wherein the second hidden layer is pre-trained and is connected with a first classifier when pre-trained, and a classification label of the first classifier indicates the efficiency of the power generation unit.
CN202310979904.2A 2023-08-07 2023-08-07 Natural language processing method suitable for power dispatching Active CN116703128B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310979904.2A CN116703128B (en) 2023-08-07 2023-08-07 Natural language processing method suitable for power dispatching

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310979904.2A CN116703128B (en) 2023-08-07 2023-08-07 Natural language processing method suitable for power dispatching

Publications (2)

Publication Number Publication Date
CN116703128A CN116703128A (en) 2023-09-05
CN116703128B 2024-01-02

Family

ID=87824274

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310979904.2A Active CN116703128B (en) 2023-08-07 2023-08-07 Natural language processing method suitable for power dispatching

Country Status (1)

Country Link
CN (1) CN116703128B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108874783A (en) * 2018-07-12 2018-11-23 国网福建省电力有限公司 Power information O&M knowledge model construction method
CN110825881A (en) * 2019-09-26 2020-02-21 中国电力科学研究院有限公司 Method for establishing electric power knowledge graph
WO2023004528A1 (en) * 2021-07-26 2023-02-02 深圳市检验检疫科学研究院 Distributed system-based parallel named entity recognition method and apparatus
CN116028646A (en) * 2023-02-09 2023-04-28 云南电网有限责任公司红河供电局 Power grid dispatching field knowledge graph construction method based on machine learning

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108874783A (en) * 2018-07-12 2018-11-23 国网福建省电力有限公司 Power information O&M knowledge model construction method
CN110825881A (en) * 2019-09-26 2020-02-21 中国电力科学研究院有限公司 Method for establishing electric power knowledge graph
WO2023004528A1 (en) * 2021-07-26 2023-02-02 深圳市检验检疫科学研究院 Distributed system-based parallel named entity recognition method and apparatus
CN116028646A (en) * 2023-02-09 2023-04-28 云南电网有限责任公司红河供电局 Power grid dispatching field knowledge graph construction method based on machine learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Accurate identification method of power equipment defect text information based on dependency syntax analysis; 邵冠宇; 王慧芳; 吴向宏; 陆金龙; 李建红; 何奔腾; Automation of Electric Power Systems (No. 12); 259-270 *

Also Published As

Publication number Publication date
CN116703128A (en) 2023-09-05

Similar Documents

Publication Publication Date Title
Sitharthan et al. Adaptive hybrid intelligent MPPT controller to approximate effectual wind speed and optimal rotor speed of variable speed wind turbine
Soliman et al. Hybrid ANFIS‐GA‐based control scheme for performance enhancement of a grid‐connected wind generator
Mancilla-David et al. Adaptive passivity-based control for maximum power extraction of stand-alone windmill systems
Lin et al. Design of intelligent controllers for wind generation system with sensorless maximum wind energy control
Abouheaf et al. Model‐free adaptive learning control scheme for wind turbines with doubly fed induction generators
Lin et al. Intelligent controlled three‐phase squirrel‐cage induction generator system using wavelet fuzzy neural network for wind power
CN102611380A (en) Online identification method for parameters of double-fed motor
Shihabudheen et al. Neuro-fuzzy control of DFIG wind energy system with distribution network
CN115549139A (en) New energy power generation and load hybrid model identification modeling method
Wang et al. Dynamic equivalent method of PMSG‐based wind farm for power system stability analysis
Azeem et al. Robust neural network scheme for generator side converter of doubly fed induction generator
Bao et al. Adaptive inverse control of variable speed wind turbine
Bal et al. Artificial neural network based automatic voltage regulator for a stand-alone synchronous generator
Belkhier et al. Energy-based fuzzy supervisory non integer control for performance improvement of PMSG-Based marine energy system under swell effect and parameter uncertainties
Hachana et al. Efficient PMSG wind turbine with energy storage system control based shuffled complex evolution optimizer
CN116703128B (en) Natural language processing method suitable for power dispatching
Rai et al. A comparative performance analysis for loss minimization of induction motor drive based on soft computing techniques
Sivakumar et al. Neural network based reinforcement learning for maximum power extraction of wind energy
Darabian et al. Combined use of sensitivity analysis and hybrid Wavelet-PSO-ANFIS to improve dynamic performance of DFIG-based wind generation
Ortega et al. A globally convergent wind speed estimator for windmill systems
Sato et al. A topology optimization of hydroelectric generator using covariance matrix adaptation evolution strategy
CN115903457A (en) Low-wind-speed permanent magnet synchronous wind driven generator control method based on deep reinforcement learning
Sánchez et al. Neural Control of Renewable Electrical Power Systems
Kharrat et al. Robust H2‐Optimal TS Fuzzy Controller Design for a Wind Energy Conversion System
Lin Novel modified Elman neural network control for PMSG system based on wind turbine emulator

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant