CN115207958A - Current deviation control method and system based on deep reinforcement learning - Google Patents

Current deviation control method and system based on deep reinforcement learning

Info

Publication number
CN115207958A
CN115207958A
Authority
CN
China
Prior art keywords
angle
current
turn
parameter
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210983839.6A
Other languages
Chinese (zh)
Inventor
刘崇茹
左优
李志显
李巨峰
李剑泽
史一博
肖康
黎晓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
North China Electric Power University
Original Assignee
North China Electric Power University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by North China Electric Power University filed Critical North China Electric Power University
Priority to CN202210983839.6A
Publication of CN115207958A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H02 GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
    • H02J CIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
    • H02J3/00 Circuit arrangements for ac mains or ac distribution networks
    • H02J3/36 Arrangements for transfer of electric power between ac networks via a high-tension dc link
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • H ELECTRICITY
    • H02 GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
    • H02J CIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
    • H02J3/00 Circuit arrangements for ac mains or ac distribution networks
    • H02J3/001 Methods to deal with contingencies, e.g. abnormalities, faults or failures
    • H ELECTRICITY
    • H02 GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
    • H02J CIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
    • H02J3/00 Circuit arrangements for ac mains or ac distribution networks
    • H02J3/18 Arrangements for adjusting, eliminating or compensating reactive power in networks
    • H ELECTRICITY
    • H02 GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
    • H02J CIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
    • H02J2203/00 Indexing scheme relating to details of circuit arrangements for AC mains or AC distribution networks
    • H02J2203/20 Simulating, e.g. planning, reliability check, modelling or computer assisted design [CAD]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02E REDUCTION OF GREENHOUSE GAS [GHG] EMISSIONS, RELATED TO ENERGY GENERATION, TRANSMISSION OR DISTRIBUTION
    • Y02E60/00 Enabling technologies; Technologies with a potential or indirect contribution to GHG emissions mitigation
    • Y02E60/60 Arrangements for transfer of electric power between AC networks or generators via a high voltage DC link [HVDC]

Landscapes

  • Engineering & Computer Science (AREA)
  • Power Engineering (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Inverter Devices (AREA)

Abstract

The invention provides a current deviation control method and system based on deep reinforcement learning, belonging to the field of power system control. The current deviation control method comprises the following steps: acquiring the current operating state of the high-voltage direct-current transmission control system, the constant current advanced trigger angle, the constant turn-off angle advanced trigger angle, the direct current command, and the inverter-side direct current; judging whether the constant current advanced trigger angle is larger than the constant turn-off angle advanced trigger angle; if so, determining the first parameter optimal value and the second parameter optimal value corresponding to the current operating state from an offline parameter set; determining a turn-off angle increment from the direct current command, the inverter-side direct current, the first parameter optimal value, and the second parameter optimal value; and updating the constant turn-off angle advanced trigger angle from the turn-off angle increment and the current turn-off angle until the constant current advanced trigger angle is less than or equal to the constant turn-off angle advanced trigger angle. This speeds up the switch from constant current control to constant turn-off angle control and reduces the risk of subsequent commutation failure of the system.

Description

Current deviation control method and system based on deep reinforcement learning
Technical Field
The invention relates to the field of power system control, in particular to a current deviation control method and system based on deep reinforcement learning.
Background
LCC-HVDC (line commutated converter based high voltage direct current) is a main component of AC/DC hybrid power grids; however, because thyristors are used as the commutation elements, LCC-HVDC systems are at risk of commutation failure. As the coupling between AC and DC systems becomes tighter, the impact of faults on grid operation changes from local to global. Simulation shows that, without countermeasures, the current power grid can withstand roughly two successive commutation failures of a single-circuit ultra-high-voltage DC line, while problems such as the DC power drop caused by multiple commutation failures may exceed what the grid can bear. Therefore, research on the mechanism of and factors influencing subsequent commutation failure, together with corresponding suppression measures, is of great significance for safe and reliable grid operation.
Existing research shows that the main cause of subsequent commutation failure is improper interaction of the inverter-side controllers, so most studies propose improvements from the perspective of the DC control system. However, in existing work the CEC (current error controller) parameters are still not tuned well enough, and the risk of subsequent commutation failure cannot be avoided.
Disclosure of Invention
The invention aims to provide a current deviation control method and system based on deep reinforcement learning, which can speed up the switch from constant current control to constant turn-off angle control and reduce the risk of subsequent commutation failure of a high-voltage direct-current transmission control system.
In order to achieve the purpose, the invention provides the following scheme:
a current deviation control method based on deep reinforcement learning is applied to a high-voltage direct-current transmission control system, wherein the high-voltage direct-current transmission control system comprises a rectification side control and an inversion side control; the inverter side control comprises a constant current control mode and a constant turn-off angle control mode, and the current deviation control method based on deep reinforcement learning comprises the following steps:
acquiring the current running state of the high-voltage direct-current transmission control system, an initial constant current advanced trigger angle, an initial constant turn-off angle advanced trigger angle, a direct current instruction and an inversion side direct current in inversion side control; the initial constant current advanced trigger angle is an advanced trigger angle obtained in a constant current control mode; the initial fixed turn-off angle advanced trigger angle is an advanced trigger angle obtained in a fixed turn-off angle control mode;
for the nth iteration, judging whether the constant current advanced trigger angle is larger than the nth constant turn-off angle advanced trigger angle, wherein n is larger than 0; the 1st constant turn-off angle advanced trigger angle is the initial constant turn-off angle advanced trigger angle;
if the constant current advanced trigger angle is less than or equal to the nth constant turn-off angle advanced trigger angle, performing no current deviation control;
if the constant current advanced trigger angle is larger than the nth constant turn-off angle advanced trigger angle, then:
determining a first parameter optimal value and a second parameter optimal value corresponding to the current operation state according to an offline parameter set; the offline parameter set comprises a first parameter optimal value and a second parameter optimal value corresponding to different operating states of the high-voltage direct-current transmission control system; the first parameter optimal value and the second parameter optimal value are obtained by optimizing by adopting a deep reinforcement learning network in different running states of the high-voltage direct-current power transmission control system in advance;
determining a turn-off angle increment according to the direct current instruction, the direct current on the inversion side, the first parameter optimal value and the second parameter optimal value;
acquiring a current turn-off angle under a constant turn-off angle control mode;
and determining the (n + 1) th fixed turn-off angle leading trigger angle according to the turn-off angle increment and the current turn-off angle, and performing (n + 1) th iteration to accelerate the switching from the fixed current control mode to the fixed turn-off angle control mode.
Optionally, the current operating state of the high-voltage direct-current transmission control system includes the inverter-side direct-current voltage, the inverter-side alternating-current bus voltage drop amplitude, and the reactive power injected into the DC system.
Optionally, the current deviation control method based on deep reinforcement learning further includes:
simulating the high-voltage direct-current power transmission control system, and determining a first parameter optimizing range, a second parameter optimizing range, an initial first parameter value and an initial second parameter value;
aiming at the mth iteration of the high-voltage direct-current transmission control system in any historical operating state, acquiring the minimum turn-off angle of the high-voltage direct-current transmission control system under the action of the mth first parameter value and the mth second parameter value, wherein m is greater than 0; the 1 st first parameter value is an initial first parameter value, and the 1 st second parameter value is an initial second parameter value;
determining a value of a reward function according to the minimum turn-off angle;
judging whether the value of the reward function is larger than a set threshold value or not;
if the reward function value is smaller than or equal to a set threshold value, adjusting the size of the mth first parameter value in the first parameter optimizing range, adjusting the size of the mth second parameter value in the second parameter optimizing range to obtain the (m + 1) th first parameter value and the (m + 1) th second parameter value, and performing iteration for the (m + 1) th time;
and if the reward function value is larger than a set threshold value, taking the mth first parameter value as the first parameter optimal value of the historical operating state, and taking the mth second parameter value as the second parameter optimal value of the historical operating state.
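The accept-or-continue structure of the optional offline loop above can be sketched as follows. This is illustrative only: `simulate_min_gamma`, `reward_fn`, `adjust`, and the threshold are placeholders for the simulation, reward, and parameter-adjustment steps, not names from the patent.

```python
def find_optimal_params(simulate_min_gamma, reward_fn, adjust,
                        p1_init, p2_init, threshold, max_iters=1000):
    """Iterate candidate (first, second) parameter values until the reward
    computed from the simulated minimum turn-off angle exceeds the set
    threshold; returns the optimal pair for one operating state."""
    p1, p2 = p1_init, p2_init
    for _ in range(max_iters):
        gamma_min = simulate_min_gamma(p1, p2)  # minimum turn-off angle under (p1, p2)
        if reward_fn(gamma_min) > threshold:    # parameters effective: stop iterating
            return p1, p2
        p1, p2 = adjust(p1, p2)                 # next candidate inside the search ranges
    return p1, p2                               # fall back to the last candidate
```

In the patent the adjustment is driven by the trained deep reinforcement learning agent rather than a fixed rule; this loop only shows the iteration and stopping condition.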
Optionally, the determining a turn-off angle increment according to the dc current command, the inverter-side dc current, the first parameter optimal value, and the second parameter optimal value specifically includes:
determining a current deviation control curve according to the first parameter optimal value and the second parameter optimal value;
determining a current deviation value according to the direct current instruction and the direct current of the inversion side;
and determining a turn-off angle increment according to the current deviation control curve and the current deviation value.
Optionally, the current deviation value is determined using the following equation:
ΔI_d = I_dref - I_d
where ΔI_d is the current deviation value, I_dref is the direct current command, and I_d is the inverter-side direct current.
Optionally, the determining, according to the turn-off angle increment and the current turn-off angle, an n +1 th fixed turn-off angle leading trigger angle specifically includes:
determining a turn-off angle error according to the turn-off angle increment and the current turn-off angle;
and carrying out amplitude limiting on the turn-off angle error, and carrying out proportional integral control to obtain the (n + 1) th fixed turn-off angle advanced trigger angle.
Optionally, the turn-off angle error is determined using the following equation:
Δγ = Δγ_CEC + γ_ref - γ
where Δγ is the turn-off angle error, Δγ_CEC is the turn-off angle increment, γ_ref is the turn-off angle reference value, and γ is the current turn-off angle.
In order to achieve the above purpose, the invention also provides the following scheme:
a current deviation control system based on deep reinforcement learning comprises:
the data acquisition unit is used for acquiring the current operation state of the high-voltage direct-current transmission control system, an initial constant current advanced trigger angle, an initial constant turn-off angle advanced trigger angle, a direct current instruction and an inversion side direct current in inversion side control; the initial constant current advanced trigger angle is an advanced trigger angle obtained in a constant current control mode; the initial fixed turn-off angle advanced trigger angle is an advanced trigger angle obtained in a fixed turn-off angle control mode;
the first judgment unit is connected with the data acquisition unit and used for judging whether the constant current advanced trigger angle is larger than the advanced trigger angle of the nth constant turn-off angle or not aiming at the nth iteration, wherein n is larger than 0; the 1 st fixed turn-off angle advanced trigger angle is an initial fixed turn-off angle advanced trigger angle;
the holding unit is connected with the first judging unit and is used for not carrying out current deviation control when the constant current leading trigger angle is less than or equal to the nth constant turn-off angle leading trigger angle;
the parameter determining unit is connected with the first judging unit and used for determining a first parameter optimal value and a second parameter optimal value corresponding to the current operation state according to an offline parameter set when the constant current leading trigger angle is larger than the n-th constant turn-off angle leading trigger angle; the offline parameter set comprises a first parameter optimal value and a second parameter optimal value corresponding to different operating states of the high-voltage direct-current transmission control system; the first parameter optimal value and the second parameter optimal value are obtained by optimizing by adopting a deep reinforcement learning network in different running states of the high-voltage direct-current power transmission control system in advance;
a turn-off angle increment determining unit connected to the parameter determining unit and the data obtaining unit, and configured to determine a turn-off angle increment according to the dc current instruction, the inverter-side dc current, the first parameter optimal value, and the second parameter optimal value;
the turn-off angle acquisition unit is used for acquiring a current turn-off angle under a constant turn-off angle control mode;
and the iteration unit is connected with the turn-off angle increment determining unit, the turn-off angle acquiring unit and the first judging unit and is used for determining the (n + 1) th fixed turn-off angle advanced trigger angle according to the turn-off angle increment and the current turn-off angle, and performing the (n + 1) th iteration to accelerate the switching from the fixed current control mode to the fixed turn-off angle control mode.
According to the specific embodiments provided, the invention discloses the following technical effects: the first and second parameters are optimized in advance with a deep reinforcement learning network under different operating states of the high-voltage direct-current transmission control system, giving a first parameter optimal value and a second parameter optimal value for each operating state; when the constant current advanced trigger angle is larger than the constant turn-off angle advanced trigger angle, the first and second parameter optimal values corresponding to the current operating state are selected, and the turn-off angle increment is determined from the direct current command, the inverter-side direct current, and the two optimal values; the constant turn-off angle advanced trigger angle is then updated from the turn-off angle increment and the current turn-off angle until the constant current advanced trigger angle is less than or equal to the constant turn-off angle advanced trigger angle, at which point current deviation control stops. The switch from the constant current control mode to the constant turn-off angle control mode is accelerated, and the risk of commutation failure of the high-voltage direct-current transmission control system is effectively suppressed.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without inventive exercise.
FIG. 1 is a flow chart of a current deviation control method based on deep reinforcement learning according to the present invention;
FIG. 2 is a flow chart of a method for determining an offline parameter set;
FIG. 3 is a schematic diagram of a modified CEC module;
FIG. 4 is a diagram illustrating the variation of the minimum turn-off angle during deep reinforcement learning training;
FIG. 5 is a diagram illustrating the variation of the reward value during deep reinforcement learning training;
FIG. 6 is a schematic diagram of a modified CEC profile;
FIG. 7 is a block diagram of a DC control system;
FIG. 8 is a schematic diagram of post-fault DC system control response characteristics;
FIG. 9 is a schematic view of a current deviation control characteristic curve;
FIG. 10 is a control block diagram of an improved CEC method;
FIG. 11 is a graph illustrating turn-off angle increment comparison simulation curves for two control strategies;
FIG. 12 is a block diagram of a deep reinforcement learning based current deviation control system according to the present invention;
FIG. 13 is a diagram of the system architecture of the CIGRE HVDC model host;
FIG. 14 is a graph of the amplitude response characteristic of the system inverter-side converter bus voltage under a three-phase ground fault;
FIG. 15 is a diagram of the DC response of the system under three-phase ground fault;
FIG. 16 is a graph of the turn-off angle response of the system under a three-phase ground fault;
FIG. 17 is a diagram of the DC power response characteristic of the system under a three-phase ground fault;
FIG. 18 is a graph of the amplitude response characteristic of the system inverter-side converter bus voltage under a single-phase ground fault;
FIG. 19 is a diagram of the DC response of the system in a single-phase earth fault;
FIG. 20 is a graph of system turn-off angle response characteristics under a single-phase ground fault;
fig. 21 is a graph of the dc power response characteristic of the system under a single-phase earth fault.
Description of the symbols:
the device comprises a data acquisition unit-101, a first judgment unit-102, a holding unit-103, a parameter determination unit-104, a turn-off angle increment determination unit-105, a turn-off angle acquisition unit-106 and an iteration unit-107.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide a current deviation control method and a current deviation control system based on deep reinforcement learning.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
The current deviation control method based on deep reinforcement learning is applied to a high-voltage direct-current transmission control system, which comprises rectifier-side control and inverter-side control. The inverter-side control includes CC (constant current control mode) and CEA (constant extinction angle, i.e., constant turn-off angle control mode).
As shown in fig. 1, the current deviation control method based on deep reinforcement learning of the present invention includes:
s1: the method comprises the steps of obtaining the current running state of the high-voltage direct-current transmission control system, the initial fixed current advanced trigger angle, the initial fixed turn-off angle advanced trigger angle, a direct current instruction and the direct current of the inversion side in the control of the inversion side. The initial constant current advanced trigger angle is an advanced trigger beta obtained in a constant current control mode C And (4) an angle. The initial fixed turn-off angle advanced trigger angle is an advanced trigger angle beta obtained in a fixed turn-off angle control mode G
S2: aiming at the nth iteration, judging whether the fixed current advanced trigger angle is larger than the advanced trigger angle of the nth fixed turn-off angle, wherein n is larger than 0; the 1 st fixed turn-off angle leading trigger angle is the initial fixed turn-off angle leading trigger angle.
S3: and if the constant current leading trigger angle is less than or equal to the nth constant turn-off angle leading trigger angle, not carrying out current deviation control.
S4: if the constant current leading trigger angle is larger than the n-th constant turn-off angle leading trigger angle, then: and determining a first parameter optimal value and a second parameter optimal value corresponding to the current operation state according to the offline parameter set. The offline parameter set comprises a first parameter optimal value and a second parameter optimal value corresponding to different operating states of the high-voltage direct-current transmission control system. The first parameter optimal value and the second parameter optimal value are obtained by optimizing the high-voltage direct-current power transmission control system in advance by adopting a deep reinforcement learning network under different operation states.
In this embodiment, the first parameter optimal value and the second parameter optimal value are slopes of the CEC characteristic curve.
S5: and determining a turn-off angle increment according to the direct current instruction, the direct current on the inversion side, the first parameter optimal value and the second parameter optimal value.
Specifically, a current deviation control curve is first determined from the first parameter optimal value and the second parameter optimal value. A current deviation value is then determined from the direct current command and the inverter-side direct current. In this embodiment, the formula ΔI_d = I_dref - I_d is used, where ΔI_d is the current deviation value, I_dref is the direct current command, and I_d is the inverter-side direct current. Finally, the turn-off angle increment is determined from the current deviation control curve and the current deviation value.
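A minimal sketch of this computation follows. The saturating-ramp shape and the function name are my own reading of the CEC characteristic; the default parameter values are the CIGRE benchmark values quoted later in the text.

```python
def turn_off_angle_increment(i_dref, i_d, input_max=0.1, output_max=0.2793):
    """CEC ramp: map the current deviation onto a turn-off angle increment.
    input_max / output_max are the two optimized parameters (defaults are
    the CIGRE benchmark values quoted in the text)."""
    delta_i = i_dref - i_d                     # current deviation, Delta I_d
    clipped = min(max(delta_i, 0.0), input_max)
    return clipped / input_max * output_max    # linear ramp, saturating at output_max
```

With a larger `output_max / input_max` slope, the same current deviation yields a larger increment, which is the mechanism the optimization exploits.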
S6: and acquiring the current turn-off angle in the constant turn-off angle control mode.
S7: and determining the (n + 1) th fixed turn-off angle leading trigger angle according to the turn-off angle increment and the current turn-off angle, and performing (n + 1) th iteration to accelerate the switching from the fixed current control mode to the fixed turn-off angle control mode.
Specifically, the turn-off angle error is first determined from the turn-off angle increment and the current turn-off angle using the formula Δγ = Δγ_CEC + γ_ref - γ, where Δγ is the turn-off angle error, Δγ_CEC is the turn-off angle increment, γ_ref is the turn-off angle reference value, and γ is the current turn-off angle. The turn-off angle error is then amplitude-limited and passed through proportional-integral control to obtain the (n+1)-th constant turn-off angle advanced trigger angle. That is, the turn-off angle error Δγ of the CEA control is limited and then input to the PI stage.
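This limit-then-PI step can be sketched as below; the gains, time step, and limit values are illustrative placeholders, not values from the patent.

```python
class ConstantGammaPI:
    """Amplitude-limit the turn-off angle error, then apply PI control to
    produce the next constant turn-off angle advanced trigger angle."""

    def __init__(self, kp=0.5, ki=10.0, dt=1e-3, err_limit=0.2):
        self.kp, self.ki, self.dt, self.err_limit = kp, ki, dt, err_limit
        self.integral = 0.0

    def step(self, dgamma_cec, gamma_ref, gamma):
        err = dgamma_cec + gamma_ref - gamma                  # turn-off angle error
        err = min(max(err, -self.err_limit), self.err_limit)  # amplitude limiting
        self.integral += err * self.dt                        # integral accumulation
        return self.kp * err + self.ki * self.integral        # next beta_G
```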
Further, the current operating state of the high-voltage direct-current transmission control system comprises the inverter-side direct-current voltage, the inverter-side alternating-current bus voltage drop amplitude, and the reactive power injected into the DC system.
As shown in fig. 2, in the process of determining the offline parameter set, the current deviation control method based on deep reinforcement learning according to the present invention further includes:
s8: and simulating the high-voltage direct-current power transmission control system, and determining a first parameter optimizing range, a second parameter optimizing range, an initial first parameter value and an initial second parameter value.
As an example, different operating states of the system are obtained by changing the voltages of the voltage sources at the two ends of the high-voltage direct-current transmission control system; the operating state comprises the inverter-side direct-current voltage, the inverter-side alternating-current bus voltage drop amplitude, and the reactive power injected into the DC system. A three-phase ground fault is set to occur at the inverter-side bus at 2.0 s, with a fault duration of 0.5 s and a fault grounding inductance of 0.8 H, and the CEC stage is rewritten as a module with externally visible parameters, as shown in FIG. 3.
In the CIGRE HVDC standard model, the initial first parameter value Input_max is 0.1 and the initial second parameter value Output_max is 0.2793. To avoid an overly large action space making the deep reinforcement learning model difficult to train, the parameter search ranges were determined preliminarily by simulation as Input_max: [0.03, 0.15] and Output_max: [0.20, 0.35], with an adjustment step of 0.01 for both parameters.
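With these ranges and step size, the agent's discrete action space can be enumerated as below (a sketch; the function name is my own):

```python
def build_action_space(lo1=0.03, hi1=0.15, lo2=0.20, hi2=0.35, step=0.01):
    """Enumerate every (Input_max, Output_max) pair on the 0.01 grid
    inside the search ranges determined by simulation."""
    n1 = round((hi1 - lo1) / step) + 1  # number of Input_max values
    n2 = round((hi2 - lo2) / step) + 1  # number of Output_max values
    return [(round(lo1 + i * step, 2), round(lo2 + j * step, 2))
            for i in range(n1) for j in range(n2)]
```

This yields 13 x 16 = 208 candidate parameter pairs, a space small enough for a discrete-action agent to search.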
S9: aiming at the mth iteration of the high-voltage direct-current transmission control system in any historical operating state, acquiring the minimum turn-off angle of the high-voltage direct-current transmission control system under the action of the mth first parameter value and the mth second parameter value, wherein m is greater than 0; the 1 st first parameter value is an initial first parameter value, and the 1 st second parameter value is an initial second parameter value.
S10: and determining the value of the reward function according to the minimum turn-off angle.
S11: and judging whether the value of the reward function is larger than a set threshold value or not.
S12: if the reward function value is smaller than or equal to the set threshold value, the size of the mth first parameter value is adjusted in the first parameter optimizing range, the size of the mth second parameter value is adjusted in the second parameter optimizing range, the (m + 1) th first parameter value and the (m + 1) th second parameter value are obtained, and iteration is carried out for the (m + 1) th time.
S13: and if the reward function value is larger than a set threshold value, taking the mth first parameter value as the first parameter optimal value of the historical operating state, and taking the mth second parameter value as the second parameter optimal value of the historical operating state.
Since the off-angle margin during fault recovery needs to be large enough to enhance the ability of the dc system to resist subsequent commutation failures, the reward function is set as follows:
r = λ, if γ_min < 7.2;  r = μ(γ_min - 7.2), if γ_min ≥ 7.2
where r is the value of the reward function, λ is a negative constant, μ is a positive constant (both adjusted as appropriate for different working conditions and different practical projects), and γ_min is the minimum turn-off angle obtained by simulation; when the fault severity, fault location, or grounding inductance changes, γ_min changes accordingly.
The reward function expresses the following: if, after parameter optimization, the minimum turn-off angle γ_min following the first commutation failure is still less than 7.2°, the parameters are ineffective and the reward is negative. If after optimization γ_min is greater than 7.2°, the parameters are effective, and the larger the minimum turn-off angle, the larger the reward. Maximizing the reward therefore yields the required optimized parameters.
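The behaviour just described can be expressed as the following sketch. The exact functional form (a constant penalty below the 7.2° threshold and a margin-proportional bonus above it) and the default λ and μ values are inferences from the description, since the original equation is rendered as an image.

```python
def reward(gamma_min, lam=-1.0, mu=0.1, threshold=7.2):
    """Negative constant lam when the minimum turn-off angle is below the
    7.2 deg threshold (parameters ineffective); otherwise a positive value
    that grows with the turn-off angle margin."""
    if gamma_min < threshold:
        return lam
    return mu * (gamma_min - threshold)
```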
As shown in fig. 4 and 5, control performance is poor in the initial training stage and the output data oscillate with large amplitude. As the number of training episodes and accumulated samples increases, the output gradually converges, the minimum turn-off angle gradually increases, the reward function value also rises, and the overall performance trend is good.
Finally, the optimization results determine the optimal parameters of the CEC ramp function under this fault condition as: Input_max: 0.05, Output_max.
The CEC characteristic curve is optimized based on the first and second parameter optimal values, as shown in fig. 6. During normal operation, the CEC characteristic is curve 1; when the inverter-side control switches to the CC control mode, the CEC changes the slope of the ramp function using the offline-optimized parameters, so the characteristic becomes curve 2. The CEC output then increases, the switch from CC control to CEA control is accelerated, and the advance firing angle margin is improved.
The method uses a convolutional neural network to extract environment information of the power system and combines it with a deep Q-network (DQN) to form a control-parameter optimization model based on deep reinforcement learning. The trained model optimizes the slope of the controller's ramp function (the first and second parameter optimal values), thereby suppressing subsequent commutation failures.
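A greatly simplified stand-in for the CNN+DQN model can illustrate the one-step Q-learning update at its core. The class name, gains, and the linear (non-convolutional) Q-function over a state vector are assumptions for illustration; the patent's model uses a convolutional network over grid measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyDQN:
    """Simplified stand-in for the CNN+DQN optimizer: a linear Q-function
    over a state vector, trained with the standard one-step TD target."""
    def __init__(self, n_state, n_action, lr=0.01, gamma=0.9):
        self.w = np.zeros((n_action, n_state))   # one weight row per action
        self.lr, self.gamma = lr, gamma

    def q(self, s):
        return self.w @ s                        # Q-values for all actions

    def act(self, s, eps=0.1):
        if rng.random() < eps:                   # epsilon-greedy exploration
            return int(rng.integers(self.w.shape[0]))
        return int(np.argmax(self.q(s)))

    def update(self, s, a, r, s_next, done):
        target = r if done else r + self.gamma * np.max(self.q(s_next))
        td_err = target - self.q(s)[a]
        self.w[a] += self.lr * td_err * s        # gradient step on the TD error
        return td_err
```

In the patent, the action space is the adjustment of the two ramp-function parameters and the reward is computed from the minimum turn-off angle as above.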
In this embodiment, the CIGRE HVDC standard model is taken as an example: a three-phase ground fault is set to occur at the inverter-side converter bus at 2.0 s with grounding inductance L f of 0.8 H, and the fault is cleared after 0.5 s. FIG. 7 is a block diagram of the prior-art CIGRE HVDC DC control system, in which U d is the inverter-side DC voltage, I co is the manually input current value, I d is the inverter-side direct current, γ min is the minimum turn-off angle of the upper and lower bridge arms of the inverter, I dref is the DC current command value, Δγ is the turn-off angle error, β C is the advance firing angle of constant current control, β G is the advance firing angle of constant turn-off angle control, and α i is the inverter-side firing angle.
After the first commutation failure occurs, both the direct current and the direct voltage tend toward the zero axis because of the overshoot of the PI controller; the post-fault control response characteristic of the DC system is shown in fig. 8.
In fig. 8, the CEC acts before the inverter control-mode switch: when the direct current is smaller than the command value, the DC system rapidly increases the advance firing angle β G output by the CEA controller through the CEC, thereby rapidly enlarging the turn-off angle. At this point the turn-off angle has already risen to a large value and clearly needs to be called back, so β G drops rapidly to adjust the turn-off angle. When β G equals the advance firing angle β C output by the CC controller, the inverter control mode switches to CC control. In this stage the direct voltage is in its climbing recovery period and the direct current recovers gradually. With the commutation voltage still at the fault level, even a slight change in the direct current can cause commutation failure, so this stage particularly needs coordinated inverter-side control to accelerate the switching of the inverter-side control mode.
In the CIGRE HVDC standard model, the characteristic curve of current deviation control without the control method of the present invention is shown in fig. 9, where ΔI dmax is the maximum current deviation, Δγ CEC is the turn-off angle increment output by the CEC, and Δγ max is the maximum turn-off angle increment.
The turn-off angle increment Δγ CEC is expressed as:
Δγ CEC = 0,                          ΔI d ≤ 0
Δγ CEC = (Δγ max / ΔI dmax) · ΔI d,  0 < ΔI d < ΔI dmax
Δγ CEC = Δγ max,                     ΔI d ≥ ΔI dmax
When the inverter side is in the CC control mode, the turn-off angle gradually returns toward its reference value and the share of Δγ CEC in Δγ gradually increases, so Δγ CEC plays a key role in switching the inverter-side control mode. Therefore, in the inverter-side CC control mode, increasing the slope of the CEC enlarges the output turn-off angle increment and accelerates the switching of the inverter-side control mode.
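The CEC ramp characteristic (zero below zero deviation, linear up to ΔI dmax, saturating at Δγ max) can be sketched as a small function; the argument names are illustrative:

```python
def cec_increment(delta_id, di_max, dgamma_max):
    """Ramp characteristic of current deviation control: the turn-off angle
    increment rises linearly with the current deviation and saturates at
    dgamma_max; the curve slope is dgamma_max / di_max."""
    if delta_id <= 0.0:
        return 0.0                        # no deviation -> no increment
    if delta_id >= di_max:
        return dgamma_max                 # saturation at the maximum increment
    return dgamma_max / di_max * delta_id # linear ramp segment
```

Reducing `di_max` (or raising `dgamma_max`) steepens the slope, so the same current deviation produces a larger turn-off angle increment, which is exactly the mechanism used to accelerate the mode switch.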
Fig. 10 is a block diagram of the current deviation control (CEC) of the present invention. The start criterion in fig. 10 is: the strategy is put into operation when β C > β G is detected and quits when β C ≤ β G is detected, which ensures that the control strategy acts only while the inverter side is under CC control and reduces the influence of the increased CEC slope on system recovery. FIG. 11 compares the turn-off angle simulation curves of the CIGRE standard test model control strategy and the control strategy of the present application; increasing the CEC slope increases the turn-off angle increment, which meets the expected effect.
As shown in fig. 12, the current deviation control system based on deep reinforcement learning of the present invention includes: a data acquisition unit 101, a first judgment unit 102, a holding unit 103, a parameter determination unit 104, a turn-off angle increment determination unit 105, a turn-off angle acquisition unit 106, and an iteration unit 107.
The data obtaining unit 101 is configured to obtain a current operating state of the high-voltage direct current transmission control system, an initial constant current advanced trigger angle, an initial constant turn-off angle advanced trigger angle in the inverter-side control, a direct current instruction, and an inverter-side direct current. The initial constant current advanced trigger angle is an advanced trigger angle obtained in a constant current control mode. The initial fixed turn-off angle advanced triggering angle is an advanced triggering angle obtained in a fixed turn-off angle control mode.
The first judging unit 102 is connected to the data acquiring unit 101, and the first judging unit 102 is configured to judge whether the constant current leading trigger angle is greater than an nth constant turn-off angle leading trigger angle, where n is greater than 0, for an nth iteration; the leading trigger angle of the 1 st fixed turn-off angle is the leading trigger angle of the initial fixed turn-off angle.
The holding unit 103 is connected to the first determining unit 102, and the holding unit 103 is configured to not perform the current deviation control when the constant current leading firing angle is smaller than or equal to the nth constant turn-off angle leading firing angle.
The parameter determining unit 104 is connected to the first determining unit 102, and the parameter determining unit 104 is configured to determine a first parameter optimal value and a second parameter optimal value corresponding to the current operating state according to an offline parameter set when the constant current leading trigger angle is greater than the nth constant turn-off angle leading trigger angle. The offline parameter set comprises a first parameter optimal value and a second parameter optimal value corresponding to different operating states of the high-voltage direct-current transmission control system. The first parameter optimal value and the second parameter optimal value are obtained by optimizing by adopting a deep reinforcement learning network in different running states of the high-voltage direct-current power transmission control system in advance;
the turn-off angle increment determining unit 105 is connected to the parameter determining unit 104 and the data obtaining unit 101, and the turn-off angle increment determining unit 105 is configured to determine a turn-off angle increment according to the dc current instruction, the inverter-side dc current, the first parameter optimal value, and the second parameter optimal value.
Specifically, the off angle increment determination unit 105 includes: the device comprises a curve determining module, a current deviation determining module and an increment determining module.
And the curve determining module is connected with the parameter determining module and is used for determining a current deviation control curve according to the first parameter optimal value and the second parameter optimal value.
The current deviation determining module is connected with the data acquiring unit 101, and is used for determining a current deviation value according to the direct current instruction and the direct current of the inversion side.
And the increment determining module is connected with the curve determining module and the current deviation determining module and is used for determining the turn-off angle increment according to the current deviation control curve and the current deviation value.
The off-angle obtaining unit 106 is configured to obtain a current off-angle in the constant off-angle control mode.
The iteration unit 107 is connected to the turn-off angle increment determining unit 105, the turn-off angle obtaining unit 106, and the first determining unit 102, and the iteration unit 107 is configured to determine an n +1 th constant turn-off angle leading trigger angle according to the turn-off angle increment and the current turn-off angle, and perform an n +1 th iteration to accelerate the switching from the constant current control mode to the constant turn-off angle control mode.
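The iteration step performed by the iteration unit (forming the turn-off angle error from the CEC increment, reference value, and measured turn-off angle, limiting its amplitude, then applying proportional-integral control, per claims 6 and 7) can be sketched as follows; the reference value, gains, limits, and time step are illustrative assumptions, not the CIGRE benchmark values:

```python
class CeaPI:
    """Sketch of the iteration step: the turn-off angle error
    (increment + reference - measured) is amplitude-limited and fed to a
    PI regulator whose output is the next CEA advance firing angle."""
    def __init__(self, gamma_ref=15.0, kp=0.5, ki=10.0, dt=1e-3, err_lim=5.0):
        self.gamma_ref, self.kp, self.ki, self.dt = gamma_ref, kp, ki, dt
        self.err_lim, self.integ = err_lim, 0.0

    def step(self, dgamma_cec, gamma):
        err = dgamma_cec + self.gamma_ref - gamma          # turn-off angle error
        err = max(-self.err_lim, min(self.err_lim, err))   # amplitude limiting
        self.integ += err * self.dt                        # integral state
        return self.kp * err + self.ki * self.integ        # beta_G for iteration n+1
```

Each call corresponds to one iteration n → n+1: a larger CEC increment raises the error and hence the next advance firing angle, speeding the switch to constant turn-off angle control.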
In addition, the current deviation control system based on deep reinforcement learning of the invention further comprises: the device comprises a simulation unit, a minimum off-angle acquisition unit, a reward determination unit, a second judgment unit, a parameter adjustment unit and an optimal value determination unit.
The simulation unit is used for simulating the high-voltage direct-current power transmission control system and determining a first parameter optimizing range, a second parameter optimizing range, an initial first parameter value and an initial second parameter value.
The minimum turn-off angle acquisition unit is connected with the simulation unit and used for acquiring a minimum turn-off angle of the high-voltage direct-current power transmission control system under the action of an mth first parameter value and an mth second parameter value in an mth iteration of any historical operating state of the high-voltage direct-current power transmission control system, wherein m is greater than 0; the 1 st first parameter value is an initial first parameter value, and the 1 st second parameter value is an initial second parameter value.
The reward determining unit is connected with the minimum turn-off angle acquiring unit and used for determining a reward function value according to the minimum turn-off angle.
The second judging unit is connected with the reward determining unit and used for judging whether the reward function value is larger than a set threshold value or not.
The parameter adjusting unit is connected with the second judging unit, the simulation unit and the minimum turn-off angle obtaining unit, and is configured to adjust a magnitude of an mth first parameter value within the first parameter optimization range and adjust a magnitude of an mth second parameter value within the second parameter optimization range when the reward function value is less than or equal to a set threshold value, so as to obtain an m +1 th first parameter value and an m +1 th second parameter value, and perform an m +1 th iteration.
The optimal value determining unit is connected with the second judging unit and the parameter determining unit, and is configured to, when the reward function value is greater than a set threshold, take the mth first parameter value as the first parameter optimal value of the historical operating state, and take the mth second parameter value as the second parameter optimal value of the historical operating state.
Next, the CIGRE standard model is used to verify the suppression effect of the control strategy of the present invention on subsequent commutation failures; the main-circuit structure of the CIGRE HVDC model is shown in fig. 13, in which L dc, C dc, and R dc are the inductance, capacitance, and resistance of the DC line, respectively; U dc is the voltage of the DC-line midpoint capacitor; and I dcr and I dci are the rectifier-side and inverter-side direct currents, respectively.
A single-phase or three-phase grounding fault on the AC transmission line in actual engineering is simulated by grounding the converter bus through an inductor, and the fault severity is simulated by changing the grounding inductance. Simulation verification is performed below with two different control strategies.
(1) Control strategy 1: the CIGRE standard test model control.
(2) Control strategy 2: control strategy 1 with the current deviation control link in the inverter-side control system replaced by the control strategy of the present invention.
A. System electrical quantity recovery characteristic verification
A three-phase grounding fault occurs at the inverter-station AC bus at 2 s with grounding inductance L f = 1 H; the fault is cleared after 0.5 s, and the simulated response of the electrical quantities is shown in figs. 14-17.
This fault is equivalent to a slight AC fault in actual engineering. As can be seen from figs. 14-17, under this fault condition the DC system does not undergo subsequent commutation failure, and the system recovery characteristics of control strategy 2 are highly consistent with those of control strategy 1. This shows that the control strategy of the present invention starts only when the system is at risk of subsequent commutation failure and does not start for a slight fault, proving that the start criterion is accurate and reasonable and that the fault recovery capability of the DC system is unaffected.
B. Validity verification of improved CEC control strategy under single-phase short-circuit grounding fault
A single-phase grounding fault occurs at the inverter-station AC bus at 2 s with grounding inductance L f = 0.45 H; the fault is cleared after 0.5 s, and the electrical-quantity simulation response is shown in figs. 18-21.
As can be seen from fig. 18 to 21, the control strategy 2 can effectively improve the shutdown angle margin during the fault period, improve the system recovery performance, and enhance the capability of the system to resist the subsequent commutation failure.
C. Verification of suppression effect of improved CEC method on subsequent commutation failure
To further verify the suppression effect of the control strategy of the present invention on subsequent commutation failures, more AC faults of different severity are set for simulation experiments. The fault level F L is defined as follows:
F L = U L² / (ω · L f) / P N × 100%
wherein P N is the rated DC power, U L is the inverter-side commutation bus voltage (the commutation voltage), ω is the grid angular frequency, and L f is the grounding inductance.
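Assuming F L follows the definition commonly used in commutation failure studies, U L²/(ω·L f·P N)×100% (the equation image in the source is not fully recoverable), a small helper can compute it; the units (kV, H, MW) and the 50 Hz system frequency are assumptions:

```python
import math

def fault_level(u_l_kv, l_f_h, p_n_mw, f_hz=50.0):
    """Fault level F_L (%): ratio of the power absorbed through the
    grounding inductance to the rated DC power."""
    z_f = 2 * math.pi * f_hz * l_f_h            # reactance of grounding inductor, ohm
    return u_l_kv ** 2 / z_f / p_n_mw * 100.0   # kV^2 / ohm = MW -> percent
```

For example, with a 230 kV commutation bus, a 1 H grounding inductance, and 1000 MW rated power, F L is roughly 17%, i.e. a moderate fault.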
Simulation experiments are performed for single-phase and three-phase AC faults with fault level F L between 5% and 50%, and the number of commutation failures of the DC system is shown in table 1.
TABLE 1 Table of number of commutation failures of DC system under different control strategies
As can be seen from table 1, for faults of different severity and type, control strategy 2 effectively suppresses the occurrence of subsequent commutation failures of the HVDC system.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
The principle and the embodiment of the present invention are explained by applying specific examples, and the above description of the embodiments is only used to help understanding the method and the core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.

Claims (10)

1. A current deviation control method based on deep reinforcement learning is applied to a high-voltage direct-current transmission control system, and the high-voltage direct-current transmission control system comprises rectification side control and inversion side control; the inverter side control comprises a constant current control mode and a constant turn-off angle control mode, and is characterized in that the current deviation control method based on deep reinforcement learning comprises the following steps:
acquiring the current running state of the high-voltage direct-current transmission control system, an initial constant current advanced trigger angle, an initial constant turn-off angle advanced trigger angle, a direct current instruction and an inversion side direct current in inversion side control; the initial constant current advanced trigger angle is an advanced trigger angle obtained in a constant current control mode; the initial fixed turn-off angle advanced trigger angle is an advanced trigger angle obtained in a fixed turn-off angle control mode;
aiming at the nth iteration, judging whether the fixed current advanced trigger angle is larger than the advanced trigger angle of the nth fixed turn-off angle, wherein n is larger than 0; the 1 st fixed turn-off angle advanced trigger angle is an initial fixed turn-off angle advanced trigger angle;
if the fixed current leading trigger angle is less than or equal to the nth fixed turn-off angle leading trigger angle, no current deviation control is carried out;
if the constant current leading trigger angle is larger than the n-th constant turn-off angle leading trigger angle, then:
determining a first parameter optimal value and a second parameter optimal value corresponding to the current operation state according to an offline parameter set; the offline parameter set comprises a first parameter optimal value and a second parameter optimal value corresponding to different operating states of the high-voltage direct-current transmission control system; the first parameter optimal value and the second parameter optimal value are obtained by optimizing by adopting a deep reinforcement learning network in different running states of the high-voltage direct-current transmission control system in advance;
determining a turn-off angle increment according to the direct current instruction, the direct current on the inversion side, the first parameter optimal value and the second parameter optimal value;
acquiring a current turn-off angle under a constant turn-off angle control mode;
and determining the (n + 1) th fixed turn-off angle leading trigger angle according to the turn-off angle increment and the current turn-off angle, and performing (n + 1) th iteration to accelerate the switching from the fixed current control mode to the fixed turn-off angle control mode.
2. The deep reinforcement learning-based current deviation control method according to claim 1, wherein the current operation state of the HVDC control system comprises an inverter-side DC voltage, an inverter-side AC bus voltage drop amplitude and DC injection reactive power.
3. The deep reinforcement learning-based current deviation control method according to claim 1, further comprising:
simulating the high-voltage direct-current power transmission control system, and determining a first parameter optimizing range, a second parameter optimizing range, an initial first parameter value and an initial second parameter value;
aiming at the mth iteration of the high-voltage direct-current transmission control system in any historical operating state, acquiring the minimum turn-off angle of the high-voltage direct-current transmission control system under the action of the mth first parameter value and the mth second parameter value, wherein m is greater than 0; the 1 st first parameter value is an initial first parameter value, and the 1 st second parameter value is an initial second parameter value;
determining a value of a reward function according to the minimum turn-off angle;
judging whether the value of the reward function is larger than a set threshold value or not;
if the reward function value is smaller than or equal to a set threshold value, adjusting the size of the mth first parameter value in the first parameter optimizing range, adjusting the size of the mth second parameter value in the second parameter optimizing range to obtain the (m + 1) th first parameter value and the (m + 1) th second parameter value, and performing iteration for the (m + 1) th time;
and if the reward function value is larger than a set threshold value, taking the mth first parameter value as the first parameter optimal value of the historical operating state, and taking the mth second parameter value as the second parameter optimal value of the historical operating state.
4. The method according to claim 1, wherein the determining a turn-off angle increment according to the dc current command, the inverter-side dc current, the first parameter optimal value, and the second parameter optimal value specifically comprises:
determining a current deviation control curve according to the first parameter optimal value and the second parameter optimal value;
determining a current deviation value according to the direct current instruction and the direct current of the inversion side;
and determining a turn-off angle increment according to the current deviation control curve and the current deviation value.
5. The deep reinforcement learning-based current deviation control method according to claim 4, wherein the current deviation value is determined by using the following formula:
ΔI d = I dref − I d
wherein ΔI d is the current deviation value, I dref is the DC current command, and I d is the inverter-side direct current.
6. The current deviation control method based on deep reinforcement learning according to claim 1, wherein the determining an n +1 th constant-turn-off angle leading trigger angle according to the turn-off angle increment and the current turn-off angle specifically includes:
determining a turn-off angle error according to the turn-off angle increment and the current turn-off angle;
and carrying out amplitude limiting on the turn-off angle error, and carrying out proportional integral control to obtain the (n + 1) th fixed turn-off angle advanced trigger angle.
7. The deep reinforcement learning-based current deviation control method according to claim 6, wherein the turn-off angle error is determined by using the following formula:
Δγ = Δγ CEC + γ ref − γ;
wherein Δγ is the turn-off angle error, Δγ CEC is the turn-off angle increment, γ ref is the turn-off angle reference value, and γ is the current turn-off angle.
8. A current deviation control system based on deep reinforcement learning, characterized in that the current deviation control system based on deep reinforcement learning comprises:
the data acquisition unit is used for acquiring the current operation state of the high-voltage direct-current transmission control system, an initial constant current advanced trigger angle, an initial constant turn-off angle advanced trigger angle, a direct current instruction and an inversion side direct current in inversion side control; the initial constant current advanced trigger angle is an advanced trigger angle obtained in a constant current control mode; the initial fixed turn-off angle advanced trigger angle is an advanced trigger angle obtained in a fixed turn-off angle control mode;
the first judgment unit is connected with the data acquisition unit and used for judging whether the constant current advanced trigger angle is larger than the advanced trigger angle of the nth constant turn-off angle or not aiming at the nth iteration, wherein n is larger than 0; the 1 st fixed turn-off angle advanced trigger angle is an initial fixed turn-off angle advanced trigger angle;
the holding unit is connected with the first judging unit and is used for not carrying out current deviation control when the constant current leading trigger angle is less than or equal to the nth constant turn-off angle leading trigger angle;
the parameter determining unit is connected with the first judging unit and used for determining a first parameter optimal value and a second parameter optimal value corresponding to the current operation state according to an offline parameter set when the constant current leading trigger angle is larger than the n-th constant turn-off angle leading trigger angle; the offline parameter set comprises a first parameter optimal value and a second parameter optimal value corresponding to different operating states of the high-voltage direct-current transmission control system; the first parameter optimal value and the second parameter optimal value are obtained by optimizing by adopting a deep reinforcement learning network in different running states of the high-voltage direct-current power transmission control system in advance;
a turn-off angle increment determining unit connected to the parameter determining unit and the data obtaining unit, and configured to determine a turn-off angle increment according to the dc current instruction, the inverter-side dc current, the first parameter optimal value, and the second parameter optimal value;
the turn-off angle acquisition unit is used for acquiring a current turn-off angle in a constant turn-off angle control mode;
and the iteration unit is connected with the turn-off angle increment determining unit, the turn-off angle acquiring unit and the first judging unit and is used for determining the (n + 1) th fixed turn-off angle advanced trigger angle according to the turn-off angle increment and the current turn-off angle, and performing the (n + 1) th iteration to accelerate the switching from the fixed current control mode to the fixed turn-off angle control mode.
9. The deep reinforcement learning-based current deviation control system according to claim 8, further comprising:
the simulation unit is used for simulating the high-voltage direct-current power transmission control system and determining a first parameter optimization range, a second parameter optimization range, an initial first parameter value and an initial second parameter value;
the minimum turn-off angle acquisition unit is connected with the simulation unit and used for acquiring a minimum turn-off angle of the high-voltage direct-current transmission control system under the action of an mth first parameter value and an mth second parameter value in an mth iteration of any historical operating state of the high-voltage direct-current transmission control system, wherein m is greater than 0; the 1 st first parameter value is an initial first parameter value, and the 1 st second parameter value is an initial second parameter value;
the reward determining unit is connected with the minimum turn-off angle acquiring unit and used for determining a reward function value according to the minimum turn-off angle;
the second judgment unit is connected with the reward determination unit and used for judging whether the reward function value is larger than a set threshold value or not;
the parameter adjusting unit is connected with the second judging unit, the simulation unit and the minimum turn-off angle obtaining unit, and is used for adjusting the size of an mth first parameter value in the first parameter optimizing range and the size of an mth second parameter value in the second parameter optimizing range to obtain an m +1 th first parameter value and an m +1 th second parameter value when the reward function value is smaller than or equal to a set threshold value, and performing iteration for the m +1 th time;
and the optimal value determining unit is connected with the second judging unit and the parameter determining unit and is used for taking the mth first parameter value as the first parameter optimal value of the historical operating state and taking the mth second parameter value as the second parameter optimal value of the historical operating state when the reward function value is greater than a set threshold value.
10. The deep reinforcement learning-based current deviation control system according to claim 8, wherein the off-angle increment determination unit includes:
the curve determining module is connected with the parameter determining module and used for determining a current deviation control curve according to the first parameter optimal value and the second parameter optimal value;
the current deviation determining module is connected with the data acquiring unit and used for determining a current deviation value according to the direct current instruction and the direct current of the inversion side;
and the increment determining module is connected with the curve determining module and the current deviation determining module and is used for determining the turn-off angle increment according to the current deviation control curve and the current deviation value.
CN202210983839.6A 2022-08-17 2022-08-17 Current deviation control method and system based on deep reinforcement learning Pending CN115207958A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210983839.6A CN115207958A (en) 2022-08-17 2022-08-17 Current deviation control method and system based on deep reinforcement learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210983839.6A CN115207958A (en) 2022-08-17 2022-08-17 Current deviation control method and system based on deep reinforcement learning

Publications (1)

Publication Number Publication Date
CN115207958A true CN115207958A (en) 2022-10-18

Family

ID=83585512

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210983839.6A Pending CN115207958A (en) 2022-08-17 2022-08-17 Current deviation control method and system based on deep reinforcement learning

Country Status (1)

Country Link
CN (1) CN115207958A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115809597A (en) * 2022-11-30 2023-03-17 Northeast Electric Power University Frequency stabilization system and method for reinforcement learning emergency DC power support
CN115809597B (en) * 2022-11-30 2024-04-30 Northeast Electric Power University Frequency stabilization system and method for reinforcement learning of emergency direct current power support

Similar Documents

Publication Publication Date Title
CN110233490B (en) Direct-current transmission fault recovery control method and system for avoiding continuous commutation failure
CN108808718B (en) Method for determining direct current operation range of high-voltage direct current transmission system in alternating current fault
CN109873443B (en) Method for predicting direct-current continuous commutation failure under power grid fault based on critical voltage
CN107306030B (en) Control method for inhibiting continuous commutation failure of direct-current power transmission
CN108400611B (en) HVDC continuous commutation failure suppression method based on nonlinear VDCOL
CN110474358B (en) Control method for inhibiting continuous commutation failure in extra-high voltage direct current hierarchical access mode
CN104242331A (en) Extra-high voltage direct current control system suitable for electromechanical transient simulation
WO2021047347A1 (en) Adaptive and active disturbance rejection proportional integration-based direct current transmission system control method and system
CN115207958A (en) Current deviation control method and system based on deep reinforcement learning
CN113098045A (en) Optimization control method suitable for UHVDC commutation failure fault recovery
CN110620396B (en) Self-adaptive low-voltage current limiting control method for LCC direct current transmission system
CN110620497A (en) Control method and circuit for restraining starting impact current of three-phase PWM rectifier
CN112332437B (en) Direct current transmission prediction type fault current limiting control method and system based on rectifying side
CN115276072A (en) Method, device, terminal and medium for inhibiting subsequent commutation failure of direct current system
CN109802380B (en) Low-voltage current limiting control method, system and device for high-voltage direct-current transmission
CN113472000B (en) Commutation failure control method for multi-feed-in direct current transmission system
CN113131506B (en) Fixed turn-off angle control method and stabilizer for inhibiting subsequent commutation failure of LCC-HVDC system
CN113162105B (en) Commutation failure control and simulation method and device based on trigger angle self-adaptive adjustment
CN110323776B (en) SC-based L CC-HVDC receiving end direct current system feedforward control method, system and medium
CN114447927A (en) VDCOL control improvement method for suppressing sending end overvoltage during commutation failure
CN113572187B (en) Virtual capacitor-based high-voltage direct-current continuous commutation failure inhibition method
CN103904679A (en) Method for controlling high-voltage direct-current transmission dispersed continuous current instruction type low-voltage current-limiting unit
CN113572188B (en) Self-adaptive compensation resistance control method for inhibiting subsequent commutation failure
Yan et al. Research on CIGRE benchmark model and improved DC control strategy
CN113595127B (en) Current deviation control optimization method for inhibiting direct current subsequent commutation failure

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination