CN116001624A - Ordered charging method for one-pile multi-connected electric automobile based on deep reinforcement learning - Google Patents
- Publication number: CN116001624A
- Application number: CN202211542881.0A
- Authority
- CN
- China
- Prior art keywords
- charging
- load
- electric automobile
- power
- time
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/60—Other road transportation technologies with climate change mitigation effect
- Y02T10/70—Energy storage systems for electromobility, e.g. batteries
Abstract
A sequential charging method for one-pile multi-connected electric vehicles based on deep reinforcement learning. By analyzing the uncertainty of electric vehicle travel patterns and charging demands, the real-time information interaction between the power distribution network system and the user charging service system, and user behavior characteristics based on historical data such as electric vehicle SOC (state of charge), renewable energy power generation, the conventional load of the power distribution network system and electricity prices, a sequential charging coordination control strategy for one-pile multi-connected electric vehicles is established, sequential charging of electric vehicles is achieved, and the hourly utilization rate of charging piles is improved. The invention can improve the level of renewable energy consumption, reduce the charging cost of electric vehicle owners, improve the running efficiency of the electrified traffic network, stabilize load fluctuation, avoid overload of the grid-side transformer, and provide important guidance for charging facility planning, economic operation of the distribution network and friendly vehicle-grid interaction.
Description
Technical Field
The invention belongs to the field of ordered charging of new-energy electric vehicles, and particularly relates to an ordered charging method for a one-pile multi-connected electric vehicle based on deep reinforcement learning.
Background
With the gradual advancement of the energy transformation toward carbon peaking and carbon neutrality, people are increasingly concerned about environmental health and sustainable development. Electric Vehicles (EVs) have the natural advantage of reducing greenhouse gas emissions and fossil energy dependence, which helps the country achieve the "dual carbon" goal. Owing to their green, low-carbon and environmentally friendly advantages, electric vehicles will gradually replace traditional internal-combustion-engine vehicles and become an important component of the smart grid and the green city.
As one of the strategic emerging industries, the new energy automobile provides a new line of thought for achieving China's strategic goal of carbon neutrality before 2060, follows the trend of China's energy strategy and green transportation, has become an important breakthrough for energy conservation, emission reduction and industrial structure transformation in China, and points the direction for the next stage of China's energy transformation and green development.
In recent years, scholars at home and abroad have conducted a great deal of research on the ordered charging of large-scale electric vehicles. Widely used methods in the prior art include distributed algorithms such as the alternating direction method of multipliers and Lagrangian relaxation, integer programming, heuristic algorithms and model-predictive-control-based methods. These approaches attempt to address the ordered charging of electric vehicles in various scenarios. However, when modeling electric vehicle uncertainty, these methods require accurate models or predictions of vehicle randomness. In practice, the uncertainty of electric vehicles is closely related to user behavior characteristics, which is a complex problem; as a result, an accurate model of vehicle randomness is difficult to build and accurate predictions are difficult to obtain, which limits the practicability of such studies.
Moreover, the high power and spatio-temporal uncertainty of electric vehicle charging and discharging can change the existing load level of the power grid, further increase the peak-valley difference, and impact the safety and stability of the grid.
Disclosure of Invention
The invention provides a one-pile multi-connected electric vehicle ordered charging method based on deep reinforcement learning, which is suitable for the problem of ordered charging of large-scale electric vehicles. The invention can improve the level of renewable energy consumption, reduce the charging cost of the electric automobile owner, improve the running efficiency of the traffic electrification network, stabilize load fluctuation, avoid overload of a transformer at the power grid side, and provide important guidance for charging facility planning, economic operation of distribution network and friendly interaction of the automobile network.
The technical scheme of the invention is as follows:
a one-pile multi-connected electric vehicle orderly charging method based on deep reinforcement learning comprises the following steps:
Step 3: the user initiates a charging request to the master station by wireless communication, selecting a charging type, a target charging electric quantity and an expected vehicle pick-up time; the master station and the intelligent equipment receive the charging demand and formulate a corresponding charging plan by combining the base load curve, the station-area load limit curve and the station-area working condition, and the energy router executes the charging plan to start charging; the master station issues the charging plan to the energy controller and responds with a charging-pile power real-time control strategy so as to dynamically adjust the charging-pile power when the station-area load is out of limit; in the scenario where communication between the master station and the energy controller is interrupted, the user sends the charging request to the energy router and the energy controller arranges the charging plan locally; the charging types are normal charging and ordered charging;
Step 6: solving the formulated Markov decision process problem based on a deep reinforcement learning algorithm.
Further, in step 3, the overall control strategy flow is as follows:
1) The master station acquires in advance the 24-h base predicted residential load of the station area and the out-of-limit threshold of the station-area load; dynamically acquires the charging information provided by users, including the charging service type, the charging demand electric quantity and the time reserved by the user; and monitors the real-time operation load of the station area with a period of 3 min;
2) Judging whether the current load of the station area is out of limit;
3) If the out-of-limit condition occurs, executing a charging power real-time control strategy with a regulation period of 3 min:
3.1) calculating the charging demand priority of the users being charged at the out-of-limit moment;
3.2) sequentially adjusting the charging power of the electric automobiles so as to stabilize the station-area load within the safe region.
4) If the station-area load is not out of limit, carrying out ordered charging optimization strategy calculation.
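The 3-minute real-time control loop in steps 3.1) and 3.2) can be sketched as follows. This is an illustrative sketch only: the patent does not give the priority formula, so `demand_priority` (low SOC plus little time before pick-up means high urgency) and the 1-kW reduction step are assumptions.

```python
def demand_priority(soc, remaining_hours):
    """Illustrative urgency score: higher = more urgent (low SOC, little
    time left before the expected vehicle pick-up)."""
    return (1.0 - soc) / max(remaining_hours, 0.1)

def regulate_power(vehicles, base_load, load_limit, min_power=0.0, step=1.0):
    """vehicles: dicts with 'soc', 'remaining_hours', 'power' (kW).
    When the station-area load is out of limit, reduce the charging power
    of the least-urgent vehicles first until the load is back in the safe
    region; returns the resulting total station-area load."""
    order = sorted(vehicles,
                   key=lambda v: demand_priority(v["soc"], v["remaining_hours"]))
    total = lambda: base_load + sum(v["power"] for v in vehicles)
    for v in order:                      # least urgent adjusted first
        while total() > load_limit and v["power"] > min_power:
            v["power"] = max(min_power, v["power"] - step)
        if total() <= load_limit:
            break
    return total()
```

With a 90-kW base load, a 100-kW limit and two 7-kW sessions, only the less urgent vehicle is throttled until the total drops to the limit.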
Further, the ordered charging control targets of the electric automobile cover grid-side load fluctuation and the economic benefit of the charging station; a multi-objective function of the user layer and the grid layer is established and solved by optimization:
1) User layer
The minimum charging cost of the user is taken as the optimization target on the electric vehicle user side.
Assuming that the charging electricity price of the electric automobile adopts peak-valley pricing, C_f and C_g respectively denote the electricity prices of the peak and valley periods, t_f(i) denotes the charging time of the i-th vehicle in the peak period, t_g(i) the charging time of the i-th vehicle in the valley period, f(i) the charging cost of the i-th vehicle, and P_i the charging power of the i-th vehicle; with N vehicles in total to be controlled, the user-layer objective is
f(i) = [C_f·t_f(i) + C_g·t_g(i)]·P_i
f_1 = min Σ_{i=1}^{N} f(i)
2) Power grid layer
Taking the minimum peak-valley difference of the power grid as the grid-side charging optimization target, with the peak-valley difference of the grid as the objective function: assume that the peak load of the grid load curve with electric vehicles connected for charging is P_f and the valley load is P_g;
f_2 = min(P_f - P_g)
The multi-objective function corresponding to the user-side objective with the least charge and the grid-side objective with the least peak-to-valley difference is as follows:
The multiple targets are normalized, where P_max is the maximum value and P_min the minimum value of the conventional load of the power distribution network;
The normalized ordered charging control target after weighting is:
min f = λ_1·f_1 + λ_2·f_2
where λ_1 and λ_2 are the weight coefficients of the targets, satisfying λ_1 + λ_2 = 1, λ_1 ≥ 0 and λ_2 ≥ 0;
Grid-side load fluctuation and charging station economic benefit are constrained: the constraint conditions are that the charging station transformer does not run overloaded and that the charging demands of charging station users are met to the maximum extent.
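The weighted objective min f = λ_1 f_1 + λ_2 f_2 can be evaluated as below. The patent's normalization formulas are images that did not survive extraction, so min-max scaling using the load extremes P_max and P_min (and assumed cost bounds for f_1) is an assumption here, not the patent's exact formula.

```python
def normalize(value, vmin, vmax):
    """Min-max scaling so objectives of different dimensions can be added."""
    return (value - vmin) / (vmax - vmin)

def combined_objective(charge_cost, cost_min, cost_max,
                       peak_load, valley_load, p_min, p_max,
                       lam1=0.5, lam2=0.5):
    """Weighted multi-objective min f = lam1*f1 + lam2*f2 with
    lam1 + lam2 = 1 and both weights non-negative."""
    assert abs(lam1 + lam2 - 1.0) < 1e-9 and lam1 >= 0 and lam2 >= 0
    f1 = normalize(charge_cost, cost_min, cost_max)              # user layer
    f2 = normalize(peak_load - valley_load, 0.0, p_max - p_min)  # grid layer
    return lam1 * f1 + lam2 * f2
```

A scheduler would evaluate this for each candidate charging plan and keep the plan with the smallest value.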
Further, in step 5, the Markov decision process includes:
State space: the state vector s_t^n denotes the state of electric automobile n at time t and contains the following four pieces of information:
- SOC_t^n denotes the battery state of charge of electric automobile n at time t, i.e. the ratio of the current remaining battery capacity to the full battery capacity; if the electric automobile is charged, SOC_t^n increases by 20%; if it goes out on a trip, SOC_t^n decreases by 20%;
- L_t^n denotes the percentage of the transformer load perceived by electric automobile n at time t relative to the maximum expected load of the transformer;
- loc_t^n denotes the position of electric automobile n at time t, 0 representing at home and 1 representing an outside trip;
- p_{t-23}, p_{t-22}, …, p_t denote the electricity prices obtained by electric automobile n at times t-23, t-22, …, t, i.e. the electricity prices of the past 24 hours;
Action space: a_t^n denotes the behavior performed by electric automobile n in state s_t^n, 1 representing charging and 0 representing not charging;
Reward space: r_t^n denotes the reward obtained by electric automobile n at time t; it consists of three parts:
- a satisfaction term, representing the reward for the satisfaction of the user of electric automobile n with the battery state of charge at time t;
- an energy-cost-reduction term, rewarding the degree of energy cost reduction, where e_t^n is the electric quantity consumed by electric automobile n at time t;
- a transformer term, designed to avoid transformer overload: no reward is generated when the transformer perceived by electric automobile n is in an overload state, and when the transformer load is within the desired range the reward obtained by the user of electric automobile n is 10;
In the electric automobile, the parameterized policy function is: π_θ(S, A) = P[A | S, θ]
where P is a probability distribution function mapping state S onto behavior A through the parameter θ;
Finally, the expected return of executing action A_k in state S_k is used to evaluate the quality of the charging schedule:
where Q^{π_θ}(S_t, A_t) denotes the true behavior-value function.
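The MDP elements above can be sketched numerically. The ±20% SOC change and the reward of 10 for an in-range transformer load follow the text; the exact satisfaction and cost terms are images in the original, so the formulas used for them below (`soc` as satisfaction, negative price-times-energy as cost) are illustrative assumptions.

```python
def transition_soc(soc, action, location):
    """SOC transition from the text: charging (action=1) raises SOC by 20%;
    an outside trip (location=1) lowers it by 20%; otherwise unchanged."""
    if action == 1:
        return min(1.0, soc + 0.20)
    if location == 1:            # 1 = outside trip, 0 = at home
        return max(0.0, soc - 0.20)
    return soc

def reward(soc, price, energy_used, transformer_ratio, overload_ratio=1.0):
    """Three-part reward: user satisfaction with SOC (illustrative),
    energy-cost reduction, and 10 when the transformer load is in range
    (0 when overloaded), as stated in the text."""
    r_satisfaction = soc
    r_cost = -price * energy_used
    r_transformer = 10.0 if transformer_ratio <= overload_ratio else 0.0
    return r_satisfaction + r_cost + r_transformer
```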
Further, in step 6, the deep reinforcement learning algorithm is the DDPG algorithm; a dual-network structure of a policy network (Actor) and an evaluation network (Critic) is created with the DDPG algorithm, and the continuous state and action control problem is solved in combination with the experience replay of deep Q-learning.
Further, the Critic network loss function L is defined as:
L = (1/M)·Σ_{i=1}^{M} [y_i - Q(s_i, a_i | θ_Q)]²
where θ_Q denotes the parameters of the Critic current network, M is the number of learning samples selected from the experience replay buffer, y_i is the Q value of the Critic target network and Q is the Critic current network; y_i is calculated as:
y_i = r_i + γ·Q'[s_{i+1}, μ'(s_{i+1} | θ_{μ'}) | θ_{Q'}]
where r_i is the immediate reward value, γ is the discount factor, Q' and θ_{Q'} are the Critic target network and its parameters, and μ' and θ_{μ'} are the Actor target network and its parameters;
The current state is mapped to a designated action through the action-value function, and the Actor network parameters are updated through gradient back-propagation, with the loss gradient:
∇_{θ_μ} J ≈ (1/M)·Σ_{i=1}^{M} ∇_a Q(s, a | θ_Q)|_{s=s_i, a=μ(s_i)} · ∇_{θ_μ} μ(s | θ_μ)|_{s=s_i}
where ∇ denotes the gradient, θ_μ denotes the parameters of the Actor current network and μ is the Actor current network;
finally, updating Critic and Actor target network parameters theta by an exponential smoothing method Q' and θμ' The method comprises the following steps:
θ Q' =τθ Q +(1-τ)θ Q'
θ μ' =τθ μ +(1-τ)θ μ'
where τ is the soft update factor and the parameter τ is < 1.
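The three DDPG update equations above reduce to a few lines of arithmetic once the network evaluations are given; the following sketch stands in for the networks with plain numbers to show exactly how the Critic target y_i, the mean-squared Critic loss and the exponential soft update combine.

```python
def critic_target(r, gamma, q_next):
    """y_i = r_i + gamma * Q'(s_{i+1}, mu'(s_{i+1})); q_next is the Critic
    target network's value at the Actor target network's next action."""
    return r + gamma * q_next

def critic_loss(q_values, targets):
    """Mean-squared error over a replay minibatch of size M."""
    m = len(q_values)
    return sum((y - q) ** 2 for q, y in zip(q_values, targets)) / m

def soft_update(theta, theta_target, tau):
    """Element-wise exponential smoothing: theta' <- tau*theta + (1-tau)*theta'."""
    return [tau * p + (1 - tau) * pt for p, pt in zip(theta, theta_target)]
```

With τ small (e.g. 0.01), the target networks trail the current networks slowly, which is what stabilizes the bootstrapped target y_i.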
Further, the charging plan issued by the master station to the energy controller in step 3 rolls once every 15 min and includes the charging power, the charging start time and the charging end time; the master station responds with the charging-pile power real-time control strategy every 3 min so as to dynamically adjust the charging-pile power when the station-area load is out of limit.
Further, the specific calculation steps of step 4) are as follows:
s1, calculating and sequencing the charging demand priority of a current submitted charging application user;
S2, determining the maximum available charging power of each electric automobile by considering the limitation of the station-area load capacity margin and combining the priority of the users' charging demands;
s3, calculating all possible charging time according to a first-level strategy for controlling the charging cost of a user;
and S4, screening and determining optimal charging time according to a second-stage strategy for stabilizing the power grid load fluctuation from all possible charging time intervals of the first-stage strategy.
S5, judging whether the strategy calculation can meet the charging demand of the user, where the charging demand refers to the charging electric quantity demand and the vehicle pick-up time demand; if the charging demand cannot be met, charging strategy correction is carried out, namely the available charging time window is expanded; if the demand still cannot be met after correction, the user is informed through APP feedback that the charging demand cannot be met.
And S6, the master station issues a charging plan with each 15min as a period, wherein the charging plan comprises the start-stop charging time and the charging power of each electric automobile.
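The two-stage selection in S3 and S4 can be sketched as follows: the first-level strategy keeps the candidate start times whose electricity cost is minimal, and the second-level strategy picks among them the start time that least disturbs the grid load (measured here by the variance of the resulting load curve, an assumption since the patent does not name the fluctuation metric).

```python
def charging_cost(prices, start, duration, power):
    """Cost of charging at constant power over [start, start+duration)."""
    return sum(prices[t] * power for t in range(start, start + duration))

def load_variance(load):
    mean = sum(load) / len(load)
    return sum((x - mean) ** 2 for x in load) / len(load)

def best_window(prices, base_load, duration, power):
    """Stage 1: keep all start times with minimal cost (user layer).
    Stage 2: among them, pick the one minimizing load variance (grid layer)."""
    starts = range(len(prices) - duration + 1)
    costs = {s: charging_cost(prices, s, duration, power) for s in starts}
    cheapest = min(costs.values())
    stage1 = [s for s in starts if costs[s] == cheapest]   # first-level strategy

    def variance_if(s):
        load = list(base_load)
        for t in range(s, s + duration):
            load[t] += power
        return load_variance(load)

    return min(stage1, key=variance_if)                    # second-level strategy
```

Among equally cheap slots, the session is steered toward the load valley, which is exactly the peak-shaving intent of S4.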
Further, the constraints are:
(1) Charging power constraint
Charging power limit control is a grid-side charging control mode: the distribution network automation system collects real-time load information and distribution equipment information, the electric automobile integrated management system interacts with the distribution network automation system to obtain the load distribution of the distribution line on the branch where the charging station is located, and the power distribution of the charging load is determined according to the charging load and the distribution-line power limit. Considering that several charging devices charge simultaneously, the charging station must satisfy the upper charging power limit:
Σ_{i=1}^{N} P_{ij} ≤ min(P_{jamax}, P_{jbmax} - P_{jmin})
where P_{jmin} is the minimum load power of the charging station at time j, which depends on loads such as office work and illumination of the charging station (in the general charging station structure, the conventional load and the electric vehicle charging load are both connected under one distribution transformer); P_{jamax} is the maximum charging power that the charging devices of the charging station can provide at time j; P_{jbmax} is the transmission power that the line can provide at time j; P_{ij} denotes the charging power of the i-th vehicle at time j; and N denotes the total number of vehicles charging at time j;
(2) User charging demand constraint
The charging demands of all electric vehicle users accessing the charging station within one day must be met, i.e. the battery state of charge of each electric vehicle user when leaving the charging station equals the target state of charge set by the user:
SOC_endi·B_i = SOC_i·B_i + P_i·(T_endi - T_si), with T_endi ≤ T_i
where SOC_i is the state of charge of the i-th vehicle when charging starts; SOC_endi is the target state of charge of the i-th vehicle; B_i is the battery capacity of the i-th vehicle; P_i is the charging power of the i-th vehicle; T_endi is the end charging time of the i-th vehicle; T_i is the departure time of the i-th vehicle; and T_si is the start charging time of the i-th vehicle.
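A feasibility check for the two constraint groups can be sketched as below: the summed charging power at time j must fit both the chargers' capability and the line capacity left over by the station's own base load, and each vehicle must be able to reach its target SOC before departure. The reconstructed inequality above is assumed, as the original formula is an image.

```python
def power_ok(p_ij, p_jamax, p_jbmax, p_jmin):
    """Charging power constraint: p_ij are the charging powers of the N
    vehicles at time j (kW); total must not exceed charger capability nor
    the line capacity net of the station base load."""
    total = sum(p_ij)
    return total <= p_jamax and total + p_jmin <= p_jbmax

def demand_met(soc_start, soc_end, battery_kwh, power_kw, t_start, t_depart):
    """User charging demand constraint: charging at power_kw from t_start
    must reach soc_end no later than the departure time t_depart (hours)."""
    needed_kwh = (soc_end - soc_start) * battery_kwh
    t_end = t_start + needed_kwh / power_kw
    return t_end <= t_depart
```

A scheduler would reject (or correct, per step S5) any plan for which either check fails.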
The invention provides a deep-reinforcement-learning-based ordered charging strategy for one-pile multi-connected electric vehicles, which provides real-time individually differentiated charging service information for electric vehicle users and realizes information pushing of the real-time charging price, available time periods, queuing mode and the like. Specifically:
1) The method improves the level of renewable energy consumption, reduces the charging cost of electric vehicle owners, improves the running efficiency of the electrified traffic network, stabilizes load fluctuation, and avoids overload of the grid-side transformer.
2) By establishing a Markov decision process (MDP) model suitable for the ordered charging of large-scale electric vehicles, the training cost and difficulty of the model are independent of the scale of the electric vehicle fleet; training converges quickly with a small amount of calculation, providing important guidance for charging facility planning, economic operation of the distribution network and friendly vehicle-grid interaction.
3) The uncertainty of large-scale electric vehicles can be effectively handled, and the charging cost of users is reduced. In addition, the deep reinforcement learning algorithm can effectively capture the association of geographic positions from the position information and adapts well to unknown environments.
Drawings
Fig. 1 is a schematic diagram of a cooperative control scenario of an orderly charging application master station and a local user;
FIG. 2 is a general flow chart of an orderly charging strategy for a multi-gang electric vehicle;
FIG. 3 is a diagram of a dynamic energy scheduling framework based on a deep reinforcement learning DDPG algorithm;
fig. 4 is a comparison of load curves before and after optimization.
Detailed Description
A one-pile multi-connected electric vehicle ordered charging strategy based on deep reinforcement learning comprises the following steps:
step 1: based on travel data, charging data and other information of vehicles in a historical area, vehicle electric quantity information monitored in real time, regional power grid load fluctuation condition and electricity price information, a macroscopic vehicle charging demand prediction model in the area is combined, real-time charging demands are mined, and real-time individuation difference charging service information is provided for electric automobile users.
Step 2: based on the cooperative control scenario of the ordered charging application master station and the local user side, high-speed communication is realized through the 5G network and the optical fiber network; real-time information interaction with the conventional load of the regional power distribution network system, renewable energy power generation, electricity price information and the user charging service system is realized; real-time monitoring of the charging load in the charging station is realized; and an ordered charging control strategy of the one-pile multi-connected electric automobile is established.
Step 3: as shown in fig. 1, the user initiates a charging request to the master station by 5G wireless communication, selecting a charging type (normal charging or ordered charging), a target charging electric quantity and an expected vehicle pick-up time. The master station and the intelligent equipment receive the charging demand and formulate a corresponding charging plan by combining the base load curve, the station-area load limit curve (out-of-limit load) and the station-area working condition, and the energy router executes the charging plan to start charging. The charging plan issued by the master station to the energy controller rolls every 15 min and includes the charging power, the charging start time and the charging end time. The master station responds with the charging-pile power real-time control strategy every 3 min so as to dynamically adjust the charging-pile power when the station-area load is out of limit. If communication between the master station and the energy controller is interrupted, the user sends the charging request to the energy router and the energy controller arranges the charging plan locally. As shown in fig. 2, the overall flow is as follows:
1) The master station acquires in advance the 24-h base predicted residential load of the station area and the out-of-limit threshold of the station-area load; dynamically acquires the charging information provided by users, including the charging service type, the charging demand electric quantity and the time reserved by the user; and monitors the real-time operation load of the station area with a period of 3 min.
2) And judging whether the current load of the station area is out of limit.
3) If the out-of-limit condition occurs, executing a charging power real-time control strategy with a regulation period of 3 min:
(1) Calculating the charging demand priority of the user which is being charged at the out-of-limit moment;
(2) Stabilizing the station-area load within the safe region by sequentially adjusting the charging power of the electric automobiles.
4) If the station-area load is not out of limit, carrying out ordered charging optimization strategy calculation with a period of 15 min, the specific calculation steps being as follows:
s1, calculating and sequencing the charging demand priority of a current submitted charging application user;
S2, determining the maximum available charging power of each electric automobile by considering the limitation of the station-area load capacity margin and combining the priority of the users' charging demands;
s3, calculating all possible charging time according to a first-level strategy for controlling the charging cost of a user;
and S4, screening and determining optimal charging time according to a second-stage strategy for stabilizing the power grid load fluctuation from all possible charging time intervals of the first-stage strategy.
S5, judging whether the strategy calculation can meet the charging demand of the user (the charging electric quantity demand and the vehicle pick-up time demand); if not, charging strategy correction is carried out (expanding the available charging time window); if the demand still cannot be met after correction, the user is informed through APP feedback that the charging demand cannot be met.
And S6, the master station issues a charging plan with each 15min as a period, wherein the charging plan comprises the start-stop charging time and the charging power of each electric automobile.
Step 4: the ordered charging control target of the electric automobile mainly comprises load fluctuation at the power grid side and economic benefit of a charging station. The method mainly considers establishing multi-objective functions of a user layer and a power grid layer to carry out optimization solution.
(1) User layer
Optimization is carried out on the premise of meeting the charging demand of the electric automobile user, which can be taken as a constraint condition. The optimization target on the user side of the electric vehicle is to minimize the user's charging cost.
Assuming that the charging electricity price of the electric automobile adopts peak-valley pricing, C_f and C_g respectively denote the electricity prices of the peak and valley periods, t_f(i) denotes the charging time of the i-th vehicle in the peak period, t_g(i) the charging time of the i-th vehicle in the valley period, f(i) the charging cost of the i-th vehicle, and P_i the charging power of the i-th vehicle; with N vehicles in total to be controlled, the user-layer objective is
f(i) = [C_f·t_f(i) + C_g·t_g(i)]·P_i
f_1 = min Σ_{i=1}^{N} f(i)
(2) Power grid layer
The invention takes the minimum peak-valley difference of the power grid as the grid-side charging optimization target, with the peak-valley difference of the grid as the objective function. Assume that the peak load of the grid load curve with electric vehicles connected for charging is P_f and the valley load is P_g.
f_2 = min(P_f - P_g)
In summary, the multi-objective function of the user-side objective (least charge cost) and the grid-side objective (least peak-to-valley difference) is as follows:
since the dimensions of the functions in the multiple targets are different and cannot be directly added, the specific method for normalizing the multiple targets is as follows:
P max is the maximum value of the conventional load of the power distribution network, P min Is the minimum value of the regular load of the distribution network.
The normalized ordered charge control targets after the weighting process are as follows:
minf=λ 1 f 1 +λ 2 f 2
in the formula λ1 、λ 2 Seventh, the weight coefficient of each target meets the following conditions: lambda (lambda) 1 +λ 2 =1, and λ 1 ≥0,λ 2 ≥0。
Grid-side load fluctuation and charging station economic benefit are constrained: the constraint conditions are that the charging station transformer does not run overloaded and that the charging demands of charging station users are met to the maximum extent.
(1) Charging power constraint
Charging power limit control is a grid-side charging control mode. The distribution network automation system collects real-time load information and distribution equipment information; the integrated electric vehicle management system interacts with the distribution network automation system to obtain the load distribution of the distribution line on the branch where the charging station is located, and the power allocated to the charging load is determined from the charging load and the distribution line power limit. The charging station mainly considers that several charging devices charge simultaneously and must respect the upper charging power limit, where
Pjmin is the minimum load power of the charging station at time j; it depends on loads such as office work and lighting of the charging station, the typical charging station structure connecting the conventional load and the electric vehicle charging load under one distribution transformer;
Pjamax is the maximum charging power the charging devices of the station can provide at time j;
Pjbmax is the transmission power the line can provide at time j;
Pij is the charging power of the i-th vehicle at time j;
N is the total number of vehicles charging at time j.
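A minimal feasibility check of this constraint can be sketched as follows. The original inequality is not reproduced in the text, so the form used here is an assumption: the station base load Pjmin plus the total charging power must not exceed the line limit Pjbmax, and the total charging power must not exceed the equipment limit Pjamax.

```python
# Sketch of the charging-power constraint at one time step j. Variable names
# mirror the text; the exact inequality form is an assumption, since the
# patent's formula image is not reproduced.

def power_within_limits(p_ij, p_jmin, p_jamax, p_jbmax):
    """p_ij: per-vehicle charging powers at time j (kW); limits in kW."""
    total = sum(p_ij)                       # total EV charging power at time j
    return total <= p_jamax and p_jmin + total <= p_jbmax
```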
(2) User charging demand constraints
The charging demand of every electric vehicle user accessing the charging station during the day must be met, i.e. the battery state of charge of each user when leaving the station equals the target state of charge set by the user:

SOCendi = SOCi + Pi (Tendi - Tsi) / Bi

where
SOCi is the state of charge of the i-th vehicle when charging starts;
SOCendi is the target state of charge of the i-th vehicle;
Bi is the battery capacity of the i-th vehicle;
Pi is the charging power of the i-th vehicle;
Tendi is the end charging time of the i-th vehicle and Tsi its start charging time.
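A minimal check of this demand constraint can be sketched as follows (names mirror the text; the linear SOC-energy relation, with charging efficiency ignored, is an assumption consistent with the definitions above):

```python
# Sketch: does charging at power_kw over [t_start_h, t_end_h] deliver enough
# energy to move the battery from soc_start to soc_target? Efficiency losses
# are ignored for simplicity.

def meets_charging_demand(soc_start, soc_target, battery_kwh, power_kw,
                          t_start_h, t_end_h):
    """Return True if the charging window satisfies the user's target SOC."""
    delivered_kwh = power_kw * (t_end_h - t_start_h)       # energy delivered
    required_kwh = (soc_target - soc_start) * battery_kwh  # energy still needed
    return delivered_kwh >= required_kwh
```

For the 43 kW·h battery and 7 kW slow charging used later in the example, raising the SOC from 0.2 to 0.9 needs about 4.3 h, so a 5 h window suffices and a 4 h window does not.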
Step 5: aiming at the random volatility of electric vehicle user behavior and of renewable energy power generation, analyze user behavior characteristics based on historical data such as the electric vehicle state of charge (SOC) and electricity prices, and complete the multi-connected electric vehicle charging control strategy with the Markov decision technique of deep reinforcement learning, so as to realize orderly management of the diverse demands of large-scale electric vehicle charging users. Considering the high power and the spatio-temporal uncertainty of electric vehicle charging and discharging, an MDP model suited to the ordered charging problem of large-scale electric vehicles is established, comprising the following steps:
s1: the multi-objective ordered charging problem is represented as a markov decision process containing five elements (S, a, P, R, γ) to describe the uncertainty.
Where S is the state space, A is all possible actions, P is the state transition probability, R is the reward function, and γ is the discount factor.
S3: State space: the vector s_t^n denotes the state of electric vehicle n at time t and contains the following four pieces of information: SOC_t^n is the battery state of charge of electric vehicle n at t (i.e. the ratio of the current remaining battery capacity to the full battery capacity) and is affected by position and behavior: if the electric vehicle charges, SOC_t^n increases by 20%, and if it travels out, SOC_t^n decreases by 20% [4]. L_t^n denotes the percentage of the transformer's maximum expected load accounted for at time t by the transformer load seen by electric vehicle n. loc_t^n is the position information of electric vehicle n at t, 0 meaning at home and 1 meaning traveling outside. p_t^n denotes the electricity prices obtained by electric vehicle n at times t-23, t-22, …, t in sequence (i.e. the electricity prices of the past 24 h).
S4: Behavior space: the discrete behavior space contains all possible actions. a_t^n denotes the behavior performed by electric vehicle n in state s_t^n, 1 representing charging and 0 representing not charging.
S5: Reward space: r_t^n denotes the reward obtained by electric vehicle n at t. It consists of three parts, each corresponding to a given objective: the first part represents the reward for the satisfaction of electric vehicle n's user with the battery state of charge at t; the second rewards the degree of energy cost reduction, where e_t^n is the electric energy consumed by electric vehicle n at time t; the third is designed to avoid transformer overload: no reward is given when the transformer seen by electric vehicle n is in an overloaded state, and when the transformer load is within the desired range the user of electric vehicle n receives a reward of 10. In the electric vehicle cluster system, the parameterized policy function is:

πθ(S, A) = P[A|S, θ]

where P is a probability distribution function that maps state S onto behavior A through the parameter θ.
Finally, the expected return of executing A_k in state S_k is used to evaluate the quality of the charging schedule, where Q^πθ(S_t, A_t) denotes the true behavior value function.
Step 6: by periodically observing the environment, taking actions and obtaining reward values, the strategy is automatically adjusted according to the rewards to find the optimal charging scheduling strategy.
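The observe-act-reward loop of step 6, with the three-part reward of S5, can be sketched as follows. This is a minimal illustration, not the patent's implementation: the 20% SOC step and the reward of 10 for an acceptable transformer load come from the text, while the exact shape of the SOC-satisfaction term, the per-step energy of 1.75 kW·h (7 kW for 15 min) and all names are assumptions.

```python
# Hedged sketch of one environment step for vehicle n: apply the charging
# action, then compute the three-part reward described in S5.

def step(soc, action, price, load_ratio, soc_target=1.0, energy_kwh=1.75):
    """action: 1 = charge, 0 = idle. Returns (next_soc, reward)."""
    next_soc = min(soc + 0.2, 1.0) if action == 1 else soc
    r_soc = 1.0 - abs(soc_target - next_soc)     # SOC satisfaction (assumed form)
    r_cost = -price * energy_kwh * action        # energy-cost term
    r_load = 10.0 if load_ratio <= 1.0 else 0.0  # 10 if transformer not overloaded
    return next_soc, r_soc + r_cost + r_load
```

In a full agent, a policy network would pick `action` from the state and the tuple (state, action, reward, next state) would be stored in the replay buffer.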
The dynamic energy scheduling framework based on the DDPG algorithm is shown in fig. 3, which describes the implementation of the proposed strategy for the i-th battery system. In the DDPG-based active power balancing strategy, the input variables at each time step t are e_i, the battery system active power P_bi, and the battery constraints, and the output of the DDPG strategy is the battery reference current for the battery system's internal controller.
The DDPG algorithm creates a dual-network structure of a policy network (Actor) and an evaluation network (Critic) and, combined with the experience replay of deep Q-learning, can solve continuous state and action control problems. The Critic network loss function L is defined as:

L = (1/M) Σ(i=1..M) [y_i - Q(s_i, a_i|θQ)]²

where θQ are the parameters of the Critic current network, M is the number of learning samples selected from the experience replay buffer, y_i is the Q value of the Critic target network, and Q is the Critic current network.
y_i is calculated as follows:

y_i = r_i + γ Q'[s_{i+1}, μ'(s_{i+1}|θμ')|θQ']

where r_i is the immediate reward value and γ is the discount factor; Q' and θQ' are the Critic target network and its parameters, and μ' and θμ' are the Actor target network and its parameters.
The current state is mapped to the designated action through the action value function, and the Actor network parameters are updated by a gradient back-propagation algorithm. The policy gradient is:

∇θμ J ≈ (1/M) Σ(i=1..M) ∇a Q(s, a|θQ)|s=si, a=μ(si) · ∇θμ μ(s|θμ)|s=si

where ∇θμ J is the gradient, θμ are the parameters of the Actor current network, and μ is the Actor current network.
Finally, critic and Actor target network parameters θ Q' and θμ' May be updated by means of exponential smoothing, i.e.:
θ Q' =τθ Q +(1-τ)θ Q'
θ μ '=τθ μ +(1-τ)θ μ'
where τ is the soft update factor.
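As a concrete illustration of the soft update above (plain Python, with lists standing in for the real network parameter vectors; the function name is ours):

```python
# Exponential-smoothing ("soft") target update:
#   theta_target <- tau * theta + (1 - tau) * theta_target
# A small tau < 1 makes the target network track the current network slowly,
# which stabilizes Critic training.

def soft_update(theta, theta_target, tau=0.01):
    return [tau * t + (1.0 - tau) * tt for t, tt in zip(theta, theta_target)]
```

In a real DDPG implementation this is applied element-wise to every weight tensor of both target networks after each learning step.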
In order to verify the effectiveness of the method used in the invention, a certain residential district is taken as an example for simulation verification. There are 600 electric vehicles in the district, all of the same model, with a battery capacity of 43 kW·h; the on-board charging mode is slow charging with a power of 7 kW, and the energy consumption per 100 km (E100) is 14 kW·h. The electricity price update interval in the example is 15 min, dividing 24 h into 96 periods. The example is solved with a multi-objective deep reinforcement learning algorithm.
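A quick back-of-envelope check of this setup (plain Python; the rounding to whole quarter-hour periods is our own illustration):

```python
# Sanity check of the simulation parameters: a 15 min price-update interval
# yields 96 periods per day, and fully charging a 43 kWh battery at 7 kW
# slow charging takes about 6.1 h, i.e. roughly 25 quarter-hour periods.

PERIOD_MIN = 15
periods_per_day = 24 * 60 // PERIOD_MIN        # 96 scheduling periods
full_charge_hours = 43 / 7                      # hours for a 0-to-100% charge
full_charge_periods = round(full_charge_hours * 60 / PERIOD_MIN)
```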
As the comparison in fig. 4 shows, when electric vehicles charge in a disordered way, the stability of the power grid decreases and the grid load peak-valley difference increases, which is detrimental to safe grid operation. By comprehensively considering the grid and the users, the simulation results show that the total load peak-valley difference of the optimized grid is reduced, the load curve is smoother, and grid stability improves. On the user side, charging costs fall, proving the effectiveness of the ordered charging strategy. Guiding electric vehicle charging behavior through the ordered charging strategy can therefore both reduce users' charging costs and reduce the grid load peak-valley difference, achieving a win-win for the grid and the users.
Claims (9)
1. A deep-reinforcement-learning-based ordered charging method for a one-pile multi-connected electric vehicle, characterized by comprising the following steps:
step 1, mining real-time charging demand based on historical travel data and charging data of vehicles in the region, real-time monitoring of vehicle electric quantity, regional grid load fluctuation and electricity price information, and providing real-time individualized, differentiated charging service information for electric vehicle users in combination with the macroscopic charging demand of vehicles in the region;
step 2, applying a cooperative control scenario of the charging master station and local user terminals, realizing high-speed communication through wireless and optical fiber networks, realizing real-time information interaction of the conventional load, renewable energy generation, electricity price information and the user charging service system together with real-time monitoring of the charging load in the charging station, and establishing the ordered charging control strategy of the multi-connected electric vehicle;
step 3: the user initiates a charging request to the master station via wireless communication by selecting a charging type, a target charging quantity and an expected pick-up time; the master station and the intelligent equipment receive the charging demand and formulate a corresponding charging plan in combination with the base load curve, the station area load limit curve and the station area working conditions, and the energy router executes the charging plan to start charging; the energy controller responds to the charging plan issued by the master station, and the master station executes the charging pile power real-time control strategy so as to dynamically adjust the charging pile power when the station area load exceeds its limit; in the scenario where communication between the master station and the energy controller is interrupted, the user sends the charging request to the energy router and the energy controller arranges the charging plan locally; the charging types are normal charging and ordered charging;
step 4, when an electric vehicle is connected to charging pile i, the owner inputs the charging demand information, which comprises: the expected residence time t_i of the electric vehicle and the desired battery charge level SOC_i^D at departure; once plugged in, the charging pile accesses the electric vehicle's battery management system and reads the current battery information of the owner's vehicle, which comprises: the battery capacity B_i and the current battery charge level SOC_i^A, i.e. the percentage of the battery's total capacity currently remaining; on this basis the owner's charging expectation can be met, realizing one charging pile charging multiple electric vehicles in turn;
step 5, representing the ordered charging problem of the one-pile multi-connected electric vehicle as a Markov decision process containing five elements (S, A, P, R, gamma) to describe uncertainty and minimize the charging cost of the electric vehicle; where S is the state space, A is all possible behaviors, P is the state transition probability, R is the reward function, and γ is the discount factor;
and 6, solving the formulated Markov decision process problem based on a deep reinforcement learning algorithm.
2. The deep-reinforcement-learning-based ordered charging method for a one-pile multi-connected electric vehicle according to claim 1, wherein in step 3 the overall control strategy flow is as follows:
1) The master station pre-acquires the 24 h basic predicted residential load of the station area and the station area load limit; it dynamically obtains the charging information provided by users, including the charging service type, the required charging quantity and the time reserved by the user; and it monitors the real-time operating load of the station area with a 3 min period;
2) Judging whether the current load of the station area is out of limit;
3) If the out-of-limit condition occurs, executing a charging power real-time control strategy with a regulation period of 3 min:
3.1) calculating the charging demand priority of the users being charged at the out-of-limit moment;
3.2) adjusting the charging power of the electric vehicles in turn to stabilize the station area load within the safe region;
3.4) if the station area load is not out of limit, performing the ordered charging optimization strategy calculation.
3. The deep-reinforcement-learning-based ordered charging method for a one-pile multi-connected electric vehicle according to claim 1, characterized in that: the ordered charging control objectives of the electric vehicle cover load fluctuation on the grid side and the economic benefit of the charging station; multi-objective functions of the user layer and the grid layer are established, and the ordered charging control objective is solved by optimization:
1) User layer
The minimum charging cost of the user is taken as the optimization objective on the user side of the electric vehicle. Assuming the electric vehicle charging price adopts peak-valley pricing, Cf and Cg denote the electricity prices of the peak and valley periods respectively, tf(i) the peak-period charging time of the i-th vehicle, tg(i) the valley-period charging time of the i-th vehicle, and f(i) the charging cost of the i-th vehicle; if N vehicles in total need to be controlled, f1 = min Σ(i=1..N) f(i);
2) Power grid layer
Taking the minimum peak-valley difference of the grid as the charging optimization objective on the grid side, with the grid peak-valley difference as the objective function, assume that the peak load of the grid load curve after electric vehicles are connected for charging is Pf and the valley load of the load curve is Pg; then

f2 = min(Pf - Pg)
The multi-objective function combines the user-side objective of minimum charging cost with the grid-side objective of minimum peak-valley difference; since their dimensions differ, the objectives are normalized, where Pmax is the maximum value and Pmin the minimum value of the conventional load of the distribution network;
the normalized ordered charging control objective after weighting is:

min f = λ1 f1 + λ2 f2

where λ1 and λ2 are the weight coefficients of the objectives and satisfy λ1 + λ2 = 1, λ1 ≥ 0, λ2 ≥ 0;
the constraints cover load fluctuation on the grid side and the economic benefit of the charging station: the charging station transformer must not be overloaded during operation, while the charging demand of the station's users is met to the greatest extent.
4. The deep-reinforcement-learning-based ordered charging method for a one-pile multi-connected electric vehicle according to claim 1, characterized in that in step 5 the Markov decision process includes:
the vector s_t^n denotes the state of electric vehicle n at t and contains the following four pieces of information:
SOC_t^n represents the battery state of charge of electric vehicle n at t, namely the ratio of the current remaining battery capacity to the full battery capacity; if the electric vehicle charges, SOC_t^n increases by 20%, and if it travels out, it decreases by 20%;
L_t^n represents the percentage of the transformer's maximum expected load accounted for at time t by the transformer load seen by electric vehicle n;
loc_t^n represents the position information of electric vehicle n at t, 0 meaning at home and 1 meaning traveling outside;
p_t^n represents the electricity prices obtained by electric vehicle n at times t-23, t-22, …, t in sequence, namely the electricity prices of the past 24 h;
a_t^n denotes the behavior performed by electric vehicle n in state s_t^n, 1 representing charging and 0 representing not charging;
reward space: r_t^n represents the reward obtained by electric vehicle n at t; it consists of three parts, each corresponding to a given objective: the first represents the reward for the satisfaction of electric vehicle n's user with the battery state of charge at t; the second rewards the degree of energy cost reduction, where e_t^n is the electric energy consumed by electric vehicle n at time t; the third is designed to avoid transformer overload, no reward being generated when the transformer seen by electric vehicle n is in an overloaded state and a reward of 10 being obtained by the user of electric vehicle n when the transformer load is within the desired range;
in the electric vehicle cluster system, the parameterized policy function is: πθ(S, A) = P[A|S, θ];
where P is a probability distribution function mapping the state S onto the behavior A through the parameter θ;
finally, the expected return of executing A_k in state S_k is used to evaluate the quality of the charging schedule, where Q^πθ(S_t, A_t) represents the true behavior value function.
5. The deep-reinforcement-learning-based ordered charging method for a one-pile multi-connected electric vehicle according to claim 1, characterized in that: in step 6, the deep reinforcement learning algorithm is the DDPG algorithm; the DDPG algorithm creates a dual-network structure of a policy network (Actor) and an evaluation network (Critic) and, combined with the experience replay of deep Q-learning, solves the continuous state and action control problem.
6. The deep-reinforcement-learning-based ordered charging method for a one-pile multi-connected electric vehicle according to claim 5, wherein:
the Critic network loss function L is defined as:

L = (1/M) Σ(i=1..M) [y_i - Q(s_i, a_i|θQ)]²

where θQ are the parameters of the Critic current network, M is the number of learning samples selected from the experience replay buffer, y_i is the Q value of the Critic target network, and Q is the Critic current network; y_i is calculated as follows:
y_i = r_i + γ Q'[s_{i+1}, μ'(s_{i+1}|θμ')|θQ']

where r_i is the immediate reward value and γ is the discount factor; Q' and θQ' are the Critic target network and its parameters, and μ' and θμ' are the Actor target network and its parameters;
the current state is mapped to the designated action through the action value function, and the Actor network parameters are updated by a gradient back-propagation algorithm; the policy gradient is:

∇θμ J ≈ (1/M) Σ(i=1..M) ∇a Q(s, a|θQ)|s=si, a=μ(si) · ∇θμ μ(s|θμ)|s=si

where ∇θμ J is the gradient, θμ are the parameters of the Actor current network, and μ is the Actor current network;
finally, the Critic and Actor target network parameters θQ' and θμ' are updated by exponential smoothing, i.e.:

θQ' = τθQ + (1 - τ)θQ'
θμ' = τθμ + (1 - τ)θμ'
where τ is the soft update factor, with τ < 1.
7. The deep-reinforcement-learning-based ordered charging method for a one-pile multi-connected electric vehicle according to claim 1, characterized in that: in step 3, the charging plan issued by the master station to the energy controller rolls every 15 min and comprises the charging power, the charging start time and the charging end time; the master station responds with the charging pile power real-time control strategy every 3 min so as to dynamically adjust the charging pile power when the station area load exceeds its limit.
8. The deep-reinforcement-learning-based ordered charging method for a one-pile multi-connected electric vehicle according to claim 1, characterized in that the specific calculation steps of step 3.4) are as follows:
s1, calculating and sequencing the charging demand priority of a current submitted charging application user;
s2, determining the maximum available charging power of each electric automobile by considering the limitation of the load capacity margin of the platform and combining the priority of the charging demand of the user;
s3, calculating all possible charging time according to a first-level strategy for controlling the charging cost of a user;
and S4, screening and determining optimal charging time according to a second-stage strategy for stabilizing the power grid load fluctuation from all possible charging time intervals of the first-stage strategy.
And S5, judging whether the strategy calculation can meet the user's charging demand, the charging demand referring to the required charging quantity and the pick-up time; if the user's charging demand cannot be met, performing charging strategy correction, namely expanding the available charging time window; if the demand still cannot be met after correction, feeding back via the APP that the user's charging demand cannot be met.
And S6, the master station issues the charging plan with a 15 min period, the charging plan comprising the start and stop charging times and the charging power of each electric vehicle.
9. The deep-reinforcement-learning-based ordered charging method for a one-pile multi-connected electric vehicle according to claim 3, wherein the constraint conditions include:
(1) Charging power constraint: charging power limit control is a grid-side charging control mode; the distribution network automation system collects real-time load information and distribution equipment information, the integrated electric vehicle management system interacts with the distribution network automation system to obtain the load distribution of the distribution line on the branch where the charging station is located, and the power allocated to the charging load is determined from the charging load and the distribution line power limit; the charging station considers that several charging devices charge simultaneously and must respect the upper charging power limit, where Pjmin is the minimum load power of the charging station at time j, depending on loads such as office work and lighting of the station, the typical charging station structure connecting the conventional load and the electric vehicle charging load under one distribution transformer; Pjamax is the maximum charging power the station's charging devices can provide at time j; Pjbmax is the transmission power the line can provide at time j; Pij is the charging power of the i-th vehicle at time j; and N is the total number of vehicles charging at time j;
(2) User charging demand constraints
The charging demand of every electric vehicle user accessing the charging station during the day must be met, i.e. the battery state of charge of each electric vehicle user when leaving the charging station is the target state of charge set by the user, where
SOCi is the state of charge of the i-th vehicle when charging starts;
SOCendi is the target state of charge of the i-th vehicle;
Bi is the battery capacity of the i-th vehicle;
Pi is the charging power of the i-th vehicle;
Tendi is the end charging time of the i-th vehicle, Ti is the departure time of the i-th vehicle, and Tsi is the start charging time of the i-th vehicle.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211542881.0A CN116001624A (en) | 2022-12-02 | 2022-12-02 | Ordered charging method for one-pile multi-connected electric automobile based on deep reinforcement learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116001624A true CN116001624A (en) | 2023-04-25 |
Family
ID=86019955
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211542881.0A Pending CN116001624A (en) | 2022-12-02 | 2022-12-02 | Ordered charging method for one-pile multi-connected electric automobile based on deep reinforcement learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116001624A (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116345477A (en) * | 2023-05-19 | 2023-06-27 | 国网信息通信产业集团有限公司 | Electric automobile load time sequence allocation method under electric quantity type demand response |
CN116512968A (en) * | 2023-07-04 | 2023-08-01 | 深圳市菲尼基科技有限公司 | Charging power distribution method, device and equipment based on battery changing cabinet and storage medium |
CN116691413A (en) * | 2023-07-31 | 2023-09-05 | 国网浙江省电力有限公司 | Advanced vehicle-mounted dynamic load pre-configuration method and ordered charging system |
CN116691419A (en) * | 2023-08-03 | 2023-09-05 | 浙江大学 | Electric automobile autonomous charging control method for deep reinforcement learning under weak link communication |
CN116757877A (en) * | 2023-08-22 | 2023-09-15 | 国网山西省电力公司运城供电公司 | Power grid line loss reduction optimization method and system for new energy access power distribution network |
CN117301899A (en) * | 2023-11-29 | 2023-12-29 | 江西五十铃汽车有限公司 | Wireless charging method and system for electric automobile |
CN117841750A (en) * | 2024-02-05 | 2024-04-09 | 深圳海辰储能科技有限公司 | Charging regulation and control method for charging pile and related device |
CN117885593A (en) * | 2024-03-14 | 2024-04-16 | 江苏智融能源科技有限公司 | Charging station data management and control method and system based on big data |
CN117885593B (en) * | 2024-03-14 | 2024-05-24 | 江苏智融能源科技有限公司 | Charging station data management and control method and system based on big data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||