CN112606735B - Control method for optimal driving and energy management of non-contact power supply train - Google Patents
- Publication number
- CN112606735B CN112606735B CN202011528282.4A CN202011528282A CN112606735B CN 112606735 B CN112606735 B CN 112606735B CN 202011528282 A CN202011528282 A CN 202011528282A CN 112606735 B CN112606735 B CN 112606735B
- Authority
- CN
- China
- Prior art keywords
- train
- battery
- formula
- state
- power supply
- Prior art date
- Legal status
- Active
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60L—PROPULSION OF ELECTRICALLY-PROPELLED VEHICLES; SUPPLYING ELECTRIC POWER FOR AUXILIARY EQUIPMENT OF ELECTRICALLY-PROPELLED VEHICLES; ELECTRODYNAMIC BRAKE SYSTEMS FOR VEHICLES IN GENERAL; MAGNETIC SUSPENSION OR LEVITATION FOR VEHICLES; MONITORING OPERATING VARIABLES OF ELECTRICALLY-PROPELLED VEHICLES; ELECTRIC SAFETY DEVICES FOR ELECTRICALLY-PROPELLED VEHICLES
- B60L58/00—Methods or circuit arrangements for monitoring or controlling batteries or fuel cells, specially adapted for electric vehicles
- B60L58/10—Methods or circuit arrangements for monitoring or controlling batteries or fuel cells, specially adapted for electric vehicles for monitoring or controlling batteries
- B60L58/12—Methods or circuit arrangements for monitoring or controlling batteries or fuel cells, specially adapted for electric vehicles for monitoring or controlling batteries responding to state of charge [SoC]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60L—PROPULSION OF ELECTRICALLY-PROPELLED VEHICLES; SUPPLYING ELECTRIC POWER FOR AUXILIARY EQUIPMENT OF ELECTRICALLY-PROPELLED VEHICLES; ELECTRODYNAMIC BRAKE SYSTEMS FOR VEHICLES IN GENERAL; MAGNETIC SUSPENSION OR LEVITATION FOR VEHICLES; MONITORING OPERATING VARIABLES OF ELECTRICALLY-PROPELLED VEHICLES; ELECTRIC SAFETY DEVICES FOR ELECTRICALLY-PROPELLED VEHICLES
- B60L2200/00—Type of vehicles
- B60L2200/26—Rail vehicles
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/60—Other road transportation technologies with climate change mitigation effect
- Y02T10/70—Energy storage systems for electromobility, e.g. batteries
Abstract
The invention provides a control method for optimal driving and energy management of a non-contact power supply train, which realizes efficient energy management and optimal driving control during the running of the train.
Description
Technical Field
The invention belongs to the technical field of optimized driving and energy management control of rail vehicles, and particularly relates to a control method integrating optimized driving and energy management of a non-contact power supply train.
Background
Lithium batteries are becoming an increasingly important component of train power systems thanks to their efficiency, cleanliness, and renewability. As a key technology of the non-contact power supply train, the quality of the energy management control strategy directly influences the energy utilization efficiency during train operation. For the non-contact power supply train to deliver its full energy-saving and emission-reduction effect under whole-line operating constraints, the energy management strategy must have good global performance.
The state of health of a lithium-ion battery is closely tied to how its state of charge varies during use; keeping the state of charge within a suitable range throughout the whole-line operation of a non-contact power supply train therefore helps prolong battery life.
Because the problem involves many parameters and strong coupling, research on energy management control strategies and research on optimized driving have so far been largely conducted separately, which limits both. A control method that jointly considers energy management and optimized driving is therefore significant for the safe and stable operation of the train and for maximizing its energy-saving and emission-reduction capability.
Disclosure of Invention
The invention aims to design an intelligent control method integrating optimal driving and energy management for a non-contact power supply tramcar. Taking the economy of whole-line operation and punctual arrival as its objectives, it improves the energy utilization efficiency of the whole train while ensuring the train arrives at the station on time within the allowable range.
In order to achieve the above object, the present invention provides a control method for optimizing driving and energy management of a non-contact power train, comprising the following steps:
1. According to the train single mass point model shown in fig. 1, a train kinematics equation is constructed, wherein the formula is as follows:

dx/dt = v,  m·dv/dt = F_dr − F_f

In the formula, F_dr represents the tractive or braking force provided by the traction motor to the train (positive values represent traction, negative values represent braking); x represents the distance traveled by the train; v represents the speed of the train; m represents the equivalent mass of the train; F_f represents the resistance of the train, which can be calculated through the traction calculation procedure, with the formula as follows:
F_f = (ω_o + ω_i + ω_r) · m g / 1000, with ω_o = a + b·v + c·v², ω_i = i(x), ω_r = 600 / r(x)

In the formula, ω_o, ω_i and ω_r respectively represent the unit basic resistance, unit ramp resistance and unit curve resistance of the locomotive (N/kN); i(x) and r(x) respectively represent the signed gradient (‰) and curve radius (m) at position x; a, b and c represent the basic resistance coefficients obtained from locomotive tests (when the locomotive speed v ≤ 2.5 km/h, the unit basic resistance is calculated according to the starting resistance); g represents the gravitational acceleration (m/s²).
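The resistance calculation of step 1 can be sketched as follows; the coefficient values a, b, c, the starting resistance, and the 600/r curve-resistance approximation are illustrative assumptions, not values given by the patent.

```python
def unit_basic_resistance(v_kmh, a, b, c, w_start=5.0):
    """Davis-type unit basic resistance w0 = a + b*v + c*v^2 (N/kN).
    Below 2.5 km/h the starting resistance w_start is used instead,
    as the patent specifies."""
    if v_kmh <= 2.5:
        return w_start
    return a + b * v_kmh + c * v_kmh ** 2

def train_resistance(v_kmh, m_kg, grade_permille, curve_radius_m,
                     a=2.25, b=0.019, c=0.00032, g=9.81):
    """Total running resistance F_f (N) assembled from unit resistances
    (N/kN) times the train weight (kN).  The coefficients and the
    600/r curve formula are illustrative placeholders."""
    w0 = unit_basic_resistance(v_kmh, a, b, c)
    wi = grade_permille                                  # unit ramp resistance
    wr = 600.0 / curve_radius_m if curve_radius_m > 0 else 0.0
    weight_kN = m_kg * g / 1000.0
    return (w0 + wi + wr) * weight_kN
```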
2. According to the equivalent-circuit topology of the power battery shown in fig. 2, a power battery model is established, wherein the formula is as follows:

P_bat = U_ocv·I_bat − R_o·I_bat²,  dSOC/dt = −I_bat / Q_bat

In the formula, P_bat represents the output power of the battery; U_ocv represents the open-circuit voltage of the battery; R_o represents the internal resistance of the battery; I_bat represents the output current of the battery; SOC represents the state of charge of the battery; Q_bat represents the rated capacity of the battery.
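The internal-resistance (Rint) battery model of step 2 can be integrated numerically as below; the open-circuit voltage, internal resistance and capacity values are illustrative placeholders, not parameters from the patent.

```python
import math

def battery_step(soc, p_bat, dt, u_ocv=600.0, r_o=0.05, q_bat_as=3600.0 * 50):
    """One integration step of the Rint battery model: solve
    P_bat = U_ocv*I - R_o*I^2 for the current I (discharge positive),
    then integrate dSOC/dt = -I / Q_bat over dt seconds."""
    disc = u_ocv ** 2 - 4.0 * r_o * p_bat
    if disc < 0:
        raise ValueError("requested power exceeds battery capability")
    i_bat = (u_ocv - math.sqrt(disc)) / (2.0 * r_o)
    return soc - i_bat * dt / q_bat_as
```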
3. Combining the train kinematics equation and the power battery model, the state equation of the whole train system is constructed, wherein the formula is as follows:

dX/dt = [ v,  (F_dr − F_f)/m,  −I_bat/Q_bat ]^T

In the formula, the battery output power P_bat and the tractive or braking force F_dr provided by the motor to the train are taken as the decision quantity U = [P_bat, F_dr]^T; the train running distance s, the train running speed v and the battery state of charge SOC are taken as the state quantity X = [s, v, SOC]^T.
4. According to the topological structure of the non-contact power supply train shown in figure 3, based on the law of conservation of energy, the output power P_wire of the non-contact power supply system is calculated from the tractive/braking force F_dr of the train, the train running speed v and the battery output power P_bat, wherein the formula is as follows:

P_wire = F_dr·v / η_tran + P_aux − η_dc/dc·P_bat

In the formula, P_aux represents the output power of the auxiliary system, set to a constant value; η_dc/dc and η_tran respectively represent the efficiency of the on-board DC/DC converter and the efficiency of the train traction transmission system.
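The energy balance of step 4 might be implemented as follows; the exact placement of the efficiencies, and the traction/braking asymmetry, are assumptions, since the patent's formula image is not reproduced in the text.

```python
def wire_power(f_dr, v, p_bat, p_aux=20e3, eta_dcdc=0.95, eta_tran=0.9):
    """Contactless-supply output power from one plausible energy balance:
    P_wire = P_traction/eta_tran + P_aux - eta_dcdc*P_bat.
    During braking (f_dr < 0) the transmission efficiency is applied
    multiplicatively instead, a common convention."""
    if f_dr >= 0:
        p_traction = f_dr * v / eta_tran   # motor draws more than wheel power
    else:
        p_traction = f_dr * v * eta_tran   # only part of braking power returns
    return p_traction + p_aux - eta_dcdc * p_bat
```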
5. The reward function for reinforcement learning is calculated from the state equation of the whole train system, the energy consumption function of the non-contact power supply system, a penalty function for the train running speed exceeding the speed limit, penalty functions for the battery SOC exceeding its upper and lower boundaries, and a penalty function for the train running time failing to meet the target time. The calculation formula of the reward function r is as follows:

r = α_1·r_1 + α_2·r_2 + α_3·r_3 + α_4·r_4 + α_5·r_5

In the formula, r_1 represents the energy consumption of the non-contact power supply system; r_2 represents the penalty function for the train running speed exceeding the speed limit; r_3 and r_4 respectively represent the penalty functions for the battery SOC exceeding the upper and lower boundaries; r_5 represents the penalty function for the deviation of the train running time from the target time; s_f represents the terminal position of the train; α_1, α_2, α_3, α_4 and α_5 represent the weight coefficients of the corresponding reward terms. In the specific sub-formulas, P_wire and η_wire respectively represent the output power and efficiency of the non-contact power supply system; v_lim represents the speed limit value; SOC_max and SOC_min respectively represent the upper and lower boundaries of the battery state of charge; t_max and t_min respectively represent the upper and lower boundaries of the time constraint for the train reaching the terminal; c_1 ~ c_19 represent the coefficients of the penalty functions.
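The composite reward of step 5 can be sketched as below; the quadratic penalty shapes, bounds and weights are illustrative stand-ins for the patent's unspecified c_1 ~ c_19 coefficients.

```python
def reward(p_wire, v, soc, t, dt, at_terminal, v_lim,
           soc_max=0.9, soc_min=0.3, t_max=310.0, t_min=290.0,
           alphas=(1e-6, 1.0, 1.0, 1.0, 1.0)):
    """Composite reward: negative energy use plus penalty terms for
    over-speed, SOC boundary violations and (at the terminal only)
    arrival-time error.  All numeric values are illustrative."""
    a1, a2, a3, a4, a5 = alphas
    r1 = -p_wire * dt                                   # energy drawn from supply
    r2 = -(v - v_lim) ** 2 if v > v_lim else 0.0        # speed-limit penalty
    r3 = -(soc - soc_max) ** 2 if soc > soc_max else 0.0
    r4 = -(soc_min - soc) ** 2 if soc < soc_min else 0.0
    r5 = 0.0
    if at_terminal:                                     # time penalty at s_f only
        if t > t_max:
            r5 = -(t - t_max) ** 2
        elif t < t_min:
            r5 = -(t_min - t) ** 2
    return a1 * r1 + a2 * r2 + a3 * r3 + a4 * r4 + a5 * r5
```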
6. Based on the state equation of the whole train system and the reward function, the state quantity X_{k+1} at the next time and the reward value r generated by the process can be calculated from the current state quantity X_k and current decision quantity U_k of the train, and stored.
7. The corresponding value Q is updated by using the stored state quantity X, decision quantity U and reward value r, wherein the formula is as follows:

Q(X_k, U_k) ← Q(X_k, U_k) + α·[ r + γ·max_U Q(X_{k+1}, U) − Q(X_k, U_k) ]

In the formula, X represents the state quantity; U represents the decision quantity; α and γ respectively represent the learning rate and the discount factor.
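The value update of step 7 is the standard one-step Q-learning rule; a dict-backed tabular version is shown here for clarity (the patent replaces the table with a neural network in step 8).

```python
def q_update(q, x, u, r, x_next, actions, alpha=0.1, gamma=0.99):
    """One-step Q-learning update on a dict-backed table:
    Q(X,U) += alpha * (r + gamma * max_U' Q(X',U') - Q(X,U)).
    Unvisited entries default to 0.  Returns the updated value."""
    best_next = max(q.get((x_next, a), 0.0) for a in actions)
    old = q.get((x, u), 0.0)
    q[(x, u)] = old + alpha * (r + gamma * best_next - old)
    return q[(x, u)]
```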
8. The input of the neural network is set as the state quantity X and the decision quantity U, and its output is the corresponding value Q. The neural network is trained repeatedly with data from the database until it converges and the error falls below the set threshold.
9. The running state of the current stage of the train and all feasible decisions are input to the trained neural network, which outputs the value Q for each feasible decision; the decision U with the maximum value is selected as the output of the current train optimization control, completing the optimized driving and energy management of the train.
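The online decision rule of step 9 reduces to an argmax over feasible decisions; `q_net` here is any callable standing in for the trained network.

```python
def select_decision(q_net, x, feasible_decisions):
    """Online control: evaluate the value function for every feasible
    decision in the current state and return the maximiser."""
    return max(feasible_decisions, key=lambda u: q_net(x, u))
```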
The invention realizes efficient energy management and optimized driving control during the operation of a non-contact power supply train. Based on reinforcement learning, it takes optimal economy of whole-line operation and punctual arrival as its objectives, improving the energy utilization efficiency of the whole train while enabling the train to arrive at the station on time within the allowable error range.
Drawings
FIG. 1 is a schematic diagram of a single particle model of a train;
FIG. 2 is a power cell equivalent circuit topology;
FIG. 3 is a topology of a non-contact power train;
FIG. 4 is a reinforcement learning based optimization control framework.
Detailed Description
FIG. 1 is a schematic diagram of the single mass point model of the train, wherein F_dr represents the tractive or braking force provided by the traction motor to the train (positive values represent traction, negative values represent braking), F_f represents the resistance experienced by the train, F_N represents the supporting force on the train, and mg represents the gravity of the train. Accordingly, the train kinematics equation can be expressed as:

ds/dt = v,  m·dv/dt = F_dr − F_f

wherein s represents the travel distance of the train; v represents the speed of the train; m represents the equivalent mass of the train.
Fig. 2 shows the equivalent circuit model of the power battery, wherein U_ocv represents the open-circuit voltage of the battery, R_o represents the internal resistance of the battery, I_bat represents the output current of the battery, and P_bat represents the output power of the battery. Thus, the state equation of the battery is:

P_bat = U_ocv·I_bat − R_o·I_bat²,  dSOC/dt = −I_bat / Q_bat

In the formula, SOC represents the state of charge of the battery; Q_bat represents the rated capacity of the battery.
The invention takes the battery output power P_bat and the tractive or braking force F_dr provided by the motor to the train as the decision quantity U, and takes the train running distance s, the train running speed v and the battery state of charge SOC as the state quantity X. Accordingly, the state equation of the whole system is:

dX/dt = [ v,  (F_dr − F_f)/m,  −I_bat/Q_bat ]^T
the invention provides a driving and energy management optimizing strategy, which aims to be as follows: on the premise of ensuring the arrival of the train punctuality, the energy consumption of the whole-course operation of the train is reduced. Meanwhile, in the running process of the train, the running speed of the train cannot exceed the speed limit value v of the current positionlim. For this purpose, the reward function for reinforcement learning is designed as follows:
r = α_1·r_1 + α_2·r_2 + α_3·r_3 + α_4·r_4 + α_5·r_5

In the formula, r_1 represents the energy consumption of the non-contact power supply system; r_2 represents the penalty function for the train running speed exceeding the speed limit; r_3 and r_4 respectively represent the penalty functions for the battery SOC exceeding the upper and lower boundaries; r_5 represents the penalty function for the deviation of the train running time from the target time; s_f represents the terminal position of the train; α_1, α_2, α_3, α_4 and α_5 represent the weight coefficients of the corresponding reward terms. In the specific sub-formulas, P_wire and η_wire respectively represent the output power and efficiency of the non-contact power supply system; v_lim represents the speed limit value; SOC_max and SOC_min respectively represent the upper and lower boundaries of the battery state of charge; t_max and t_min respectively represent the upper and lower boundaries of the time constraint for the train reaching the terminal; c_1 ~ c_19 represent the coefficients of the penalty functions.
Fig. 3 shows the topological structure of the non-contact power supply train studied by the invention. The non-contact power supply system and the battery system provide energy for the traction system and the auxiliary system of the whole vehicle, and the traction motor provides traction or electric braking force to the train through the train transmission system. In FIG. 3, P_wire, P_bat and P_aux respectively represent the output power of the non-contact power supply system, the output power of the battery system and the required power of the auxiliary system; η_wire, η_dc/dc and η_tran respectively represent the efficiency of the non-contact power supply system, the efficiency of the on-board DC/DC converter and the efficiency of the train traction transmission system.
As shown in fig. 4, the present invention performs offline learning by using the Deep Q-Network (DQN) algorithm of reinforcement learning based on the state equation and reward function of the whole system. The specific pseudocode is as follows:
(1) initializing a memory pool, and setting the capacity of the memory pool to be N;
(2) initializing a Q value neural network, and randomly generating a weight parameter of the neural network;
(3) initializing a target Q value neural network to be the same as the Q value neural network;
(4) loop for the required number of training episodes M:
(4.1) initialize the state quantity X_1 of the first stage;
(4.2) loop over the required number of train operation stages N:
(4.2.1) determine the decision U_k of the current stage by the ε-greedy method; the greedy rule selects, for the current state X_k, the decision that maximizes the Q value: U_k = argmax_U Q(X_k, U; θ);
(4.2.2) execute the current decision U_k, solve the state X_{k+1} of the next stage and the current reward value r from the state equation and the reward function of the system, and store (X_k, U_k, r, X_{k+1}) in the memory pool;
(4.2.3) randomly draw a batch of samples (X_i, U_i, r_i, X_{i+1}) from the memory pool;
(4.2.4) for each sampled transition i, calculate the target y_i = r_i + γ·max_{U'} Q(X_{i+1}, U'; θ), which is then used to update the Q-value network;
(4.2.5) train the Q-value neural network by gradient descent on the loss (y_i − Q(X_i, U_i; θ))².
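The pseudocode (1)-(4.2.5) can be sketched end to end as below; a linear model over user-supplied features stands in for the deep Q network, and all hyperparameter values, function names and the feature interface are illustrative assumptions.

```python
import random
import collections

def dqn_train(env_step, env_reset, actions, feats,
              episodes=50, steps=200, n_mem=1000, batch=16,
              eps=0.1, alpha=1e-3, gamma=0.99, sync_every=50):
    """DQN skeleton mirroring the patent's pseudocode: memory pool,
    behaviour and target value functions, epsilon-greedy decisions,
    TD targets y = r + gamma * max_U' Q(X', U'), and a gradient step
    on the squared error.  `feats(x, u)` maps a state-decision pair to
    a feature vector for the stand-in linear model."""
    n = len(feats(env_reset(), actions[0]))
    w = [0.0] * n                       # behaviour Q weights
    w_tgt = list(w)                     # target-network copy
    mem = collections.deque(maxlen=n_mem)

    def q(wv, x, u):
        return sum(wi * fi for wi, fi in zip(wv, feats(x, u)))

    t = 0
    for _ in range(episodes):
        x = env_reset()
        for _ in range(steps):
            # (4.2.1) epsilon-greedy decision
            if random.random() < eps:
                u = random.choice(actions)
            else:
                u = max(actions, key=lambda a: q(w, x, a))
            # (4.2.2) step the environment and store the transition
            x_next, r, done = env_step(x, u)
            mem.append((x, u, r, x_next, done))
            # (4.2.3)-(4.2.5) sample a batch, build targets, update weights
            if len(mem) >= batch:
                for xi, ui, ri, xn, dn in random.sample(mem, batch):
                    y = ri if dn else ri + gamma * max(
                        q(w_tgt, xn, a) for a in actions)
                    err = y - q(w, xi, ui)
                    f = feats(xi, ui)
                    w = [wi + alpha * err * fi for wi, fi in zip(w, f)]
            t += 1
            if t % sync_every == 0:     # periodically refresh target copy
                w_tgt = list(w)
            x = x_next
            if done:
                break
    return w
```

On a toy chain environment this runs to completion and returns one weight per feature; a real application would replace the linear model with the patent's neural network.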
Finally, the trained neural network is used for intelligent control of the actual non-contact power supply tramcar: the value function of each feasible decision is calculated from the real-time running state of the train, and the decision with the highest value is taken as the actual control value of the controller and sent to the controllers of all the subsystems.
Claims (1)
1. A control method for optimal driving and energy management of a non-contact power supply train is characterized by comprising the following steps:
(1) according to the train single mass point model, a train kinematics equation is constructed, wherein the formula is as follows:

dx/dt = v,  m·dv/dt = F_dr − F_f

In the formula, F_dr represents the tractive or braking force provided by the traction motor to the train, positive values representing traction and negative values representing braking; x represents the distance traveled by the train; v represents the speed of the train; m represents the equivalent mass of the train; F_f represents the resistance of the train, which can be calculated through the traction calculation procedure, with the formula as follows:

F_f = (ω_o + ω_i + ω_r) · m g / 1000, with ω_o = a + b·v + c·v², ω_i = i(x), ω_r = 600 / r(x)

In the formula, ω_o, ω_i and ω_r respectively represent the unit basic resistance, unit ramp resistance and unit curve resistance of the locomotive (N/kN); i(x) and r(x) respectively represent the signed gradient (‰) and curve radius (m) at position x; a, b and c represent the basic resistance coefficients obtained from locomotive tests, and when the locomotive speed v ≤ 2.5 km/h the unit basic resistance is calculated according to the starting resistance; g represents the gravitational acceleration (m/s²);
(2) establishing a power battery model according to the equivalent-circuit topology of the power battery, wherein the formula is as follows:

P_bat = U_ocv·I_bat − R_o·I_bat²,  dSOC/dt = −I_bat / Q_bat

In the formula, P_bat represents the output power of the battery; U_ocv represents the open-circuit voltage of the battery; R_o represents the internal resistance of the battery; I_bat represents the output current of the battery; SOC represents the state of charge of the battery; Q_bat represents the rated capacity of the battery;
(3) combining the train kinematics equation and the power battery model to construct the state equation of the whole train system, wherein the formula is as follows:

dX/dt = [ v,  (F_dr − F_f)/m,  −I_bat/Q_bat ]^T

In the formula, the battery output power P_bat and the tractive or braking force F_dr provided by the motor to the train are taken as the decision quantity U = [P_bat, F_dr]^T, and the train running distance s, the train running speed v and the battery state of charge SOC are taken as the state quantity X = [s, v, SOC]^T;
(4) according to the topological structure of the non-contact power supply train, based on the law of conservation of energy, calculating the output power P_wire of the non-contact power supply system from the tractive/braking force F_dr of the train, the train running speed v and the battery output power P_bat, wherein the formula is as follows:

P_wire = F_dr·v / η_tran + P_aux − η_dc/dc·P_bat

In the formula, P_aux represents the output power of the auxiliary system, set to a constant value; η_dc/dc and η_tran respectively represent the efficiency of the on-board DC/DC converter and the efficiency of the train traction transmission system;
(5) calculating the reward function for reinforcement learning from the state equation of the whole train system, the energy consumption function of the non-contact power supply system, a penalty function for the train running speed exceeding the speed limit, penalty functions for the battery SOC exceeding its upper and lower boundaries, and a penalty function for the train running time failing to meet the target time, wherein the calculation formula of the reward function r is as follows:

r = α_1·r_1 + α_2·r_2 + α_3·r_3 + α_4·r_4 + α_5·r_5

In the formula, r_1 represents the energy consumption of the non-contact power supply system; r_2 represents the penalty function for the train running speed exceeding the speed limit; r_3 and r_4 respectively represent the penalty functions for the battery SOC exceeding the upper and lower boundaries; r_5 represents the penalty function for the deviation of the train running time from the target time; s_f represents the terminal position of the train; α_1, α_2, α_3, α_4 and α_5 represent the weight coefficients of the corresponding reward terms. In the specific sub-formulas, P_wire and η_wire respectively represent the output power and efficiency of the non-contact power supply system; v_lim represents the speed limit value; SOC_max and SOC_min respectively represent the upper and lower boundaries of the battery state of charge; t_max and t_min respectively represent the upper and lower boundaries of the time constraint for the train reaching the terminal; c_1 ~ c_19 represent the coefficients of the penalty functions;
(6) based on the state equation of the whole train system and the reward function, calculating the state quantity X_{k+1} at the next time and the reward value r generated by the process from the current state quantity X_k and current decision quantity U_k of the train, and storing them;
(7) updating the corresponding value Q by using the stored state quantity X, decision quantity U and reward value r, wherein the formula is as follows:

Q(X_k, U_k) ← Q(X_k, U_k) + α·[ r + γ·max_U Q(X_{k+1}, U) − Q(X_k, U_k) ]

In the formula, X represents the state quantity; U represents the decision quantity; α and γ respectively represent the learning rate and the discount factor;
(8) setting the input of the neural network as the state quantity X and the decision quantity U and its output as the corresponding value Q, and training the neural network repeatedly with data from the database until it converges and the error falls below the set threshold;
(9) inputting the running state of the train at the current stage and all feasible decisions to the trained neural network, which outputs the value Q for each feasible decision; the decision U with the maximum value is selected as the output of the current train optimization control, thereby completing the optimized driving and energy management of the train.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011528282.4A CN112606735B (en) | 2020-12-22 | 2020-12-22 | Control method for optimal driving and energy management of non-contact power supply train |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011528282.4A CN112606735B (en) | 2020-12-22 | 2020-12-22 | Control method for optimal driving and energy management of non-contact power supply train |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112606735A CN112606735A (en) | 2021-04-06 |
CN112606735B true CN112606735B (en) | 2022-07-22 |
Family
ID=75244027
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011528282.4A Active CN112606735B (en) | 2020-12-22 | 2020-12-22 | Control method for optimal driving and energy management of non-contact power supply train |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112606735B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104071033A (en) * | 2013-12-07 | 2014-10-01 | 西南交通大学 | Method for matching and optimizing parameters of mixed power locomotive with fuel cell and super capacitor |
CN105857320A (en) * | 2016-06-01 | 2016-08-17 | 北京交通大学 | Energy management strategy of hybrid power bullet train set traction and transmission system |
CN107895960A (en) * | 2017-11-01 | 2018-04-10 | 北京交通大学长三角研究院 | City rail traffic ground type super capacitor energy storage system energy management method based on intensified learning |
CN108099635A (en) * | 2017-11-13 | 2018-06-01 | 山东斯博科特电气技术有限公司 | Fuel cell hybrid tramcar polyergic source coupling punishment control system |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6814658B2 (en) * | 2017-02-21 | 2021-01-20 | 三菱重工エンジニアリング株式会社 | Vehicle control device, vehicle control method, program |
US20200108732A1 (en) * | 2018-10-09 | 2020-04-09 | Regents Of The University Of Minnesota | Physical model-guided machine learning framework for energy management of vehicles |
- 2020-12-22: Application CN202011528282.4A filed; granted as CN112606735B (active)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104071033A (en) * | 2013-12-07 | 2014-10-01 | 西南交通大学 | Method for matching and optimizing parameters of mixed power locomotive with fuel cell and super capacitor |
CN105857320A (en) * | 2016-06-01 | 2016-08-17 | 北京交通大学 | Energy management strategy of hybrid power bullet train set traction and transmission system |
CN107895960A (en) * | 2017-11-01 | 2018-04-10 | 北京交通大学长三角研究院 | City rail traffic ground type super capacitor energy storage system energy management method based on intensified learning |
CN108099635A (en) * | 2017-11-13 | 2018-06-01 | 山东斯博科特电气技术有限公司 | Fuel cell hybrid tramcar polyergic source coupling punishment control system |
Non-Patent Citations (2)
Title |
---|
Energy management method for hybrid-power trams based on dynamic programming; Chen Weirong et al.; Journal of Southwest Jiaotong University; 2018-10-15 (No. 05); full text *
Research on energy management strategies for hybrid electric vehicles based on deep learning; Pei Jiazheng; China Doctoral Dissertations Full-text Database, Engineering Science and Technology II; 2019-09-01; full text *
Also Published As
Publication number | Publication date |
---|---|
CN112606735A (en) | 2021-04-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104512265B (en) | Vehicle battery charging setpoint control | |
Wu et al. | Fuzzy energy management strategy for a hybrid electric vehicle based on driving cycle recognition | |
CN110549868B (en) | Hybrid power tramcar speed adjusting method based on real-time power of power system | |
CN111619545B (en) | Hybrid electric vehicle energy management method based on traffic information | |
Liu et al. | An on-line energy management strategy based on trip condition prediction for commuter plug-in hybrid electric vehicles | |
CN106004865A (en) | Mileage adaptive hybrid electric vehicle energy management method based on working situation identification | |
CN103863087B (en) | Plug-in hybrid electric vehicle energy-saving predictive control method based on optimal engine operation line | |
US20170320481A1 (en) | A hybrid vehicle and a method for energy management of a hybrid vehicle | |
CN110549914B (en) | Approximate optimal energy management method for daily operation of fuel cell tramcar | |
CN112633598A (en) | Comprehensive energy-saving optimization method for speed curve and timetable of urban rail transit train | |
CN102799743A (en) | Matching method for pure electric vehicle power system | |
CN113554337B (en) | Plug-in hybrid electric vehicle energy management strategy construction method integrating traffic information | |
CN115158094A (en) | Plug-in hybrid electric vehicle energy management method based on long-short-term SOC (System on chip) planning | |
CN115805840A (en) | Energy consumption control method and system for range-extending type electric loader | |
Park et al. | Intelligent energy management and optimization in a hybridized all-terrain vehicle with simple on–off control of the internal combustion engine | |
CN114148325B (en) | Method for managing predictive energy of heavy-duty hybrid commercial vehicle | |
Zhang et al. | Online updating energy management strategy based on deep reinforcement learning with accelerated training for hybrid electric tracked vehicles | |
CN108688476A (en) | Manage the method and system of vehicle driving range | |
CN110077389A (en) | A kind of plug-in hybrid electric automobile energy management method | |
Grossard et al. | An optimal energy-based approach for driving guidance of full electric vehicles | |
Halima et al. | Energy management of parallel hybrid electric vehicle based on fuzzy logic control strategies | |
Guo et al. | Deep reinforcement learning-based hierarchical energy control strategy of a platoon of connected hybrid electric vehicles through cloud platform | |
CN112606735B (en) | Control method for optimal driving and energy management of non-contact power supply train | |
Styler et al. | Active management of a heterogeneous energy store for electric vehicles | |
Schenker et al. | Optimization model for operation of battery multiple units on partly electrified railway lines |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |