CN114154729A - Energy management system and method for hybrid electric vehicle composite energy storage system - Google Patents


Info

Publication number
CN114154729A
CN114154729A
Authority
CN
China
Prior art keywords
power
vehicle
storage system
energy storage
super capacitor
Prior art date
Legal status: Pending
Application number
CN202111490603.0A
Other languages
Chinese (zh)
Inventor
李卫民
魏大钧
邵壮
王昌朋
Current Assignee
Shandong Zhongke Advanced Technology Research Institute Co ltd
Original Assignee
Shandong Zhongke Advanced Technology Research Institute Co ltd
Priority date
Filing date
Publication date
Application filed by Shandong Zhongke Advanced Technology Research Institute Co ltd filed Critical Shandong Zhongke Advanced Technology Research Institute Co ltd
Priority to CN202111490603.0A
Publication of CN114154729A

Classifications

    • G06Q10/04 — Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • B60L50/40 — Electric propulsion with power supplied within the vehicle using propulsion power supplied by capacitors
    • B60L50/60 — Electric propulsion with power supplied within the vehicle using power supplied by batteries
    • B60L50/70 — Electric propulsion with power supplied within the vehicle using power supplied by fuel cells
    • B60L58/10 — Methods or circuit arrangements for monitoring or controlling batteries, specially adapted for electric vehicles
    • B60L58/30 — Methods or circuit arrangements for monitoring or controlling fuel cells, specially adapted for electric vehicles
    • G06F18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06N3/084 — Backpropagation, e.g. using gradient descent
    • G06Q10/06315 — Needs-based resource requirements planning or analysis
    • G06Q50/06 — Electricity, gas or water supply
    • Y02E40/70 — Smart grids as climate change mitigation technology in the energy generation sector
    • Y02T10/70 — Energy storage systems for electromobility, e.g. batteries
    • Y02T90/40 — Application of hydrogen technology to transportation, e.g. using fuel cells
    • Y04S10/50 — Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Abstract

The invention relates to an energy management system and method for a hybrid electric vehicle composite energy storage system. The system comprises: an acquisition module for acquiring traffic situation data; a prediction module for predicting the vehicle speed with a neural network from the traffic situation data to obtain predicted vehicle speed information; a vehicle demanded power module for determining the vehicle demanded power from the predicted vehicle speed information; a distribution module for distributing the power of the power battery, the super capacitor and the fuel cell with a DDPG reinforcement learning algorithm according to the vehicle demanded power to obtain a power distribution result; and a control module for controlling the hybrid electric vehicle composite energy storage system according to the power distribution result. The invention optimizes energy distribution by coordinating the energy flow among the fuel cell, the power battery and the super capacitor.

Description

Energy management system and method for hybrid electric vehicle composite energy storage system
Technical Field
The invention relates to the field of energy management of automobile composite energy storage systems, in particular to an energy management system and method of a hybrid electric automobile composite energy storage system.
Background
In recent years, the problems of energy shortage and environmental pollution have become increasingly severe, and under the national carbon-peak and carbon-neutrality policies the traditional internal-combustion-engine automobile is gradually moving towards electrification. The lithium battery, with its high energy density but relatively low power density and limited cycle life, is generally used as the energy storage source of an electric automobile; the super capacitor offers high power density, low energy density and a long cycle life; the fuel cell system is clean and non-polluting and can achieve zero emission in a real sense. The hybrid electric vehicle composite energy storage system therefore combines the advantages of the lithium battery, the super capacitor and the fuel cell to achieve long driving range, low emission and low operating cost. However, a composite energy storage system raises a complex energy management problem: how to properly distribute the energy among its sources is a hot research topic. Reinforcement learning, a family of algorithms that has risen in recent years, is gradually being applied to energy management because of its fast computation and strong learning capability. However, conventional reinforcement learning can only operate off-line and cannot be applied on-line in real time.
Disclosure of Invention
The invention aims to provide an energy management system and method for a hybrid electric vehicle composite energy storage system that optimize energy distribution by coordinating the energy flow among the fuel cell, the power battery and the super capacitor.
In order to achieve the purpose, the invention provides the following scheme:
a hybrid electric vehicle hybrid energy storage system energy management system, comprising:
the acquisition module is used for acquiring traffic situation data;
the prediction module is used for predicting the vehicle speed by utilizing a neural network according to the traffic situation data to obtain predicted vehicle speed information;
the vehicle demand power module is used for determining vehicle demand power according to the predicted vehicle speed information;
the distribution module is used for distributing the power of a power battery, the power of a super capacitor and the power of a fuel cell by using a DDPG reinforcement learning algorithm according to the vehicle demanded power to obtain a power distribution result;
and the control module is used for controlling the hybrid power automobile composite energy storage system according to the power distribution result.
Optionally, the system further comprises:
and the storage module is used for storing the traffic people situation data.
Optionally, the system further comprises a neural network training module, wherein the neural network training module specifically comprises:
and the neural network training unit is used for training the back propagation neural network by taking the traffic situation data of a training set as input, taking the speed information of the training set as output, taking the error between an actual output value and a theoretical output value and the number of the training set as loss functions and taking a Sigmoid function as an activation function to obtain the trained neural network.
Optionally, the vehicle demanded power module specifically includes:
a vehicle information acquisition unit for acquiring vehicle information; the vehicle information comprises vehicle mass, vehicle frontal area and mechanical transmission efficiency;
an environment information acquisition unit for acquiring environment information; the environment information is a slope angle of a road;
and the vehicle required power determining unit is used for determining the vehicle required power according to the vehicle information, the environment information and the predicted vehicle speed information.
Optionally, the allocation module specifically includes:
the DDPG reinforcement learning algorithm training unit is used for training the DDPG network, taking the lithium battery SOC, the super capacitor SOC, the fuel cell current, the predicted vehicle speed information and the vehicle demanded power as state parameters of the DDPG reinforcement learning algorithm, the lithium battery output power factor and the super capacitor output power factor as action parameters, and a function determined from the lithium battery SOC and the super capacitor SOC as the reward function, to obtain the trained DDPG reinforcement learning algorithm;
and the distribution unit is used for distributing the power of the power battery, the super capacitor and the fuel cell with the trained DDPG reinforcement learning algorithm according to the vehicle demanded power to obtain a power distribution result.
A hybrid electric vehicle composite energy storage system energy management method comprises the following steps:
acquiring traffic situation data;
predicting the vehicle speed by utilizing a neural network according to the traffic situation data to obtain predicted vehicle speed information;
determining the required power of the vehicle according to the predicted vehicle speed information;
distributing the power of a power battery, a super capacitor and a fuel cell by using a DDPG reinforcement learning algorithm according to the vehicle demanded power to obtain a power distribution result;
and controlling the hybrid power automobile composite energy storage system according to the power distribution result.
Optionally, after acquiring the traffic situation data, the method further comprises:
storing the traffic situation data.
Optionally, the training process of the neural network specifically includes:
and taking the traffic situation data of a training set as input, taking the vehicle speed information of the training set as output, taking the error between an actual output value and a theoretical output value and the number of the training sets as loss functions, and taking a Sigmoid function as an activation function to train the back propagation neural network to obtain the trained neural network.
Optionally, the determining the required power of the vehicle according to the predicted vehicle speed information specifically includes:
acquiring vehicle information; the vehicle information comprises vehicle mass, vehicle frontal area and mechanical transmission efficiency;
acquiring environmental information; the environment information is a slope angle of a road;
and determining the required power of the vehicle according to the vehicle information, the environment information and the predicted vehicle speed information.
Optionally, distributing the power of a power battery, a super capacitor and a fuel cell by using a DDPG reinforcement learning algorithm according to the vehicle demanded power to obtain a power distribution result specifically includes:
training the DDPG network with the lithium battery SOC, the super capacitor SOC, the fuel cell current, the predicted vehicle speed information and the vehicle demanded power as state parameters of the DDPG reinforcement learning algorithm, the lithium battery output power factor and the super capacitor output power factor as action parameters, and a function determined from the lithium battery SOC and the super capacitor SOC as the reward function, to obtain the trained DDPG reinforcement learning algorithm;
and distributing the power of the power battery, the super capacitor and the fuel cell with the trained DDPG reinforcement learning algorithm according to the vehicle demanded power to obtain a power distribution result.
According to the specific embodiments provided by the invention, the invention discloses the following technical effects:
The acquisition module acquires traffic situation data; the prediction module predicts the vehicle speed with a neural network from the traffic situation data to obtain predicted vehicle speed information; the vehicle demanded power module determines the vehicle demanded power from the predicted vehicle speed information; the distribution module distributes the power of the power battery, the super capacitor and the fuel cell with a DDPG reinforcement learning algorithm according to the vehicle demanded power to obtain a power distribution result; and the control module controls the hybrid electric vehicle composite energy storage system according to the power distribution result. The invention optimizes energy distribution by coordinating the energy flow among the fuel cell, the power battery and the super capacitor.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the embodiments are briefly described below. The drawings in the following description are only some embodiments of the present invention; other drawings can be obtained from them by those skilled in the art without inventive effort.
FIG. 1 is a schematic diagram of an energy management system of a hybrid energy storage system of a hybrid electric vehicle according to the present invention;
FIG. 2 is a diagram of a real-time online energy management architecture provided by the present invention;
FIG. 3 is a flow chart of energy management of the hybrid energy storage system of the hybrid electric vehicle according to the present invention;
FIG. 4 is a flowchart of BP neural network training provided by the present invention;
FIG. 5 is a flowchart of the DDPG reinforcement learning training process provided by the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
As shown in fig. 1, the energy management system of a hybrid energy storage system of a hybrid electric vehicle provided by the invention comprises:
the acquiring module 101 is configured to acquire traffic situation data.
And the prediction module 102 is configured to predict a vehicle speed by using a neural network according to the traffic situation data to obtain predicted vehicle speed information.
And the vehicle required power module 103 is used for determining the vehicle required power according to the predicted vehicle speed information.
And the distribution module 104 is configured to distribute the power of the power battery, the super capacitor and the fuel cell by using a DDPG reinforcement learning algorithm according to the vehicle demanded power to obtain a power distribution result.
And the control module 105 is used for controlling the hybrid energy storage system of the hybrid electric vehicle according to the power distribution result.
In practical application, the hybrid electric vehicle composite energy storage system energy management system further comprises: a storage module for storing the traffic situation data.
In practical application, the hybrid electric vehicle composite energy storage system energy management system further comprises a neural network training module, which specifically comprises: a neural network training unit for training the back-propagation neural network, taking the traffic situation data of a training set as input, the vehicle speed information of the training set as output, a loss function determined by the error between the actual and theoretical output values and the number of training samples, and a Sigmoid function as the activation function, to obtain the trained neural network.
In practical applications, the vehicle power demand module 103 specifically includes:
a vehicle information acquisition unit for acquiring vehicle information; the vehicle information includes vehicle mass, vehicle frontal area, and mechanical transmission efficiency.
An environment information acquisition unit for acquiring environment information; the environment information is a slope angle of the road.
And the vehicle required power determining unit is used for determining the vehicle required power according to the vehicle information, the environment information and the predicted vehicle speed information.
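As an illustration, the vehicle demanded power can be computed from exactly the quantities these units collect: vehicle mass, frontal area, mechanical transmission efficiency, road slope angle and the predicted speed. The sketch below assumes the standard longitudinal-dynamics form; the rolling-resistance coefficient, drag coefficient and rotating-mass factor are placeholder values, not taken from the patent.

```python
import math

G = 9.81      # gravitational acceleration, m/s^2
RHO = 1.2258  # air density, kg/m^3

def demanded_power(m, frontal_area, eta, slope, v, a,
                   f_roll=0.015, c_d=0.3, delta=1.05):
    """Vehicle demanded power in watts for mass m (kg), frontal area (m^2),
    mechanical transmission efficiency eta, road slope angle (rad),
    predicted speed v (m/s) and acceleration a (m/s^2).
    f_roll, c_d and delta are illustrative placeholder coefficients."""
    rolling = m * G * f_roll * math.cos(slope)       # rolling resistance
    grade = m * G * math.sin(slope)                  # grade resistance
    aero = 0.5 * RHO * c_d * frontal_area * v ** 2   # aerodynamic drag
    accel = delta * m * a                            # acceleration resistance
    return (rolling + grade + aero + accel) * v / eta
```

A steeper slope or higher predicted speed raises the demanded power, which is what the DDPG distribution stage then splits among the three sources.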
In practical applications, the allocating module 104 specifically includes:
and the DDRG reinforcement learning algorithm training unit is used for training the DDRG network by taking the lithium battery SOC, the super capacitor SOC, the fuel cell current, the predicted vehicle speed information and the vehicle required power as state parameters of the DDRG reinforcement learning algorithm, taking a lithium battery output power factor and a super capacitor output power factor as dynamic parameters and taking a function determined according to the lithium battery SOC and the super capacitor SOC as a reward function to obtain the trained DDRG reinforcement learning algorithm.
And the distribution unit is used for distributing the power of the power battery, the power of the super capacitor and the power of the fuel cell by utilizing the trained DDRG reinforcement learning algorithm according to the power required by the vehicle to obtain a power distribution result.
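A hedged sketch of how the two output power factors chosen by the trained agent could map the vehicle demanded power onto the three sources. The split rule below (battery first, super capacitor next, fuel cell takes the remainder) is an illustrative assumption, not the patent's exact formulation:

```python
def allocate_power(p_demand, k_bat, k_sc):
    """k_bat, k_sc in [0, 1]: lithium-battery and super-capacitor output
    power factors (the agent's action); the fuel cell covers the rest.
    The ordering of the split is an assumption for illustration."""
    p_bat = k_bat * p_demand
    p_sc = k_sc * (p_demand - p_bat)
    p_fc = p_demand - p_bat - p_sc   # power balance: demand is always met
    return p_bat, p_sc, p_fc
```

For example, `allocate_power(50.0, 0.4, 0.5)` sends 20 kW to the battery, 15 kW to the super capacitor and the remaining 15 kW to the fuel cell, so the three outputs always sum to the demand.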
The framework of the hybrid electric vehicle composite energy storage system energy management system provided by the invention is shown in fig. 2 and mainly comprises a data module, an energy distribution module and a hybrid electric vehicle module.
The data module comprises three parts of data crawling, data storage and vehicle speed prediction; i.e. the data module comprises an acquisition module 101, a storage module and a prediction module 102.
The energy distribution module comprises strategy design plus four parts: state selection, action selection, the reward function and reinforcement training. The strategy design corresponds to the vehicle demanded power module 103; state selection, action selection, the reward function and reinforcement training correspond to the DDPG reinforcement learning algorithm training unit and the distribution unit.
The hybrid electric vehicle module comprises a composite energy storage system and two parts of speed and acceleration.
The data module provides the energy distribution module with the speed sequence for the coming 2 min; upon receiving it, the energy distribution module distributes the energy and outputs the motor power, the super capacitor power and the fuel cell power, where the output motor power equals the lithium battery power.
As shown in fig. 3, the hybrid energy storage system in the hybrid electric vehicle module receives the motor power, the super capacitor power and the fuel cell power and then outputs a driving force to drive the hybrid electric vehicle to generate speed and acceleration.
Data module
1. Data crawling
Log in to the Amap (AutoNavi) open platform and, using the traffic-situation HTTP interface provided by the Amap Web-service API, write a crawler program in Python to crawl the traffic situation data, which comprise the travel time, the road section travelled, the vehicle speed and the current longitude and latitude of the vehicle. The method mainly comprises the following steps:
First, apply for a Web-service API key.
Second, splice the HTTP request URL; the key applied for in the first step must be sent along as a required parameter.
Third, receive the data returned by the HTTP request (in JSON or XML format) and parse them.
The Amap traffic situation data are updated every 2 min, so to achieve real-time online energy management the data acquisition interval is likewise set to 2 min.
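The three steps above can be sketched with the standard library as follows. The endpoint path is an assumption modelled on the Amap Web-service traffic-status API, and the key is the one applied for in step 1:

```python
import json
from urllib import parse, request

def build_request_url(key, rectangle):
    """Step 2: splice the HTTP request URL; the key is a required parameter.
    The endpoint path is an assumption based on the Amap Web-service API."""
    params = parse.urlencode({"key": key,
                              "rectangle": rectangle,  # "lon1,lat1;lon2,lat2"
                              "extensions": "all"})
    return "https://restapi.amap.com/v3/traffic/status/rectangle?" + params

def crawl_once(key, rectangle, timeout=10):
    """Step 3: receive the returned JSON and parse it."""
    with request.urlopen(build_request_url(key, rectangle), timeout=timeout) as resp:
        return json.load(resp)

# Amap refreshes its traffic data every 2 min, so a polling loop would call
# crawl_once() and then sleep 120 s before the next request.
```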
2. Data storage
The collected traffic situation data needs to be updated and stored in real time. The real-time updating and storing process comprises the following steps:
Step one, write code that stores each newly acquired batch of data into a MongoDB database, while the original data in the database remain unchanged.
Step two, as new data keep being written, the amount of data in the MongoDB database accumulates continuously; that is, the real-time traffic situation data collected at different time points are continuously appended to the database.
The most recently crawled data can be inspected by opening the MongoDB visualization tool Studio 3T, and a Python program can be written to query and retrieve particular traffic data. The stored vehicle speed data are then extracted and processed (i.e., Gaussian filtering is applied to the stored data to remove the noise introduced by the sensor when collecting the vehicle speed).
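The Gaussian-filtering step can be sketched in pure Python as below (in practice the speed sequence would first be read back from MongoDB, e.g. with pymongo). The clamped-edge handling and the default σ are assumptions for illustration:

```python
import math

def gaussian_kernel(sigma, radius):
    """Normalized discrete Gaussian weights over [-radius, radius]."""
    k = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def gaussian_filter_1d(speeds, sigma=1.0):
    """Smooth a stored vehicle-speed sequence to suppress sensor noise.
    Boundary samples are clamped (an assumption; other edge modes exist)."""
    radius = max(1, int(3 * sigma))
    kernel = gaussian_kernel(sigma, radius)
    n = len(speeds)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - radius, 0), n - 1)  # clamp at the edges
            acc += w * speeds[idx]
        out.append(acc)
    return out
```

Because the kernel is normalized, a constant speed trace passes through unchanged, while isolated spikes are spread out and attenuated.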
3. Vehicle speed prediction
A Back-Propagation Neural Network (BPNN, or BP neural network) is adopted to predict the speed sequence for the coming 2 min; the BP neural network consists of an input layer, a hidden layer and an output layer.
The method comprises the following steps: training BP neural network
The BP neural network is first trained, with the Sigmoid function as the activation function. The Sigmoid function is monotonically increasing (as is its inverse), is widely used as a neural-network activation function, and maps its input to the interval (0, 1). Its expression is:
S(x) = 1 / (1 + e^(−x))
where x is the input of the neuron and S(x) the corresponding output value.
The number of hidden-layer neurons N_h is determined by:
N_h = N_s / (α · (N_i + N_o))
where N_i is the number of input-layer neurons (the input layer of the present invention has five neurons, one per historical vehicle speed of the preceding 5 seconds, so N_i = 5); N_o is the number of output-layer neurons (one, corresponding to the predicted vehicle speed of the next second, so N_o = 1); N_s is the number of samples in the training set (each 2 min crawl of the data module yields 120 data points, so N_s = 120); and α is a weight variable, taken here as α = 5. This gives N_h = 120 / (5 × 6) = 4.
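Using the rule of thumb N_h = N_s / (α · (N_i + N_o)) reconstructed from the values given here (an assumption consistent with the 4 hidden neurons used during initialization), the hidden-layer size works out as:

```python
def hidden_nodes(n_i, n_o, n_s, alpha):
    """Rule-of-thumb hidden-layer size: N_h = N_s / (alpha * (N_i + N_o)).
    The formula is a reconstruction from the garbled equation in the text."""
    return round(n_s / (alpha * (n_i + n_o)))

# Values from the text: 5 inputs, 1 output, 120 samples per 2-min crawl, alpha = 5
n_h = hidden_nodes(5, 1, 120, 5)  # 120 / (5 * 6) = 4
```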
On this basis, the Gaussian-filtered speed data are fed into the BP neural network for training: each 5 s of historical speed is input in turn, and the speed at the 6th second is output.
The specific training process is shown in fig. 4:
(1) First, the network is initialized: the number of input-layer neurons is set to 5, hidden-layer neurons to 4 and output-layer neurons to 1; the initial weight of each neuron is set to 1 and the connection threshold between neurons is set to 1. Network initialization is then complete.
(2) The speed samples within the 2 min window are taken and input sequentially for forward propagation (the 5 vehicle speeds within each 5 s window form the input layer; in forward propagation, the 5 input-layer values pass through the hidden layer to the output layer).
(3) The 5 input quantities for the 5 s window pass through the input layer, then the hidden layer, and finally reach the output layer. In this process, the input and output of each layer of neurons are obtained from the following formulas:
Input:

Neu_in = Σ_{i=1}^{n} ω_i · x_i − θ

Output: y = f(Neu_in)

where Neu_in is the neuron's input value, n is the number of inputs to the neuron, ω_i is the weight on the ith input, x_i is the value of the ith input, θ is the neuron's threshold, y is the neuron's output, and f is the Sigmoid function.
(4) The difference between the actual output value of the output layer and the theoretical output value gives the output error.
(5) The error is propagated backwards and the network parameters are adjusted.
The network parameter adjustment process is as follows:
The parameters to be adjusted are the weight and threshold of each neuron (ω_i and θ in the formulas above). The actual output of the output-layer neuron is compared with the theoretical output; when the error does not meet the requirement (the allowable speed training error in the present invention is 0.3 m/s), the error is propagated backwards (transmitted through the hidden layer to the input layer and apportioned to all neurons of each layer). The error signal received by each neuron serves as the basis for adjusting its weight, and the connection thresholds between neurons are adjusted so that the error decreases; training is repeated until the parameters (the weight and threshold of each neuron) corresponding to the minimum error are determined.
(6) Judging training end conditions
Two conditions are set in the present invention to judge whether training is finished:
First, training ends when the error between the output of the output-layer neuron and the actual result meets the requirement (the allowable error is set to 0.3 m/s in the present invention). Second, training ends when the 2 min of training samples are used up. After training, the neural network reaches a relatively stable state.
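Steps (1)–(6) can be sketched with NumPy as below — a minimal illustration on synthetic data. Two deliberate deviations, both assumptions of this sketch: small random initial weights are used instead of all-ones (identical initial weights would leave the hidden units indistinguishable and untrainable), and training runs for a fixed number of epochs rather than until the 0.3 m/s tolerance is met:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)

# Synthetic 2 min of filtered 1 Hz speed data (m/s)
t = np.arange(120)
speed = 15 + 5 * np.sin(2 * np.pi * t / 60)

# Sliding windows: 5 s of history -> the speed at the 6th second
X = np.array([speed[i:i + 5] for i in range(115)])
y = speed[5:]

# Normalize into (0, 1) so the Sigmoid output layer can represent speeds
v_max = 30.0
Xn, yn = X / v_max, y / v_max

# 5-4-1 network with small random initial weights (an assumption of this sketch)
W1 = rng.normal(0.0, 0.5, (5, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 0.5, (4, 1)); b2 = np.zeros(1)

def forward(Xb):
    h = sigmoid(Xb @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

_, out0 = forward(Xn)
rmse0 = float(np.sqrt(np.mean((out0[:, 0] - yn) ** 2))) * v_max  # error before training

lr = 0.5
for epoch in range(2000):
    h, out = forward(Xn)                     # forward propagation
    err = out[:, 0] - yn                     # output error
    d_out = err[:, None] * out * (1 - out)   # back-propagated delta, output layer
    d_h = (d_out @ W2.T) * h * (1 - h)       # delta apportioned to hidden layer
    W2 -= lr * h.T @ d_out / len(Xn); b2 -= lr * d_out.mean(0)
    W1 -= lr * Xn.T @ d_h / len(Xn); b1 -= lr * d_h.mean(0)

_, out = forward(Xn)
rmse = float(np.sqrt(np.mean((out[:, 0] - yn) ** 2))) * v_max  # error after training
```

After training, the root-mean-square speed error is substantially lower than before training, though this sketch does not guarantee the 0.3 m/s tolerance of the patent.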
Step two: predicting the future 2min vehicle speed by using the trained neural network
After training of the BP neural network is finished, the vehicle speed sequence for the next 2 min is predicted. The network parameters used in prediction (the numbers of input-layer, hidden-layer and output-layer neurons, the neuron weights and the connection thresholds) are kept consistent with the parameters obtained at the end of training. Specifically, the most recent 5 s of vehicle speed are selected as the input of the BP neural network and the speed 1 s ahead is predicted as output. This process is repeated until the vehicle speed for the next 2 min has been predicted, at which point prediction is complete. The prediction process is as follows:
X_t = [v_1, v_2, v_3, v_4, v_5]
v_pre = F_BP(X_t)

where X_t is the input to the input-layer neurons of the BP neural network, v_1, v_2, v_3, v_4, v_5 are the historical 5 s of vehicle speed, v_pre is the predicted vehicle speed, and F_BP denotes the BP neural network.
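The receding-horizon prediction loop can be sketched as follows; `f_bp_stub` is a hypothetical stand-in for the trained network F_BP (here a simple trend extrapolation), used only to make the loop runnable:

```python
from collections import deque

def predict_horizon(f_bp, history, horizon=120):
    """Roll the one-step predictor forward: each predicted speed is fed
    back into the 5-sample input window until `horizon` seconds are covered."""
    window = deque(history[-5:], maxlen=5)
    out = []
    for _ in range(horizon):
        v_next = f_bp(list(window))
        out.append(v_next)
        window.append(v_next)
    return out

# Hypothetical stand-in for the trained BP network F_BP
def f_bp_stub(window):
    return window[-1] + 0.5 * (window[-1] - window[-2])

pred = predict_horizon(f_bp_stub, [10.0, 10.5, 11.0, 11.5, 12.0])
print(len(pred))  # 120 one-second predictions = 2 min horizon
```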
Energy distribution module
1. Policy design
Reinforcement learning algorithms describe and solve the problem of an agent learning a policy that maximizes return, or achieves a specific goal, while interacting with an environment. The deep deterministic policy gradient (DDPG) algorithm is an actor-critic (AC) algorithm obtained by extending the deterministic policy gradient (DPG) method. Both the state space and the action space of DDPG are continuous, which avoids the curse of dimensionality. Thanks to the target networks and the Critic network, the learning process is more stable and convergence is better guaranteed. In recent years, owing to its strong learning ability and good results, the method has gradually been applied in the field of energy management.
The purpose of energy management for the hybrid electric vehicle composite energy storage system is to reasonably distribute the output power among the lithium battery, the super capacitor and the fuel cell while guaranteeing normal vehicle operation, so that each component works in its optimal state and its service life is extended. At the same time, the efficiency of the DC/DC power converter is optimized, the energy utilization rate is improved and the operating cost is reduced. In this section:
Step one: the vehicle required power is solved from the 2 min of vehicle speed information obtained by the data module according to the following formula:
P_dem = (v_pre / η) · (m·g·f·cosβ + m·g·sinβ + (1/2)·ρ·C_D·A·v_pre² + δ·m·(dv/dt))
where P_dem is the vehicle required power, m is the vehicle mass, g is the gravitational acceleration, β is the road slope angle, f is the rolling resistance coefficient, C_D is the air resistance coefficient, ρ is the air density, A is the vehicle frontal area, v_pre is the predicted vehicle speed, η is the mechanical transmission efficiency and δ is the rotating-mass conversion factor.
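The required-power calculation can be sketched as follows. All numeric parameter values (vehicle mass, drag coefficient, air density and so on) are illustrative assumptions, not values from the patent, and the standard longitudinal-dynamics form is assumed:

```python
import math

def demand_power(v, dv_dt, m=1500.0, beta=0.0, f=0.015, c_d=0.30,
                 area=2.2, rho=1.2, eta=0.90, delta=1.1, g=9.81):
    """Longitudinal-dynamics estimate of P_dem (W).
    Parameter defaults are illustrative assumptions."""
    rolling = m * g * f * math.cos(beta)        # rolling resistance
    grade = m * g * math.sin(beta)              # grade resistance
    aero = 0.5 * rho * c_d * area * v ** 2      # aerodynamic drag
    inertial = delta * m * dv_dt                # acceleration resistance
    return v * (rolling + grade + aero + inertial) / eta

p = demand_power(v=20.0, dv_dt=0.5)  # 20 m/s on flat road, accelerating 0.5 m/s^2
print(round(p))  # 26758
```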
Step two: the power of the power battery, the super capacitor and the fuel cell is reasonably distributed using the DDPG reinforcement learning algorithm, so that each energy storage element reaches its optimal working state.
The second step specifically comprises:
1. state parameter selection
The state parameters of the DDPG reinforcement learning algorithm are selected as follows: for the hybrid electric vehicle composite energy storage system, the present invention sets the input states to the lithium battery SOC, the super capacitor SOC, the fuel cell current I_fc, the predicted vehicle speed and the required power. The lithium battery SOC and the super capacitor SOC are transmitted in real time by the on-board BMS.
s = [SOC_batt, SOC_sc, I_fc, v_pre, P_dem]
where SOC_batt is the SOC of the lithium battery, SOC_sc is the SOC of the super capacitor, v_pre is the predicted vehicle speed, P_dem is the required power and I_fc is the fuel cell current.
2. Action parameter selection
The action parameters of the DDPG reinforcement learning algorithm in step 1 are selected as follows: the output power factors of the lithium battery and the super capacitor are defined as the control actions, and the output power of the fuel cell is calculated from the formulas below.
a = [μ_1, μ_2]

P_batt = μ_1 · P_dem
P_sc = μ_2 · P_dem
P_fc = P_dem − P_batt − P_sc
where a is the selected action, μ_1 is the lithium battery output power factor, μ_2 is the super capacitor output power factor, P_batt is the lithium battery output power, P_sc is the super capacitor output power, P_fc is the fuel cell output power and P_dem is the required power.
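The action-to-power mapping can be sketched directly from the formulas above:

```python
def split_power(p_dem, mu1, mu2):
    """Map the DDPG action a = [mu1, mu2] to component powers; the fuel
    cell covers the remainder so the three sources sum to P_dem."""
    p_batt = mu1 * p_dem
    p_sc = mu2 * p_dem
    p_fc = p_dem - p_batt - p_sc
    return p_batt, p_sc, p_fc

p_batt, p_sc, p_fc = split_power(10_000.0, mu1=0.5, mu2=0.3)
print(p_batt, p_sc, p_fc)  # 5000.0 3000.0 2000.0
```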
3. Designing reward functions
The reward function of the DDPG reinforcement learning algorithm in step 1 is selected as follows. The reward function serves the following purpose:
one aim of the invention is to improve the energy efficiency level of the hybrid electric vehicle composite energy storage system under the condition of meeting the constraint of the lithium battery and the super capacitor SOC. The reward function is used for quantitatively evaluating the target, the better the target is completed, the larger the output value of the reward function is, and conversely, the smaller the output value of the reward function is. In the subsequent DDPG reinforcement learning algorithm, the output action instruction is adjusted according to the output value of the reward function in the reinforcement training process until the action instruction output by the DDPG reinforcement learning algorithm can enable the reward function to obtain the maximum value.
The reward function is designed according to the following:
For the composite energy storage system of the electric vehicle, besides maximizing the efficiency of the composite energy storage system, the lithium battery and the super capacitor must satisfy their state-of-charge constraints (in the present invention, the minimum SOC constraint of the power battery, SOC_bat-min, is set to 0.3, and the minimum SOC constraint of the super capacitor, SOC_sc-min, is also set to 0.3). The reward function is designed as follows.
η = (η_batt·η_DC-Batt·P_batt + η_sc·η_DC-sc·P_sc + η_fc·η_DC-fc·P_fc) / P_dem

r = η − β_1·max(0, SOC_bat-min − SOC_batt) − β_2·max(0, SOC_sc-min − SOC_sc)
where β_1 and β_2 are the penalty factors of the corresponding terms, set to 2.5 and 3 respectively; η_batt is the energy conversion efficiency of lithium battery discharge; η_sc is the energy conversion efficiency of super capacitor discharge; η_fc is the energy conversion efficiency of the fuel cell; η_DC-Batt, η_DC-sc and η_DC-fc are the energy conversion efficiencies of the bidirectional DC power converters corresponding to the lithium battery, the super capacitor and the fuel cell respectively; η is the efficiency of the whole composite energy storage system; and m_H2 is the hydrogen consumption of the fuel cell. The larger η is and the smaller m_H2 is, the higher the energy efficiency level of the composite energy storage system. Meanwhile, the power battery SOC constraint and the super capacitor SOC constraint are introduced into the reward function to obtain the reward function above; the larger the reward function value, the better.
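A minimal sketch of this reward design, assuming a power-weighted efficiency form and illustrative component efficiencies (the penalty factors 2.5 and 3 and the 0.3 SOC floors are from the text; the η values are assumptions):

```python
def reward(p_batt, p_sc, p_fc, p_dem,
           soc_batt, soc_sc,
           eta_batt=0.95, eta_sc=0.98, eta_fc=0.55,
           eta_dc_batt=0.97, eta_dc_sc=0.97, eta_dc_fc=0.95,
           soc_bat_min=0.3, soc_sc_min=0.3, beta1=2.5, beta2=3.0):
    """Power-weighted system efficiency minus SOC-violation penalties.
    Efficiency values are illustrative assumptions."""
    eta = (eta_batt * eta_dc_batt * p_batt
           + eta_sc * eta_dc_sc * p_sc
           + eta_fc * eta_dc_fc * p_fc) / p_dem
    penalty = (beta1 * max(0.0, soc_bat_min - soc_batt)
               + beta2 * max(0.0, soc_sc_min - soc_sc))
    return eta - penalty

r_ok = reward(5000, 3000, 2000, 10000, soc_batt=0.6, soc_sc=0.7)
r_low = reward(5000, 3000, 2000, 10000, soc_batt=0.2, soc_sc=0.7)
assert r_low < r_ok  # violating the battery SOC floor reduces the reward
```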
4. Training DDPG network
With the DDPG reinforcement learning algorithm of step 1 and the state parameters, action parameters and reward function selected in steps 1 to 3, reinforcement training of the DDPG network is carried out. DDPG comprises an Actor network and a Critic network: the Actor network outputs action values, and the Critic network evaluates the actions output by the Actor network. Under the DDPG framework, the DDPG reinforcement learning algorithm is regarded as the agent, and the hybrid electric vehicle that interacts with it is regarded as the environment. The specific reinforcement training method is as follows, and the flow chart is shown in fig. 5:
Step one: an initial state s is input, and an action value a is output under the combined action of the Actor network and the Critic network.
Step two: the action value a gets the reward and the next time state s' by interacting with the environment. The specific process comprises the following steps: and the action a is transmitted to the hybrid electric vehicle (environment), the hybrid electric vehicle executes the action a, a reward function value is obtained, and a state s' at the next moment is obtained (the lithium battery SOC at the next moment, the super capacitor SOC at the next moment, the current of the fuel battery, the vehicle speed and the required power at the next moment).
Step three: and C, simultaneously inputting the reward function value obtained in the step two and the state s' at the next moment into the Actor network and the Critic network.
Step four: and updating the output action a by the Actor network and the Critic network according to the received reward function value and the action s' at the next moment.
Step five: and (4) repeating the first step to the fourth step in the step (4) until the maximum reward function value is obtained, and ending the strengthening training for the DDPG network.
Step six: and inputting the action a corresponding to the maximum reward function value into the hybrid energy storage system of the hybrid electric vehicle, and driving the hybrid electric vehicle to operate.
In order to explore more actions when the agent interacts with the environment, an Ornstein-Uhlenbeck (OU) random process is introduced as random noise in DDPG.
The Ornstein-Uhlenbeck process is well correlated in time, which allows the agent to explore environments with momentum properties effectively. In addition, the actor of DDPG stores transition data in an experience replay buffer, and mini-batches are randomly sampled from this buffer during training, which reduces the correlation between the sampled data.
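The OU exploration noise and the experience replay described above can be sketched as follows (the θ and σ values are common defaults, not values from the patent, and the actor/critic networks themselves are omitted):

```python
import random
from collections import deque

class OUNoise:
    """Ornstein-Uhlenbeck exploration noise: temporally correlated and
    mean-reverting (theta/sigma are common defaults, not from the patent)."""
    def __init__(self, size, mu=0.0, theta=0.15, sigma=0.2, seed=0):
        self.mu, self.theta, self.sigma = mu, theta, sigma
        self.rng = random.Random(seed)
        self.state = [mu] * size

    def sample(self):
        self.state = [x + self.theta * (self.mu - x)
                      + self.sigma * self.rng.gauss(0, 1)
                      for x in self.state]
        return list(self.state)

class ReplayBuffer:
    """Experience replay: store transitions, sample de-correlated mini-batches."""
    def __init__(self, capacity=10_000, seed=0):
        self.buf = deque(maxlen=capacity)
        self.rng = random.Random(seed)

    def push(self, s, a, r, s_next):
        self.buf.append((s, a, r, s_next))

    def sample(self, batch_size):
        return self.rng.sample(self.buf, batch_size)

noise = OUNoise(size=2)            # one noise channel per action [mu1, mu2]
buffer = ReplayBuffer()
for t in range(100):
    a = [0.5 + n for n in noise.sample()]        # explored action
    buffer.push(s=t, a=a, r=0.0, s_next=t + 1)   # dummy transition
batch = buffer.sample(32)
```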
Third, hybrid vehicle module
1. Composite energy storage system
The lithium battery output power, super capacitor power and fuel cell power for the next 2 min, obtained by the optimization of the energy distribution module, are input as references into the hybrid electric vehicle composite energy storage system, which executes the optimization result output by the energy distribution module.
2. Velocity, acceleration
The real vehicle executes the optimization result output by the energy distribution module and generates information such as speed and acceleration. After running for 2 min, the cycle period ends and the next cycle begins (first the data module executes the data crawling, data storage and vehicle speed prediction parts; then the energy distribution module executes the strategy design, state selection, action selection, reward function and network training parts; finally the hybrid electric vehicle module executes the composite energy storage system and speed/acceleration parts).
The invention provides an energy management method of a hybrid electric vehicle composite energy storage system, which comprises the following steps:
and acquiring traffic situation data.
And predicting the vehicle speed by utilizing a neural network according to the traffic situation data to obtain predicted vehicle speed information.
And determining the required power of the vehicle according to the predicted vehicle speed information.
And distributing the power of the power battery, the power of the super capacitor and the power of the fuel cell by utilizing a DDPG reinforcement learning algorithm according to the required power of the vehicle to obtain a power distribution result.
And controlling the hybrid power automobile composite energy storage system according to the power distribution result.
In practical applications, after the acquiring the traffic situation data, the method further includes:
and storing the traffic people situation data.
In practical application, the training process of the neural network specifically includes:
and taking the traffic situation data of a training set as input, taking the vehicle speed information of the training set as output, taking the error between an actual output value and a theoretical output value and the number of the training sets as loss functions, and taking a Sigmoid function as an activation function to train the back propagation neural network to obtain the trained neural network.
In practical application, the determining the required power of the vehicle according to the predicted vehicle speed information specifically includes:
acquiring vehicle information; the vehicle information includes vehicle mass, vehicle frontal area, and mechanical transmission efficiency.
Acquiring environmental information; the environment information is a slope angle of the road.
And determining the required power of the vehicle according to the vehicle information, the environment information and the predicted vehicle speed information.
In practical application, the distribution of the power of the power battery, the power of the super capacitor and the power of the fuel cell by using the DDPG reinforcement learning algorithm according to the vehicle required power to obtain the power distribution result specifically includes:
and training the DDRG network by taking the lithium battery SOC, the super capacitor SOC, the fuel battery current, the predicted vehicle speed information and the vehicle required power as state parameters of the DDRG reinforcement learning algorithm, taking a lithium battery output power factor and a super capacitor output power factor as dynamic parameters and taking a function determined according to the lithium battery SOC and the super capacitor SOC as a reward function to obtain the trained DDRG reinforcement learning algorithm.
The power of the power battery, the power of the super capacitor and the power of the fuel cell are then distributed by the trained DDPG reinforcement learning algorithm according to the vehicle required power to obtain the power distribution result.
The invention realizes the real-time online energy distribution task of the fuel cell-power battery-super capacitor composite energy storage system of a hybrid electric vehicle. Real-time traffic situation data are obtained through the open HTTP interface of the traffic-data API, short-term vehicle speed prediction is performed by a BP neural network to obtain vehicle speed data for the next 2 min, and reasonable energy distribution is carried out on the prediction data by a reinforcement-learning strategy that takes maximum energy utilization as the reward function. The resulting energy distribution is sent to the vehicle for real-time control; after 2 min, new traffic situation data are obtained and vehicle speed prediction, strategy updating and energy distribution are performed again. Combined with real-time data, the invention can reasonably and effectively coordinate the energy distribution among the fuel cell, the power battery and the super capacitor, solves the problem that traditional reinforcement learning algorithms can only optimize offline, and achieves optimal energy distribution to the greatest extent.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.

Claims (10)

1. An energy management system for a hybrid electric vehicle composite energy storage system, characterized by comprising:
the acquisition module is used for acquiring traffic situation data;
the prediction module is used for predicting the vehicle speed by utilizing a neural network according to the traffic situation data to obtain predicted vehicle speed information;
the vehicle demand power module is used for determining vehicle demand power according to the predicted vehicle speed information;
the distribution module is used for distributing power of a power battery, power of a super capacitor and power of a fuel cell by using a DDPG reinforcement learning algorithm according to the required power of the vehicle to obtain a power distribution result;
and the control module is used for controlling the hybrid power automobile composite energy storage system according to the power distribution result.
2. The hybrid vehicle composite energy storage system energy management system of claim 1, further comprising:
and the storage module is used for storing the traffic people situation data.
3. The hybrid electric vehicle composite energy storage system energy management system of claim 1, further comprising a neural network training module, wherein the neural network training module specifically comprises:
and the neural network training unit is used for training the back propagation neural network by taking the traffic situation data of a training set as input, taking the speed information of the training set as output, taking the error between an actual output value and a theoretical output value and the number of the training set as loss functions and taking a Sigmoid function as an activation function to obtain the trained neural network.
4. The hybrid energy storage system energy management system of claim 1, wherein the vehicle demand power module specifically comprises:
a vehicle information acquisition unit for acquiring vehicle information; the vehicle information comprises vehicle mass, vehicle frontal area and mechanical transmission efficiency;
an environment information acquisition unit for acquiring environment information; the environment information is a slope angle of a road;
and the vehicle required power determining unit is used for determining the vehicle required power according to the vehicle information, the environment information and the predicted vehicle speed information.
5. The hybrid energy storage system energy management system of claim 1, wherein the distribution module specifically comprises:
the DDPG reinforcement learning algorithm training unit is used for training a DDPG network by taking a lithium battery SOC, a super capacitor SOC, a fuel cell current, the predicted vehicle speed information and the vehicle required power as state parameters of the DDPG reinforcement learning algorithm, taking a lithium battery output power factor and a super capacitor output power factor as action parameters and taking a function determined according to the lithium battery SOC and the super capacitor SOC as a reward function, to obtain a trained DDPG reinforcement learning algorithm;
and the distribution unit is used for distributing the power of the power battery, the power of the super capacitor and the power of the fuel cell by utilizing the trained DDRG reinforcement learning algorithm according to the power required by the vehicle to obtain a power distribution result.
6. A hybrid electric vehicle composite energy storage system energy management method is characterized by comprising the following steps:
acquiring traffic situation data;
predicting the vehicle speed by utilizing a neural network according to the traffic situation data to obtain predicted vehicle speed information;
determining the required power of the vehicle according to the predicted vehicle speed information;
distributing power of a power battery, power of a super capacitor and power of a fuel cell by using a DDPG reinforcement learning algorithm according to the required power of the vehicle to obtain a power distribution result;
and controlling the hybrid power automobile composite energy storage system according to the power distribution result.
7. The energy management method of the hybrid energy storage system of the hybrid electric vehicle according to claim 6, further comprising, after the acquiring the traffic situation data:
and storing the traffic people situation data.
8. The energy management method of the hybrid energy storage system of the hybrid electric vehicle according to claim 6, wherein the training process of the neural network specifically comprises:
and taking the traffic situation data of a training set as input, taking the vehicle speed information of the training set as output, taking the error between an actual output value and a theoretical output value and the number of the training sets as loss functions, and taking a Sigmoid function as an activation function to train the back propagation neural network to obtain the trained neural network.
9. The energy management method of the hybrid energy storage system of the hybrid electric vehicle according to claim 6, wherein the determining the required power of the vehicle according to the predicted vehicle speed information specifically comprises:
acquiring vehicle information; the vehicle information comprises vehicle mass, vehicle frontal area and mechanical transmission efficiency;
acquiring environmental information; the environment information is a slope angle of a road;
and determining the required power of the vehicle according to the vehicle information, the environment information and the predicted vehicle speed information.
10. The energy management method of the hybrid energy storage system of the hybrid electric vehicle according to claim 6, wherein the distributing of power of a power battery, power of a super capacitor and power of a fuel cell by using a DDPG reinforcement learning algorithm according to the power required by the vehicle to obtain a power distribution result specifically comprises:
training a DDPG network by taking a lithium battery SOC, a super capacitor SOC, a fuel cell current, the predicted vehicle speed information and the vehicle required power as state parameters of the DDPG reinforcement learning algorithm, taking a lithium battery output power factor and a super capacitor output power factor as action parameters and taking a function determined according to the lithium battery SOC and the super capacitor SOC as a reward function, to obtain a trained DDPG reinforcement learning algorithm;
and distributing the power of a power battery, the power of a super capacitor and the power of a fuel cell by utilizing the trained DDRG reinforcement learning algorithm according to the required power of the vehicle to obtain a power distribution result.
CN202111490603.0A 2021-12-08 2021-12-08 Energy management system and method for hybrid electric vehicle composite energy storage system Pending CN114154729A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111490603.0A CN114154729A (en) 2021-12-08 2021-12-08 Energy management system and method for hybrid electric vehicle composite energy storage system

Publications (1)

Publication Number Publication Date
CN114154729A true CN114154729A (en) 2022-03-08


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116985646A * 2023-09-28 2023-11-03 江西五十铃汽车有限公司 Vehicle supercapacitor control method, device and medium
CN116985646B * 2023-09-28 2024-01-12 江西五十铃汽车有限公司 Vehicle supercapacitor control method, device and medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China

Address after: Room 306, building 4, future venture Plaza, hi tech Zone, Jinan City, Shandong Province

Applicant after: Shandong Zhongke advanced technology Co.,Ltd.

Address before: Room 306, building 4, future venture Plaza, hi tech Zone, Jinan City, Shandong Province

Applicant before: Shandong Zhongke Advanced Technology Research Institute Co.,Ltd.

Country or region before: China