CN114186411A - Charging load prediction method and model training method and device for electric vehicle - Google Patents

Charging load prediction method and model training method and device for electric vehicle

Info

Publication number
CN114186411A
Authority
CN
China
Prior art keywords
target
samples
model
prediction model
prediction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111505078.5A
Other languages
Chinese (zh)
Inventor
刘飞 (Liu Fei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Wutong Chelian Technology Co Ltd
Original Assignee
Beijing Wutong Chelian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Wutong Chelian Technology Co Ltd
Priority to CN202111505078.5A
Publication of CN114186411A
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/20 Design optimisation, verification or simulation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/06 Energy or water supply
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2119/00 Details relating to the type or aim of the analysis or the optimisation
    • G06F 2119/02 Reliability analysis or reliability optimisation; Failure analysis, e.g. worst case scenario performance, failure mode and effects analysis [FMEA]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Economics (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Geometry (AREA)
  • Water Supply & Treatment (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Hardware Design (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Public Health (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application provides a charging load prediction method for an electric vehicle, a model training method, and corresponding devices, and belongs to the field of computer technologies. In the charging load prediction method, the acquired charging load prediction information is input into a target prediction model to determine the charging load of the electric vehicle. The target prediction model is obtained by verifying a target candidate prediction model based on a plurality of verification samples; the model structure of the target candidate prediction model is determined based on the model structure with the maximum fitness among a plurality of model structures, and the model structure with the maximum fitness is in turn determined based on a plurality of training samples. The prediction accuracy of the target prediction model is thereby effectively improved, which in turn improves the precision of charging load prediction performed with the target prediction model.

Description

Charging load prediction method and model training method and device for electric vehicle
Technical Field
The present disclosure relates to the field of computer technologies, and in particular to a charging load prediction method for an electric vehicle, a model training method, and corresponding apparatuses.
Background
An electric vehicle (EV) is a vehicle that uses an on-board power supply as its power source and a motor to drive its wheels, and it generally needs to be connected to a power distribution network to complete charging. Accurate prediction of the EV charging load (i.e., the amount of electricity that the distribution network needs to supply to the EV) is therefore the basis for ensuring reliable, safe, and economical operation of the distribution network.
In the related art, the charging load of the EV is generally predicted according to a fuzzy clustering algorithm, a genetic algorithm, or a neural network algorithm.
However, the prediction methods of the related art are all low in prediction accuracy.
Disclosure of Invention
The embodiments of the application provide a charging load prediction method for an electric vehicle, a model training method, and corresponding devices, which can solve the problem of low charging load prediction accuracy in the related art. The technical solutions are as follows:
in one aspect, there is provided a charging load prediction method of an electric vehicle, the method including:
acquiring charging load prediction information, wherein the charging load prediction information comprises: time and weather information;
inputting the charging load prediction information into a target prediction model;
determining a charging load of the electric vehicle according to an output result of the target prediction model;
the target prediction model is obtained by verifying a target candidate prediction model with a target model structure based on a plurality of verification samples, the target model structure is determined based on a target initial model structure with the maximum fitness in a plurality of initial model structures, and the target initial model structure is determined based on a plurality of first training samples; each initial model structure has a hidden layer and a connection weight, the number of hidden layers and the connection weight of different initial model structures are different, the generalization error of the target prediction model is smaller than an error threshold, and each first training sample and each verification sample comprise: time samples and weather information samples.
Optionally, before the obtaining of the charging load prediction information, the method further includes:
obtaining a sample set, wherein the sample set comprises a plurality of first training samples and a plurality of verification samples;
respectively initializing the plurality of initial model structures to obtain a plurality of individuals in one-to-one correspondence with the plurality of initial model structures, wherein each individual comprises a structure code for indicating the hidden layer number of the corresponding initial model structure and a connection weight code for indicating the connection weight of the corresponding initial model structure;
training each individual of the plurality of individuals by using the plurality of first training samples to select a target individual with the largest individual fitness from the plurality of individuals;
obtaining a target candidate prediction model based on the target individual;
verifying the generalization error of the target candidate prediction model by adopting the verification samples;
if the generalization error of the target candidate prediction model is smaller than the error threshold, determining the target candidate prediction model as the target prediction model;
and if the generalization error of the target candidate prediction model is not less than the error threshold, reselecting the sample set for training and verification until the generalization error of the trained target candidate prediction model is less than the error threshold.
Optionally, the training each individual of the plurality of individuals with the plurality of first training samples to select a target individual with the largest individual fitness among the plurality of individuals includes:
determining fitness of each individual based on the plurality of first training samples;
screening a plurality of individuals based on the fitness of each individual by sequentially adopting a box plot method and a proportion selection method to obtain a plurality of alternative individuals;
and screening the plurality of alternative individuals by sequentially adopting a box-line graph method and a tabu search algorithm to obtain the target individual.
Optionally, the sample set further includes a plurality of second training samples; each of the second training samples comprises: time samples and weather information samples; the obtaining of the target candidate prediction model based on the target individual comprises:
obtaining a candidate prediction model with a candidate number of hidden layers and candidate connection weights based on the target individual;
and training the candidate prediction model by adopting the plurality of second training samples to adjust the candidate number of hidden layers and the candidate connection weights of the candidate prediction model, so as to obtain the target candidate prediction model with a target number of hidden layers and target connection weights.
Optionally, before performing initialization processing on each of the plurality of initial model structures, the method further includes:
determining a connection weight maximum value and a connection weight minimum value based on the plurality of first training samples;
the initializing the plurality of initial model structures respectively to obtain a plurality of individuals in one-to-one correspondence with the plurality of initial model structures includes:
for each initial model structure, coding the initial model structure by adopting a corresponding structure code and a connection weight code, wherein the structure codes and the connection weight codes corresponding to different initial model structures are different, and the connection weight indicated by the connection weight code corresponding to each initial model structure is positioned between the maximum value and the minimum value of the connection weight;
the obtaining of the candidate prediction model with the candidate number of hidden layers and the candidate connection weights based on the target individual comprises:
and decoding the target individual to obtain the candidate prediction model with the candidate number of hidden layers and the candidate connection weights.
In another aspect, a model training method is provided, the method comprising:
obtaining a plurality of first training samples and a plurality of validation samples, each of the first training samples and each of the validation samples comprising: time samples and weather information samples;
selecting a target initial model structure with the maximum fitness from a plurality of initial model structures based on the first training samples, wherein each initial model structure is provided with a hidden layer and a connection weight, and the number of hidden layers and the connection weight of different initial model structures are different;
determining a target candidate prediction model based on the target initial model structure;
and verifying the target candidate prediction model based on the plurality of verification samples to obtain a target prediction model, wherein the generalization error of the target prediction model is smaller than an error threshold.
In still another aspect, there is provided a charging load prediction apparatus of an electric vehicle, the apparatus including:
a prediction information obtaining module, configured to obtain charging load prediction information, where the charging load prediction information includes: time and weather information;
the input module is used for inputting the charging load prediction information into a target prediction model;
a load determination module for determining a charging load of the electric vehicle according to an output result of the target prediction model;
the target prediction model is obtained by verifying a target candidate prediction model with a target model structure based on a plurality of verification samples, the target model structure is determined based on a target initial model structure with the maximum fitness in a plurality of initial model structures, and the target initial model structure is determined based on a plurality of first training samples; each initial model structure has a hidden layer and a connection weight, the number of hidden layers and the connection weight of different initial model structures are different, the generalization error of the target prediction model is smaller than an error threshold, and each first training sample and each verification sample comprise: time samples and weather information samples.
In yet another aspect, a model training apparatus is provided, the apparatus comprising:
a sample obtaining module, configured to obtain a plurality of first training samples and a plurality of verification samples, where each of the first training samples and each of the verification samples include: time samples and weather information samples;
a selection module, configured to select a target initial model structure with a maximum fitness from a plurality of initial model structures based on the plurality of first training samples, where each initial model structure has a hidden layer and a connection weight, and different initial model structures have different numbers of hidden layers and different connection weights;
the model determining module is used for determining a target candidate prediction model based on the target initial model structure;
and the verification module is used for verifying the target candidate prediction model based on the verification samples to obtain a target prediction model, and the generalization error of the target prediction model is smaller than an error threshold.
In still another aspect, there is provided a charging load prediction apparatus of an electric vehicle, the apparatus including: a memory, a processor and a computer program stored on the memory, the processor when executing the computer program implementing a method of predicting a charging load of an electric vehicle as described in the one aspect above and a method of training a model as described in the other aspect above.
In yet another aspect, a computer-readable storage medium is provided, having stored therein a computer program that is loaded and executed by a processor to implement the method for predicting a charging load of an electric vehicle as described in the above aspect, and the method for training a model as described in the above aspect.
In summary, the beneficial effects of the technical solutions provided by the embodiments of the application include at least the following:
provided are a charging load prediction method, a model training method and a device for an electric vehicle. The charging load prediction method may input the acquired charging load prediction information into a target prediction model to determine the charging load of the electric vehicle. The target prediction model is obtained by verifying the target candidate prediction model based on the plurality of verification samples, the model structure of the target candidate prediction model is determined based on the model structure with the maximum fitness in the plurality of model structures, and the model structure with the maximum fitness is determined based on the plurality of training samples, so that the prediction accuracy of the target prediction model is effectively improved, and the precision of predicting the charging load by adopting the target prediction model is further improved.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and a person skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic illustration of an implementation environment to which various embodiments of the present application relate;
fig. 2 is a flowchart of a method for predicting a charging load of an electric vehicle according to an embodiment of the present disclosure;
fig. 3 is a flowchart of another method for predicting a charging load of an electric vehicle according to an embodiment of the present application;
FIG. 4 is a flow chart of a method for determining a target individual according to an embodiment of the present application;
FIG. 5 is a flowchart of a method for determining a target candidate prediction model according to an embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of a target prediction model provided in an embodiment of the present application;
FIG. 7 is a schematic diagram of a box plot according to an embodiment of the present application;
FIG. 8 is a flowchart of a method for model training provided by an embodiment of the present application;
fig. 9 is a block diagram illustrating a configuration of a charging load prediction apparatus for an electric vehicle according to an embodiment of the present application;
fig. 10 is a block diagram illustrating a charging load prediction apparatus of another electric vehicle according to an embodiment of the present application;
FIG. 11 is a block diagram of a model training apparatus according to an embodiment of the present disclosure;
fig. 12 is a block diagram of a charging load prediction apparatus for an electric vehicle according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
In recent years, the number of electric vehicles has grown rapidly because of their advantages in energy conservation, environmental protection, economy, and practicality, and accordingly more and more electric vehicles are connected to the power distribution network. However, because electric vehicles move continuously, they impose a mobile load on the distribution network. The superposition of this mobile load on the original fixed load of the distribution network changes its supply-demand relationship, affects its normal operation, and degrades its operational reliability. Therefore, it is highly desirable to predict the charging load generated by electric vehicles reliably and precisely, so that a reasonable electricity-use strategy can be formulated according to the prediction result, effectively alleviating the influence of electric vehicles on the distribution network. Because electric vehicles generally access the distribution network through charging piles, charging-pile-side data can be analyzed and mined so that charging piles are used rationally, thereby improving the operational reliability of the distribution network.
At present, the charging load of an electric vehicle is predicted based on the three types of algorithms described in the background. However, tests show that the prediction accuracy of the fuzzy clustering algorithm is poor; the genetic algorithm requires a large amount of information for prediction and has poor practicability; and the prediction result of a traditional neural network algorithm (also called a prediction model) is easily influenced by the training samples, its generalization capability is poor, and its learning process is prone to problems such as a low convergence rate and falling into local minima, so its prediction precision is poor. Generalization capability refers to an algorithm's adaptability to fresh samples and can be measured by the generalization error: the smaller the generalization error, the stronger the generalization capability; the larger the generalization error, the weaker the generalization capability.
In view of characteristics of charging load data such as large scale, many types, low value density, and rapid change, the embodiments of the application provide a method in which a genetic algorithm is used to pre-optimize (pre-filter) a neural network; by formulating a corresponding prediction strategy, the optimized neural network can accurately predict the charging load of an electric vehicle. This lays a foundation for the rational configuration of charging piles and correspondingly improves the operating performance of the distribution network.
Optionally, the neural networks before and after optimization (i.e., the prediction models) provided in the embodiment of the present application may be Back Propagation (BP) neural networks.
Fig. 1 is a schematic view of an implementation environment of the charging load prediction method for an electric vehicle provided by an embodiment of the present application. As shown in fig. 1, the implementation environment may include a terminal 10. The terminal 10 may be a desktop computer, a notebook computer, a smartphone, or the like; fig. 1 illustrates the terminal 10 as a computer.
Optionally, the electric vehicle according to the embodiment of the present application may include: vehicles such as electric automobiles, electric motorcycles, and electric tricycles. Furthermore, the electric vehicle may be used to accommodate one or more passengers.
Fig. 2 is a flowchart of a method for predicting a charging load of an electric vehicle according to an embodiment of the present application, which may be applied to the terminal 10 shown in fig. 1. As shown in fig. 2, the method includes:
step 201, charge load prediction information is obtained.
Wherein the charging load prediction information may include: time and weather information. Alternatively, when the charging load prediction of the electric vehicle is required, the required charging load prediction information may be input to the terminal by a user (e.g., a staff of the distribution grid). That is, the terminal may receive the charging load prediction information input by the user.
Step 202, charging load prediction information is input into the target prediction model.
The target prediction model is obtained by verifying a target candidate prediction model with a target model structure based on a plurality of verification samples, the target model structure is determined based on a target initial model structure with the maximum fitness in a plurality of initial model structures, and the target initial model structure is determined based on a plurality of first training samples. Each initial model structure has a hidden layer and a connection weight (i.e., a weight and a threshold), and different initial model structures have different numbers of hidden layers and different connection weights. Each first training sample and each validation sample comprises: time samples and weather information samples. And finally, the generalization error of the obtained target prediction model is smaller than an error threshold value.
Optionally, after the terminal acquires the charging load prediction information, the charging load prediction information may be automatically input to the target prediction model, so as to realize accurate prediction of the charging load.
And step 203, determining the charging load of the electric vehicle according to the output result of the target prediction model.
After the terminal inputs the charging load prediction information into the target prediction model, the target prediction model automatically predicts the charging load based on that information and outputs the prediction result. The terminal may determine the result output by the target prediction model as the charging load of the electric vehicle.
In addition, the terminal may inform the user of the determined charging load by display or in other ways, so that the user can formulate a reasonable usage strategy for the charging pile based on the charging load, for example, setting the charging pile to supply power only during a fixed time period each day. In this way, the load on the power distribution network can be reduced and its reliable operation ensured.
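As a concrete illustration of steps 201 to 203, the following is a minimal sketch of the prediction flow in Python, assuming a previously trained regression model saved to disk and a feature layout of [day index, mean temperature, precipitation, sunshine hours]. The file name, function name, and feature ordering are illustrative assumptions and are not specified by the embodiment.

```python
import pickle
import numpy as np

def predict_charging_load(model_path: str, day_index: int, temperature: float,
                          precipitation: float, sunshine_hours: float) -> float:
    # Step 202: load the (already trained) target prediction model.
    with open(model_path, "rb") as f:
        model = pickle.load(f)
    # Assemble the charging load prediction information as one feature row.
    features = np.array([[day_index, temperature, precipitation, sunshine_hours]])
    # Step 203: the model output is taken as the predicted charging load.
    return float(model.predict(features)[0])

# Hypothetical call: predict the load for day 293 at 12.5 degrees C, no rain, 6.2 h sunshine.
# predict_charging_load("target_prediction_model.pkl", 293, 12.5, 0.0, 6.2)
```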
In summary, the embodiment of the present application provides a method for predicting the charging load of an electric vehicle. The method inputs the acquired charging load prediction information into a target prediction model to determine the charging load of the electric vehicle. The target prediction model is obtained by verifying a target candidate prediction model based on a plurality of verification samples; the model structure of the target candidate prediction model is determined based on the model structure with the maximum fitness among a plurality of model structures, and the model structure with the maximum fitness is in turn determined based on a plurality of training samples. The prediction accuracy of the target prediction model is thereby effectively improved, which in turn improves the precision of charging load prediction performed with the target prediction model.
Fig. 3 is a flowchart of another method for predicting a charging load of an electric vehicle according to an embodiment of the present application, which may be applied to the terminal 10 shown in fig. 1. As shown in fig. 3, the method may include:
step 301, a sample set is obtained.
Wherein the sample set may include a plurality of first training samples and a plurality of verification samples, and each of the first training samples and each of the verification samples includes: time samples and weather information samples.
Alternatively, the time samples may be measured in "days". Of course, in some other embodiments, the unit of measurement may be "week" or "month". The weather information sample may include at least one of the following parameters: temperature, precipitation and duration of sunshine. If the 'day' is taken as a measurement unit, the temperature can be the average daily temperature, the precipitation can be the average daily precipitation, and the sunshine duration can be the average daily sunshine duration.
It should be noted that the samples in the sample set may correspond to different charging piles. To distinguish different charging piles, each charging pile may uniquely correspond to one charging pile number; correspondingly, in addition to the above parameters, each sample may also include a charging pile number. Moreover, each sample is not limited to the above parameters and may further include other parameters that influence the charging load prediction, such as the geographic location of the charging pile, which may be expressed by the longitude and latitude of the charging pile.
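For illustration only, one possible layout of a single sample combining the parameters mentioned above (charging pile number, time sample, weather information sample, and optional geographic location) is sketched below; the field names and types are assumptions, not part of the embodiment.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChargingSample:
    pile_id: str                      # unique charging pile number
    day: str                          # time sample measured in days, e.g. "2021-10-20"
    mean_temperature: float           # average daily temperature
    precipitation: float              # average daily precipitation
    sunshine_hours: float             # average daily sunshine duration
    latitude: Optional[float] = None  # optional geographic location of the pile
    longitude: Optional[float] = None
    charging_load: Optional[float] = None  # label used for training samples
```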
Alternatively, the samples included in the sample set may be provided by the utility company, i.e., the terminal may obtain the sample set provided by the utility company, and the sample set may be input into the terminal by the user.
Step 302, determining a maximum connection weight value and a minimum connection weight value based on a plurality of first training samples.
In this embodiment, the terminal may pre-construct a neural network, input the acquired first training samples into the neural network as the initial solution space of input values, and determine the maximum and minimum connection weight values, i.e., determine the connection weight range [δmin, δmax], where δmin is the minimum connection weight and δmax is the maximum connection weight. The model structures described in the following examples may all be based on this pre-constructed neural network.
For example, the pre-constructed neural network may be a three-layer BP neural network, where the three layers are: an input layer, a hidden layer, and an output layer. The connection weight may refer to a weight of connection between layers.
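The embodiment does not fix how [δmin, δmax] is derived from the first training samples; one possible interpretation, sketched below, is to fit a small reference three-layer BP-style network on those samples and take the extreme values of its learned weights and thresholds as the range. The use of scikit-learn's MLPRegressor is an illustrative assumption.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def connection_weight_range(X_first: np.ndarray, y_first: np.ndarray):
    # Reference three-layer network: input layer, one hidden layer, output layer.
    ref_net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=500, random_state=0)
    ref_net.fit(X_first, y_first)
    # Collect every weight and threshold (bias) of the fitted network.
    params = np.concatenate([w.ravel() for w in ref_net.coefs_] +
                            [b.ravel() for b in ref_net.intercepts_])
    return float(params.min()), float(params.max())  # (delta_min, delta_max)
```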
Step 303, initializing the plurality of initial model structures respectively to obtain a plurality of individuals corresponding to the plurality of initial model structures one to one.
Each individual may comprise a structure code indicating the number of hidden layers of the corresponding initial model structure (i.e., how many hidden layers it includes) and a connection weight code indicating the connection weight of the corresponding initial model structure. Thus, each initial model structure is essentially a structure defined by a number of hidden layers and connection weights. The multiple individuals may form an initial population; accordingly, step 303 may also be referred to as initializing the population.
In this embodiment of the application, for each initial model structure, the terminal may encode the initial model structure using the corresponding structure code and connection weight code, thereby obtaining the individual corresponding to that initial model structure. This encoding process may be referred to as the initialization processing of the initial model structure. As described above, different initial model structures are encoded with different structure codes and different connection weight codes. In addition, the connection weight indicated by the connection weight code corresponding to each initial model structure (i.e., the connection weight code used when encoding that initial model structure) lies between the maximum and minimum connection weight values, i.e., within the connection weight range [δmin, δmax]. In other words, the connection weight of each constructed initial model structure can neither exceed the maximum connection weight value nor fall below the minimum connection weight value.
For example, when encoding, the structure code may be used as the first half and the connection weight code as the second half, so the code length is equal to the sum of the length (i.e., string length) of the structure code and the length of the connection weight code. Since the number of input layer units (i.e., the number of neurons in the input layer) of a commonly used neural network is generally 4, it can be roughly determined empirically that the number of hidden layers is at most 7. Thus, the terminal may set the string length of the structure code to 7, with the code represented by 0 and 1, where 0 indicates that the input layer is not connected to the corresponding hidden layer, i.e., the hidden layer does not exist, and 1 indicates that the input layer is connected to the hidden layer, i.e., the hidden layer exists. In addition, given that the number of input layer units is 4, the terminal may set the string length of the connection weight code to 4 × n + n + n × 1 + 1 = 6n + 1, where n is the number of hidden layers included in the optimized neural network and n is a positive integer greater than 0. Of course, in other embodiments, the number of input layer units may be different, and accordingly the string lengths of the structure code and the connection weight code may also be different.
Optionally, the multiple initial model structures may be constructed randomly by the terminal, or may be constructed by the terminal based on multiple different numbers of hidden layers and multiple different connection weights input by the user. The connection weights (i.e., weights and thresholds) may be arranged in rows and columns: the weights and thresholds corresponding to each of the input layer, hidden layer, and output layer form one row, and the rows corresponding to the multiple layers form a matrix serving as the weight-threshold code. The structure code and the weight-threshold code together form the genetic-algorithm code, laying the foundation for subsequently optimizing individuals with the genetic algorithm.
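A minimal sketch of the encoding in step 303, assuming 4 input-layer units, at most 7 hidden layers (the 7-bit structure code), and a flat connection weight code of length 6n + 1 as in the example above. Drawing the initial weights uniformly from [δmin, δmax] is an implementation assumption; the embodiment only requires the encoded weights to stay within that range.

```python
import numpy as np

MAX_HIDDEN_LAYERS = 7  # string length of the structure code

def init_individual(n_hidden: int, delta_min: float, delta_max: float,
                    rng: np.random.Generator) -> dict:
    # Structure code: 1 means the corresponding hidden layer exists, 0 means it does not.
    structure_code = np.zeros(MAX_HIDDEN_LAYERS, dtype=int)
    structure_code[:n_hidden] = 1
    # Connection weight code of length 4n + n + n*1 + 1 = 6n + 1, kept inside [delta_min, delta_max].
    weight_code = rng.uniform(delta_min, delta_max, size=6 * n_hidden + 1)
    return {"structure": structure_code, "weights": weight_code}

# Initial population: one individual per candidate hidden-layer count (assumption).
rng = np.random.default_rng(0)
population = [init_individual(n, -1.0, 1.0, rng) for n in range(1, MAX_HIDDEN_LAYERS + 1)]
```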
And step 304, training each individual in the plurality of individuals by adopting a plurality of first training samples so as to select a target individual with the maximum individual fitness from the plurality of individuals.
After initializing a plurality of initial model structures to obtain a plurality of individuals, the terminal may perform iterative optimization on the plurality of individuals by using a genetic algorithm based on a plurality of first training samples, so as to select and determine a target individual with the maximum individual fitness from the plurality of individuals. The individual fitness can be used to measure the fitness and the viability of an individual in a population comprising a plurality of individuals, and the greater the individual fitness, the greater the viability.
Fig. 4 shows a flowchart of a method for selecting a target individual, which can be applied to the terminal 10 shown in fig. 1. As shown in fig. 4, the method may include:
step 3041, determining a fitness of each individual based on the plurality of first training samples.
In the embodiment of the application, the terminal may construct the fitness function in advance. For example, the fitness function may satisfy the following equation:

f(W1, B1, W2, B2) = Σ_{k=1}^{M} (y_k − ŷ_k)²   (1)

where W1 and B1 are the connection weights (weights and thresholds) between the input layer and the hidden layer, W2 and B2 are the connection weights between the hidden layer and the output layer, y_k is the expected output value of the prediction model with the initial model structure when the k-th training sample is used for training, ŷ_k is the actual output value of the prediction model with the initial model structure when the k-th training sample is used for training, M is the number of input training samples, and k is a positive integer less than or equal to M.

Then, the terminal may determine the fitness of each individual based on the plurality of first training samples and the above equation (1). The fitness of each individual may be equal to the sum of the squared errors between the expected output values and the actual output values.
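A sketch of the fitness evaluation in step 3041. The embodiment equates the fitness with the squared-error sum of equation (1) while selecting the individual with the maximum fitness; the sketch below reconciles this by treating a smaller error as a higher fitness (negated sum of squared errors), which is an interpretation. The decode_individual helper is hypothetical and stands for whatever decoder matches the encoding of step 303.

```python
import numpy as np

def individual_fitness(individual: dict, X_first: np.ndarray, y_first: np.ndarray,
                       decode_individual) -> float:
    net = decode_individual(individual)            # hypothetical decoder -> prediction model
    y_pred = net.predict(X_first)                  # actual output values
    sse = float(np.sum((y_first - y_pred) ** 2))   # sum of squared errors, equation (1)
    return -sse                                    # smaller error -> larger fitness
```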
Step 3042, screening the plurality of individuals based on the fitness of each individual by sequentially adopting a box plot method and a proportional selection method, to obtain a plurality of candidate individuals.
After the fitness of each individual is determined, the terminal may sequentially adopt the box plot method and the proportional selection method to screen the plurality of individuals based on their fitness, thereby obtaining a plurality of candidate individuals. The individuals with smaller fitness are deleted, and the retained individuals are the candidate individuals.
Optionally, in this embodiment of the application, the terminal may sequentially use the box plot method and the proportional selection method to screen the multiple individuals (i.e., the initial population) several times, and the result of each screening is inherited by the next generation until the Kmax-th generation is reached. That is, Kmax is the maximum number of generations, and the upper limit of the number of screenings performed by the terminal on the plurality of individuals may be equal to Kmax minus 1. In each screening, the terminal screens the individuals by sequentially using the box plot method and the proportional selection method. For example, if Kmax is 5, the terminal sequentially screens the individuals with the box plot method and the proportional selection method 4 times. Optionally, Kmax may be set automatically by the terminal, or may be determined by the terminal based on a user setting.
Based on the above embodiment, (1) the terminal may first sort the plurality of individuals in the first generation (i.e., the plurality of individuals obtained in step 303) in descending order of fitness and, with reference to the box plot structure shown in fig. 7, determine data nodes such as the upper edge, the upper quartile, the median, the lower quartile, the lower edge, and the outliers based on the fitness values. The fitness indicated by the upper edge may be the maximum fitness; the fitness indicated by the lower edge may be the minimum fitness; the fitness indicated by the upper quartile may be the fitness at the 25% position when the fitness values are ranked from large to small; the fitness indicated by the median may be the fitness at the 50% position; the fitness indicated by the lower quartile may be the fitness at the 75% position; and the outliers are fitness values that are excessively large or small.
The terminal then passes the individuals above the upper quartile directly to the second generation (i.e., the next generation). (2) From the remaining individuals, the terminal selects some individuals by the proportional selection method and inherits them to the second generation, thereby obtaining new individuals. The plurality of individuals in the second generation are thus the part of the first generation with relatively good fitness. The terminal may then continue to screen relatively fit individuals from the second generation using steps (1) and (2), and so on, until the Kmax-th generation is reached. At that point, every individual that survives the screening is a candidate individual, and the plurality of candidate individuals form a new population. Accordingly, step 3042 may also be referred to as a preliminary fitness-based optimization of the initial population, as sketched below.
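A sketch of the box plot screening in step (1), assuming the quartiles are computed with numpy percentiles and the usual 1.5×IQR convention for the lower edge (the embodiment does not state how the edges are computed). Individuals above the upper quartile are promoted directly; the rest are handed to the proportional selection step.

```python
import numpy as np

def boxplot_promote(individuals: list, fitnesses: np.ndarray):
    q1, q3 = np.percentile(fitnesses, [25, 75])      # lower and upper quartiles
    iqr = q3 - q1
    lower_edge = q1 - 1.5 * iqr                      # assumed box plot convention
    promoted = [ind for ind, f in zip(individuals, fitnesses) if f > q3]
    remaining = [ind for ind, f in zip(individuals, fitnesses)
                 if lower_edge <= f <= q3]           # candidates for proportional selection
    return promoted, remaining
```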
Optionally, the proportional selection method may be a roulette-wheel selection method, whose basic idea is that the probability of each individual being selected is proportional to its fitness. In this way, after step 3042 is executed, a plurality of candidate individuals with relatively good fitness are obtained from the plurality of individuals.
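A sketch of roulette-wheel (proportional) selection: each remaining individual is copied into the next generation with probability proportional to its fitness. Shifting the fitness values so they are strictly positive before normalization is an implementation assumption.

```python
import numpy as np

def roulette_select(individuals: list, fitnesses: np.ndarray, n_select: int,
                    rng: np.random.Generator) -> list:
    shifted = fitnesses - fitnesses.min() + 1e-9     # make all values strictly positive
    probs = shifted / shifted.sum()                  # selection probability proportional to fitness
    chosen = rng.choice(len(individuals), size=n_select, replace=True, p=probs)
    return [individuals[i] for i in chosen]
```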
Step 3043, screening the multiple candidate individuals by sequentially adopting the box plot method and a tabu search algorithm to obtain the target individual.
After obtaining the multiple candidate individuals, the terminal may continue to screen them, sequentially using the box plot method and a tabu search algorithm, to obtain a target individual; the target individual may be the best individual with the highest fitness among the multiple candidate individuals, and accordingly the individual with the best fitness among the original plurality of individuals is obtained.
First, following the processing principle of the box plot method, the terminal may sort the multiple candidate individuals by fitness from high to low and directly retain the candidate individuals whose fitness lies between the upper quartile and the upper edge. Then, in an accelerated-elimination manner, the terminal may remove at least one candidate individual close to the lower edge from among the candidate individuals whose fitness lies between the lower quartile and the lower edge; that is, the terminal removes several candidate individuals with smaller fitness from the candidate individuals located between the lower quartile and the lower edge. Finally, the terminal may re-sort the remaining candidate individuals whose fitness lies between the upper quartile and the lower quartile using the box plot method, and then apply the tabu search algorithm to optimize the sorted candidate individuals until the upper quartile coincides with the lower quartile. In this way, the target individual with the best fitness is obtained by screening the plurality of candidate individuals.
The basic idea of the tabu search algorithm is as follows: using a neighborhood-based search for the optimum, the algorithm must be able to accept inferior solutions in order to escape from local optima, i.e., the solution obtained in each step is not necessarily better than the previous one. However, once inferior solutions are accepted, the iteration may fall into a loop. To avoid loops, the algorithm places recently accepted moves in a tabu list and forbids them in later iterations; only the best solution not in the tabu list (possibly worse than the current solution) can be accepted as the starting solution of the next iteration. As the iteration proceeds, the tabu list is continuously updated, and after a certain number of iterations the move that entered the tabu list earliest is released from it.
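A generic tabu search loop matching the idea described above: in each iteration the best non-tabu neighbour is accepted even if it is worse than the current solution, recent moves are kept in a fixed-length tabu list, and the oldest entries leave the list automatically. The neighbourhood function and scoring function are placeholders supplied by the caller; their form is not prescribed by the embodiment.

```python
from collections import deque

def tabu_search(initial, neighbours, score, n_iter: int = 100, tabu_len: int = 10):
    current = best = initial
    tabu = deque(maxlen=tabu_len)              # oldest moves are released automatically
    for _ in range(n_iter):
        candidates = [n for n in neighbours(current) if n not in tabu]
        if not candidates:
            break
        current = max(candidates, key=score)   # best non-tabu move, possibly worse than current
        tabu.append(current)
        if score(current) > score(best):
            best = current
    return best
```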
Thus, after step 3043 is executed, the terminal can quickly retain the target individual with the best fitness, which in turn ensures better training precision of the finally obtained target prediction model. In addition, because the number of hidden layers and the connection weights are pre-optimized and filtered by the genetic algorithm, problems of the traditional BP neural network, such as a low learning speed and easily falling into local minima, are alleviated.
And 305, obtaining a target candidate prediction model based on the target individual.
Optionally, in this embodiment of the present application, the acquired sample set may further include a plurality of second training samples. Each second training sample may also include: time samples and weather information samples. For the description of the time sample and the weather information sample, reference may be made to the description of step 301, which is not described herein again.
It should be noted that the plurality of first training samples, the plurality of second training samples, and the plurality of verification samples may be from the same batch of samples. That is, after acquiring a plurality of samples, the terminal may randomize and divide the plurality of samples into a plurality of first training samples, a plurality of second training samples, and a plurality of verification samples.
Fig. 5 shows a flowchart of a method for obtaining a target candidate prediction model based on a target individual, which may be applied to the terminal 10 shown in fig. 1. As shown in fig. 5, the method may include:
Step 3051, obtaining a candidate prediction model with a candidate number of hidden layers and candidate connection weights based on the target individual.
As can be seen from the encoding in step 303, in this embodiment of the application, after the terminal obtains the target individual, it may decode the target individual to obtain the number of hidden layers and the connection weights corresponding to the target individual, referred to herein as the candidate number of hidden layers and the candidate connection weights. The prediction model with the candidate number of hidden layers and the candidate connection weights may be referred to as the candidate prediction model.
It should be noted that the decoding mode should match the encoding mode to ensure reliable encoding and decoding.
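A decoding sketch mirroring the encoding assumed above: the number of 1 bits in the structure code gives the candidate number of hidden layers, and the connection weight code of length 6n + 1 is split back into its layer segments. The exact split is an illustrative assumption; the embodiment only requires that decoding match the encoding.

```python
import numpy as np

def decode_individual(individual: dict):
    n_hidden = int(individual["structure"].sum())          # candidate number of hidden layers
    w = individual["weights"]
    input_to_hidden = w[: 4 * n_hidden]                    # 4 input units x n hidden units
    hidden_thresholds = w[4 * n_hidden: 5 * n_hidden]      # n hidden thresholds
    hidden_to_output = w[5 * n_hidden: 6 * n_hidden]       # n hidden units x 1 output unit
    output_threshold = w[6 * n_hidden:]                    # single output threshold
    return n_hidden, (input_to_hidden, hidden_thresholds,
                      hidden_to_output, output_threshold)
```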
Step 3052, training the candidate prediction model by using the plurality of second training samples to adjust its candidate number of hidden layers and candidate connection weights, so as to obtain the target candidate prediction model with the target number of hidden layers and target connection weights.
After the candidate number of hidden layers and the candidate connection weights are obtained, the terminal may use the plurality of second training samples for training so as to adjust and optimize them, that is, to optimize the candidate prediction model, thereby obtaining a new number of hidden layers and new connection weights, referred to herein as the target number of hidden layers and the target connection weights. The prediction model with the target number of hidden layers and the target connection weights may be referred to as the target candidate prediction model.
Optionally, the plurality of second training samples are used to train the candidate prediction model so as to optimize its number of hidden layers and connection weights; the corresponding training objective is given by equation (2):

[Equation (2) appears only as an image in the original publication and is not reproduced here.]

where M1 is the number of first training samples and M2 is the number of second training samples.
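Since equation (2) is not reproduced here, the following sketch only illustrates the general idea of step 3052: rebuild a network with the candidate structure and train it on the M2 second training samples to obtain the target candidate prediction model. Re-training from the candidate structure (rather than warm-starting from the decoded weights) is a simplifying assumption of this sketch.

```python
from sklearn.neural_network import MLPRegressor

def train_candidate_model(n_hidden_layers: int, units_per_layer: int,
                          X_second, y_second) -> MLPRegressor:
    # Candidate structure: n_hidden_layers hidden layers of units_per_layer neurons each.
    model = MLPRegressor(hidden_layer_sizes=(units_per_layer,) * n_hidden_layers,
                         max_iter=1000, random_state=0)
    model.fit(X_second, y_second)   # adjusts the hidden layers' weights and thresholds
    return model
```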
And step 306, verifying whether the generalization error of the target candidate prediction model is smaller than an error threshold value by adopting a plurality of verification samples.
After the target candidate prediction model is obtained, the terminal can verify the generalization error of the target candidate prediction model by adopting a plurality of verification samples. If the generalization error is smaller than the error threshold, the terminal may execute the following step 307; if the generalization error is not less than the error threshold, the terminal may reselect the sample set for training and verification until the generalization error of the trained target candidate prediction model is less than the error threshold, that is, return to step 301. Thus, the generalization ability of the finally obtained target prediction model can be ensured to be strong.
For example, the verification of the generalization error may satisfy equation (3):

[Equation (3) appears only as an image in the original publication and is not reproduced here; it compares the generalization error computed on the verification samples with the error threshold ε.]

where ε is the error threshold, which may be set automatically by the terminal or input to the terminal by the user. The smaller the error threshold, the better; it is typically a number greater than 0 and less than 1, such as 0.1.
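A sketch of the check in step 306. Because equation (3) is not reproduced here, the generalization error is estimated as the mean squared error on the verification samples, which is one common choice and an assumption of this sketch; it is then compared against the error threshold ε.

```python
import numpy as np

def passes_generalization_check(model, X_val: np.ndarray, y_val: np.ndarray,
                                epsilon: float = 0.1) -> bool:
    y_pred = model.predict(X_val)
    gen_error = float(np.mean((y_val - y_pred) ** 2))   # assumed error measure
    return gen_error < epsilon   # True -> step 307; False -> reselect the sample set
```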
And 307, determining the target candidate prediction model as a target prediction model.
If the terminal verifies that the generalization error of the target candidate prediction model is smaller than the error threshold value based on the multiple verification samples, the target candidate prediction model can be considered to meet the generalization capability requirement. At this time, the terminal may directly determine the target candidate prediction model as the target prediction model.
For example, fig. 6 shows a structural diagram of a target prediction model obtained by training. Referring to fig. 6, the target prediction model includes one input layer, two hidden layers, and one output layer. The input layer includes three neurons, the first hidden layer includes four neurons, the second hidden layer includes three neurons, and the output layer includes two neurons.
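The concrete topology described for fig. 6 (3 input neurons, hidden layers of 4 and 3 neurons, 2 output neurons) can be expressed, for example, with a scikit-learn MLP as below; the activation function and solver are library defaults and are not specified by the embodiment.

```python
from sklearn.neural_network import MLPRegressor

# Hidden layers of 4 and 3 neurons; the input width (3) and output width (2) are
# implied by the shapes of X (3 feature columns) and y (2 target columns) at fit time.
target_prediction_model = MLPRegressor(hidden_layer_sizes=(4, 3),
                                       max_iter=1000, random_state=0)
```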
And step 308, acquiring the charging load prediction information.
The charging load prediction information may include: time and weather information. The time measurement unit and the optional parameters included in the weather information may refer to the description of the step 301 for the time sample and the weather information sample, which is not described herein again.
Optionally, when the charging load in a future time period (e.g., a certain day) needs to be predicted, the user may input data such as the time of the time period and weather information of the time period into the terminal for the terminal to predict the charging load. That is, the terminal may acquire the charging load prediction information input by the user.
For example, assuming that the charging load on October 20, 2021 is to be predicted, the user may input the date October 20, 2021 together with weather information for that day, such as the average temperature, the average precipitation, and the sunshine duration. Accordingly, the terminal receives the charging load prediction information including October 20, 2021 and the weather information.
Step 309, charging load prediction information is input to the target prediction model.
After acquiring the charging load prediction information, the terminal may automatically input it into the predetermined target prediction model to accurately predict the charging load.
For example, the terminal may input the charging load prediction information obtained in step 308 into the trained target prediction model shown in fig. 6 to determine the charging load of the electric vehicle.
And step 310, determining the charging load of the electric vehicle according to the output result of the target prediction model.
In the embodiment of the application, the terminal can determine the output result of the target prediction model as the charging load of the electric vehicle, and inform the determined charging load to the user in a display or voice broadcast mode, so that the user can know the charging load in a certain period of time in the future, and a reasonable use strategy is formulated for the charging pile based on the charging load, so that the operation of the power distribution network is optimized.
For example, in the embodiment of the application, for a certain charging pile, the charging load on three randomly chosen days (Day 1, Day 2, and Day 3) is predicted both with the target prediction model obtained by optimizing the BP neural network and with a traditional BP neural network. Table 1 shows the relative errors of the prediction results and their averages, and Table 2 shows the prediction accuracies and their averages. Here, the relative error is the predicted value minus the actual value, and the prediction accuracy reflects how concentrated or dispersed the relative errors are, expressed in %.
TABLE 1
[Table 1 appears only as an image in the original publication. It lists, for Day 1 to Day 3, the relative errors of the traditional BP neural network and of the optimized BP neural network, together with their average values.]
TABLE 2
[Table 2 appears only as an image in the original publication. It lists, for Day 1 to Day 3, the prediction accuracies (%) of the traditional BP neural network and of the optimized BP neural network, together with their average values.]
Referring to Table 1 above, the daily relative error of the traditional BP neural network over the three days ranges from 7.94 to 8.45, with an average (i.e., average error) of 8.24, whereas the daily relative error of the optimized BP neural network ranges from 6.67 to 7.03, with an average of 6.81. That is, the prediction results of the optimized BP neural network have smaller relative errors. Referring to Table 2 above, the daily prediction accuracy of the traditional BP neural network over the three days ranges from 90.13 to 92.04, with an average (i.e., average prediction accuracy) of 92.04, whereas the daily prediction accuracy of the optimized BP neural network ranges from 94.33 to 95.32, with an average of 94.91. This comparison shows that predictions made with the optimized BP neural network provided by the embodiment of the application are closer to the actual values, i.e., the charging load prediction method provided by the embodiment of the application has better prediction accuracy.
It should be noted that the order of the steps of the charging load prediction method for the electric vehicle provided in the embodiment of the present application may be appropriately adjusted, and steps may be added or omitted as needed. For example, the refinement in step 3052 may be omitted; that is, the prediction model corresponding to the number of hidden layers and the connection weights obtained by decoding the target individual with the highest fitness may be directly determined as the target candidate prediction model, without further optimization using the second training samples. Any variation readily conceivable by a person skilled in the art within the technical scope disclosed in the present application falls within the protection scope of the present application and is therefore not described in detail here.
In summary, the embodiment of the present application provides a method for predicting the charging load of an electric vehicle. The method inputs the acquired charging load prediction information into a target prediction model to determine the charging load of the electric vehicle. The target prediction model is obtained by verifying a target candidate prediction model based on a plurality of verification samples; the model structure of the target candidate prediction model is determined based on the model structure with the maximum fitness among a plurality of model structures, and the model structure with the maximum fitness is in turn determined based on a plurality of training samples. The prediction accuracy of the target prediction model is thereby effectively improved, which in turn improves the precision of charging load prediction performed with the target prediction model.
Fig. 8 is a flowchart of a model training method provided in an embodiment of the present application, which may be applied to the terminal 10 shown in fig. 1. As shown in fig. 8, the method includes:
step 801, a plurality of first training samples and a plurality of verification samples are obtained.
Wherein each first training sample and each validation sample comprises: time samples and weather information samples. The obtaining method may refer to the description of step 301, and is not described herein again.
And step 802, selecting a target initial model structure with the maximum fitness from a plurality of initial model structures based on a plurality of first training samples.
Each initial model structure has a hidden layer and a connection weight, and different initial model structures have different hidden layer numbers and connection weights. The selection method can refer to the above description of step 304, and is not described herein again.
And step 803, determining a target candidate prediction model based on the target initial model structure.
The determining method may refer to the description of step 305, and is not described herein again.
And step 804, verifying the target candidate prediction model based on the plurality of verification samples to obtain the target prediction model.
The generalization error of the finally obtained target prediction model is smaller than the error threshold. The verification method may refer to the description of step 306, and is not described herein again.
In summary, the embodiment of the present application provides a model training method. The method can select a model structure with the maximum fitness from a plurality of initial model structures based on a plurality of acquired first training samples, determine a target alternative prediction model based on the model structure with the maximum fitness, and verify the target alternative prediction model based on a plurality of acquired verification samples, so that the target prediction model with the generalization error smaller than an error threshold value is obtained. Therefore, the prediction accuracy of the target prediction model obtained by training is effectively improved.
Fig. 9 is a block diagram of a charging load prediction apparatus for an electric vehicle according to an embodiment of the present application, which may be applied to the terminal 10 shown in fig. 1. As shown in fig. 9, the apparatus 90 may include:
a prediction information obtaining module 901, configured to obtain charging load prediction information.
The charging load prediction information includes: time and weather information.
An input module 902, configured to input the charging load prediction information into the target prediction model.
A load determining module 903, configured to determine the charging load of the electric vehicle according to an output result of the target prediction model.
The target prediction model is obtained by verifying a target candidate prediction model with a target model structure based on a plurality of verification samples, the target model structure is determined based on a target initial model structure with the maximum fitness in a plurality of initial model structures, and the target initial model structure is determined based on a plurality of first training samples. Each initial model structure is provided with a hidden layer and a connection weight, the number of hidden layers and the connection weight of different initial model structures are different, the generalization error of the target prediction model is smaller than an error threshold, and each first training sample and each verification sample respectively comprise: time samples and weather information samples.
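As a rough illustration of the data flow through modules 901 to 903, the sketch below assumes a trained `target_model` (for example, the one returned by the training sketch above) and a hypothetical three-feature input layout of hour of day, temperature, and humidity; the actual feature encoding of the time and weather information is not specified by this illustration.

```python
import numpy as np

def predict_charging_load(target_model, hour, temperature, humidity):
    # Modules 901/902: assemble the prediction information and feed it to the model.
    features = np.array([[hour, temperature, humidity]], dtype=float)
    # Module 903: the model output is interpreted as the predicted charging load.
    return float(target_model.predict(features)[0])

# Example with illustrative values: predicted load at 18:00, 25 degrees C, 60% humidity.
# load = predict_charging_load(target_model, hour=18, temperature=25.0, humidity=60.0)
```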
Fig. 10 is a block diagram of a charging load prediction apparatus for an electric vehicle according to an embodiment of the present application, which may be applied to the terminal 10 shown in fig. 1. As shown in fig. 10, the apparatus 90 may further include:
the sample set obtaining module 904 may be configured to obtain a sample set before obtaining the charging load prediction information. Wherein the sample set comprises a plurality of first training samples and a plurality of validation samples.
The initialization processing module 905 may be configured to perform initialization processing on the multiple initial model structures, respectively, to obtain multiple individuals corresponding to the multiple initial model structures one to one.
Each individual includes a structure code indicating the number of hidden layers of the corresponding initial model structure and a connection weight code indicating the connection weights of the corresponding initial model structure.
The selecting module 906 may be configured to train each individual of the plurality of individuals using the plurality of first training samples to select a target individual with the largest individual fitness among the plurality of individuals.
The first model determination module 907 may be configured to obtain a target candidate prediction model based on the target individual.
The verification module 908 may be configured to verify the generalization error of the target candidate prediction model using the plurality of verification samples.
The second model determining module 909 may be configured to determine the target candidate prediction model as the target prediction model if the generalization error of the target candidate prediction model is smaller than the error threshold; if the generalization error of the target candidate prediction model is not less than the error threshold, the sample set is reselected for training and verification until the generalization error of the retrained target candidate prediction model is smaller than the error threshold.
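A minimal sketch of the acceptance check performed by the verification module 908 and the second model determining module 909 might look as follows. Here `resample_fn` and `retrain_fn` are hypothetical callables standing in for the sample-set reselection and retraining described above, and mean squared error is used only as a stand-in for the generalization error measure.

```python
from sklearn.metrics import mean_squared_error

def accept_or_retrain(candidate, X_val, y_val, error_threshold,
                      resample_fn, retrain_fn, max_rounds=10):
    # Module 908: estimate the generalization error on the verification samples.
    # Module 909: accept the candidate, or reselect the sample set and retrain.
    for _ in range(max_rounds):
        if mean_squared_error(y_val, candidate.predict(X_val)) < error_threshold:
            return candidate
        X_train, y_train, X_val, y_val = resample_fn()  # reselect the sample set
        candidate = retrain_fn(X_train, y_train)
    raise RuntimeError("no candidate met the error threshold within max_rounds")
```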
Specifically, the selecting module 906 may be configured to determine the fitness of each individual based on the plurality of first training samples, screen the plurality of individuals based on the fitness of each individual by sequentially applying a box plot method and a proportion selection method to obtain a plurality of alternative individuals, and then screen the plurality of alternative individuals by sequentially applying the box plot method and a tabu search algorithm to obtain the target individual.
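For illustration, the screening performed by the selecting module 906 could be sketched as follows, assuming the fitness values are held in a NumPy array: individuals whose fitness falls below the lower box-plot whisker are discarded, and the survivors are then sampled with probability proportional to fitness (one common reading of a proportion selection method, i.e. roulette-wheel selection). The subsequent tabu search refinement is not shown, and all names are illustrative assumptions.

```python
import numpy as np

def boxplot_filter(individuals, fitness):
    # Discard individuals whose fitness lies below the lower box-plot whisker.
    q1, q3 = np.percentile(fitness, [25, 75])
    lower_whisker = q1 - 1.5 * (q3 - q1)
    keep = fitness >= lower_whisker
    return [ind for ind, k in zip(individuals, keep) if k], fitness[keep]

def roulette_select(individuals, fitness, n_select, rng=None):
    # Proportion (roulette-wheel) selection: sampling probability grows with fitness.
    rng = rng or np.random.default_rng(0)
    shifted = fitness - fitness.min() + 1e-9  # shift so all weights are positive
    probs = shifted / shifted.sum()
    idx = rng.choice(len(individuals), size=n_select, replace=False, p=probs)
    return [individuals[i] for i in idx]
```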
Optionally, the sample set acquired by the sample set acquiring module 904 may further include a plurality of second training samples, and each second training sample may also include: time samples and weather information samples.
On this basis, the first model determining module 907 may be configured to obtain, based on the target individual, a candidate prediction model with a candidate number of hidden layers and candidate connection weights, and to train the candidate prediction model with the plurality of second training samples so as to adjust the candidate number of hidden layers and the candidate connection weights, thereby obtaining the target candidate prediction model with the target number of hidden layers and the target connection weights.
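A sketch of the fine-tuning performed by the first model determining module 907 is given below, under the assumption that the decoded structure can be expressed as a scikit-learn `hidden_layer_sizes` tuple. Injecting the decoded connection weights directly is glossed over (scikit-learn only exposes `coefs_` and `intercepts_` after an initial fit), so the sketch simply warm-starts training on the second training samples.

```python
from sklearn.neural_network import MLPRegressor

def fine_tune_candidate(hidden_layer_sizes, X_second, y_second, epochs=200):
    # warm_start keeps the learned weights between successive fit() calls,
    # so each call performs one additional training iteration on the
    # second training samples.
    candidate = MLPRegressor(hidden_layer_sizes=hidden_layer_sizes,
                             warm_start=True, max_iter=1, random_state=0)
    for _ in range(epochs):
        candidate.fit(X_second, y_second)
    return candidate
```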
Optionally, as can be seen with continued reference to fig. 10, the apparatus 90 may further comprise:
The connection weight determining module 910 may be configured to determine a maximum connection weight value and a minimum connection weight value based on the plurality of first training samples before the plurality of initial model structures are respectively initialized.
The initialization processing module 905 may be configured to, for each initial model structure, encode the initial model structure with a corresponding structure code and a connection weight code.
The structure codes and the connection weight codes corresponding to different initial model structures are different, and the connection weight indicated by the connection weight code corresponding to each initial model structure is located between the maximum value and the minimum value of the connection weight.
The first model determining module 907 may be configured to decode the target individual to obtain the candidate prediction model with the candidate number of hidden layers and the candidate connection weights.
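The encoding handled by modules 905 and 910 might be sketched as follows, with the structure code stored as an integer number of hidden layers and the connection weight code as a flat array constrained to the range [w_min, w_max] determined from the first training samples. The real encoding scheme of this application (for example, binary or real-valued chromosomes) is not specified here, so this is only an assumption for illustration.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Individual:
    structure_code: int       # assumed to encode the number of hidden layers
    weight_code: np.ndarray   # assumed flat vector of connection weights

def init_individual(n_hidden_layers, n_weights, w_min, w_max, rng):
    # Module 905: draw connection weights inside [w_min, w_max] (module 910).
    weights = rng.uniform(w_min, w_max, size=n_weights)
    return Individual(structure_code=n_hidden_layers, weight_code=weights)

def decode(individual, w_min, w_max):
    # Module 907, first step: recover the candidate number of hidden layers and
    # candidate connection weights; clipping keeps the weights inside the bounds.
    hidden_layers = individual.structure_code
    weights = np.clip(individual.weight_code, w_min, w_max)
    return hidden_layers, weights
```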
In summary, the present application provides a charging load prediction apparatus for an electric vehicle. The apparatus inputs the acquired charging load prediction information into a target prediction model to determine the charging load of the electric vehicle. The target prediction model is obtained by verifying a target candidate prediction model based on a plurality of verification samples, the model structure of the target candidate prediction model is determined based on the model structure with the maximum fitness among a plurality of model structures, and that model structure is selected based on a plurality of training samples. The prediction accuracy of the target prediction model is therefore effectively improved, which in turn improves the precision of the charging load predicted with the target prediction model.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 11 is a block diagram illustrating a model training apparatus according to an embodiment of the present application, which may be applied to the terminal 10 shown in fig. 1. As shown in fig. 11, the apparatus 110 may include:
a sample obtaining module 1101, configured to obtain a sample set, where the sample set includes a plurality of first training samples and a plurality of verification samples, and each of the first training samples and each of the verification samples includes: time samples and weather information samples.
A selecting module 1102, configured to select, based on the first training samples, a target initial model structure with the highest fitness from the multiple initial model structures, where each initial model structure has a hidden layer and a connection weight, and different initial model structures have different numbers of hidden layers and different connection weights.
A model determining module 1103, configured to determine the target candidate prediction model based on the target initial model structure.
A verification module 1104, configured to verify the target candidate prediction model based on the plurality of verification samples to obtain a target prediction model.
In summary, the embodiments of the present application provide a model training apparatus. The apparatus selects, based on a plurality of acquired first training samples, the model structure with the maximum fitness from a plurality of initial model structures, determines a target candidate prediction model based on that structure, and verifies the target candidate prediction model based on a plurality of acquired verification samples, thereby obtaining a target prediction model whose generalization error is smaller than an error threshold. The prediction accuracy of the trained target prediction model is therefore effectively improved.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
In addition, it should be noted that the apparatus 110 and the apparatus 90 may be integrated into one apparatus. That is, the sample acquiring module 1101 in the apparatus 110 and the sample set obtaining module 904 in the apparatus 90 may be the same module; the selecting module 1102 in the apparatus 110 and the selecting module 906 in the apparatus 90 may be the same module; the model determining module 1103 in the apparatus 110 and the first model determining module 907 in the apparatus 90 may be the same module; and the verification module 1104 in the apparatus 110 may be the same module as the verification module 908 and the second model determining module 909 in the apparatus 90.
Embodiments of the present application further provide a computer-readable storage medium storing a computer program, where the computer program is loaded and executed by a processor to implement the above method embodiments (for example, the charging load prediction method for an electric vehicle shown in fig. 2 or fig. 3).
Fig. 12 is a block diagram illustrating a structure of a charging load prediction apparatus 1200 for an electric vehicle according to an embodiment of the present application. The apparatus 1200 may be a portable mobile terminal, for example, the computer, tablet computer, or e-book reader shown in fig. 1. The apparatus 1200 may also be referred to by other names, such as user equipment, portable terminal, laptop terminal, or desktop terminal. In general, the apparatus 1200 includes a processor 1201 and a memory 1202.
The processor 1201 may include one or more processing cores, for example, a 4-core processor or a 12-core processor. The processor 1201 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), or Programmable Logic Array (PLA). The processor 1201 may also include a main processor and a coprocessor: the main processor is a processor for processing data in the awake state, also called a Central Processing Unit (CPU), while the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 1201 may be integrated with a Graphics Processing Unit (GPU), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 1201 may also include an Artificial Intelligence (AI) processor for handling computing operations related to machine learning.
The memory 1202 may include one or more computer-readable storage media, which may be non-transitory. The memory 1202 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 1202 is configured to store at least one instruction, which is executed by the processor 1201 to implement the charging load prediction method for an electric vehicle provided in the method embodiments of the present application.
It should be understood that "and/or" herein describes three possible relationships; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates an "or" relationship between the associated objects.
Similarly, the terms first, second, third and the like do not denote any order, quantity or importance, but rather are used to distinguish one element from another.
The terms "a" or "an," and the like, also do not denote a limitation of quantity, but rather denote the presence of at least one.
The word "comprise" or "comprises", and the like, means that the element or item listed before "comprises" or "comprising" covers the element or item listed after "comprising" or "comprises" and its equivalents, and does not exclude other elements or items.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (10)

1. A charging load prediction method for an electric vehicle, characterized by comprising:
acquiring charging load prediction information, wherein the charging load prediction information comprises: time and weather information;
inputting the charging load prediction information into a target prediction model;
determining a charging load of the electric vehicle according to an output result of the target prediction model;
the target prediction model is obtained by verifying a target candidate prediction model with a target model structure based on a plurality of verification samples, the target model structure is determined based on a target initial model structure with the maximum fitness in a plurality of initial model structures, and the target initial model structure is determined based on a plurality of first training samples; each initial model structure has a hidden layer and a connection weight, the number of hidden layers and the connection weight of different initial model structures are different, the generalization error of the target prediction model is smaller than an error threshold, and each first training sample and each verification sample comprise: time samples and weather information samples.
2. The method of claim 1, wherein prior to said obtaining charging load prediction information, the method further comprises:
obtaining a sample set, wherein the sample set comprises a plurality of first training samples and a plurality of verification samples;
respectively initializing the plurality of initial model structures to obtain a plurality of individuals in one-to-one correspondence with the plurality of initial model structures, wherein each individual comprises a structure code for indicating the hidden layer number of the corresponding initial model structure and a connection weight code for indicating the connection weight of the corresponding initial model structure;
training each individual of the plurality of individuals by using the plurality of first training samples to select a target individual with the largest individual fitness from the plurality of individuals;
obtaining a target alternative prediction model based on the target individual;
verifying the generalization error of the target candidate prediction model by adopting the verification samples;
if the generalization error of the target alternative prediction model is smaller than the error threshold, determining the target alternative prediction model as a target prediction model;
and if the generalization error of the target candidate prediction model is not less than the error threshold, reselecting the sample set for training and verification until the generalization error of the trained target candidate prediction model is less than the error threshold.
3. The method of claim 2, wherein training each of the plurality of individuals using the plurality of first training samples to select a target individual of the plurality of individuals with a greatest individual fitness comprises:
determining fitness of each individual based on the plurality of first training samples;
screening a plurality of individuals based on the fitness of each individual by sequentially adopting a box plot method and a proportion selection method to obtain a plurality of alternative individuals;
and screening the plurality of alternative individuals by sequentially adopting a box-line graph method and a tabu search algorithm to obtain the target individual.
4. The method of claim 2 or 3, wherein the set of samples further comprises a plurality of second training samples; each of the second training samples comprises: time samples and weather information samples; the obtaining of the target candidate prediction model based on the target individual comprises:
obtaining an alternative prediction model with alternative hidden layer numbers and alternative connection weights based on the target individual;
and training the alternative prediction model by adopting the plurality of second training samples to adjust the alternative hidden layer number and the alternative connection weight of the alternative prediction model to obtain the target alternative prediction model with the target hidden layer number and the target connection weight.
5. The method of claim 4, wherein prior to initializing each of the plurality of initial model structures, the method further comprises:
determining a connection weight maximum value and a connection weight minimum value based on the plurality of first training samples;
the initializing the plurality of initial model structures respectively to obtain a plurality of individuals in one-to-one correspondence with the plurality of initial model structures includes:
for each initial model structure, coding the initial model structure by adopting a corresponding structure code and a connection weight code, wherein the structure codes and the connection weight codes corresponding to different initial model structures are different, and the connection weight indicated by the connection weight code corresponding to each initial model structure is positioned between the maximum value and the minimum value of the connection weight;
the obtaining of the alternative prediction model with the alternative hidden layer number and the alternative connection weight based on the target individual comprises:
and decoding the target individual to obtain an alternative prediction model with an alternative hidden layer number and an alternative connection weight.
6. A method of model training, the method comprising:
obtaining a plurality of first training samples and a plurality of validation samples, each of the first training samples and each of the validation samples comprising: time samples and weather information samples;
selecting a target initial model structure with the maximum fitness from a plurality of initial model structures based on the first training samples, wherein each initial model structure is provided with a hidden layer and a connection weight, and the number of hidden layers and the connection weight of different initial model structures are different;
determining a target alternative prediction model based on the target initial model structure;
and verifying the target alternative prediction model based on the verification samples to obtain a target prediction model, wherein the generalization error of the target prediction model is smaller than an error threshold.
7. A charging load prediction apparatus of an electric vehicle, characterized by comprising:
a prediction information obtaining module, configured to obtain charging load prediction information, where the charging load prediction information includes: time and weather information;
the input module is used for inputting the charging load prediction information into a target prediction model;
a load determination module for determining a charging load of the electric vehicle according to an output result of the target prediction model;
the target prediction model is obtained by verifying a target candidate prediction model with a target model structure based on a plurality of verification samples, the target model structure is determined based on a target initial model structure with the maximum fitness in a plurality of initial model structures, and the target initial model structure is determined based on a plurality of first training samples; each initial model structure has a hidden layer and a connection weight, the number of hidden layers and the connection weight of different initial model structures are different, the generalization error of the target prediction model is smaller than an error threshold, and each first training sample and each verification sample comprise: time samples and weather information samples.
8. A model training apparatus, the apparatus comprising:
a sample obtaining module, configured to obtain a plurality of first training samples and a plurality of verification samples, where each of the first training samples and each of the verification samples include: time samples and weather information samples;
a selection module, configured to select a target initial model structure with a maximum fitness from a plurality of initial model structures based on the plurality of first training samples, where each initial model structure has a hidden layer and a connection weight, and different initial model structures have different numbers of hidden layers and different connection weights;
the model determining module is used for determining a target alternative prediction model based on the target initial model structure;
and the verification module is used for verifying the target alternative prediction model based on the verification samples to obtain a target prediction model, wherein the generalization error of the target prediction model is smaller than an error threshold.
9. A charging load prediction apparatus of an electric vehicle, characterized by comprising: a memory, a processor and a computer program stored on the memory, the processor implementing a method of predicting a charging load of an electric vehicle according to any one of claims 1 to 5 and a method of training a model according to claim 6 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored, the computer program being loaded and executed by a processor to implement the method for predicting charging load of an electric vehicle according to any one of claims 1 to 5 and the method for model training according to claim 6.
CN202111505078.5A 2021-12-10 2021-12-10 Charging load prediction method and model training method and device for electric vehicle Pending CN114186411A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111505078.5A CN114186411A (en) 2021-12-10 2021-12-10 Charging load prediction method and model training method and device for electric vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111505078.5A CN114186411A (en) 2021-12-10 2021-12-10 Charging load prediction method and model training method and device for electric vehicle

Publications (1)

Publication Number Publication Date
CN114186411A true CN114186411A (en) 2022-03-15

Family

ID=80604251

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111505078.5A Pending CN114186411A (en) 2021-12-10 2021-12-10 Charging load prediction method and model training method and device for electric vehicle

Country Status (1)

Country Link
CN (1) CN114186411A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116359602A (en) * 2023-03-07 2023-06-30 北京智芯微电子科技有限公司 Non-invasive electric vehicle charging identification method, device, medium and intelligent ammeter
CN116359602B (en) * 2023-03-07 2024-05-03 北京智芯微电子科技有限公司 Non-invasive electric vehicle charging identification method, device, medium and intelligent ammeter


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination