CN113487762A - Coding model generation method and charging data acquisition method and device - Google Patents


Info

Publication number
CN113487762A
CN113487762A
Authority
CN
China
Prior art keywords
data
charging data
vector representation
loss function
data set
Prior art date
Legal status
Granted
Application number
CN202110832448.XA
Other languages
Chinese (zh)
Other versions
CN113487762B (en)
Inventor
刘美亿
Current Assignee
Neusoft Reach Automotive Technology Shenyang Co Ltd
Original Assignee
Neusoft Reach Automotive Technology Shenyang Co Ltd
Priority date
Filing date
Publication date
Application filed by Neusoft Reach Automotive Technology Shenyang Co Ltd filed Critical Neusoft Reach Automotive Technology Shenyang Co Ltd
Priority to CN202110832448.XA (granted as CN113487762B)
Publication of CN113487762A
Application granted
Publication of CN113487762B
Legal status: Active

Classifications

    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C5/00 Registering or indicating the working of vehicles
    • G07C5/08 Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/60 Other road transportation technologies with climate change mitigation effect
    • Y02T10/70 Energy storage systems for electromobility, e.g. batteries

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Charge And Discharge Circuits For Batteries Or The Like (AREA)

Abstract

The embodiment of the application discloses a coding model generation method, which comprises the steps of: acquiring a data set to be trained, where the data set to be trained comprises charging data corresponding to a plurality of time periods; inputting the data set to be trained into a coding model to obtain a first vector representation set; inputting the first vector representation set into a decoding model to obtain a first decoding data set, where the first decoding data set comprises the charging data obtained by decoding each vector representation; comparing the first decoding data set with the data set to be trained to obtain a first loss function; and adjusting the parameters of the coding model according to the first loss function and continuing training until the first loss function meets a first preset condition, thereby obtaining the coding model. Trained in this way, the coding model can smooth the charging data and learn the time-sequence information among charging data of different time periods, and can therefore encode the charging data accurately.

Description

Coding model generation method and charging data acquisition method and device
Technical Field
The application relates to the technical field of data processing, in particular to a coding model generation method, a charging data acquisition method and a charging data acquisition device.
Background
With the continuous development of new energy automobiles, more and more users use new energy automobiles as a means of transportation. New energy automobiles mainly comprise hybrid electric vehicles, pure electric vehicles, fuel cell electric vehicles and the like. In order to improve the safety and service life of a new energy automobile, the charging data of the battery pack in the vehicle need to be analyzed so that problems can be found in time from changes in the charging data. However, real-vehicle charging data are mostly fragmented, and charging data covering a full charge-discharge cycle can hardly be acquired. For example, in practical applications, to ensure smooth travel, a user usually does not wait until the state of charge (SOC) reaches 0% before charging, and the charging process is concentrated around 50% to 80% SOC. Therefore, charging data covering 0% to 100% SOC cannot be acquired, and the lack of such complete data makes the data analysis results inaccurate.
Disclosure of Invention
In view of this, the embodiment of the present application provides a coding model generation method, a charging data acquisition method and an apparatus, so as to acquire complete charging data of a vehicle and improve accuracy of a subsequent data analysis result.
In order to solve the above problem, the technical solution provided by the embodiment of the present application is as follows:
in a first aspect of embodiments of the present application, a method for generating a coding model is provided, where the method may include:
acquiring a data set to be trained, wherein the data set to be trained comprises charging data corresponding to a plurality of time periods;
inputting the data set to be trained into an encoding model to obtain a first vector representation set, wherein the first vector representation set comprises vector representations corresponding to the charging data of all time periods;
inputting the first vector representation set into a decoding model to obtain a first decoding data set, wherein the first decoding data set comprises charging data corresponding to each vector representation in the first vector representation set after decoding;
obtaining a first loss function according to the first decoding data set and the data set to be trained;
and training the coding model according to the first loss function until the first loss function meets a first preset condition, and obtaining the coding model.
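As a toy illustration only, the five steps above (encode, decode, compute a reconstruction loss, adjust the coding model, stop when a preset condition is met) can be sketched with a scalar linear "encoder" and "decoder" standing in for the neural networks of the patent; all function names, the mean-squared-error loss, and the learning-rate/threshold values are assumptions, not taken from the source.

```python
# Toy sketch of the claimed training loop: a scalar linear "encoder"
# (multiply by w) and "decoder" (multiply by v) stand in for the neural
# networks; the first loss function is the mean squared error between
# the decoded data and the data set to be trained.

def encode(batch, w):
    return [[w * x for x in seq] for seq in batch]

def decode(batch_z, v):
    return [[v * z for z in seq] for seq in batch_z]

def first_loss(decoded, original):
    n = sum(len(seq) for seq in original)
    return sum((d - x) ** 2
               for ds, xs in zip(decoded, original)
               for d, x in zip(ds, xs)) / n

def train(train_set, w=0.5, v=0.5, lr=0.01, threshold=1e-4, max_iter=5000):
    loss = float("inf")
    for _ in range(max_iter):
        recon = decode(encode(train_set, w), v)
        loss = first_loss(recon, train_set)
        if loss < threshold:          # first preset condition met
            break
        n = sum(len(seq) for seq in train_set)
        # Gradients of the MSE w.r.t. w and v for this linear model.
        grad_w = sum(2 * (w * v * x - x) * v * x
                     for seq in train_set for x in seq) / n
        grad_v = sum(2 * (w * v * x - x) * w * x
                     for seq in train_set for x in seq) / n
        w -= lr * grad_w
        v -= lr * grad_v
    return w, v, loss

# "Charging data" for two time periods (e.g. normalized SOC readings).
data = [[0.3, 0.5, 0.8], [0.5, 0.7, 0.9]]
w, v, loss = train(data)
```

Here a perfect reconstruction requires w * v to approach 1, so the loop converges once the encoder/decoder pair reproduces the training data within the threshold.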
In a specific implementation manner, after obtaining a first loss function according to the first decoded data set and the data set to be trained, the method further includes:
inputting the first vector representation set into an auxiliary task model to obtain a second loss function, wherein the auxiliary task model is used for assisting in training the coding model;
the training the coding model according to the first loss function until the first loss function meets a preset condition, to obtain the coding model, includes:
and training the coding model according to the first loss function and the second loss function until a joint loss function constructed according to the first loss function and the second loss function meets a second preset condition, and obtaining the coding model.
In a specific implementation manner, the inputting the first vector representation set into an auxiliary task model to obtain a second loss function includes:
inputting the first vector representation set into a classification model to obtain a classification result;
and obtaining a second loss function according to the classification result and the classification label corresponding to each vector representation in the first vector representation set.
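The patent does not specify the form of the second loss function; cross-entropy between the predicted class distribution and the classification label is a common choice for a classification auxiliary task, sketched below (the function names and toy logits are assumptions):

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def second_loss(logits_batch, labels):
    # Mean cross-entropy between the classification results and the
    # classification labels attached to each vector representation.
    total = 0.0
    for logits, y in zip(logits_batch, labels):
        total -= math.log(softmax(logits)[y])
    return total / len(labels)

# Two hypothetical classifier outputs over 3 classes, with labels.
logits = [[2.0, 0.1, -1.0], [0.0, 3.0, 0.5]]
labels = [0, 1]
loss = second_loss(logits, labels)
```

A confident, correct classification yields a small second loss; a wrong one yields a large loss, matching the patent's use of the loss as a measure of how well the vector representations capture the charging data.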
In a particular implementation, the first preset condition is that the loss function value is minimized, and/or the second preset condition is that the loss function value is minimized.
In a specific implementation manner, the charging data corresponding to each of the plurality of time periods comprises charging data corresponding to the same vehicle in different time periods and/or charging data corresponding to different vehicles in different time periods.
In a specific implementation manner, for the same vehicle, the vector representations of the charging data corresponding to the same preset time period are consistent, and the vector representations of the charging data corresponding to different preset time periods are not consistent.
In a particular implementation, the vector representations of the charging data for the same time period are not consistent for different vehicles.
In a second aspect of the embodiments of the present application, a charging data obtaining method is provided, and the method may include: acquiring a target data set to be processed, where the target data set to be processed includes a plurality of pieces of charging data, and the plurality of pieces of charging data have consistency;
inputting the target data set to be processed into a coding model to obtain a second vector representation set, wherein the second vector representation set comprises vector representations of the charging data, and the coding model is obtained by training according to the method of the first aspect;
inputting the second vector representation set into a decoding model to obtain a second decoding data set, wherein the second decoding data set comprises charging data corresponding to each vector representation after decoding;
and obtaining target charging data according to the second decoding data set, where the target charging data includes full-charge/full-discharge charging data.
In a specific implementation manner, the acquiring a target data set to be processed includes:
acquiring a data set to be processed, wherein the data set to be processed comprises a plurality of pieces of charging data;
inputting the set of data to be processed into the coding model, and obtaining a third vector representation set, wherein the third vector representation set comprises vector representations of the charging data;
inputting the third vector representation set into a classification model to obtain a classification result;
and selecting the data to be processed belonging to the same classification result from the data set to be processed according to the classification result to form the target data set to be processed.
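A minimal sketch of the selection step above: grouping the to-be-processed pieces by classification result and keeping the pieces that share one class. Picking the most common class is an illustrative assumption; the patent only requires that the selected pieces belong to the same classification result.

```python
from collections import defaultdict

def select_target_set(pieces, class_results):
    # Group each piece of to-be-processed charging data by its
    # classification result, then keep the largest group so that the
    # target data set has consistency (same classification result).
    groups = defaultdict(list)
    for piece, cls in zip(pieces, class_results):
        groups[cls].append(piece)
    best = max(groups, key=lambda c: len(groups[c]))
    return groups[best]

pieces = ["seg_a", "seg_b", "seg_c", "seg_d"]
classes = [1, 2, 1, 1]
target = select_target_set(pieces, classes)
```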
In a third aspect of embodiments of the present application, there is provided an encoding model generation apparatus, including:
a first obtaining unit, configured to obtain a data set to be trained, where the data set to be trained includes charging data corresponding to a plurality of time periods;
a second obtaining unit, configured to input the data set to be trained into a coding model to obtain a first vector representation set, where the first vector representation set includes vector representations corresponding to the charging data of the respective time periods;
a third obtaining unit, configured to input the first vector representation set into a decoding model, and obtain a first decoded data set, where the first decoded data set includes charging data corresponding to each vector representation in the first vector representation set after decoding;
a fourth obtaining unit, configured to obtain a first loss function according to the first decoded data set and the data set to be trained;
and the training unit is used for training the coding model according to the first loss function until the first loss function meets a first preset condition, so as to obtain the coding model.
In a fourth aspect of embodiments of the present application, there is provided a charging data acquiring apparatus, including:
a first obtaining unit, configured to obtain a target data set to be processed, where the target data set to be processed includes a plurality of pieces of charging data, and the plurality of pieces of charging data have consistency;
a second obtaining unit, configured to input the target data set to be processed into a coding model, and obtain a second vector representation set, where the second vector representation set includes vector representations of the charging data, and the coding model is obtained by training according to the method of the first aspect;
a third obtaining unit, configured to input the second vector representation set into a decoding model, and obtain a second decoded data set, where the second decoded data set includes charging data corresponding to each vector representation after decoding;
and a fourth obtaining unit, configured to obtain target charging data according to the second decoding data set, where the target charging data includes full-charge/full-discharge charging data.
In a fifth aspect of embodiments of the present application, there is provided an apparatus, including: a processor, a memory;
the memory for storing computer readable instructions or a computer program;
the processor is configured to read the computer readable instructions or the computer program to enable the apparatus to implement the coding model generation method according to the first aspect or the charging data acquisition method according to the second aspect.
In a sixth aspect of embodiments of the present application, there is provided a computer-readable storage medium including instructions or a computer program which, when run on a computer, cause the computer to execute the coding model generation method of the first aspect or the charging data acquisition method of the second aspect.
Therefore, the embodiment of the application has the following beneficial effects:
in the embodiment of the application, a data set to be trained is obtained, the data set to be trained includes charging data corresponding to a plurality of time periods, the data set to be trained is input into a coding model, and a first vector representation set is obtained, and the first vector representation set includes vector representations corresponding to the charging data of the time periods. And inputting the first vector representation set into a decoding model to obtain a first decoding data set, wherein the first decoding data set comprises charging data corresponding to each vector representation after decoding. And comparing the first decoding data set with the data set to be trained to obtain a first loss function, wherein the first loss function is used for indicating the difference between the data obtained by decoding and the original data. And adjusting parameters of the coding model according to the first loss function, and further continuing training the coding model until the first loss function meets a first preset condition to obtain the coding model. Namely, through the training, when the coding model codes the charging data, the charging data can be smoothed and the time sequence information between the charging data of different time periods can be learned, so that the charging data can be accurately coded.
In addition, in order to improve the encoding precision of the encoding model, the first vector representation set can be input into an auxiliary task model, and a second loss function is obtained, wherein the auxiliary task model is used for assisting in training the encoding model. And training the coding model by using the first loss function and the second loss function until the joint loss function constructed by the first loss function and the second loss function meets a second preset condition, thereby obtaining the coding model.
In actual application, a target data set to be processed is acquired, where the target data set to be processed includes a plurality of pieces of charging data, and the plurality of pieces of charging data have consistency. The target data set to be processed is input into the coding model to obtain a second vector representation set, and the second vector representation set is input into the decoding model to obtain a second decoding data set. Target charging data are then obtained from the plurality of pieces of charging data in the second decoding data set, where the target charging data include full-charge/full-discharge charging data. That is, based on the characteristic that charging data show consistency over a short period, the embodiment of the application splices and supplements the charging data in the charging segments, so that charging data close to a full charge-discharge cycle can be obtained and the accuracy of subsequent data analysis is improved.
Drawings
Fig. 1 is a flowchart of a coding model generation method according to an embodiment of the present application;
fig. 2 is a diagram of a coding model generation framework provided in an embodiment of the present application;
fig. 3 is a flowchart of a charging data obtaining method according to an embodiment of the present application;
fig. 4 is a structural diagram of a coding model generation apparatus according to an embodiment of the present application;
fig. 5 is a structural diagram of a charging data acquisition device according to an embodiment of the present application.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, embodiments accompanying the drawings are described in detail below.
In the study of charging data, the inventor found that when the battery pack of a real vehicle is charged, the SOC is mostly concentrated between 50% and 80%, and the charging data are mostly fragmented, for example, charging data from 30% to 80% SOC, charging data from 50% to 90% SOC, and the like. Full-charge/full-discharge charging data (0% to 100% SOC) cannot be acquired, which limits the subsequent charging data analysis process.
Based on the characteristic that a plurality of charging data segments are consistent over a short period, the coding model is trained. After the coding model is trained, a target data set to be processed is input into the coding model, so that the plurality of pieces of data in the target data set are encoded by the coding model to obtain a second vector representation set. Through encoding, the pieces of data in the target data set can be smoothed, abnormal data are eliminated, the stability of the data is ensured, and the time-sequence information of each piece of data is extracted. The second vector representation set is input into a decoding model to obtain a second decoding data set, where the pieces of charging data included in the second decoding data set are smoothed data. The pieces of data in the second decoding data set are then integrated to obtain target charging data, where the target charging data include full-charge/full-discharge charging data.
In order to facilitate understanding of the technical solutions provided by the embodiments of the present application, a training method of a coding model and a charging data obtaining method will be described below with reference to the accompanying drawings.
Referring to fig. 1, which is a flowchart of a coding model generation method provided in an embodiment of the present application, as shown in fig. 1, the method may include:
s101: and acquiring a data set to be trained.
In this embodiment, to train and generate the coding model, a set containing a large amount of data to be trained may be obtained, where the data to be trained include charging data corresponding to each of a plurality of time periods. Specifically, the charging data corresponding to the plurality of time periods may be charging data over continuous time periods of a certain length across the full fleet of vehicles, split according to a preset length, thereby forming multi-vehicle, multi-time-period charging data. The plurality of time periods may be preset. For example, the first preset time period is from day 1 to day 10; if the vehicle is charged 4 times in the first preset time period, the data set to be trained includes the charging data corresponding to those 4 charges. The second preset time period is from day 11 to day 20; if the vehicle is charged 3 times in the second preset time period, the data set to be trained includes the charging data corresponding to those 3 charges. The charging data may be raw current and voltage data, or extracted feature data such as battery temperature, state of charge (SOC), and incremental capacity analysis (ICA) curve features.
The charging data corresponding to the plurality of time periods may include charging data corresponding to the same vehicle in different time periods and/or charging data corresponding to different vehicles in different time periods. For the same vehicle, the vector representations of the charging data corresponding to the same preset time period are consistent, and the vector representations of the charging data corresponding to different preset time periods are not consistent. The preset time period may be an equivalent time period set according to actual service requirements. For example, suppose there are a vehicle A and a vehicle B, and the equivalent time period is 10 days. Vehicle A is charged 4 times in total from day 1 to day 10, yielding charging data A1, A2, A3 and A4, and the vector representations of A1-A4 are consistent; vehicle A is charged 3 times in total from day 11 to day 20, yielding charging data A5, A6 and A7, and the vector representations of A5-A7 are consistent. However, the vector representation of any one of A1-A4 is not consistent with the vector representation of any one of A5-A7. Vehicle B is charged 2 times from day 1 to day 10, yielding charging data B1 and B2, whose vector representations are consistent; vehicle B is charged 7 times from day 11 to day 20, yielding charging data B3, B4, B5, B6, B7, B8 and B9, whose vector representations are consistent. There is no consistency between the vector representation of any one of B1-B2 and the vector representation of any one of B3-B9.
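The grouping by equivalent time period described above can be sketched as follows; the 10-day window and all names are illustrative assumptions:

```python
def window_index(day, window_days=10):
    # Day 1-10 -> window 0, day 11-20 -> window 1, and so on.
    return (day - 1) // window_days

def group_sessions(sessions, window_days=10):
    # sessions: (vehicle_id, day, data). Sessions of the same vehicle
    # falling in the same equivalent window form one consistency group.
    groups = {}
    for vehicle, day, data in sessions:
        key = (vehicle, window_index(day, window_days))
        groups.setdefault(key, []).append(data)
    return groups

sessions = [
    ("A", 2, "A1"), ("A", 5, "A2"), ("A", 9, "A3"), ("A", 10, "A4"),
    ("A", 12, "A5"), ("B", 3, "B1"), ("B", 15, "B3"),
]
groups = group_sessions(sessions)
```

Sessions in one group (e.g. vehicle A, days 1-10) are expected to have consistent vector representations; sessions in different groups are not.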
S102: and inputting the data set to be trained into the coding model to obtain a first vector representation set.
After the data set to be trained is obtained, the data set to be trained is input into the coding model, and a first vector representation set is obtained. Wherein the first vector representation set comprises vector representations corresponding to the charging data of the respective time periods. When the coding model codes the charging data in the data set to be trained, the plurality of charging data can be smoothed to eliminate the charging data under the extreme abnormal condition, so that the stability of the extracted features is ensured.
S103: the first vector representation set is input to a decoding model, obtaining a first decoded data set.
S104: and obtaining a first loss function according to the first decoding data set and the data set to be trained.
After the vector representation corresponding to each piece of charging data in the data set to be trained is obtained through the coding model, the vector representations are input into a decoding model so that the decoding model decodes them into decoded data. It can be understood that, since each vector representation is obtained after the coding model smooths the charging data, there is some difference between the decoded data obtained by the decoding model and the actual charging data. A first loss function is obtained from this difference, and the first loss function reflects the difference between the decoded charging data and the charging data in the data set to be trained. That is, the larger the first loss function, the larger the difference between the decoded charging data and the charging data in the data set to be trained; the smaller the first loss function, the smaller that difference, which indicates that the vector representations obtained when the coding model encodes the charging data are more accurate.
S105: and training the coding model according to the first loss function until the first loss function meets a first preset condition, and obtaining the coding model.
After the first loss function is obtained, the parameters of the coding model can be adjusted according to the first loss function, and S102-S104 are executed with the adjusted coding model until the obtained first loss function meets the first preset condition, which indicates that the coding accuracy of the coding model is high. The parameters are then no longer adjusted, and the coding model is obtained, so that the charging data can subsequently be supplemented by using the coding model. The first preset condition may be set according to the actual application situation, and this embodiment is not limited herein. For example, the first preset condition may be that the loss function value is minimal, or that the loss function value is smaller than a preset threshold.
In an application scenario, in order to improve the training accuracy of the coding model, after performing S104, the following steps may be further performed:
a1: and inputting the first vector representation set into an auxiliary task model to obtain a second loss function, wherein the auxiliary task model is used for assisting in training the coding model.
In this embodiment, the first vector representation set is first input into the auxiliary task model, the result obtained by the auxiliary task model based on the first vector representation set is acquired, and the second loss function is then determined according to that result. The auxiliary task model may be a classification model, i.e., it classifies each vector representation in the input first vector representation set. After the classification result is obtained, the second loss function is obtained from the classification result and the classification label corresponding to each vector representation in the first vector representation set. The classification label corresponding to a vector representation indicates the actual classification result of that vector representation, and the second loss function reflects the difference between the classification result obtained using the vector representation and the actual classification result. A larger second loss function indicates that the classification result output by the classification model differs from the actual classification result, which in turn indicates that the vector representations obtained by the coding model cannot represent the characteristics of the charging data, i.e., the coding is inaccurate. A smaller second loss function indicates that the classification result is the same as the actual classification result, so the vector representations obtained by the coding model can accurately represent the characteristics of the charging data, i.e., the coding is more accurate.
A2: and training the coding model according to the first loss function and the second loss function until the joint loss function constructed according to the first loss function and the second loss function meets a second preset condition, and obtaining the coding model.
After the first loss function and the second loss function are obtained, a joint loss function is constructed from them, and the joint loss function is then used to train the coding model until it meets the second preset condition. The second preset condition may be set according to the actual application situation, and this embodiment is not limited herein. For example, the second preset condition may be that the loss function value is minimal, or that the loss function value is smaller than a preset threshold.
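A weighted sum is one common way to construct such a joint loss; the patent does not fix the combination, so the weights and threshold below are assumptions:

```python
def joint_loss(first, second, alpha=1.0, beta=1.0):
    # Weighted sum of the reconstruction loss (first) and the
    # auxiliary-task loss (second); alpha and beta are illustrative.
    return alpha * first + beta * second

def meets_second_condition(loss, threshold=0.01):
    # Second preset condition, e.g. the joint loss below a threshold.
    return loss < threshold
```

Training would minimize `joint_loss` over both objectives at once, stopping when `meets_second_condition` returns true.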
It can be seen that a data set to be trained is obtained, where the data set to be trained includes charging data corresponding to a plurality of time periods; the data set to be trained is input into a coding model to obtain a first vector representation set, where the first vector representation set includes vector representations corresponding to the charging data of the respective time periods. The first vector representation set is input into a decoding model to obtain a first decoding data set, where the first decoding data set includes the charging data obtained by decoding each vector representation. A first loss function is obtained from the first decoding data set and the data set to be trained, where the first loss function indicates the difference between the decoded data and the original data. The parameters of the coding model are adjusted according to the first loss function, and training continues until the first loss function meets the first preset condition, thereby obtaining the coding model. Through this training, when the coding model encodes the charging data, it can smooth the charging data and learn the time-sequence information among charging data of different time periods, and can therefore encode the charging data accurately.
Meanwhile, the first vector representation set may also be input into the auxiliary task model to obtain a second loss function, and the coding model is trained with the first loss function and the second loss function until the joint loss function constructed from the two meets the second preset condition, thereby obtaining a coding model with higher coding accuracy.
To facilitate understanding of the training process of the coding model, refer to the framework diagram shown in fig. 2, which includes a coding model 201, a classification module 202, a decoding model 203, and a judgment module 204. The coding model 201 is configured to encode the input charging data to be trained into vector representations and input them into the classification module 202. The classification module 202 classifies each vector representation to obtain a classification result and inputs the classification result to the judgment module 204, so that the judgment module 204 obtains the second loss function from the classification result output by the classification module 202 and the label corresponding to the vector representation. Meanwhile, the coding model 201 also inputs the vector representations into the decoding model 203, so that the decoding model 203 decodes them to obtain a decoded data set. The judgment module 204 determines the similarity between the charging data in the data set to be trained and the decoded data in the decoded data set to obtain the first loss function, and the coding model is adjusted and trained according to the first loss function and the second loss function.
The coding model 201 may include two parts. The first part is a neural network structure for extracting features from the charging data, such as a Convolutional Neural Network (CNN). This part serves two functions: it eliminates some extreme abnormal conditions, smoothing the data so that the data and features are more stable; and it fuses charging data from different time periods. For example, one charge covering 40% to 80% SOC and another covering 20% to 60% SOC can be fused to extract feature data spanning 20% to 80%. The second part is a sequence model that extracts the time sequence information of the charging sequence. Therefore, the scheme provided by the embodiment of the application not only obtains charging data close to a full charge and discharge, but also removes some burr points in the charging data, making the data more stable.
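The smoothing role of the first, CNN-like part can be illustrated with a minimal stand-in: a uniform 1-D convolution kernel that flattens an extreme "burr point" while preserving the trend. This is an assumption for illustration only; the patent does not specify the network's kernels, depth, or learned weights.

```python
import numpy as np

def smooth(sequence, kernel_size=3):
    """Stand-in for the CNN part: a uniform 1-D convolution that suppresses
    extreme abnormal points ("burr points") and smooths the charging data."""
    kernel = np.ones(kernel_size) / kernel_size
    padded = np.pad(sequence, kernel_size // 2, mode="edge")  # keep length
    return np.convolve(padded, kernel, mode="valid")

# Charging curve with one extreme abnormal point at index 3.
curve = np.array([0.40, 0.45, 0.50, 2.00, 0.60, 0.65, 0.70])
smoothed = smooth(curve)
# The spike is flattened while the overall rising trend is preserved.
```

A trained CNN would learn its filters from data rather than use a fixed averaging kernel, but the structural effect of local convolution on isolated outliers is the same.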
After the coding model is trained, the charging data can be supplemented according to the coding model, which will be described below with reference to the accompanying drawings.
Referring to fig. 3, which is a flowchart of a charging data obtaining method provided in an embodiment of the present application, as shown in fig. 3, the method may include:
S301: acquiring a target data set to be processed.
In this embodiment, charging data segments of a real vehicle in the actual charging process, that is, the target data set to be processed, are acquired. The target data set to be processed includes a plurality of pieces of charging data, and the plurality of pieces of charging data have consistency with one another. That is, the plurality of pieces of charging data in the target data set to be processed are the charging data corresponding to a valid time period. The pieces of charging data cover different SOC ranges. For example, the set may include one or more pieces of charging data with SOC from 10% to 80%, one or more with SOC from 30% to 90%, one or more with SOC from 20% to 60%, one or more with SOC from 30% to 100%, and so on.
The target data set to be processed can be obtained through the following modes:
acquiring a data set to be processed, where the data set to be processed includes a plurality of pieces of charging data from different time periods; inputting the data set to be processed into the coding model to obtain a third vector representation set, where the third vector representation set includes the vector representation of each piece of charging data; inputting the third vector representation set into a classification model to obtain a classification result; and selecting, according to the classification result, the data belonging to the same classification result from the data set to be processed to form the target data set to be processed. The data belonging to the same classification result exhibit consistency with one another.
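The selection step above can be sketched as follows. The classifier here is a hypothetical stand-in for the trained classification model applied to the third vector representation set; the threshold, segment values, and the "largest class wins" choice are illustrative assumptions.

```python
import numpy as np

def select_consistent(segments, classify):
    """Keep only the segments that fall into the same classification result
    (here, the largest class), forming the target data set to be processed."""
    labels = [classify(seg) for seg in segments]
    majority = max(set(labels), key=labels.count)
    return [seg for seg, lab in zip(segments, labels) if lab == majority]

# Hypothetical stand-in classifier; in the patent this is the classification
# model applied to the coding model's vector representations.
classify = lambda seg: 0 if float(np.mean(seg)) < 0.5 else 1

segments = [np.array([0.1, 0.3]), np.array([0.2, 0.4]),
            np.array([0.7, 0.9]), np.array([0.2, 0.3])]
target_set = select_consistent(segments, classify)  # consistent segments remain
```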
S302: inputting the target data set to be processed into the coding model to obtain a second vector representation set.
After the target data set to be processed is acquired, each piece of charging data in it is input into the coding model to obtain the vector representation corresponding to each piece of charging data, that is, the second vector representation set. The coding model here is obtained by training according to the method described in fig. 1.
S303: inputting the second vector representation set into a decoding model to obtain a second decoded data set.
The second decoded data set includes the charging data corresponding to each vector representation after decoding. It can be understood that, because the coding model smooths the charging data of the target data set to be processed during encoding, the charging data obtained after decoding is smoother.
S304: obtaining target charging data from the second decoded data set, the target charging data including fully charged and discharged charging data.
The target charging data is a complete piece of charging data covering SOC from 0% to 100%. For example, if the second decoded data set includes charging data with SOC from 20% to 50%, from 0% to 20%, from 50% to 70%, and from 70% to 100%, these pieces are spliced to obtain charging data with SOC from 0% to 100%. Where pieces of charging data intersect, their values can be averaged to obtain the charging data corresponding to the intersecting portion before splicing. As another example, if the second decoded data set includes first charging data with SOC from 20% to 80% and second charging data with SOC from 50% to 90%, then for the intersecting portion with SOC from 50% to 80%, the values of the first charging data and the second charging data over that range may be averaged, and the average used as the charging data for SOC from 50% to 80%. Alternatively, the maximum within the intersection may be used as the charging data for the intersecting portion. For example, if the first charging data's values for SOC from 50% to 80% are larger than the second charging data's, the first charging data's values are taken as the charging data for SOC from 50% to 80%.
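The splicing-with-averaging step above can be sketched as follows, assuming decoded segments are represented as mappings from SOC checkpoints to a measured quantity (for example capacity or voltage). The SOC grid, the values, and the dict representation are illustrative assumptions, not the patent's data format.

```python
import numpy as np

def stitch(segments, grid):
    """Splice overlapping charging-data segments into one full curve over the
    SOC grid, averaging values where segments intersect (the text also allows
    taking the maximum of the intersection instead)."""
    sums = np.zeros(len(grid))
    counts = np.zeros(len(grid))
    for seg in segments:
        for i, soc in enumerate(grid):
            if soc in seg:
                sums[i] += seg[soc]
                counts[i] += 1
    if (counts == 0).any():
        raise ValueError("some SOC point is covered by no segment")
    return sums / counts  # average over however many segments cover each point

grid = [0, 25, 50, 75, 100]              # SOC checkpoints in percent
segments = [
    {0: 3.0, 25: 3.4, 50: 3.7},          # decoded segment covering SOC 0%-50%
    {50: 3.9, 75: 4.0, 100: 4.2},        # decoded segment covering SOC 50%-100%
]
full_curve = stitch(segments, grid)      # value at 50% is the average of 3.7 and 3.9
```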
As can be seen, this embodiment obtains a target data set to be processed that includes a plurality of pieces of charging data having consistency with one another. The target data set to be processed is input into the coding model to obtain a second vector representation set, and the second vector representation set is input into the decoding model to obtain a second decoded data set. Target charging data is then obtained from the plurality of pieces of charging data in the second decoded data set, the target charging data including fully charged and discharged charging data. That is, based on the characteristic that charging data exhibits consistency over a short period, the embodiment of the application splices and supplements the charging data in the charging segments, so that charging data close to a full charge and discharge can be obtained, improving the accuracy of subsequent data analysis.
Based on the above method embodiments, the present application provides a coding model generation apparatus and a charging data acquisition apparatus, which will be described below with reference to the accompanying drawings.
Referring to fig. 4, which is a structural diagram of a coding model generation apparatus provided in an embodiment of the present application, as shown in fig. 4, the apparatus may include:
a first obtaining unit 401, configured to obtain a data set to be trained, where the data set to be trained includes charging data corresponding to each of a plurality of time periods;
a second obtaining unit 402, configured to input the to-be-trained data set into a coding model, and obtain a first vector representation set, where the first vector representation set includes vector representations corresponding to charging data of each time period;
a third obtaining unit 403, configured to input the first vector representation set into a decoding model, to obtain a first decoded data set, where the first decoded data set includes charging data corresponding to each vector representation in the first vector representation set after decoding;
a fourth obtaining unit 404, configured to obtain a first loss function according to the first decoding data set and the data set to be trained;
a training unit 405, configured to train the coding model according to the first loss function until the first loss function meets a first preset condition, so as to obtain the coding model.
In one possible implementation, the apparatus further includes: a fifth obtaining unit;
the fifth obtaining unit is configured to input the first vector representation set into an auxiliary task model to obtain a second loss function, where the auxiliary task model is used to assist in training the coding model;
the training unit is specifically configured to train the coding model according to the first loss function and the second loss function until a joint loss function constructed according to the first loss function and the second loss function meets a second preset condition, so as to obtain the coding model.
In a possible implementation manner, the fifth obtaining unit is specifically configured to input the first vector representation set into a classification model, and obtain a classification result; and obtaining a second loss function according to the classification result and the classification label corresponding to each vector representation in the first vector representation set.
In a possible implementation, the first predetermined condition is that the loss function value is minimum, and/or the second predetermined condition is that the loss function value is minimum.
In one possible implementation, the charging data corresponding to each of the plurality of time periods includes charging data corresponding to different time periods for the same vehicle and/or charging data corresponding to different time periods for different vehicles.
In one possible implementation, the vector representations of the charging data corresponding to the preset time period have consistency, and the vector representations of the charging data corresponding to the non-preset time period have no consistency, for the same vehicle.
In one possible implementation, the vector representations of the charging data for the same time period do not have consistency for different vehicles.
It should be noted that, implementation of each unit in this embodiment may refer to the above method embodiment, and this embodiment is not described herein again.
Referring to fig. 5, which is a structural diagram of a charging data acquiring apparatus according to an embodiment of the present application, as shown in fig. 5, the apparatus includes:
a first obtaining unit 501, configured to obtain a target data set to be processed, where the target data set to be processed includes a plurality of pieces of charging data, and the plurality of pieces of charging data have consistency;
a second obtaining unit 502, configured to input the to-be-processed target data set into a coding model, and obtain a second vector representation set, where the second vector representation set includes vector representations of the charging data, and the coding model is obtained by training according to the method of any one of claims 1 to 5;
a third obtaining unit 503, configured to input the second vector representation set into a decoding model, so as to obtain a second decoded data set, where the second decoded data set includes charging data corresponding to each vector representation after decoding;
a fourth obtaining unit 504, configured to obtain target charging data according to the second decoding data set, where the target charging data includes fully charged charging data.
In a possible implementation manner, the first obtaining unit 501 is specifically configured to obtain a to-be-processed data set, where the to-be-processed data set includes a plurality of pieces of charging data; inputting the set of data to be processed into the coding model, and obtaining a third vector representation set, wherein the third vector representation set comprises vector representations of the charging data; inputting the third vector representation set into a classification model to obtain a classification result; and selecting the data to be processed belonging to the same classification result from the data set to be processed according to the classification result to form the target data set to be processed.
It should be noted that, for implementation of each unit in this embodiment, reference may be made to related descriptions of the above method embodiments, and details of this embodiment are not described herein again.
In addition, an embodiment of the present application provides an apparatus, including: a processor, a memory;
the memory for storing computer readable instructions or a computer program;
the processor is configured to read the computer readable instructions or the computer program, so as to enable the device to implement the coding model generation method or the charging data acquisition method.
An embodiment of the present application provides a computer-readable storage medium, which includes instructions or a computer program, and when the computer-readable storage medium runs on a computer, the computer is caused to execute the coding model generation method or the charging data acquisition method.
It should be noted that, in the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. For the system or the device disclosed by the embodiment, the description is simple because the system or the device corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
It should be understood that in the present application, "at least one" means one or more, "a plurality" means two or more. "and/or" for describing an association relationship of associated objects, indicating that there may be three relationships, e.g., "a and/or B" may indicate: only A, only B and both A and B are present, wherein A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of single item(s) or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
It is further noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method for generating a coding model, the method comprising:
acquiring a data set to be trained, wherein the data set to be trained comprises charging data corresponding to a plurality of time periods;
inputting the data set to be trained into an encoding model to obtain a first vector representation set, wherein the first vector representation set comprises vector representations corresponding to the charging data of all time periods;
inputting the first vector representation set into a decoding model to obtain a first decoding data set, wherein the first decoding data set comprises charging data corresponding to each vector representation in the first vector representation set after decoding;
obtaining a first loss function according to the first decoding data set and the data set to be trained;
and training the coding model according to the first loss function until the first loss function meets a first preset condition, and obtaining the coding model.
2. The method of claim 1, wherein after obtaining a first loss function from the first set of decoded data and the set of data to be trained, the method further comprises:
inputting the first vector representation set into an auxiliary task model to obtain a second loss function, wherein the auxiliary task model is used for assisting in training the coding model;
the training the coding model according to the first loss function until the first loss function meets a preset condition to obtain a coding model includes:
and training the coding model according to the first loss function and the second loss function until a joint loss function constructed according to the first loss function and the second loss function meets a second preset condition, and obtaining the coding model.
3. The method of claim 2, wherein inputting the first vector representation set into an auxiliary task model, obtaining a second penalty function, comprises:
inputting the first vector representation set into a classification model to obtain a classification result;
and obtaining a second loss function according to the classification result and the classification label corresponding to each vector representation in the first vector representation set.
4. The method according to claim 2 or 3, characterized in that the first predetermined condition is a minimum loss function value, and/or the second predetermined condition is a minimum loss function value.
5. The method according to any one of claims 1-4, wherein the charging data for each of the plurality of time periods comprises charging data for the same vehicle at different time periods and/or charging data for different vehicles at different time periods.
6. A charging data acquisition method, the method comprising:
acquiring a target data set to be processed, wherein the target data set to be processed comprises a plurality of pieces of charging data, and the plurality of pieces of charging data are consistent;
inputting the target data set to be processed into a coding model, and obtaining a second vector representation set, wherein the second vector representation set comprises vector representations of the charging data, and the coding model is obtained by training according to the method of any one of claims 1 to 5;
inputting the second vector representation set into a decoding model to obtain a second decoding data set, wherein the second decoding data set comprises charging data corresponding to each vector representation after decoding;
and obtaining target charging data according to the second decoding data set, wherein the target charging data comprises fully charged charging data.
7. The method of claim 6, wherein the obtaining the target set of data to be processed comprises:
acquiring a data set to be processed, wherein the data set to be processed comprises a plurality of pieces of charging data;
inputting the set of data to be processed into the coding model, and obtaining a third vector representation set, wherein the third vector representation set comprises vector representations of the charging data;
inputting the third vector representation set into a classification model to obtain a classification result;
and selecting the data to be processed belonging to the same classification result from the data set to be processed according to the classification result to form the target data set to be processed.
8. An apparatus for generating a coding model, the apparatus comprising:
the device comprises a first acquisition unit, a second acquisition unit and a control unit, wherein the first acquisition unit is used for acquiring a data set to be trained, and the data set to be trained comprises charging data corresponding to a plurality of time periods;
the second obtaining unit is used for inputting the data set to be trained into a coding model to obtain a first vector representation set, and the first vector representation set comprises vector representations corresponding to the charging data of all time periods;
a third obtaining unit, configured to input the first vector representation set into a decoding model, and obtain a first decoded data set, where the first decoded data set includes charging data corresponding to each vector representation in the first vector representation set after decoding;
a fourth obtaining unit, configured to obtain a first loss function according to the first decoded data set and the data set to be trained;
and the training unit is used for training the coding model according to the first loss function until the first loss function meets a first preset condition, so as to obtain the coding model.
9. A charging data acquisition apparatus, characterized in that the apparatus comprises:
the device comprises a first acquisition unit, a second acquisition unit and a processing unit, wherein the first acquisition unit is used for acquiring a target data set to be processed, the target data set to be processed comprises a plurality of pieces of charging data, and the plurality of pieces of charging data are consistent;
a second obtaining unit, configured to input the target data set to be processed into a coding model, and obtain a second vector representation set, where the second vector representation set includes vector representations of the charging data, and the coding model is obtained by training according to the method of any one of claims 1 to 5;
a third obtaining unit, configured to input the second vector representation set into a decoding model, and obtain a second decoded data set, where the second decoded data set includes charging data corresponding to each vector representation after decoding;
and the fourth acquisition unit is used for acquiring target charging data according to the second decoding data set, wherein the target charging data comprises fully charged charging data.
10. An apparatus, comprising: a processor, a memory;
the memory for storing computer readable instructions or a computer program;
the processor is configured to read the computer readable instructions or the computer program to enable the apparatus to implement the coding model generation method according to any one of claims 1 to 5, or the charging data acquisition method according to any one of claims 6 or 7.
CN202110832448.XA 2021-07-22 2021-07-22 Coding model generation method, charging data acquisition method and device Active CN113487762B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110832448.XA CN113487762B (en) 2021-07-22 2021-07-22 Coding model generation method, charging data acquisition method and device


Publications (2)

Publication Number Publication Date
CN113487762A true CN113487762A (en) 2021-10-08
CN113487762B CN113487762B (en) 2023-07-04

Family

ID=77942188

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110832448.XA Active CN113487762B (en) 2021-07-22 2021-07-22 Coding model generation method, charging data acquisition method and device

Country Status (1)

Country Link
CN (1) CN113487762B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105301510A (en) * 2015-11-12 2016-02-03 北京理工大学 Battery aging parameter identification method
CN106973038A (en) * 2017-02-27 2017-07-21 同济大学 Network inbreak detection method based on genetic algorithm over-sampling SVMs
CN107832718A (en) * 2017-11-13 2018-03-23 重庆工商大学 Finger vena anti false authentication method and system based on self-encoding encoder
CN108983103A (en) * 2018-06-29 2018-12-11 上海科列新能源技术有限公司 A kind of data processing method and device of power battery
CN109934408A (en) * 2019-03-18 2019-06-25 常伟 A kind of application analysis method carrying out automobile batteries RUL prediction based on big data machine learning
CN110058175A (en) * 2019-05-05 2019-07-26 北京理工大学 A kind of reconstructing method of power battery open-circuit voltage-state-of-charge functional relation
CN110599557A (en) * 2017-08-30 2019-12-20 深圳市腾讯计算机系统有限公司 Image description generation method, model training method, device and storage medium
CN111291190A (en) * 2020-03-23 2020-06-16 腾讯科技(深圳)有限公司 Training method of encoder, information detection method and related device
CN111639684A (en) * 2020-05-15 2020-09-08 北京三快在线科技有限公司 Training method and device of data processing model
CN112379269A (en) * 2020-10-14 2021-02-19 武汉蔚来能源有限公司 Battery abnormity detection model training and detection method and device thereof


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhou Caijie, Wang Yujie: "Lithium battery state-of-health estimation based on autoencoder feature extraction", Proceedings of the 21st China Annual Conference on System Simulation Technology and Application, 27 August 2020 (2020-08-27), pages 247-251 *

Also Published As

Publication number Publication date
CN113487762B (en) 2023-07-04


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant