CN113487762B - Coding model generation method, charging data acquisition method and device - Google Patents


Info

Publication number
CN113487762B
CN113487762B (application CN202110832448.XA)
Authority
CN
China
Prior art keywords
data
loss function
charging data
vector representation
data set
Prior art date
Legal status
Active
Application number
CN202110832448.XA
Other languages
Chinese (zh)
Other versions
CN113487762A (en)
Inventor
刘美亿
Current Assignee
Neusoft Reach Automotive Technology Shenyang Co Ltd
Original Assignee
Neusoft Reach Automotive Technology Shenyang Co Ltd
Priority date
Filing date
Publication date
Application filed by Neusoft Reach Automotive Technology Shenyang Co Ltd
Priority to CN202110832448.XA
Publication of CN113487762A
Application granted
Publication of CN113487762B

Classifications

    • G: PHYSICS
    • G07: CHECKING-DEVICES
    • G07C: TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C5/00: Registering or indicating the working of vehicles
    • G07C5/08: Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/60: Other road transportation technologies with climate change mitigation effect
    • Y02T10/70: Energy storage systems for electromobility, e.g. batteries

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Charge And Discharge Circuits For Batteries Or The Like (AREA)

Abstract

The embodiments of the application disclose a coding model generation method. A data set to be trained, containing charging data for each of a plurality of time periods, is acquired and input into a coding model to obtain a first vector representation set. The first vector representation set is input into a decoding model to obtain a first decoded data set, which contains the decoded charging data corresponding to each vector representation. The first decoded data set is compared with the data set to be trained to obtain a first loss function. Parameters of the coding model are adjusted according to the first loss function, and training continues until the first loss function meets a first preset condition, yielding the coding model. Through this training, the coding model learns to smooth the charging data while encoding it and to capture the time-sequence information among charging data from different time periods, so the charging data can be encoded accurately.

Description

Coding model generation method, charging data acquisition method and device
Technical Field
The application relates to the technical field of data processing, in particular to a coding model generation method, a charging data acquisition method and a device.
Background
With the continuous development of new energy vehicles, more and more users rely on them for daily transportation. New energy vehicles mainly include hybrid electric vehicles, pure electric vehicles, fuel cell electric vehicles, and the like. To improve the safety and service life of a new energy vehicle, the charging data of the battery pack in the vehicle needs to be analyzed so that problems can be discovered in time from changes in the charging data. However, real-vehicle charging data mostly consists of segments, and charging data covering a full charge-discharge cycle is hardly ever acquired. For example, in practice, to ensure smooth travel, a user generally does not wait until the battery's state of charge (SOC) reaches 0% before charging, and charging is concentrated at roughly 50%-80% SOC. As a result, 0%-100% charging data cannot be acquired, and this lack of effective data makes data analysis results inaccurate.
Disclosure of Invention
In view of this, the embodiments of the present application provide a coding model generation method, a charging data acquisition method, and corresponding devices, so as to acquire complete charging data of a vehicle and thereby improve the accuracy of subsequent data analysis results.
In order to solve the above problems, the technical solution provided in the embodiments of the present application is as follows:
in a first aspect of the embodiments of the present application, a method for generating an encoding model is provided, which may include:
acquiring a data set to be trained, wherein the data set to be trained comprises charging data corresponding to each of a plurality of time periods;
inputting the data set to be trained into a coding model to obtain a first vector representation set, wherein the first vector representation set comprises vector representations corresponding to charging data of each time period;
inputting the first vector representation set into a decoding model to obtain a first decoded data set, wherein the first decoded data set comprises charging data corresponding to each vector representation in the first vector representation set after decoding;
obtaining a first loss function according to the first decoding data set and the data set to be trained;
training the coding model according to the first loss function until the first loss function meets a first preset condition, to obtain the coding model.
In a specific implementation, after obtaining the first loss function according to the first decoded data set and the data set to be trained, the method further includes:
Inputting the first vector representation set into an auxiliary task model to obtain a second loss function, wherein the auxiliary task model is used for assisting in training the coding model;
wherein training the coding model according to the first loss function until the first loss function meets a preset condition to obtain the coding model includes:
training the coding model according to the first loss function and the second loss function until the joint loss function constructed according to the first loss function and the second loss function meets a second preset condition, and obtaining the coding model.
In a specific implementation, the inputting the first vector representation set into an auxiliary task model to obtain a second loss function includes:
inputting the first vector representation set into a classification model to obtain a classification result;
and obtaining a second loss function according to the classification result and the classification labels corresponding to the vector representations in the first vector representation set.
In a specific implementation, the first preset condition is that the loss function value is minimized, and/or the second preset condition is that the loss function value is minimized.
In a specific implementation manner, the charging data corresponding to each of the plurality of time periods includes charging data corresponding to the same vehicle in different time periods and/or charging data corresponding to different vehicles in different time periods.
In a specific implementation, for the same vehicle, the vector representations of charging data within the same preset time period have consistency, while the vector representations of charging data from different preset time periods do not.
In one particular implementation, the vector representations of the charging data corresponding to the same period of time are not consistent for different vehicles.
In a second aspect of the embodiments of the present application, a method for acquiring charging data is provided, which may include: acquiring a target data set to be processed, where the target data set to be processed includes a plurality of pieces of charging data, and the plurality of pieces of charging data have consistency;
inputting the target data set to be processed into a coding model to obtain a second vector representation set, wherein the second vector representation set comprises vector representations of each piece of charging data, and the coding model is obtained through training according to the method of the first aspect;
inputting the second vector representation set into a decoding model to obtain a second decoding data set, wherein the second decoding data set comprises charging data corresponding to each vector representation after decoding;
and obtaining target charging data according to the second decoded data set, where the target charging data includes charging data covering a full charge-discharge cycle.
In a specific implementation manner, the acquiring the target data set to be processed includes:
acquiring a data set to be processed, wherein the data set to be processed comprises a plurality of pieces of charging data;
inputting the data set to be processed into the coding model to obtain a third vector representation set, wherein the third vector representation set comprises vector representations of all the charging data;
inputting the third vector representation set into a classification model to obtain a classification result;
and selecting the data to be processed belonging to the same classification result from the data set to be processed according to the classification result to form the target data set to be processed.
In a third aspect of the embodiments of the present application, there is provided an encoding model generating apparatus, including:
the first acquisition unit is used for acquiring a data set to be trained, wherein the data set to be trained comprises charging data corresponding to each of a plurality of time periods;
the second acquisition unit is used for inputting the data set to be trained into the coding model to obtain a first vector representation set, wherein the first vector representation set comprises vector representations corresponding to charging data of each time period;
The third acquisition unit is used for inputting the first vector representation set into a decoding model to obtain a first decoded data set, wherein the first decoded data set comprises charging data corresponding to each vector representation in the first vector representation set after decoding;
a fourth obtaining unit, configured to obtain a first loss function according to the first decoded data set and the data set to be trained;
the training unit is used for training the coding model according to the first loss function until the first loss function meets a first preset condition, to obtain the coding model.
In a fourth aspect of embodiments of the present application, there is provided a charging data acquisition apparatus, the apparatus including:
the first acquisition unit is used for acquiring a target data set to be processed, wherein the target data set to be processed comprises a plurality of pieces of charging data, and the plurality of pieces of charging data have consistency;
a second obtaining unit, configured to input the target data set to be processed into a coding model, and obtain a second vector representation set, where the second vector representation set includes vector representations of each piece of charging data, and the coding model is obtained by training according to the method described in the first aspect;
A third obtaining unit, configured to input the second vector representation set into a decoding model, and obtain a second decoded data set, where the second decoded data set includes charging data corresponding to each vector representation after decoding;
and a fourth obtaining unit, configured to obtain target charging data according to the second decoded data set, where the target charging data includes charging data covering a full charge-discharge cycle.
In a fifth aspect of embodiments of the present application, there is provided an apparatus comprising: a processor, a memory;
the memory is used for storing computer readable instructions or computer programs;
the processor is configured to read the computer-readable instructions or the computer program to cause the apparatus to implement the encoding model generating method according to the first aspect or the charging data acquiring method according to the second aspect.
In a sixth aspect of embodiments of the present application, there is provided a computer-readable storage medium comprising instructions or a computer program, which when run on a computer, causes the computer to perform the encoding model generation method of the first aspect, or the charging data acquisition method of the second aspect.
It can be seen from the above that the embodiments of the application have the following beneficial effects:
In the embodiments of the application, a data set to be trained is obtained, where the data set to be trained includes charging data corresponding to each of a plurality of time periods, and the data set to be trained is input into a coding model to obtain a first vector representation set, which includes the vector representations corresponding to the charging data of each time period. The first vector representation set is input into a decoding model to obtain a first decoded data set, which includes the decoded charging data corresponding to each vector representation. The first decoded data set is compared with the data set to be trained to obtain a first loss function, which indicates the gap between the decoded data and the original data. Parameters of the coding model are adjusted according to the first loss function, and training continues until the first loss function meets a first preset condition, yielding the coding model. That is, through this training, the coding model learns to smooth the charging data while encoding it and to capture the time-sequence information among charging data from different time periods, so the charging data can be encoded accurately.
In addition, in order to improve the coding precision of the coding model, the first vector expression set may be further input into an auxiliary task model, so as to obtain a second loss function, where the auxiliary task model is used for auxiliary training of the coding model. Training the coding model by using the first loss function and the second loss function until the joint loss function constructed by the first loss function and the second loss function meets a second preset condition, thereby obtaining the coding model.
In practical application, a target data set to be processed is acquired, which includes a plurality of pieces of charging data that are consistent with one another. The target data set to be processed is input into the coding model to obtain a second vector representation set, and the second vector representation set is input into the decoding model to obtain a second decoded data set. Target charging data is obtained using the pieces of charging data in the second decoded data set, and the target charging data includes charging data covering a full charge-discharge cycle. That is, the embodiments of the application splice and complete the charging data in the charging segments based on the characteristic that charging data within a short period are consistent, so that charging data close to a full charge-discharge cycle can be obtained, improving the accuracy of subsequent data analysis.
Drawings
FIG. 1 is a flowchart of a method for generating an encoding model according to an embodiment of the present application;
FIG. 2 is a frame diagram of an encoding model generation provided in an embodiment of the present application;
fig. 3 is a flowchart of a method for acquiring charging data according to an embodiment of the present application;
fig. 4 is a block diagram of an encoding model generating device according to an embodiment of the present application;
fig. 5 is a block diagram of a charging data obtaining device according to an embodiment of the present application.
Detailed Description
In order to make the above objects, features and advantages of the present application more comprehensible, embodiments accompanied with figures and detailed description are described in further detail below.
The inventors found, in studying the charging data, that because real-vehicle battery-pack charging is mostly concentrated between 50% and 80% SOC, the charging data mostly consists of segments, for example charging data from 30% to 80% SOC, from 50% to 90% SOC, and so on. Charging data covering a full charge-discharge cycle (0% to 100% SOC) therefore cannot be obtained, which limits the subsequent analysis of the charging data.
Based on the characteristic that a plurality of charging data segments within a short period are consistent, the embodiments of the application train the coding model. After the coding model is trained, the target data set to be processed is input into the coding model, which encodes the pieces of data in the set to obtain a second vector representation set. Through encoding, the pieces of data in the target data set to be processed can be smoothed, abnormal data eliminated, the stability of the data ensured, and the time-sequence information of each piece of data extracted. The second vector representation set is then input into the decoding model to obtain a second decoded data set; the pieces of charging data in the second decoded data set are smoothed data, and they are integrated to obtain the target charging data, which includes charging data covering a full charge-discharge cycle.
In order to facilitate understanding of the technical solution provided in the embodiments of the present application, a training method of a coding model and a charging data acquisition method will be described below with reference to the accompanying drawings.
Referring to fig. 1, the flowchart of a coding model generating method provided in an embodiment of the present application, as shown in fig. 1, the method may include:
s101: and acquiring a data set to be trained.
In this embodiment, to train and generate the coding model, a set containing a large amount of data to be trained may be acquired, where the data set to be trained includes charging data corresponding to each of a plurality of time periods. Specifically, the charging data may be taken from continuous time periods of a certain length across the whole vehicle fleet and split according to a preset length, forming charging data for multiple vehicles over multiple time periods. The plurality of time periods may be preset. For example, if the first preset time period is day 1 to day 10 and a vehicle is charged 4 times within it, the data set to be trained includes the charging data of each of those 4 charges; if the second preset time period is day 11 to day 20 and the vehicle is charged 3 times within it, the data set to be trained includes the charging data of each of those 3 charges. The charging data may be raw current-voltage data or extracted feature data, such as battery temperature, state of charge (SOC), incremental capacity analysis (ICA) curve features, and the like.
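As a rough illustration only, the ICA curve features mentioned above can be approximated numerically as the derivative of charged capacity with respect to voltage. The function name, bin count, and the monotone-voltage assumption below are ours, not the patent's:

```python
import numpy as np

def ica_curve(voltage, capacity, bins=50):
    """Incremental capacity analysis: approximate dQ/dV over a charging
    segment. `voltage` and `capacity` are 1-D arrays sampled during
    charging (voltage assumed monotonically increasing); the bin count
    is illustrative."""
    v_grid = np.linspace(voltage.min(), voltage.max(), bins)
    q_on_grid = np.interp(v_grid, voltage, capacity)  # resample Q over V
    dq_dv = np.gradient(q_on_grid, v_grid)            # numerical dQ/dV
    return v_grid, dq_dv
```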
The charging data corresponding to the plurality of time periods may include charging data of the same vehicle in different time periods and/or charging data of different vehicles in different time periods. Vector representations of charging data from the same preset time period have consistency, while vector representations of charging data from different preset time periods do not. The preset time period may be an equivalent time period set according to actual service requirements. For example, suppose there are a vehicle A and a vehicle B and the equivalent time period is 10 days. If vehicle A is charged 4 times from day 1 to day 10, charging data A1, A2, A3, and A4 are obtained, and the vector representations of A1-A4 are mutually consistent; if vehicle A is charged 3 times from day 11 to day 20, charging data A5, A6, and A7 are obtained, and the vector representations of A5-A7 are mutually consistent. However, the vector representation of any one of A1-A4 is not consistent with that of any one of A5-A7. Likewise, if vehicle B is charged 2 times from day 1 to day 10, charging data B1 and B2 are obtained, and the vector representations of B1 and B2 are consistent; if vehicle B is charged 7 times from day 11 to day 20, charging data B3 through B9 are obtained, and their vector representations are mutually consistent. The vector representation of any one of B1-B2 is not consistent with that of any one of B3-B9.
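A minimal sketch of this grouping, assuming charging sessions are stored as dicts with 'vehicle_id', 'day', and 'features' keys (these names and the Python representation are illustrative, not from the patent):

```python
from collections import defaultdict

def build_training_set(sessions, period_days=10):
    """Bucket charging sessions by (vehicle, equivalent time period).

    `sessions` is a list of dicts with keys 'vehicle_id', 'day' (1-indexed),
    and 'features'. Sessions in the same bucket are treated as consistent
    and share one classification label, e.g. A1-A4 -> 0, A5-A7 -> 1.
    """
    buckets = defaultdict(list)
    for s in sessions:
        period_index = (s["day"] - 1) // period_days  # days 1-10 -> 0, 11-20 -> 1
        buckets[(s["vehicle_id"], period_index)].append(s["features"])

    data, labels = [], []
    for label, feats in enumerate(buckets.values()):
        data.extend(feats)
        labels.extend([label] * len(feats))
    return data, labels
```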
S102: and inputting the data set to be trained into the coding model to obtain a first vector representation set.
After the data set to be trained is obtained, the data set to be trained is input into the coding model, and a first vector representation set is obtained. Wherein the first set of vector representations includes vector representations corresponding to the charging data for each time period. When the coding model codes the charging data in the data set to be trained, the plurality of pieces of charging data can be subjected to smoothing treatment so as to eliminate the charging data under the extreme abnormal condition, and the stability of the extracted features is ensured.
S103: the first vector representation set is input into a decoding model to obtain a first decoded data set.
S104: a first loss function is obtained from the first decoded data set and the data set to be trained.
After the vector representation corresponding to each piece of charging data in the data set to be trained is obtained through the coding model, each vector representation is input into the decoding model, which decodes it to obtain decoded data. It will be appreciated that, since the coding model smooths the charging data when producing the vector representations, there are some differences between the data decoded from the vector representations and the actual charging data. A first loss function is obtained from these differences; it reflects the magnitude of the gap between the decoded charging data and the charging data in the data set to be trained. That is, the larger the first loss function, the larger the gap between the decoded charging data and the charging data in the data set to be trained; the smaller the first loss function, the smaller that gap, and hence the more accurate the vector representations produced by the coding model.
S105: training the coding model according to the first loss function until the first loss function meets a first preset condition, and obtaining the coding model.
After the first loss function is obtained, the parameters of the coding model can be adjusted according to it, and S102-S104 are executed again with the adjusted coding model until the resulting first loss function meets a first preset condition, indicating that the coding accuracy of the model is sufficiently high; the parameters are then no longer adjusted, the coding model is obtained, and it is subsequently used to complete charging data. The first preset condition may be set according to the actual application and is not limited here; for example, it may be that the loss function value is minimal, or that it is smaller than a preset threshold.
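The following sketch shows one way such a training loop could look in PyTorch, assuming MSE as the first loss function and a loss threshold as the first preset condition; the model sizes and all names are illustrative stand-ins, not the patent's implementation:

```python
import torch
from torch import nn

# Minimal stand-ins for the coding and decoding models; all sizes are
# illustrative (each sample here is a charging segment of 64 feature values).
encoder = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 16))
decoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 64))

opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)
data = torch.randn(128, 64)           # placeholder for the data set to be trained

for epoch in range(100):
    z = encoder(data)                 # S102: first vector representation set
    recon = decoder(z)                # S103: first decoded data set
    loss = nn.functional.mse_loss(recon, data)  # S104: first loss (MSE assumed)
    opt.zero_grad()
    loss.backward()                   # S105: adjust the coding model's parameters
    opt.step()
    if loss.item() < 1e-3:            # first preset condition: loss below threshold
        break
```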
In an application scenario, to improve the training accuracy of the coding model, after S104 is performed, the following steps may be further performed:
a1: the first set of vector representations is input into an auxiliary task model, which is used to assist in training the coding model, to obtain a second loss function.
In this embodiment, the first vector representation set is input into the auxiliary task model, the result produced by the auxiliary task model from the first vector representation set is obtained, and a second loss function is determined from that result. The auxiliary task model may be a classification model, i.e., it classifies each vector representation in the input first vector representation set. After the classification result is obtained, the second loss function is computed from the classification result and the classification label corresponding to each vector representation in the first vector representation set, where a vector representation's classification label indicates its actual classification. The second loss function reflects the difference between the predicted classification and the actual classification. The larger the second loss function, the more the classification result output by the classification model differs from the actual classification result, which indicates that the vector representations produced by the coding model fail to capture the characteristics of the charging data and the encoding is inaccurate. The smaller the second loss function, the closer the model's classification result is to the actual one, which indicates that the vector representations accurately capture the characteristics of the charging data and the encoding is more accurate.
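A minimal sketch of this auxiliary classification loss, assuming a linear classifier and cross-entropy (both are our assumptions; the patent only requires a classification model and a loss comparing predicted and actual labels):

```python
import torch
from torch import nn

classifier = nn.Linear(16, 10)        # 16-dim vector representations, 10 buckets

z = torch.randn(32, 16)               # batch from the first vector representation set
labels = torch.randint(0, 10, (32,))  # classification labels (same bucket -> same label)

logits = classifier(z)                                      # classification result
second_loss = nn.functional.cross_entropy(logits, labels)   # second loss function
```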
A2: training the coding model according to the first loss function and the second loss function until the joint loss function constructed according to the first loss function and the second loss function meets a second preset condition, and obtaining the coding model.
After the first loss function and the second loss function are obtained, a joint loss function is constructed from them, and the coding model is trained with the joint loss function until it meets a second preset condition. The second preset condition may be set according to the actual application and is not limited in this embodiment; for example, it may be that the loss function value is minimal, or that it is smaller than a preset threshold.
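One simple construction of the joint loss function, assumed here since the patent does not fix the combination rule, is a weighted sum:

```python
def joint_loss(first_loss, second_loss, lam=0.5):
    # A weighted sum of the two losses; `lam` trades reconstruction
    # fidelity against class separability and is a tunable hyperparameter.
    return first_loss + lam * second_loss

# e.g. loss = joint_loss(recon_loss, cls_loss); loss.backward() then
# updates the coding model on both objectives at once.
```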
In this way, a data set to be trained containing charging data for each of a plurality of time periods is obtained and input into the coding model to obtain a first vector representation set, which includes the vector representations corresponding to the charging data of each time period. The first vector representation set is input into the decoding model to obtain a first decoded data set containing the decoded charging data for each vector representation. A first loss function is obtained from the first decoded data set and the data set to be trained; it indicates the gap between the decoded data and the original data. The parameters of the coding model are adjusted according to the first loss function and training continues until the first loss function meets a first preset condition, yielding the coding model.
At the same time, the first vector representation set may also be input into the auxiliary task model to obtain a second loss function, and the coding model is trained with the first and second loss functions until the joint loss function constructed from them meets a second preset condition. Through this training, the coding model both smooths the charging data and learns the time-sequence information among charging data of different time periods, allowing the charging data to be encoded accurately.
To facilitate understanding of the training process of the coding model, refer to the framework diagram shown in fig. 2, which includes a coding model 201, a classification module 202, a decoding model 203, and a judgment module 204. The coding model 201 encodes the input charging data to be trained into vector representations and feeds them to the classification module 202. The classification module 202 classifies the vector representations and passes the classification results to the judgment module 204, which obtains a second loss function from the classification results and the labels corresponding to the vector representations. Meanwhile, the coding model 201 also feeds the vector representations into the decoding model 203, which decodes them into a decoded data set. The judgment module 204 compares the charging data in the data set to be trained with the decoded data in the decoded data set to obtain a first loss function, and the coding model is adjusted and trained according to the first and second loss functions.
The coding model 201 may include two parts. The first part is a neural network structure for extracting features of the charging data, such as a convolutional neural network (CNN). This part serves two purposes: it eliminates some extreme abnormal cases and smooths the data, making the data and its features more stable, and it can fuse charging data from different time periods; for example, one segment charging from 40% to 80% SOC and another from 20% to 60% SOC can be fused into feature data covering 20%-80%. The second part is a sequence-model part that extracts the timing information of the charging sequence. Therefore, the scheme provided by the embodiments of the application can not only obtain charging data covering a full charge-discharge cycle, but also remove burr points in the charging data, making the data more stable.
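A minimal sketch of such a two-part coding model, assuming a 1-D CNN for the first part and an LSTM for the timing part (the LSTM choice and all layer sizes are our assumptions):

```python
import torch
from torch import nn

class CodingModel(nn.Module):
    """Two-part encoder: a 1-D CNN that smooths and extracts local features
    from a charging sequence, and a recurrent part that captures the timing
    information across the sequence. All sizes are illustrative."""

    def __init__(self, in_channels=4, hidden=32, out_dim=16):
        super().__init__()
        self.cnn = nn.Sequential(                 # part 1: feature extraction
            nn.Conv1d(in_channels, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.rnn = nn.LSTM(hidden, out_dim, batch_first=True)  # part 2: timing

    def forward(self, x):                         # x: (batch, channels, time)
        h = self.cnn(x)                           # local smoothing/features
        _, (z, _) = self.rnn(h.transpose(1, 2))   # feed (batch, time, hidden)
        return z.squeeze(0)                       # vector representation

model = CodingModel()
vec = model(torch.randn(8, 4, 100))               # 8 segments, 4 features, 100 steps
```

A decoding model would typically mirror this structure to reconstruct charging sequences from the vector representations.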
After the coding model is trained, charging data can be completed with it, as described below with reference to the accompanying drawings.
Referring to fig. 3, the flowchart of a method for acquiring charging data according to an embodiment of the present application, as shown in fig. 3, the method may include:
s301: and acquiring a target data set to be processed.
In this embodiment, charging data segments of a real vehicle from the actual charging process, i.e., the target data set to be processed, are obtained. The target data set to be processed includes a plurality of pieces of charging data, and the pieces of charging data have consistency; that is, they are the charging data corresponding to the same equivalent time period. The pieces of charging data cover different SOC ranges, for example one or more pieces with SOC from 10%-80%, one or more with SOC between 30%-90%, one or more with SOC from 20%-60%, one or more with SOC from 30%-100%, and the like.
Wherein the target data set to be processed can be obtained by the following ways:
A data set to be processed is acquired, which includes a plurality of pieces of charging data, possibly from non-equivalent time periods. The data set to be processed is input into the coding model to obtain a third vector representation set, which includes the vector representation of each piece of charging data. The third vector representation set is input into a classification model to obtain a classification result, and the pieces of data belonging to the same classification result are selected from the data set to be processed to form the target data set to be processed. Data belonging to the same classification result exhibit consistency.
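A sketch of this selection step under the same assumptions as above (all names are illustrative; the trained encoder and classifier come from the generation method):

```python
import torch

def select_target_set(segments, encoder, classifier):
    """Group charging segments by predicted class; segments in the same
    class are taken as mutually consistent. `segments` is a batch tensor
    of charging data, and `encoder`/`classifier` are the trained models
    from the generation method above."""
    with torch.no_grad():
        z = encoder(segments)               # third vector representation set
        pred = classifier(z).argmax(dim=1)  # classification result per segment
    groups = {}
    for i, c in enumerate(pred.tolist()):
        groups.setdefault(c, []).append(segments[i])
    # each value is one candidate target data set to be processed
    return {c: torch.stack(g) for c, g in groups.items()}
```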
S302: and inputting the target data set to be processed into the coding model to obtain a second vector representation set.
After the target data set to be processed is obtained, each piece of charging data in it is input into the coding model to obtain the vector representation corresponding to each piece of charging data, i.e., the second vector representation set. The coding model is trained according to the method described in fig. 1.
S303: and inputting the second vector representation set into the decoding model to obtain a second decoded data set.
The second decoded data set comprises the charging data corresponding to each vector representation after decoding. It can be understood that, since the coding model smooths the charging data in the target data set to be processed, the charging data obtained after decoding is smoother.
S304: And obtaining target charging data according to the second decoded data set, wherein the target charging data comprises charging data covering a full charge-discharge cycle.
The target charging data is a complete piece of charging data covering 0%-100% SOC. For example, if the second decoded data set includes charging data with SOC from 20%-50%, from 0%-20%, from 50%-70%, and from 70%-100%, these pieces are spliced to obtain charging data with SOC from 0%-100%. Where pieces of charging data overlap, the charging data for the overlapping part can be obtained by averaging before splicing. For example, if the second decoded data set includes first charging data with SOC from 20%-80% and second charging data with SOC from 50%-90%, then for the overlapping 50%-80% range, the 50%-80% portions of the first and second charging data can be averaged and the average used as the charging data for 50%-80%. Alternatively, the largest charging data in the overlap can be taken as the charging data for the overlapping part: if the 50%-80% portion of the first charging data is greater than that of the second, the 50%-80% portion of the first charging data is used for the 50%-80% range.
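A sketch of the splicing-with-averaging rule described above, assuming each decoded segment is given as arrays of SOC points and corresponding measurements (the representation and names are ours):

```python
import numpy as np

def splice_by_soc(segments, grid=np.arange(0, 101)):
    """Stitch decoded charging segments into one 0-100% SOC curve.

    `segments` is a list of (soc, value) pairs of 1-D arrays, e.g. voltage
    over SOC. Each segment is resampled onto a common SOC grid and, where
    segments overlap, their values are averaged (one of the two overlap
    rules described above; taking the maximum is the other)."""
    total = np.zeros_like(grid, dtype=float)
    count = np.zeros_like(grid, dtype=float)
    for soc, value in segments:
        mask = (grid >= soc.min()) & (grid <= soc.max())
        total[mask] += np.interp(grid[mask], soc, value)
        count[mask] += 1
    count[count == 0] = np.nan        # SOC ranges no segment covers
    return total / count              # averaged, spliced 0-100% curve

# example: two overlapping fragments, 20-60% and 50-100% SOC
seg1 = (np.linspace(20, 60, 41), np.linspace(3.4, 3.9, 41))
seg2 = (np.linspace(50, 100, 51), np.linspace(3.8, 4.2, 51))
curve = splice_by_soc([seg1, seg2])
```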
As can be seen, in this embodiment, a target data set to be processed is obtained, which includes a plurality of mutually consistent pieces of charging data. The target data set to be processed is input into the coding model to obtain a second vector representation set, which is in turn input into the decoding model to obtain a second decoded data set. Target charging data, including charging data covering a full charge-discharge cycle, is obtained from the pieces of charging data in the second decoded data set. That is, the embodiments of the application splice and complete the charging data segments based on the characteristic that charging data within a short period are consistent, so charging data close to a full charge-discharge cycle can be obtained and the accuracy of subsequent data analysis improved.
Based on the above method embodiments, the present application provides an encoding model generating device and a charging data acquiring device, and the description will be made with reference to the accompanying drawings.
Referring to fig. 4, the diagram is a structural diagram of an encoding model generating device provided in an embodiment of the present application, and as shown in fig. 4, the device may include:
a first obtaining unit 401, configured to obtain a data set to be trained, where the data set to be trained includes charging data corresponding to each of a plurality of time periods;
A second obtaining unit 402, configured to input the data set to be trained into a coding model, and obtain a first vector representation set, where the first vector representation set includes vector representations corresponding to charging data in each time period;
a third obtaining unit 403, configured to input the first vector representation set into a decoding model, and obtain a first decoded data set, where the first decoded data set includes charging data corresponding to each vector representation in the first vector representation set after decoding;
a fourth obtaining unit 404, configured to obtain a first loss function according to the first decoded data set and the data set to be trained;
and the training unit 405 is configured to train the coding model according to the first loss function until the first loss function meets a first preset condition, to obtain the coding model.
In one possible implementation, the apparatus further includes: a fifth acquisition unit;
the fifth acquisition unit is used for inputting the first vector representation set into an auxiliary task model to obtain a second loss function, and the auxiliary task model is used for assisting in training the coding model;
the training unit is specifically configured to train the coding model according to the first loss function and the second loss function until a joint loss function constructed according to the first loss function and the second loss function meets a second preset condition, so as to obtain the coding model.
In a possible implementation manner, the fifth obtaining unit is specifically configured to input the first vector representation set into a classification model to obtain a classification result; and obtaining a second loss function according to the classification result and the classification labels corresponding to the vector representations in the first vector representation set.
In one possible implementation, the first preset condition is that the loss function value is minimum, and/or the second preset condition is that the loss function value is minimum.
In one possible implementation, the charging data corresponding to each of the plurality of time periods includes charging data corresponding to the same vehicle in different time periods and/or charging data corresponding to different vehicles in different time periods.
In one possible implementation, for the same vehicle, the vector representations of the charging data corresponding to the preset time period have consistency, and the vector representations of the charging data corresponding to the non-preset time period do not have consistency.
In one possible implementation, the vector representations of the charging data corresponding to the same period do not have consistency for different vehicles.
It should be noted that, in this embodiment, the implementation of each unit may refer to the above method embodiment, and this embodiment is not described herein again.
Referring to fig. 5, the structure diagram of a charging data obtaining device provided in an embodiment of the present application, as shown in fig. 5, the device includes:
a first obtaining unit 501, configured to obtain a target data set to be processed, where the target data set to be processed includes a plurality of pieces of charging data, and the plurality of pieces of charging data have consistency;
a second obtaining unit 502, configured to input the target data set to be processed into a coding model to obtain a second vector representation set, where the second vector representation set includes the vector representations of each piece of charging data, and the coding model is obtained by training according to the coding model generation method described above;
a third obtaining unit 503, configured to input the second vector representation set into a decoding model, and obtain a second decoded data set, where the second decoded data set includes charging data corresponding to each vector representation after decoding;
a fourth obtaining unit 504, configured to obtain target charging data according to the second decoded data set, where the target charging data includes charging data covering a full charge-discharge cycle.
In a possible implementation manner, the first obtaining unit 501 is specifically configured to obtain a data set to be processed, where the data set to be processed includes a plurality of pieces of charging data; inputting the data set to be processed into the coding model to obtain a third vector representation set, wherein the third vector representation set comprises vector representations of all the charging data; inputting the third vector representation set into a classification model to obtain a classification result; and selecting the data to be processed belonging to the same classification result from the data set to be processed according to the classification result to form the target data set to be processed.
It should be noted that, the implementation of each unit in this embodiment may refer to the related description of the above method embodiment, and this embodiment is not repeated herein.
In addition, an embodiment of the present application provides an apparatus, including: a processor, a memory;
the memory is used for storing computer readable instructions or computer programs;
the processor is configured to read the computer readable instructions or the computer program, so that the device implements the encoding model generating method or the charging data acquiring method.
Embodiments of the present application provide a computer-readable storage medium comprising instructions or a computer program, which when run on a computer, cause the computer to perform the encoding model generation method, or the charging data acquisition method.
It should be noted that the embodiments in this description are described in a progressive manner, each embodiment focusing on its differences from the others; for identical or similar parts, the embodiments may be referred to one another. Since the systems and devices disclosed in the embodiments correspond to the methods disclosed therein, their description is relatively brief, and relevant details can be found in the description of the methods.
It should be understood that in this application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes the association relationship of associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: only A, only B, or both A and B, where A and B may be singular or plural. The character "/" generally indicates that the associated objects are in an "or" relationship. "At least one of" the following items means any combination of these items, including any combination of single or plural items. For example, at least one (piece) of a, b, or c may mean: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b, and c may be single or plural.
It is further noted that relational terms such as first and second are used solely to distinguish one entity or action from another and do not necessarily require or imply any actual such relationship or order between those entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to it. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software module may reside in random access memory (RAM), read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (6)

1. A method of acquiring charging data, the method comprising:
acquiring a target data set to be processed, wherein the target data set to be processed comprises a plurality of pieces of charging data, and the plurality of pieces of charging data have consistency;
Inputting the target data set to be processed into a coding model to obtain a second vector representation set, wherein the second vector representation set comprises vector representations of each piece of charging data;
inputting the second vector representation set into a decoding model to obtain a second decoding data set, wherein the second decoding data set comprises charging data corresponding to each vector representation after decoding;
obtaining target charging data according to the second decoding data set, wherein the target charging data comprises charging data covering a full charge-discharge cycle;
the coding model is trained according to the following steps:
acquiring a data set to be trained, wherein the data set to be trained comprises charging data corresponding to each of a plurality of time periods;
inputting the data set to be trained into a coding model to obtain a first vector representation set, wherein the first vector representation set comprises vector representations corresponding to charging data of each time period;
inputting the first vector representation set into a decoding model to obtain a first decoded data set, wherein the first decoded data set comprises charging data corresponding to each vector representation in the first vector representation set after decoding;
obtaining a first loss function according to the first decoding data set and the data set to be trained;
Training the coding model according to the first loss function until the first loss function meets a first preset condition, so as to obtain the coding model, wherein the coding model comprises a neural network structure for extracting charging data characteristics and is used for fusing the charging data characteristics;
after obtaining a first loss function from the first decoded data set and the data set to be trained, the method further comprises:
inputting the first vector representation set into an auxiliary task model to obtain a second loss function, wherein the auxiliary task model is used for assisting in training the coding model;
wherein training the coding model according to the first loss function until the first loss function meets a preset condition to obtain the coding model comprises:
training the coding model according to the first loss function and the second loss function until the joint loss function constructed according to the first loss function and the second loss function meets a second preset condition, so as to obtain the coding model;
the step of inputting the first vector representation set into an auxiliary task model to obtain a second loss function comprises the following steps:
Inputting the first vector representation set into a classification model to obtain a classification result;
and obtaining a second loss function according to the classification result and the classification label corresponding to each vector representation in the first vector representation set, wherein the second loss function is used for representing the difference between the classification result and the actual classification result.
2. The method of claim 1, wherein the acquiring the set of target data to be processed comprises:
acquiring a data set to be processed, wherein the data set to be processed comprises a plurality of pieces of charging data;
inputting the data set to be processed into the coding model to obtain a third vector representation set, wherein the third vector representation set comprises vector representations of all the charging data;
inputting the third vector representation set into a classification model to obtain a classification result;
and selecting the data to be processed belonging to the same classification result from the data set to be processed according to the classification result to form the target data set to be processed.
3. The method according to claim 1, characterized in that the first preset condition is that the loss function value is minimum, and/or the second preset condition is that the loss function value is minimum.
4. The method of claim 1, wherein the charging data corresponding to each of the plurality of time periods includes charging data corresponding to a same vehicle at different time periods and/or charging data corresponding to different vehicles at different time periods.
5. A charging data acquisition device, the device comprising:
the first acquisition unit is used for acquiring a target data set to be processed, wherein the target data set to be processed comprises a plurality of pieces of charging data, and the plurality of pieces of charging data have consistency;
the second acquisition unit is used for inputting the target data set to be processed into a coding model to obtain a second vector representation set, wherein the second vector representation set comprises vector representations of each piece of charging data;
a third obtaining unit, configured to input the second vector representation set into a decoding model, and obtain a second decoded data set, where the second decoded data set includes charging data corresponding to each vector representation after decoding;
a fourth obtaining unit, configured to obtain target charging data according to the second decoded data set, where the target charging data includes charging data covering a full charge-discharge cycle;
The coding model is trained according to the following steps:
acquiring a data set to be trained, wherein the data set to be trained comprises charging data corresponding to each of a plurality of time periods;
inputting the data set to be trained into a coding model to obtain a first vector representation set, wherein the first vector representation set comprises vector representations corresponding to charging data of each time period;
inputting the first vector representation set into a decoding model to obtain a first decoded data set, wherein the first decoded data set comprises charging data corresponding to each vector representation in the first vector representation set after decoding;
obtaining a first loss function according to the first decoding data set and the data set to be trained;
training the coding model according to the first loss function until the first loss function meets a first preset condition, so as to obtain the coding model, wherein the coding model comprises a neural network structure for extracting charging data characteristics and is used for fusing the charging data characteristics;
after obtaining a first loss function from the first decoded data set and the data set to be trained, the steps further include:
Inputting the first vector representation set into an auxiliary task model to obtain a second loss function, wherein the auxiliary task model is used for assisting in training the coding model;
wherein training the coding model according to the first loss function until the first loss function meets a preset condition to obtain the coding model comprises:
training the coding model according to the first loss function and the second loss function until the joint loss function constructed according to the first loss function and the second loss function meets a second preset condition, so as to obtain the coding model;
the step of inputting the first vector representation set into an auxiliary task model to obtain a second loss function comprises the following steps:
inputting the first vector representation set into a classification model to obtain a classification result; and obtaining a second loss function according to the classification result and the classification label corresponding to each vector representation in the first vector representation set, wherein the second loss function is used for representing the difference between the classification result and the actual classification result.
6. An apparatus, comprising: a processor, a memory;
the memory is used for storing computer readable instructions or computer programs;
The processor configured to read the computer-readable instructions or the computer program to cause the apparatus to implement the charging data acquisition method according to any one of claims 1 to 4.
CN202110832448.XA · Priority 2021-07-22 · Filed 2021-07-22 · Coding model generation method, charging data acquisition method and device · Active · CN113487762B (en)

Priority Applications (1)

Application Number: CN202110832448.XA · Priority date: 2021-07-22 · Filing date: 2021-07-22 · Title: Coding model generation method, charging data acquisition method and device

Applications Claiming Priority (1)

Application Number: CN202110832448.XA · Priority date: 2021-07-22 · Filing date: 2021-07-22 · Title: Coding model generation method, charging data acquisition method and device

Publications (2)

CN113487762A (en) · published 2021-10-08
CN113487762B (en) · published 2023-07-04

Family

ID=77942188

Family Applications (1)

Application Number: CN202110832448.XA · Status: Active · Publication: CN113487762B (en) · Title: Coding model generation method, charging data acquisition method and device

Country Status (1)

Country Link
CN (1) CN113487762B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107832718A (en) * 2017-11-13 2018-03-23 重庆工商大学 Finger vena anti false authentication method and system based on self-encoding encoder
CN110599557A (en) * 2017-08-30 2019-12-20 深圳市腾讯计算机系统有限公司 Image description generation method, model training method, device and storage medium
CN112379269A (en) * 2020-10-14 2021-02-19 武汉蔚来能源有限公司 Battery abnormity detection model training and detection method and device thereof

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105301510B (en) * 2015-11-12 2017-09-05 北京理工大学 A kind of cell degradation parameter identification method
CN106973038B (en) * 2017-02-27 2019-12-27 同济大学 Network intrusion detection method based on genetic algorithm oversampling support vector machine
CN108983103B (en) * 2018-06-29 2020-10-23 上海科列新能源技术有限公司 Data processing method and device for power battery
CN109934408A (en) * 2019-03-18 2019-06-25 常伟 A kind of application analysis method carrying out automobile batteries RUL prediction based on big data machine learning
CN110058175B (en) * 2019-05-05 2020-04-14 北京理工大学 Reconstruction method of power battery open circuit voltage-charge state function relation
CN111291190B (en) * 2020-03-23 2023-04-07 腾讯科技(深圳)有限公司 Training method of encoder, information detection method and related device
CN111639684B (en) * 2020-05-15 2024-03-01 北京三快在线科技有限公司 Training method and device for data processing model


Also Published As

CN113487762A (en) · published 2021-10-08

Similar Documents

Publication Publication Date Title
Zhao et al. Lithium-ion batteries state of charge prediction of electric vehicles using RNNs-CNNs neural networks
CN109596913B (en) Charging pile fault cause diagnosis method and device
CN110658460B (en) Battery life prediction method and device for battery pack
CN112016237B (en) Deep learning method, device and system for lithium battery life prediction
WO2022198616A1 (en) Battery life prediction method and system, electronic device, and storage medium
CN115291116A (en) Energy storage battery health state prediction method and device and intelligent terminal
CN113487762B (en) Coding model generation method, charging data acquisition method and device
CN114744723A (en) Method and device for adjusting charging request current and electronic equipment
CN114239949A (en) Website access amount prediction method and system based on two-stage attention mechanism
CN113376540B (en) LSTM battery health state estimation method based on evolutionary attention mechanism
CN112259157A (en) Protein interaction prediction method
CN115456223B (en) Lithium battery echelon recovery management method and system based on full life cycle
CN116796821A (en) Efficient neural network architecture searching method and device for 3D target detection algorithm
CN116736130A (en) Lithium battery residual service life prediction method and system
CN116008815A (en) Method, device, equipment, storage medium and vehicle for detecting short circuit in battery cell
CN112529637B (en) Service demand dynamic prediction method and system based on context awareness
CN108390407B (en) Distributed photovoltaic access amount calculation method and device and computer equipment
Ibraheem et al. Early prediction of Lithium-ion cell degradation trajectories using signatures of voltage curves up to 4-minute sub-sampling rates
CN112467752A (en) Voltage regulating method and device for distributed energy distribution system
CN112803527A (en) Automobile lithium battery charging dynamic protection system based on experience function and big data
CN113610111B (en) Fusion method, device, equipment and storage medium of distributed multi-source data
CN117637029B (en) Antibody developability prediction method and device based on deep learning model
CN115912375B (en) Low-voltage power supply compensation method and device and electronic equipment
Yang Prediction Method of Remaining Service Life of Li-ion Batteries Based on XGBoost and LightGBM
CN109242167B (en) Photovoltaic power generation online prediction method based on average Lyapunov index

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant