WO2024021630A1 - A method and device for calculating indicator data


Info

Publication number
WO2024021630A1
WO2024021630A1 · PCT/CN2023/081815
Authority
WO
WIPO (PCT)
Prior art keywords
time series data, sample, measured
Application number
PCT/CN2023/081815
Other languages
English (en)
French (fr)
Inventor
宋礼
张钧波
郑宇
Original Assignee
京东城市(北京)数字科技有限公司
Application filed by 京东城市(北京)数字科技有限公司
Publication of WO2024021630A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning

Definitions

  • the present disclosure relates to the field of big data technology, and in particular to a method and device for calculating index data.
  • Indicator calculation is an indispensable part of urban intelligence, and plays a vital role in the analysis and judgment of urban development, resource scheduling, etc.
  • There are many indicators in cities, and calculating them often requires a great deal of manpower and time. The commonly used approach is to design a machine learning or deep learning model for each indicator, fit it to the data (training phase), and then complete the calculation of the indicator (inference phase). For a specific demand scenario, such as social retail sales forecasting or sales forecasting, the data collection phase usually accesses data from subsystems, the feature extraction phase usually uses a sliding-window method, model training is usually supported by existing open-source algorithm libraries such as sklearn, and the model inference phase uses the latest features to calculate the indicators. The above process requires human experience to model each indicator, which often consumes huge amounts of manpower and time.
  • embodiments of the present disclosure provide a method and device for calculating indicator data to solve the technical problems of high manpower and time consumption and sparse data.
  • a method for calculating indicator data including:
  • Features to be measured are extracted from the time series data to be measured, and the features to be measured are input into the trained indicator calculation model, thereby outputting indicator data.
  • filtering out sample time series data that is similar to the time series data to be measured includes:
  • the number of clusters is the square root of the total number of the respective sample time series data.
  • sample time series data that are similar to the time series data to be measured are screened out, including:
  • N is less than M, and N and M are both positive integers.
  • feature extraction is performed on the sample time series data to construct a data set, including:
  • a sliding window is used to extract features at each moment in the sample time series data, and the sample features and sample labels corresponding to each moment are obtained respectively; wherein the sample features include the moment and the indicator data before the moment, and the sample label includes the indicator data after the moment;
  • a data set is constructed based on the sample characteristics and sample labels corresponding to each moment in each sample time series data.
  • the sample characteristics further include time characteristics corresponding to the moment.
  • the data set is used to train the indicator calculation model, and a trained indicator calculation model is obtained, including:
  • the test set is used to test each index calculation model after the parameter adjustment, thereby screening out the index calculation model with the best test results.
  • the time series data to be measured is used to adjust parameters of each indicator calculation model, thereby obtaining each indicator calculation model after parameter adjustment, including:
  • a sliding window is used to extract features at each moment in the time series data to be measured, and the sample features and sample labels corresponding to each moment are obtained respectively; wherein the sample features include the moment and the indicator data before the moment, and the sample label includes the indicator data after the moment;
  • the sample characteristics further include time characteristics corresponding to the moment.
  • a device for calculating indicator data including:
  • a screening module is used to screen out sample time series data that are similar to the time series data to be measured, and to perform feature extraction on the sample time series data to construct a data set; wherein the indicators corresponding to the time series data to be measured are different from the indicators corresponding to the sample time series data, and the number of entries of the time series data to be measured is less than the number of entries of the sample time series data;
  • a training module is used to train the indicator calculation model using the data set to obtain a trained indicator calculation model;
  • a calculation module configured to extract features to be measured from the time series data to be measured, and input the features to be measured into the trained indicator calculation model, thereby outputting indicator data.
  • the screening module is also used to:
  • the number of clusters is the square root of the total number of the respective sample time series data.
  • the screening module is also used to:
  • N is less than M, and N and M are both positive integers.
  • the screening module is also used to:
  • a sliding window is used to extract features at each moment in the sample time series data, and the sample features and sample labels corresponding to each moment are obtained respectively; wherein the sample features include the moment and the indicator data before the moment, and the sample label includes the indicator data after the moment;
  • a data set is constructed based on the sample characteristics and sample labels corresponding to each moment in each sample time series data.
  • the training module is also used to:
  • the test set is used to test each index calculation model after the parameter adjustment, thereby screening out the index calculation model with the best test results.
  • the training module is also used to:
  • a sliding window is used to extract features from each moment in the time series data to be measured, and sample features and sample labels corresponding to each moment are obtained respectively; where the sample features include the moment and the indicator data before the moment, so The sample label includes indicator data after the moment;
  • an electronic device including:
  • one or more processors
  • a storage device for storing one or more programs
  • when the one or more programs are executed by the one or more processors, the one or more processors implement the method described in any of the above embodiments.
  • a computer-readable medium is also provided, on which a computer program is stored.
  • the program is executed by a processor, the method described in any of the above embodiments is implemented.
  • a computer program product including a computer program that implements the method described in any of the above embodiments when executed by a processor.
  • One embodiment of the above invention has the following advantages or beneficial effects: by screening out sample time series data that are similar to the time series data to be measured and performing feature extraction on them to construct a data set, the technical problems of high manpower and time consumption and sparse data in the existing technology are overcome. Embodiments of the present disclosure solve the problem of data sparseness by extracting features from sample time series data that are similar to the time series data to be measured and constructing a data set. As a result, even complex machine learning or deep learning models can be used, thereby effectively improving the calculation accuracy of the model; moreover, the investment in human resource costs and time resource costs can also be effectively reduced.
  • Figure 1 is a schematic diagram of the main flow of a method for calculating indicator data according to an embodiment of the present disclosure
  • Figure 2 is a schematic diagram of filtering sample time series data according to an embodiment of the present disclosure
  • Figure 3 is a schematic diagram of using a sliding window to extract features from sample time series data according to an embodiment of the present disclosure
  • Figure 4 is a schematic diagram of the main flow of a method for calculating indicator data according to a reference embodiment of the present disclosure
  • Figure 5 is a schematic diagram of the main modules of a device for calculating indicator data according to an embodiment of the present disclosure
  • Figure 6 is an exemplary system architecture diagram in which embodiments of the present disclosure may be applied.
  • FIG. 7 is a schematic structural diagram of a computer system suitable for implementing a terminal device or server according to an embodiment of the present disclosure.
  • FIG. 1 is a schematic diagram of the main flow of a method for calculating indicator data according to an embodiment of the present disclosure.
  • the method of calculating indicator data may include:
  • Step 101 Screen out sample time series data that is similar to the time series data to be measured, and perform feature extraction on the sample time series data to construct a data set.
  • embodiments of the present disclosure screen out sample time series data that are similar to the time series data to be tested, and perform feature extraction on them, thereby constructing a data set for training the model; wherein, the time series data to be tested corresponds to The indicator is different from the indicator corresponding to the sample time series data, and the number of entries of the time series data to be measured is less than the number of entries of the sample time series data.
  • the entry number threshold can be preset according to business needs
  • Step 101 can be divided into two steps.
  • the first step is to screen out M sample time series data that are similar to the time series data to be tested from the existing historical data.
  • the second step is to perform feature extraction on the screened M sample time series data.
  • each piece of data needs to be collected first.
  • data collection comes from various business subsystems.
  • the format of each piece of data is as follows.
  • Each piece of data contains a time field and several dimension fields (the number of dimension fields is greater than or equal to 1, and each dimension represents an indicator value to be calculated, such as sales volume, sales revenue, etc.).
  • filtering out sample time series data that are similar to the time series data to be measured includes: inputting the time series data to be measured and each sample time series data into a trained encoder, and outputting the coding vector corresponding to the time series data to be measured and the coding vectors corresponding to each sample time series data; using a clustering algorithm to cluster the coding vectors corresponding to each sample time series data to obtain multiple clusters and the feature center vector corresponding to each cluster; and, based on the coding vector corresponding to the time series data to be measured and the feature center vector corresponding to each cluster, screening out several sample time series data that are similar to the time series data to be measured.
  • the time series data to be measured and each sample time series data are input into the trained encoder (Encoder). Assuming there are n historical time series data, n encoding vectors will be obtained. A clustering algorithm (such as the K-Means clustering algorithm) is then used to cluster the coding vectors corresponding to each sample time series data, obtaining multiple clusters and the feature center vector corresponding to each cluster. Finally, based on the coding vector corresponding to the time series data to be measured and the feature center vector corresponding to each cluster, several sample time series data that are similar to the time series data to be measured are screened out.
  • the encoder needs to be pre-trained.
  • An encoder (Encoder) and a decoder (Decoder) are trained through the auto-encoding model.
  • the Decoder is responsible for restoring the original input data based on the encoding vector f. After obtaining the encoder, the encoder can be used to encode each historical sample time series data.
  • the number of clusters is the square root of the total number of the respective sample time series data. Assuming that n sample time series data are clustered, √n clusters and their corresponding feature center vectors are finally formed.
  • the initial cluster centers can be set in advance; for example, based on the total number n of sample time series data, the number of initial points can be set to √n. This can improve the calculation speed of similarity in subsequent steps.
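The clustering step above can be sketched as follows: a minimal K-Means over the sample encoding vectors, with the number of clusters set to the square root of the total number of samples. The vectors and dimensions are toy values, and squared Euclidean distance is an assumption here, since the disclosure does not fix a distance metric.

```python
import math
import random

def kmeans(vectors, k, iters=20, seed=0):
    """Minimal K-Means over encoding vectors (lists of floats).
    Returns (centers, assignment). Illustrative sketch only."""
    rnd = random.Random(seed)
    centers = [list(v) for v in rnd.sample(vectors, k)]
    assign = [0] * len(vectors)
    for _ in range(iters):
        # assignment step: nearest center by squared Euclidean distance
        for i, v in enumerate(vectors):
            assign[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(v, centers[c])),
            )
        # update step: recompute each feature center vector
        for c in range(k):
            members = [vectors[i] for i in range(len(vectors)) if assign[i] == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return centers, assign

# n sample encoding vectors -> sqrt(n) clusters, as in the disclosure
vectors = [[float(i), float(i)] for i in range(16)]
k = math.isqrt(len(vectors))  # 16 sample series -> 4 clusters
centers, assign = kmeans(vectors, k)
```

The feature center vectors returned here play the role of the per-cluster centers used for the rough similarity sorting described below.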
  • based on the encoding vector corresponding to the time series data to be measured and the feature center vector corresponding to each cluster, several sample time series data that are similar to the time series data to be measured are screened out, including: separately calculating the similarity between the coding vector corresponding to the time series data to be measured and the feature center vector corresponding to each cluster, and selecting the N clusters with the greatest similarity to the time series data to be measured; then separately calculating the similarity between the coding vector corresponding to the time series data to be measured and the coding vector corresponding to each sample time series data in the N clusters, and selecting the M sample time series data with the greatest similarity to the time series data to be measured; wherein N is less than M, and N and M are both positive integers.
  • For example, suppose the two feature center vectors U and V have the greatest similarity to the coding vector g of the time series data to be measured; the similarities between g and the coding vectors in the clusters corresponding to U and V are then calculated, and the 10 vectors with the greatest similarity are selected.
  • In this way, the 10 sample time series data most similar to the time series data to be measured can be obtained.
  • the values of N and M can be preset.
  • the embodiments of the present disclosure exemplarily show values of N and M, but the values are not limited to those shown in the embodiments.
  • the existing technology usually needs to calculate the similarity between the time series data to be measured and each sample time series data.
  • the time complexity of that calculation is O(n), while the embodiment of the present disclosure first compares against roughly √n cluster centers, so its time complexity is on the order of O(√n). This significantly reduces the time complexity of calculating similarity. Therefore, the embodiment of the present disclosure can reduce the time complexity of comparing the similarity of each sample time series data one by one through one rough sorting followed by a second, fine sorting.
  • performing feature extraction on the sample time series data to construct a data set includes: for each sample time series data, using a sliding window to perform feature extraction at each moment in the sample time series data to obtain the sample features and sample labels corresponding to each moment, wherein the sample features include the moment and the indicator data before the moment, and the sample labels include the indicator data after the moment; and constructing a data set based on the sample features and sample labels corresponding to each moment in each sample time series data.
  • the sample time series data uses a sliding window method for feature extraction.
  • Assume that each moment of the sample time series data to be extracted contains d_f-dimensional features, the size of the sliding window is s, and the step size to be predicted is l; then the sample features extracted at moment i are (x_{i-s+1}, ..., x_{i-1}, x_i), and the sample label is (x_{i+1}, x_{i+2}, ..., x_{i+l}).
  • Since the prediction step size is l, in order to ensure calculation accuracy, the length of the sample label is preferably l.
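The sliding-window extraction above can be sketched as follows; the window size s and prediction step l are free parameters, and the toy series is illustrative.

```python
def sliding_window(series, s, l):
    """Return (features, label) pairs for each valid moment i.

    At moment i the features are (x_{i-s+1}, ..., x_i) and the
    label is (x_{i+1}, ..., x_{i+l})."""
    samples = []
    for i in range(s - 1, len(series) - l):
        features = series[i - s + 1 : i + 1]   # window ending at moment i
        label = series[i + 1 : i + 1 + l]      # next l values to predict
        samples.append((features, label))
    return samples

data = list(range(10))              # toy "indicator" time series
pairs = sliding_window(data, s=3, l=2)
# first sample: features (0, 1, 2), label (3, 4)
```

Each (features, label) pair is one piece of sample data; the pairs from all sample time series together form the data set.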
  • the sample characteristics further include time characteristics corresponding to the moment.
  • time features corresponding to each moment in the sample time series data.
  • Specifically, the dimension information of time is first extracted, including the month, day, and day of week corresponding to the current moment, the day of the year, the week of the year, the quarter in which the date falls, and whether the date is a working day or a holiday, etc. If the granularity of the time information is smaller, such as minute or hour granularity (e.g., traffic flow prediction scenarios or passenger flow prediction scenarios), then hour information, minute information, etc. can also be extracted.
  • the features and time features extracted through the sliding window are spliced together to form a complete sample feature.
  • the sample feature and the corresponding sample label constitute a piece of sample data.
  • For each sample time series data, multiple pieces of sample data can be extracted using a sliding window, so the pieces of sample data corresponding to all of the sample time series data together form a data set.
  • Step 102 Use the data set to train the indicator calculation model to obtain a trained indicator calculation model.
  • the indicator calculation model is trained using the sample data in the data set, and the indicator calculation model is fitted to the sample data, and finally the trained indicator calculation model is obtained.
  • step 102 may include: dividing the data set into a training data set, a verification data set and a test data set; using the training data set and the verification data set, and calculating the optimal parameters of each model based on the grid search algorithm and the TPE search algorithm, thereby obtaining each indicator calculation model; using the time series data to be measured to adjust the parameters of each indicator calculation model, thereby obtaining each indicator calculation model after parameter adjustment; and using the test set to test each indicator calculation model after parameter adjustment, thereby screening out the indicator calculation model with the best test results.
  • the data set can be divided into a training data set D_train, a verification data set D_val and a test data set D_test; among them, the training data set is used to learn the parameters θ of the model m; the verification data set is used to select the model m (that is, to select the model with the best performance from multiple models).
  • MSE can be used for model selection and for selecting the optimal parameters θ* of the model; the test data set is used to test the final effect of the model.
  • the training data, verification data and test data can be divided in a ratio of 8:1:1.
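A minimal sketch of the 8:1:1 split described above. Chronological order is preserved here; whether to shuffle before splitting is not specified by the disclosure.

```python
def split_8_1_1(samples):
    """Split a data set into training / verification / test subsets
    in an 8:1:1 ratio (D_train, D_val, D_test)."""
    n = len(samples)
    n_train = n * 8 // 10
    n_val = n * 1 // 10
    d_train = samples[:n_train]
    d_val = samples[n_train:n_train + n_val]
    d_test = samples[n_train + n_val:]
    return d_train, d_val, d_test

d_train, d_val, d_test = split_8_1_1(list(range(100)))
```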
  • embodiments of the present disclosure use a hyperparameter optimization method to search for the optimal results of the training model.
  • the model space refers to the models that may be selected;
  • the parameter space corresponding to a model refers to the space from which each hyperparameter of the model can be selected.
  • the model space and corresponding parameter space included in the embodiments of the present disclosure are as follows:
  • the parameter space includes the degree of difference (d), the length of autoregression (p), etc.
  • the Ridge model uses a linear model with a regularization term added to fit the training data, where the model assumption is ŷ = w_1 x_1 + w_2 x_2 + ... + w_d x_d + b, and the loss function is L = Σ_i (y_i − ŷ_i)² + α Σ_j w_j². Among them, d is the dimension of the data features, the features of a sample are (x_1, x_2, ..., x_d), y_i and ŷ_i are the corresponding sample label and predicted value respectively, and α is the regularization coefficient.
  • the parameter space included is the regularization coefficient ⁇ .
  • RandomForest is a random forest model that integrates multiple decision trees to complete predictions.
  • the parameter space of the model includes the number of trees, the proportion of split point sampling, the minimum number of samples for each leaf node, etc.
  • the Xgboost model is a boosted tree model; each tree fits the residual between the existing tree model and the label data, that is, the t-th tree f_t fits y − ŷ^(t−1), so that ŷ^(t) = ŷ^(t−1) + f_t(x). The parameter space of the model includes the number of trees, the learning rate, the regularization terms, etc.
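One way to represent the model space and its per-model parameter spaces is a simple registry, as sketched below. All parameter names and value ranges here are illustrative, and "ARIMA" is an assumed name for the differencing/autoregression model whose name is lost in the text above.

```python
# Hypothetical model space / parameter space registry.
# Names and candidate values are examples, not values fixed by
# the disclosure; "ARIMA" is an assumed model name.
model_space = {
    "ARIMA":        {"d": [0, 1, 2], "p": [1, 2, 3, 7]},
    "Ridge":        {"alpha": [0.01, 0.1, 1.0, 10.0]},
    "RandomForest": {"n_estimators": [50, 100, 200],
                     "max_features": [0.5, 0.8, 1.0],
                     "min_samples_leaf": [1, 5, 10]},
    "XGBoost":      {"n_estimators": [50, 100, 200],
                     "learning_rate": [0.01, 0.1, 0.3],
                     "reg_lambda": [0.0, 1.0, 10.0]},
}
```

A search procedure (grid search or TPE) would then iterate over each model's parameter space to find the optimal configuration.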
  • the embodiment of the present disclosure uses the grid search algorithm and the TPE (Tree-structured Parzen Estimator) search algorithm.
  • the grid search algorithm enumerates every possible value of each parameter and completes the combination of parameters through permutation and combination.
  • For each parameter combination, the training data set D_train is used to train the model, and verification of the model effect is completed through the verification data set D_val.
  • For example, suppose there are two parameters a and b, whose possible values are (a_1, a_2, a_3) and (b_1, b_2) respectively.
  • Then the parameter groups that need to be enumerated in the grid search algorithm include (a_1, b_1), (a_1, b_2), (a_2, b_1), (a_2, b_2), (a_3, b_1), (a_3, b_2).
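The enumeration in the example above is a Cartesian product, which can be sketched as follows. The `evaluate` function is a hypothetical stand-in for the train-on-D_train / score-on-D_val step.

```python
import itertools

# Parameter a has possible values (a1, a2, a3); b has (b1, b2).
a_values = ["a1", "a2", "a3"]
b_values = ["b1", "b2"]
combos = list(itertools.product(a_values, b_values))  # 6 combinations

def evaluate(a, b):
    """Hypothetical scoring function: in a real grid search this
    would train the model with (a, b) on D_train and score it on
    D_val. Here it is a toy score favoring (a1, b1)."""
    return -(a_values.index(a) + b_values.index(b))

best = max(combos, key=lambda ab: evaluate(*ab))
```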
  • the disclosed embodiment adopts a combination of grid search algorithm and TPE search algorithm.
  • the main reasons are as follows: 1. when there are fewer parameter groups to be searched, the grid search algorithm is more efficient; 2. the TPE algorithm can make up for the shortcomings of the grid search algorithm and explore possible optimal solutions in more of the search space, mainly solving the problem of insufficient search-space coverage caused by the grid search algorithm's equidistant division.
  • using the time series data to be measured to perform parameter adjustment on each indicator calculation model, thereby obtaining each indicator calculation model after parameter adjustment, includes: using a sliding window to extract features at each moment in the time series data to be measured, obtaining the sample features and sample labels corresponding to each moment, wherein the sample features include the moment and the indicator data before the moment, and the sample labels include the indicator data after the moment; and using the sample features and sample labels corresponding to each moment in the time series data to be measured to adjust the parameters of each indicator calculation model, thereby obtaining each indicator calculation model after parameter adjustment.
  • the sliding window method is used to extract features at each moment in the time series data to be tested.
  • the size of the sliding window is s, and the step size to be predicted is l.
  • the extracted sample features are (x_{i-s+1}, ..., x_{i-1}, x_i), and the sample labels are (x_{i+1}, x_{i+2}, ..., x_{i+l}).
  • the sample features and sample labels corresponding to each moment in the time series data to be measured constitute the training set for fine-tuning.
  • For the indicator calculation model, assume the parameters with the best test results are θ̂; then the parameters of the model after fine-tuning are θ* = θ̂ − η ∇_θ L(m(x; θ̂), y), where θ* is the optimal parameter of the model after fine-tuning, η is the learning rate of the model parameters in the fine-tuning stage, and L is the MSE loss function, which is used to measure the error of the model on the training set; (x, y) is the sample data in the training set.
  • According to one or more embodiments of the present disclosure, in a specific implementation process, this can be implemented in a mini-batch gradient descent (mini-batch SGD) manner.
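A minimal sketch of the mini-batch SGD fine-tuning step, for a one-parameter linear model y ≈ w·x with MSE loss. The learning rate, epoch count, batch size, and data are all illustrative choices, not values fixed by the disclosure.

```python
import random

def minibatch_sgd_finetune(w, data, eta=0.1, epochs=50, batch=4, seed=0):
    """Fine-tune a pretrained weight w of the model y ≈ w*x by
    mini-batch SGD on the MSE loss. Illustrative sketch only."""
    rnd = random.Random(seed)
    data = list(data)
    for _ in range(epochs):
        rnd.shuffle(data)
        for i in range(0, len(data), batch):
            chunk = data[i:i + batch]
            # gradient of mean squared error, averaged over the mini-batch
            grad = sum(2 * (w * x - y) * x for x, y in chunk) / len(chunk)
            w -= eta * grad  # w <- w - eta * dL/dw
    return w

# fine-tuning data with underlying slope 3.0; start from pretrained w = 1.0
train = [(0.1 * i, 0.3 * i) for i in range(1, 11)]
w_star = minibatch_sgd_finetune(1.0, train)
```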
  • Since the indicator calculation model used in the embodiment of the present disclosure is trained using only single-dimensional data, the model is not fine-tuned on multi-dimensional data.
  • the sample features further include time features corresponding to the moment. Similar to the feature extraction of the sample time series data, it is also necessary to extract the time features corresponding to each moment in the time series data to be measured. Specifically, the dimension information of time is first extracted, including the month, day, and day of week corresponding to the current moment, the day of the year, the week of the year, the quarter in which the date falls, and whether the date is a working day or a holiday, etc. If the granularity of the time information is smaller, such as minute or hour granularity (e.g., traffic flow prediction scenarios or passenger flow prediction scenarios), then hour information, minute information, etc. can also be extracted; the embodiment of the present disclosure does not limit this. Then, the features extracted through the sliding window and the time features are spliced together to form a complete sample feature. The sample feature and the corresponding sample label constitute a piece of sample data.
  • the test data set D_test is finally used to test the effects of different models, and the model with the best results is selected.
  • the evaluation index used is the MSE, i.e., (1/n) Σ (y − ŷ)², where y is the sample label and ŷ is the prediction result of the model.
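The evaluation index can be sketched directly; MSE is the measure the disclosure mentions for model selection, and the sample values below are illustrative.

```python
def mse(y_true, y_pred):
    """Mean squared error between labels and model predictions."""
    assert len(y_true) == len(y_pred)
    return sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true)

# one unit of error on one of three points -> MSE of 1/3
score = mse([1.0, 2.0, 3.0], [1.0, 2.0, 4.0])
```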
  • Step 103 Extract the features to be measured from the time series data to be measured, and input the features to be measured into the trained indicator calculation model, thereby outputting indicator data.
  • calculating indicator data usually requires feature extraction through the data of the last period.
  • the extracted features are recorded as x_p (i.e., the features to be measured), and then the optimal indicator calculation model is used to calculate the indicator data; the resulting prediction is the output result of the indicator calculation model.
  • the embodiments of the present disclosure screen out sample time series data that are similar to the time series data to be measured and perform feature extraction on them to construct a data set; this technical means solves the technical problems of high manpower and time consumption and sparse data in the existing technology.
  • Embodiments of the present disclosure solve the problem of data sparseness by extracting features from sample time series data that are similar to the time series data to be measured and constructing a data set. As a result, even complex machine learning or deep learning models can be used, thereby effectively improving the calculation accuracy of the model; moreover, the investment in human resource costs and time resource costs can also be effectively reduced.
  • FIG. 4 is a schematic diagram of the main flow of a method for calculating indicator data according to a reference embodiment of the present disclosure.
  • the method for calculating indicator data may include:
  • Step 401 Collect historical sample time series data.
  • sample time series data corresponding to the indicator is collected.
  • Step 402 Filter out sample time series data that is similar to the time series data to be measured from each sample time series data.
  • the indicator corresponding to the time series data to be measured is different from the indicator corresponding to the sample time series data, and the number of entries of the time series data to be measured is less than the number of entries of the sample time series data.
  • step 402 may include: inputting the time series data to be measured and each sample time series data into a trained encoder, and outputting the coding vector corresponding to the time series data to be measured and the coding vectors corresponding to each sample time series data; using a clustering algorithm to cluster the coding vectors corresponding to each sample time series data to obtain multiple clusters and the feature center vector corresponding to each cluster (a rough sorting); and, based on the coding vector corresponding to the time series data to be measured and the feature center vector corresponding to each cluster, screening out several sample time series data that are similar to the time series data to be measured (a second, fine sorting).
  • the embodiment of the present disclosure can reduce the time complexity of comparing the similarity of each sample time series data one by one through one rough sorting followed by a second, fine sorting.
  • Step 403 Perform feature extraction on the sample time series data to construct a data set; perform feature extraction on the time series data to be tested to obtain fine-tuning sample data.
  • both the sample time series data and the time series data to be tested adopt a sliding window method for feature extraction, thereby constructing a data set and fine-tuning the sample data.
  • The data set is used to train each model, and the fine-tuning sample data is used to fine-tune the parameters of each model.
  • Step 404 Divide the data set into a training data set, a verification data set and a test data set.
  • Step 405 Use the training data set and the verification data set, and calculate the optimal parameters of each model based on the grid search algorithm and the TPE search algorithm, thereby obtaining each indicator calculation model.
  • the embodiment of the present disclosure uses the grid search algorithm and the TPE search algorithm to calculate the optimal parameters of each model.
  • the grid search algorithm enumerates every possible value of each parameter, completes the combination of parameters through permutation and combination, uses the training data set D_train to train the model for each parameter combination, and completes verification of the model effects through the verification data set D_val.
  • Step 406 Use the fine-tuned sample data corresponding to the time series data to be measured to fine-tune the parameters of each of the index calculation models, thereby obtaining the fine-tuned index calculation models.
  • Step 407 Use the test set to test each of the fine-tuned index calculation models, thereby selecting the index calculation model with the best test results.
  • the test data set D_test is finally used to test the effects of different models, and the model with the best results is selected.
  • Step 408 Extract the features to be measured from the time series data to be measured, and input the features to be measured into the trained indicator calculation model, thereby outputting indicator data.
  • FIG. 5 is a schematic diagram of the main modules of a device for calculating index data according to an embodiment of the present disclosure.
  • the device 500 for calculating indicator data includes a screening module 501, a training module 502 and a calculation module 503; wherein the screening module 501 is used to screen out sample time series data that are similar to the time series data to be measured and to perform feature extraction on the sample time series data, thereby constructing a data set; wherein the indicators corresponding to the time series data to be measured are different from the indicators corresponding to the sample time series data, and the number of entries of the time series data to be measured is less than the number of entries of the sample time series data; the training module 502 is used to train the indicator calculation model using the data set to obtain the trained indicator calculation model; the calculation module 503 is used to extract the features to be measured from the time series data to be measured and to input the features to be measured into the trained indicator calculation model, thereby outputting indicator data.
  • the screening module 501 is also used to:
  • the number of clusters is the square root of the total number of the respective sample time series data.
  • the screening module 501 is also used to:
  • N is less than M, and N and M are both positive integers.
  • the screening module 501 is also used to:
  • the sample features include the moment and the indicator data before that moment
  • the sample labels include the indicator data after that moment
  • a data set is constructed based on the sample characteristics and sample labels corresponding to each moment in each sample time series data.
  • the training module 502 is also used to:
  • the test set is used to test each index calculation model after the parameter adjustment, thereby screening out the index calculation model with the best test results.
  • the training module 502 is also used to:
  • a sliding window is used to extract features at each moment in the time series data to be measured, obtaining the sample features and sample labels corresponding to each moment; wherein the sample features include the moment and the indicator data before the moment, and the sample label includes the indicator data after the moment;
  • FIG. 6 illustrates an exemplary system architecture 600 to which the method for calculating indicator data or the device for calculating indicator data of the embodiments of the present disclosure may be applied.
  • the system architecture 600 may include terminal devices 601, 602, 603, a network 604 and a server 605.
  • Network 604 is a medium used to provide communication links between terminal devices 601, 602, 603 and server 605.
  • Network 604 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
  • Users can use the terminal devices 601, 602, 603 to interact with the server 605 through the network 604 to receive or send messages, etc.
  • Various communication client applications can be installed on the terminal devices 601, 602, and 603, such as shopping applications, web browser applications, search applications, instant messaging tools, email clients, social platform software, etc. (only examples).
  • the terminal devices 601, 602, and 603 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop computers, desktop computers, and so on.
  • the server 605 may be a server that provides various services, such as a backend management server that provides support for shopping websites browsed by users using the terminal devices 601, 602, and 603 (example only).
  • the background management server can analyze and process the received item information query request and other data, and feed the processing results back to the terminal device.
  • the method for calculating index data provided by the embodiment of the present disclosure is generally executed by the server 605.
  • the device for calculating the index data is generally provided in the server 605.
  • the method for calculating index data provided by the embodiment of the present disclosure can also be executed by terminal devices 601, 602, and 603.
  • the device for calculating index data can be provided in the terminal devices 601, 602, and 603.
  • Referring to FIG. 7, a schematic structural diagram of a computer system 700 suitable for implementing a terminal device according to an embodiment of the present disclosure is shown.
  • the terminal device shown in FIG. 7 is only an example and should not impose any restrictions on the functions and scope of use of the embodiments of the present disclosure.
  • the computer system 700 includes a central processing unit (CPU) 701 that can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage portion 708 into a random access memory (RAM) 703.
  • in the RAM 703, various programs and data required for the operation of the system 700 are also stored.
  • the CPU 701, ROM 702, and RAM 703 are connected to each other through a bus 704.
  • An input/output (I/O) interface 705 is also connected to bus 704.
  • the following components are connected to the I/O interface 705: an input portion 706 including a keyboard, a mouse, etc.; an output portion 707 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, etc.; a storage portion 708 including a hard disk, etc.; and a communication portion 709 including a network interface card such as a LAN card, a modem, etc.
  • the communication section 709 performs communication processing via a network such as the Internet.
  • a drive 710 is also connected to the I/O interface 705 as needed.
  • removable media 711, such as magnetic disks, optical disks, magneto-optical disks, semiconductor memories, etc., are installed on the drive 710 as needed, so that a computer program read therefrom is installed into the storage portion 708 as needed.
  • embodiments of the present disclosure include a computer program carried on a computer-readable medium, the computer program including program code for performing the method illustrated in the flowchart.
  • the computer program may be downloaded and installed from the network via communication portion 709 and/or installed from removable media 711 .
  • when the computer program is executed by the central processing unit (CPU) 701, the above-described functions defined in the system of the present disclosure are performed.
  • the computer-readable medium shown in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
  • the computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the above.
  • a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer-readable medium may be transmitted using any suitable medium, including but not limited to: wireless, wire, optical cable, RF, etc., or any suitable combination of the foregoing.
  • each block in the flowchart or block diagrams may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown one after another may actually execute substantially in parallel, or they may sometimes execute in the reverse order, depending on the functionality involved.
  • each block in the block diagrams or flowchart illustrations, and combinations of blocks therein, can be implemented by special-purpose hardware-based systems that perform the specified functions or operations, or by a combination of special-purpose hardware and computer instructions.
  • the modules involved in the embodiments of the present disclosure can be implemented in software or hardware.
  • the described module can also be set in the processor.
  • a processor includes a screening module, a training module and a calculation module, where the names of these modules do not, in some cases, constitute a limitation on the modules themselves.
  • the present disclosure also provides a computer-readable medium.
  • the computer-readable medium may be included in the device described in the above embodiments, or it may exist separately without being assembled into the device.
  • the above-mentioned computer-readable medium carries one or more programs.
  • when the one or more programs are executed by the device, the device implements the following method: screening out sample time series data similar to the time series data to be measured, and performing feature extraction on the sample time series data to construct a data set, wherein the indicator corresponding to the time series data to be measured is different from the indicator corresponding to the sample time series data, and the number of entries of the time series data to be measured is less than the number of entries of the sample time series data; training the indicator calculation model using the data set to obtain the trained indicator calculation model; and extracting the features to be measured from the time series data to be measured and inputting them into the trained indicator calculation model to output the indicator data.
  • embodiments of the present disclosure also provide a computer program product, including a computer program that implements the method described in any of the above embodiments when executed by a processor.
  • the technical means of screening out sample time series data similar to the time series data to be measured, performing feature extraction on the sample time series data, and constructing a data set is adopted, thus overcoming the technical problems of high manpower and time consumption and of data sparsity in the prior art.
  • embodiments of the present disclosure solve the problem of data sparsity by extracting features from sample time series data similar to the time series data to be measured and constructing a data set; even complex machine learning or deep learning models can then be used, which effectively improves the calculation accuracy of the model and also effectively reduces human resource and time costs.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present disclosure provides a method and device for calculating indicator data, relating to the field of big data technology. A specific implementation of the method includes: screening out sample time series data similar to the time series data to be measured, and performing feature extraction on the sample time series data to construct a data set, wherein the indicator corresponding to the time series data to be measured is different from the indicator corresponding to the sample time series data, and the number of entries of the time series data to be measured is less than the number of entries of the sample time series data; training an indicator calculation model using the data set to obtain a trained indicator calculation model; and extracting features to be measured from the time series data to be measured and inputting them into the trained indicator calculation model to output indicator data. This implementation can solve the technical problems of high manpower and time consumption and of data sparsity.

Description

Method and device for calculating indicator data
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to Chinese patent application No. 202210895954.8, entitled "Method and device for calculating indicator data" and filed on July 27, 2022, the entire disclosure of which is incorporated herein by reference as part or all of this application.
TECHNICAL FIELD
The present disclosure relates to the field of big data technology, and in particular to a method and device for calculating indicator data.
BACKGROUND
With the development of big data and intelligent systems, cities are becoming increasingly intelligent. Indicator calculation is an indispensable part of urban intelligence and plays a vital role in analyzing and judging urban development, scheduling resources, and so on.
In the process of implementing the present disclosure, the inventors found at least the following problems in the prior art:
1) High manpower and time consumption: there are many indicators in a city, yet the commonly used approach is to design a separate machine learning or deep learning model for each indicator to fit the data (training phase) and thereby complete the calculation of the indicator (inference phase). For a specific demand scenario, such as social retail sales forecasting or sales volume forecasting, the data collection phase usually accesses data from subsystems, the feature extraction phase usually uses a sliding window, model training is usually supported by existing open-source algorithm libraries such as sklearn, and the model inference phase uses the most recent features to calculate the indicator. This process relies on human experience to model each indicator, which often consumes enormous manpower and time.
2) Data sparsity: most urban indicators are macroscopic indicators whose historical data cover a relatively limited time span. With sparse data, it is usually difficult to use complex machine learning or deep learning models.
SUMMARY
In view of this, embodiments of the present disclosure provide a method and device for calculating indicator data, so as to solve the technical problems of high manpower and time consumption and of data sparsity.
To achieve the above object, according to one aspect of the embodiments of the present disclosure, a method for calculating indicator data is provided, including:
screening out sample time series data similar to the time series data to be measured, and performing feature extraction on the sample time series data to construct a data set; wherein the indicator corresponding to the time series data to be measured is different from the indicator corresponding to the sample time series data, and the number of entries of the time series data to be measured is less than the number of entries of the sample time series data;
training an indicator calculation model using the data set to obtain a trained indicator calculation model;
extracting features to be measured from the time series data to be measured, and inputting the features to be measured into the trained indicator calculation model to output indicator data.
According to one or more embodiments of the present disclosure, screening out sample time series data similar to the time series data to be measured includes:
inputting the time series data to be measured and the respective sample time series data into a trained encoder, and outputting the encoding vector corresponding to the time series data to be measured and the encoding vectors corresponding to the respective sample time series data;
clustering the encoding vectors corresponding to the respective sample time series data using a clustering algorithm to obtain a plurality of clusters and a feature center vector corresponding to each cluster;
screening out several sample time series data similar to the time series data to be measured based on the encoding vector corresponding to the time series data to be measured and the feature center vectors corresponding to the respective clusters.
According to one or more embodiments of the present disclosure, the number of clusters is the square root of the total number of the sample time series data.
According to one or more embodiments of the present disclosure, screening out several sample time series data similar to the time series data to be measured based on the encoding vector corresponding to the time series data to be measured and the feature center vectors corresponding to the respective clusters includes:
calculating the similarity between the encoding vector corresponding to the time series data to be measured and the feature center vector corresponding to each cluster, and screening out the N clusters with the greatest similarity to the time series data to be measured;
calculating the similarity between the encoding vector corresponding to the time series data to be measured and the encoding vector corresponding to each sample time series data in the N clusters, and screening out the M sample time series data with the greatest similarity to the time series data to be measured;
wherein N is less than M, and N and M are both positive integers.
According to one or more embodiments of the present disclosure, performing feature extraction on the sample time series data to construct a data set includes:
for each sample time series data, using a sliding window to extract features at each moment in the sample time series data, obtaining the sample features and sample label corresponding to each moment; wherein the sample features include the moment and the indicator data before the moment, and the sample label includes the indicator data after the moment;
constructing a data set based on the sample features and sample labels corresponding to the respective moments in the respective sample time series data.
According to one or more embodiments of the present disclosure, the sample features further include a time feature corresponding to the moment.
According to one or more embodiments of the present disclosure, training the indicator calculation model using the data set to obtain the trained indicator calculation model includes:
dividing the data set into a training data set, a validation data set and a test data set;
using the training data set and the validation data set, and based on a grid search algorithm and a TPE search algorithm, calculating the optimal parameters of each model to obtain the respective indicator calculation models;
adjusting the parameters of the respective indicator calculation models using the time series data to be measured to obtain the parameter-adjusted indicator calculation models;
testing the parameter-adjusted indicator calculation models using the test set to screen out the indicator calculation model with the best test results.
According to one or more embodiments of the present disclosure, adjusting the parameters of the respective indicator calculation models using the time series data to be measured to obtain the parameter-adjusted indicator calculation models includes:
using a sliding window to extract features at each moment in the time series data to be measured, obtaining the sample features and sample label corresponding to each moment; wherein the sample features include the moment and the indicator data before the moment, and the sample label includes the indicator data after the moment;
using the sample features and sample labels corresponding to the respective moments in the time series data to be measured, adjusting the parameters of the respective indicator calculation models to obtain the parameter-adjusted indicator calculation models.
According to one or more embodiments of the present disclosure, the sample features further include a time feature corresponding to the moment.
In addition, according to another aspect of the embodiments of the present disclosure, a device for calculating indicator data is provided, including:
a screening module, configured to screen out sample time series data similar to the time series data to be measured and to perform feature extraction on the sample time series data to construct a data set; wherein the indicator corresponding to the time series data to be measured is different from the indicator corresponding to the sample time series data, and the number of entries of the time series data to be measured is less than the number of entries of the sample time series data;
a training module, configured to train the indicator calculation model using the data set to obtain the trained indicator calculation model;
a calculation module, configured to extract the features to be measured from the time series data to be measured and to input the features to be measured into the trained indicator calculation model to output indicator data.
According to one or more embodiments of the present disclosure, the screening module is further configured to:
input the time series data to be measured and the respective sample time series data into a trained encoder, and output the encoding vector corresponding to the time series data to be measured and the encoding vectors corresponding to the respective sample time series data;
cluster the encoding vectors corresponding to the respective sample time series data using a clustering algorithm to obtain a plurality of clusters and a feature center vector corresponding to each cluster;
screen out several sample time series data similar to the time series data to be measured based on the encoding vector corresponding to the time series data to be measured and the feature center vectors corresponding to the respective clusters.
According to one or more embodiments of the present disclosure, the number of clusters is the square root of the total number of the sample time series data.
According to one or more embodiments of the present disclosure, the screening module is further configured to:
calculate the similarity between the encoding vector corresponding to the time series data to be measured and the feature center vector corresponding to each cluster, and screen out the N clusters with the greatest similarity to the time series data to be measured;
calculate the similarity between the encoding vector corresponding to the time series data to be measured and the encoding vector corresponding to each sample time series data in the N clusters, and screen out the M sample time series data with the greatest similarity to the time series data to be measured;
wherein N is less than M, and N and M are both positive integers.
According to one or more embodiments of the present disclosure, the screening module is further configured to:
for each sample time series data, use a sliding window to extract features at each moment in the sample time series data, obtaining the sample features and sample label corresponding to each moment; wherein the sample features include the moment and the indicator data before the moment, and the sample label includes the indicator data after the moment;
construct a data set based on the sample features and sample labels corresponding to the respective moments in the respective sample time series data.
According to one or more embodiments of the present disclosure, the training module is further configured to:
divide the data set into a training data set, a validation data set and a test data set;
use the training data set and the validation data set, and based on the grid search algorithm and the TPE search algorithm, calculate the optimal parameters of each model to obtain the respective indicator calculation models;
adjust the parameters of the respective indicator calculation models using the time series data to be measured to obtain the parameter-adjusted indicator calculation models;
test the parameter-adjusted indicator calculation models using the test set to screen out the indicator calculation model with the best test results.
According to one or more embodiments of the present disclosure, the training module is further configured to:
use a sliding window to extract features at each moment in the time series data to be measured, obtaining the sample features and sample label corresponding to each moment; wherein the sample features include the moment and the indicator data before the moment, and the sample label includes the indicator data after the moment;
use the sample features and sample labels corresponding to the respective moments in the time series data to be measured to adjust the parameters of the respective indicator calculation models, thereby obtaining the parameter-adjusted indicator calculation models.
According to another aspect of the embodiments of the present disclosure, an electronic device is further provided, including:
one or more processors;
a storage device, configured to store one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method described in any of the above embodiments.
According to another aspect of the embodiments of the present disclosure, a computer-readable medium is further provided, on which a computer program is stored, and the program, when executed by a processor, implements the method described in any of the above embodiments.
According to another aspect of the embodiments of the present disclosure, a computer program product is further provided, including a computer program that, when executed by a processor, implements the method described in any of the above embodiments.
One of the above embodiments of the invention has the following advantages or beneficial effects: because the technical means of screening out sample time series data similar to the time series data to be measured and performing feature extraction on the sample time series data to construct a data set is adopted, the technical problems of high manpower and time consumption and of data sparsity in the prior art are overcome. By extracting features from sample time series data similar to the time series data to be measured and constructing a data set, embodiments of the present disclosure solve the problem of data sparsity, so that even complex machine learning or deep learning models can be used, which effectively improves the calculation accuracy of the model and also effectively reduces human resource and time costs.
Further effects of the above non-conventional alternatives will be described below in conjunction with the specific embodiments.
BRIEF DESCRIPTION OF THE DRAWINGS
In order to explain the technical solutions in the embodiments of the present disclosure or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present disclosure; for those of ordinary skill in the art, other drawings can be obtained from them without creative effort. In the drawings:
FIG. 1 is a schematic diagram of the main flow of a method for calculating indicator data according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of screening sample time series data according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of extracting features from sample time series data using a sliding window according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of the main flow of a method for calculating indicator data according to a referable embodiment of the present disclosure;
FIG. 5 is a schematic diagram of the main modules of a device for calculating indicator data according to an embodiment of the present disclosure;
FIG. 6 is a diagram of an exemplary system architecture to which embodiments of the present disclosure may be applied;
FIG. 7 is a schematic structural diagram of a computer system suitable for implementing a terminal device or a server according to an embodiment of the present disclosure.
DETAILED DESCRIPTION
Exemplary embodiments of the present disclosure are described below with reference to the drawings, including various details of the embodiments to aid understanding; they should be considered merely exemplary. Therefore, those of ordinary skill in the art should recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present disclosure. Likewise, descriptions of well-known functions and structures are omitted from the following description for clarity and conciseness.
FIG. 1 is a schematic diagram of the main flow of a method for calculating indicator data according to an embodiment of the present disclosure. As one embodiment of the present disclosure, as shown in FIG. 1, the method for calculating indicator data may include:
Step 101: screening out sample time series data similar to the time series data to be measured, and performing feature extraction on the sample time series data to construct a data set.
To solve the problem of data sparsity, embodiments of the present disclosure screen out sample time series data similar to the time series data to be measured and perform feature extraction on them, thereby constructing a data set for training the model; wherein the indicator corresponding to the time series data to be measured is different from the indicator corresponding to the sample time series data, and the number of entries of the time series data to be measured is less than the number of entries of the sample time series data. Since the time series data to be measured has few entries (fewer than a preset entry-count threshold, for example fewer than 500 entries or fewer than 100 entries, where the threshold can be preset according to business needs), sample time series data similar to it need to be screened out and used as the training samples.
Step 101 can be divided into two steps: the first step screens out, from the existing historical data, M sample time series data similar to the time series data to be measured; the second step performs feature extraction on the M screened-out sample time series data.
Specifically, data need to be collected first. Data collection usually comes from the various business subsystems; each piece of data has the following format: it contains one time field and one or more dimension fields (there is at least one dimension field, and a dimension represents an indicator to be calculated, such as sales volume or sales revenue).

According to one or more embodiments of the present disclosure, screening out sample time series data similar to the time series data to be measured includes: inputting the time series data to be measured and the respective sample time series data into a trained encoder, and outputting the encoding vector corresponding to the time series data to be measured and the encoding vectors corresponding to the respective sample time series data; clustering the encoding vectors corresponding to the respective sample time series data using a clustering algorithm to obtain a plurality of clusters and a feature center vector corresponding to each cluster; and screening out several sample time series data similar to the time series data to be measured based on the encoding vector corresponding to the time series data to be measured and the feature center vectors corresponding to the respective clusters.
As shown in FIG. 2, the time series data to be measured and the respective sample time series data are input into a trained encoder (Encoder); assuming there are n historical time series data, n encoding vectors are obtained. A clustering algorithm (for example, the K-Means clustering algorithm) is then used to cluster the encoding vectors corresponding to the respective sample time series data to obtain a plurality of clusters and a feature center vector corresponding to each cluster. Finally, several sample time series data similar to the time series data to be measured are screened out based on the encoding vector corresponding to the time series data to be measured and the feature center vectors corresponding to the respective clusters.
It should be noted that the encoder needs to be trained in advance: an encoder (Encoder) and a decoder (Decoder) are trained via an autoencoder model. The Encoder receives the time series data x as input and generates a fixed-length encoding vector f = f(x) ∈ R^d, where d is the dimension of the encoding vector and can be taken as 128; the Decoder is responsible for restoring the original input data from the encoding vector f. Once the encoder is obtained, it can be used to encode each historical sample time series data.
According to one or more embodiments of the present disclosure, the number of clusters is the square root of the total number of the sample time series data. Assuming that n sample time series data are clustered, √n clusters and their corresponding feature center vectors are finally formed. The initial points of the clustering can be preset, for example √n initial points according to the total number of sample time series data, which can speed up the similarity calculation in the subsequent steps.
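The encoding-and-clustering step above can be sketched as follows. This is a minimal illustration under stated assumptions: the real encoder is learned with an autoencoder as described, so a fixed random projection stands in for it here, and a plain Lloyd's-iteration K-Means replaces a library implementation; all names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, series_len, d = 100, 64, 16
series = rng.normal(size=(n, series_len))   # n historical sample time series
# Hypothetical stand-in for the trained encoder f(x) in R^d: a fixed random
# projection keeps the sketch runnable.
projection = rng.normal(size=(series_len, d))
codes = series @ projection                 # encoding vectors, shape (n, d)

def kmeans(x, k, iters=20):
    """Plain Lloyd's algorithm; returns (centers, labels)."""
    centers = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):         # skip empty clusters
                centers[j] = x[labels == j].mean(axis=0)
    return centers, labels

k = int(np.sqrt(n))                         # cluster count = sqrt(n), per the text
centers, labels = kmeans(codes, k)
print(centers.shape)                        # (10, 16)
```

Each row of `centers` plays the role of one cluster's feature center vector in the coarse-ranking stage described next.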
According to one or more embodiments of the present disclosure, screening out several sample time series data similar to the time series data to be measured based on the encoding vector corresponding to the time series data to be measured and the feature center vectors corresponding to the respective clusters includes: calculating the similarity between the encoding vector corresponding to the time series data to be measured and the feature center vector corresponding to each cluster, and screening out the N clusters with the greatest similarity to the time series data to be measured; calculating the similarity between the encoding vector corresponding to the time series data to be measured and the encoding vector corresponding to each sample time series data in the N clusters, and screening out the M sample time series data with the greatest similarity to the time series data to be measured; wherein N is less than M, and N and M are both positive integers.
As shown in FIG. 2, for a piece of time series data to be measured, an encoding vector g = g(x) is first generated by the encoder; the similarity between the vector g and the feature center vector of each cluster is then calculated, and the two feature center vectors U, V with the greatest similarity are screened out; next, the similarity between the vector g and the encoding vector of each sample time series data in the two clusters U, V is calculated, and the 10 vectors with the greatest similarity are selected, that is,
v* = argmax sim(g, v)
v ∈ U ∪ V
Thus, through the above process, the 10 sample time series data most similar to the time series data to be measured can be obtained. It should be noted that the values of N and M can be preset; the embodiments of the present disclosure show values of N and M by way of example but are not limited to the values shown.
The prior art usually needs to calculate the pairwise similarity between the time series data to be measured and each sample time series data, with a time complexity of O(n), whereas the time complexity of the embodiment of the present disclosure is O(√n), which significantly reduces the time complexity of the similarity calculation. Therefore, through one coarse ranking and one fine ranking, the embodiments of the present disclosure can reduce the time complexity of comparing the similarity with every sample time series data one by one.
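The coarse-then-fine ranking with N = 2 and M = 10 can be sketched as follows. Cosine similarity and the synthetic encoding vectors are assumptions of this sketch; the text does not fix a particular similarity function.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 100, 16, 10
codes = rng.normal(size=(n, d))              # encoding vectors of the n samples
labels = np.arange(n) % k                    # toy cluster assignment, 10 per cluster
centers = np.stack([codes[labels == j].mean(axis=0) for j in range(k)])
g = rng.normal(size=d)                       # encoding vector g of the series under test

def cos_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

N, M = 2, 10
# Coarse stage: keep the N clusters whose feature centers are most similar to g.
top_clusters = np.argsort([cos_sim(g, c) for c in centers])[-N:]
# Fine stage: among the members of those clusters only, keep the M most similar series.
members = np.flatnonzero(np.isin(labels, top_clusters))
sims = np.array([cos_sim(g, codes[i]) for i in members])
top_m = members[np.argsort(sims)[-M:]]
print(len(top_m))                            # 10
```

The fine stage only scores the members of the N retained clusters, which is what brings the overall comparison cost down from O(n) toward O(√n).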
According to one or more embodiments of the present disclosure, performing feature extraction on the sample time series data to construct a data set includes: for each sample time series data, using a sliding window to extract features at each moment in the sample time series data, obtaining the sample features and sample label corresponding to each moment, wherein the sample features include the moment and the indicator data before the moment, and the sample label includes the indicator data after the moment; and constructing a data set based on the sample features and sample labels corresponding to the respective moments in the respective sample time series data.
As shown in FIG. 3, features are extracted from the sample time series data in a sliding-window manner. Suppose each moment of the sample time series data to be extracted contains d_f-dimensional features, the size of the sliding window is s, and the step length to be predicted is l. Then the sample features extracted at moment i are (x_{i-s+1}, …, x_{i-1}, x_i), and the sample label is (x_{i+1}, x_{i+2}, …, x_{i+l}). It should be noted that since the prediction step length is l, to ensure calculation accuracy the length of the sample label is preferably also l.
According to one or more embodiments of the present disclosure, the sample features further include a time feature corresponding to the moment. In the embodiments of the present disclosure, to improve calculation accuracy, the time feature corresponding to each moment in the sample time series data also needs to be extracted. Specifically, the dimensional information of the time is extracted first, including the month, the day and the day of the week corresponding to the current moment, the day of the year of the date, the week of the year of the date, the quarter of the date, whether the date is a working day or a holiday, and so on. If the granularity of the time information is finer, such as minute or hour granularity (traffic flow prediction scenarios, passenger flow prediction scenarios), hour information, minute information, etc. can also be extracted; the embodiments of the present disclosure do not limit this. Then, the features extracted through the sliding window and the time features are concatenated to form a complete sample feature, and this sample feature together with the corresponding sample label constitutes one piece of sample data.
For each sample time series data, multiple pieces of sample data can be extracted in the sliding-window manner, so the pieces of sample data corresponding to each sample time series data together form a data set.
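The sliding-window construction can be sketched with s = 7 and l = 3 on a toy daily series, including the calendar features named above (month, day, weekday, day of year, ISO week, quarter); the is-holiday flag from the text is simplified to a weekend flag here, which is an assumption of this sketch.

```python
from datetime import date, timedelta

import numpy as np

start = date(2022, 1, 1)
values = np.arange(30, dtype=float)          # 30 days of a toy indicator series
s, l = 7, 3                                  # window size s, prediction step length l

X, y = [], []
for i in range(s - 1, len(values) - l):
    window = values[i - s + 1 : i + 1]       # (x_{i-s+1}, ..., x_i)
    d = start + timedelta(days=i)
    time_feats = [d.month, d.day, d.weekday(), d.timetuple().tm_yday,
                  d.isocalendar()[1], (d.month - 1) // 3 + 1,
                  int(d.weekday() >= 5)]     # weekend flag instead of holiday flag
    X.append(np.concatenate([window, time_feats]))
    y.append(values[i + 1 : i + 1 + l])      # label (x_{i+1}, ..., x_{i+l})

X, y = np.array(X), np.array(y)
print(X.shape, y.shape)                      # (21, 14) (21, 3)
```

Each row of `X` is one complete sample feature (window values concatenated with time features) and the matching row of `y` is its sample label.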
Step 102: training the indicator calculation model using the data set to obtain the trained indicator calculation model.
After the data set is obtained, the sample data in the data set are used to train the indicator calculation model; the indicator calculation model fits the sample data, and the trained indicator calculation model is finally obtained.
According to one or more embodiments of the present disclosure, step 102 may include: dividing the data set into a training data set, a validation data set and a test data set; using the training data set and the validation data set, and based on the grid search algorithm and the TPE search algorithm, calculating the optimal parameters of each model to obtain the respective indicator calculation models; adjusting the parameters of the respective indicator calculation models using the time series data to be measured to obtain the parameter-adjusted indicator calculation models; and testing the parameter-adjusted indicator calculation models using the test set to screen out the indicator calculation model with the best test results. Specifically, the data set can be divided into a training data set D_train, a validation data set D_val and a test data set D_test; wherein the training data set is used to learn the parameters Θ of a model m; the validation data set is used to select the model m (that is, to select the best-performing model from multiple models; for prediction tasks, MSE can be used for model selection) and the optimal parameters Θ* of the model; and the test data set is used to test the final effect of the model.
Usually the data set is divided in equal proportions according to the time order, for example training data : validation data : test data = 8 : 1 : 1.
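The chronological 8:1:1 split can be sketched as follows; no shuffling is done, so the validation and test samples stay strictly later in time than the training samples.

```python
import numpy as np

X = np.arange(200).reshape(100, 2)           # 100 samples in time order (toy data)
y = np.arange(100)

def chrono_split(X, y, ratios=(8, 1, 1)):
    """Split chronologically (no shuffling) in the given proportions."""
    n, total = len(X), sum(ratios)
    i = n * ratios[0] // total
    j = i + n * ratios[1] // total
    return (X[:i], y[:i]), (X[i:j], y[i:j]), (X[j:], y[j:])

(train_X, train_y), (val_X, val_y), (test_X, test_y) = chrono_split(X, y)
print(len(train_X), len(val_X), len(test_X))  # 80 10 10
```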
To improve the accuracy of the trained model, embodiments of the present disclosure use a hyperparameter optimization method to search for the optimal result of the trained model. In the process of hyperparameter optimization, what matters most is the model space (which models may be selected) and the parameter space corresponding to each model (the space from which each model's hyperparameters may be chosen). According to one or more embodiments of the present disclosure, the model space and the corresponding parameter spaces included in the embodiments are as follows:
ARIMA model: the ARIMA model is a differenced autoregressive moving average model; it obtains a stationary series by differencing (Δx = x_{i+1} - x_i), and then uses an autoregressive model to fit the linear relationship between the current data and the historical sliding-window data. For the ARIMA model, the parameter space includes the number of differencing operations (d), the autoregressive length (p), and so on.
Ridge model: the Ridge model fits the training data with a linear model to which a regularization term is added, where the model hypothesis is ŷ = Σ_{j=1}^{d} w_j x_j + b, and the loss function L = Σ_i (y_i - ŷ_i)² + α‖w‖² is used; where d is the dimension of the data features, the features of a sample are (x_1, x_2, …, x_d), y_i and ŷ_i are the sample label and the corresponding predicted value, respectively, and α is the regularization coefficient. For the Ridge model, the parameter space is the regularization coefficient α.
RandomForest model: RandomForest is a random forest model that integrates multiple decision trees to complete the prediction. The parameter space of the model includes the number of trees, the sampling ratio at split points, the minimum number of samples per leaf node, and so on.
Xgboost model: xgboost is a boosted tree model in which each tree fits the residual between the existing tree model and the label data, that is, r_i = y_i - ŷ_i^{(t-1)}. The parameter space of the model includes the number of trees, the learning rate, the regularization term, and so on.
For the search over these parameters, embodiments of the present disclosure use the grid search algorithm and the TPE (Tree-structured Parzen Estimator) search algorithm. The grid search algorithm enumerates every possible value of each parameter and forms parameter combinations by permutation; for each parameter combination, the parameters of the model are trained using the training data set D_train, and the model's effect is verified on the validation data set D_val. Suppose there are two parameters a and b, whose possible values are (a1, a2, a3) and (b1, b2); then the parameter groups to be enumerated by the grid search algorithm include (a1, b1), (a1, b2), (a2, b1), (a2, b2), (a3, b1), (a3, b2).
The TPE search algorithm constructs, for each parameter group θ_i and its evaluation metric val_i on the validation set D_val, a sample pair (θ_i, val_i); it then fits the data set composed of parameter combinations and evaluation metrics with a Gaussian process, y = f(θ; θ_i, val_i), and infers the parameter corresponding to the maximum evaluation metric, θ* = argmax f(θ; θ_i, val_i). The above process is repeated until the optimal parameter θ* is obtained or the stopping condition of the algorithm is reached.
Embodiments of the present disclosure combine the grid search algorithm and the TPE search algorithm mainly for the following two reasons: 1. when there are few parameter groups to be searched, the grid search algorithm is efficient; 2. the TPE algorithm can make up for the shortcomings of the grid search algorithm by exploring possible optimal solutions in a larger space, mainly solving the problem that the equidistant division of the grid search algorithm leads to an insufficient search space.
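The grid-search enumeration from the a/b example above can be sketched as follows. The scoring function is a hypothetical stand-in for training on D_train and evaluating on D_val, and a TPE implementation (for example hyperopt's `tpe.suggest`) could then continue the search in a larger space; both the scores and the library choice are assumptions of this sketch.

```python
import itertools

param_space = {"a": ["a1", "a2", "a3"], "b": ["b1", "b2"]}

def grid(space):
    """Yield every parameter combination of the space."""
    keys = list(space)
    for values in itertools.product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))

combos = list(grid(param_space))
print(len(combos))                            # 6 combinations, as in the text

def val_score(params):                        # hypothetical validation-set MSE
    return {"a1": 3.0, "a2": 1.0, "a3": 2.0}[params["a"]] \
        + (0.5 if params["b"] == "b2" else 0.0)

best = min(combos, key=val_score)
print(best)                                   # {'a': 'a2', 'b': 'b1'}
```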
According to one or more embodiments of the present disclosure, adjusting the parameters of the respective indicator calculation models using the time series data to be measured to obtain the parameter-adjusted indicator calculation models includes: using a sliding window to extract features at each moment in the time series data to be measured, obtaining the sample features and sample label corresponding to each moment, wherein the sample features include the moment and the indicator data before the moment, and the sample label includes the indicator data after the moment; and using the sample features and sample labels corresponding to the respective moments in the time series data to be measured to adjust the parameters of the respective indicator calculation models, thereby obtaining the parameter-adjusted indicator calculation models.
As shown in FIG. 3, similarly to the feature extraction of the sample time series data, features are extracted at each moment of the time series data to be measured in a sliding-window manner, with a sliding window of size s and a prediction step length l; the sample features extracted at moment i are (x_{i-s+1}, …, x_{i-1}, x_i), and the sample label is (x_{i+1}, x_{i+2}, …, x_{i+l}). Then the sample features and sample labels corresponding to the respective moments in the time series data to be measured (that is, the training set) are used to fine-tune the parameters of the respective indicator calculation models. Assuming the parameters of the indicator calculation model with the best test results are θ̂, the parameters of the fine-tuned model are θ* = θ̂ - η∇_θ L(f(x; θ̂), y), where θ* is the optimal parameter of the fine-tuned model, η is the learning rate of the model parameters in the fine-tuning stage, L is the MSE loss function used to measure the error of the model on the training set, and (x, y) is the sample data in the training set. According to one or more embodiments of the present disclosure, in a specific implementation this can be achieved by mini-batch stochastic gradient descent (mini-batch SGD).
Since the indicator calculation model adopted in the embodiments of the present disclosure is trained only with single-dimension data, the model is likewise not fine-tuned with multi-dimensional data.
According to one or more embodiments of the present disclosure, the sample features further include the time feature corresponding to the moment. Similarly to the feature extraction of the sample time series data, the time feature corresponding to each moment also needs to be extracted. Specifically, the dimensional information of the time is extracted first, including the month, the day and the day of the week corresponding to the current moment, the day of the year of the date, the week of the year of the date, the quarter of the date, whether the date is a working day or a holiday, and so on. If the granularity of the time information is finer, such as minute or hour granularity (traffic flow prediction scenarios, passenger flow prediction scenarios), hour information, minute information, etc. can also be extracted; the embodiments of the present disclosure do not limit this. Then, the features extracted through the sliding window and the time features are concatenated to form a complete sample feature, and this sample feature together with the corresponding sample label constitutes one piece of sample data.
To prevent overfitting to the validation set, the test data set D_test is finally used to test the effects of the different models, and the model with the best results is selected, where the evaluation metric used is MSE = (1/n)Σ(y - ŷ)², y is the sample label and ŷ is the prediction result of the model. After model validation, the optimal indicator calculation model f* can be obtained.
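The fine-tuning update θ* = θ̂ - η∇L can be illustrated with a linear model and an MSE loss. The data, learning rate and step count are toy assumptions, and full-batch gradient steps stand in for mini-batch SGD.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(32, 5))                  # fine-tuning samples from the series under test
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0])  # toy labels from a known linear rule

theta = np.zeros(5)                           # parameters theta_hat of the pre-trained model
eta = 0.1                                     # learning rate of the fine-tuning stage

def mse(t):
    return float(np.mean((X @ t - y) ** 2))

before = mse(theta)
for _ in range(100):                          # full-batch gradient steps (mini-batch SGD stand-in)
    grad = 2 * X.T @ (X @ theta - y) / len(X) # gradient of the MSE loss
    theta = theta - eta * grad                # theta <- theta - eta * dL/dtheta
after = mse(theta)
assert after < before                         # fine-tuning reduces the training-set error
```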
Step 103: extracting the features to be measured from the time series data to be measured, and inputting the features to be measured into the trained indicator calculation model to output the indicator data.
For the indicator to be measured, calculating the indicator data usually requires feature extraction from the most recent period of data; the extracted features are denoted x_p (that is, the features to be measured), and then the optimal indicator calculation model f* is used to calculate the indicator data, obtaining ŷ_p = f*(x_p), which is the output result of the indicator calculation model.
From the various embodiments described above, it can be seen that the embodiments of the present disclosure, through the technical means of screening out sample time series data similar to the time series data to be measured and performing feature extraction on the sample time series data to construct a data set, solve the technical problems of high manpower and time consumption and of data sparsity in the prior art. By extracting features from sample time series data similar to the time series data to be measured and constructing a data set, the embodiments of the present disclosure solve the problem of data sparsity, so that even complex machine learning or deep learning models can be used, which effectively improves the calculation accuracy of the model and also effectively reduces human resource and time costs.
FIG. 4 is a schematic diagram of the main flow of a method for calculating indicator data according to a referable embodiment of the present disclosure. As yet another embodiment of the present disclosure, as shown in FIG. 4, the method for calculating indicator data may include:
Step 401: collecting the historical sample time series data.
Specifically, for each indicator, the sample time series data corresponding to that indicator is collected.
Step 402: screening out, from the respective sample time series data, sample time series data similar to the time series data to be measured.
Wherein the indicator corresponding to the time series data to be measured is different from the indicator corresponding to the sample time series data, and the number of entries of the time series data to be measured is less than the number of entries of the sample time series data.
According to one or more embodiments of the present disclosure, step 402 may include: inputting the time series data to be measured and the respective sample time series data into a trained encoder, and outputting the encoding vector corresponding to the time series data to be measured and the encoding vectors corresponding to the respective sample time series data; clustering the encoding vectors corresponding to the respective sample time series data using a clustering algorithm to obtain a plurality of clusters and a feature center vector corresponding to each cluster (one coarse ranking); and screening out several sample time series data similar to the time series data to be measured based on the encoding vector corresponding to the time series data to be measured and the feature center vectors corresponding to the respective clusters (one fine ranking). Through one coarse ranking and one fine ranking, the embodiments of the present disclosure can reduce the time complexity of comparing the similarity with every sample time series data one by one.
Step 403: performing feature extraction on the sample time series data to construct a data set, and performing feature extraction on the time series data to be measured to obtain fine-tuning sample data.
In the embodiments of the present disclosure, features are extracted from both the sample time series data and the time series data to be measured in a sliding-window manner, thereby constructing the data set and the fine-tuning sample data. The data set is used to train the respective models, and the fine-tuning sample data is used to fine-tune the parameters of the respective models.
Step 404: dividing the data set into a training data set, a validation data set and a test data set.
Step 405: using the training data set and the validation data set, and based on the grid search algorithm and the TPE search algorithm, calculating the optimal parameters of each model to obtain the respective indicator calculation models.
The embodiments of the present disclosure use the grid search algorithm and the TPE search algorithm to calculate the optimal parameters of each model. The grid search algorithm enumerates every possible value of each parameter and forms parameter combinations by permutation; for each parameter combination, the parameters of the model are trained using the training data set D_train, and the model's effect is verified on the validation data set D_val.
Step 406: using the fine-tuning sample data corresponding to the time series data to be measured to fine-tune the parameters of the respective indicator calculation models, thereby obtaining the fine-tuned indicator calculation models.
Step 407: testing the fine-tuned indicator calculation models using the test set to screen out the indicator calculation model with the best test results.
To prevent overfitting to the validation set, the test data set D_test is finally used to test the effects of the different models, and the model with the best results is selected.
Step 408: extracting the features to be measured from the time series data to be measured, and inputting the features to be measured into the trained indicator calculation model to output the indicator data.
The features to be measured are extracted from the most recent period of the time series data to be measured and denoted x_p; the optimal indicator calculation model f* is then used to calculate the indicator data, obtaining ŷ_p = f*(x_p), which is the output result of the indicator calculation model.
In addition, the specific implementation of the method for calculating indicator data in this referable embodiment of the present disclosure has been described in detail in the method for calculating indicator data above, so the repeated content will not be described again here.
FIG. 5 is a schematic diagram of the main modules of a device for calculating indicator data according to an embodiment of the present disclosure. As shown in FIG. 5, the device 500 for calculating indicator data includes a screening module 501, a training module 502 and a calculation module 503; wherein the screening module 501 is configured to screen out sample time series data similar to the time series data to be measured and to perform feature extraction on the sample time series data, thereby constructing a data set, wherein the indicator corresponding to the time series data to be measured is different from the indicator corresponding to the sample time series data, and the number of entries of the time series data to be measured is less than the number of entries of the sample time series data; the training module 502 is configured to train the indicator calculation model using the data set to obtain the trained indicator calculation model; and the calculation module 503 is configured to extract the features to be measured from the time series data to be measured and to input the features to be measured into the trained indicator calculation model, thereby outputting the indicator data.
According to one or more embodiments of the present disclosure, the screening module 501 is further configured to:
input the time series data to be measured and the respective sample time series data into a trained encoder, and output the encoding vector corresponding to the time series data to be measured and the encoding vectors corresponding to the respective sample time series data;
cluster the encoding vectors corresponding to the respective sample time series data using a clustering algorithm to obtain a plurality of clusters and a feature center vector corresponding to each cluster;
screen out several sample time series data similar to the time series data to be measured based on the encoding vector corresponding to the time series data to be measured and the feature center vectors corresponding to the respective clusters.
According to one or more embodiments of the present disclosure, the number of clusters is the square root of the total number of the sample time series data.
According to one or more embodiments of the present disclosure, the screening module 501 is further configured to:
calculate the similarity between the encoding vector corresponding to the time series data to be measured and the feature center vector corresponding to each cluster, and screen out the N clusters with the greatest similarity to the time series data to be measured;
calculate the similarity between the encoding vector corresponding to the time series data to be measured and the encoding vector corresponding to each sample time series data in the N clusters, and screen out the M sample time series data with the greatest similarity to the time series data to be measured;
wherein N is less than M, and N and M are both positive integers.
According to one or more embodiments of the present disclosure, the screening module 501 is further configured to:
for each sample time series data, use a sliding window to extract features at each moment in the sample time series data, obtaining the sample features and sample label corresponding to each moment; wherein the sample features include the moment and the indicator data before the moment, and the sample label includes the indicator data after the moment;
construct a data set based on the sample features and sample labels corresponding to the respective moments in the respective sample time series data.
According to one or more embodiments of the present disclosure, the training module 502 is further configured to:
divide the data set into a training data set, a validation data set and a test data set;
use the training data set and the validation data set, and based on the grid search algorithm and the TPE search algorithm, calculate the optimal parameters of each model to obtain the respective indicator calculation models;
adjust the parameters of the respective indicator calculation models using the time series data to be measured to obtain the parameter-adjusted indicator calculation models;
test the parameter-adjusted indicator calculation models using the test set to screen out the indicator calculation model with the best test results.
According to one or more embodiments of the present disclosure, the training module 502 is further configured to:
use a sliding window to extract features at each moment in the time series data to be measured, obtaining the sample features and sample label corresponding to each moment; wherein the sample features include the moment and the indicator data before the moment, and the sample label includes the indicator data after the moment;
use the sample features and sample labels corresponding to the respective moments in the time series data to be measured to adjust the parameters of the respective indicator calculation models, thereby obtaining the parameter-adjusted indicator calculation models.
It should be noted that the specific implementation of the device for calculating indicator data described in the present disclosure has been explained in detail in the method for calculating indicator data above, so the repeated content will not be described again here.
FIG. 6 shows an exemplary system architecture 600 to which the method for calculating indicator data or the device for calculating indicator data of the embodiments of the present disclosure may be applied.
As shown in FIG. 6, the system architecture 600 may include terminal devices 601, 602, 603, a network 604 and a server 605. The network 604 is the medium used to provide communication links between the terminal devices 601, 602, 603 and the server 605. The network 604 may include various connection types, such as wired or wireless communication links, or fiber optic cables, among others.
Users can use the terminal devices 601, 602, 603 to interact with the server 605 through the network 604 to receive or send messages, etc. Various communication client applications can be installed on the terminal devices 601, 602, 603, such as shopping applications, web browser applications, search applications, instant messaging tools, email clients, social platform software, etc. (examples only).
The terminal devices 601, 602, 603 may be various electronic devices that have a display screen and support web browsing, including but not limited to smartphones, tablet computers, laptop computers, desktop computers, and so on.
The server 605 may be a server that provides various services, for example a backend management server that provides support for shopping websites browsed by users with the terminal devices 601, 602, 603 (example only). The backend management server can analyze and otherwise process received data such as item information query requests, and feed the processing results back to the terminal devices.
It should be noted that the method for calculating indicator data provided by the embodiments of the present disclosure is generally executed by the server 605, and accordingly the device for calculating indicator data is generally provided in the server 605. The method for calculating indicator data provided by the embodiments of the present disclosure can also be executed by the terminal devices 601, 602, 603, and accordingly the device for calculating indicator data can be provided in the terminal devices 601, 602, 603.
It should be understood that the numbers of terminal devices, networks and servers in FIG. 6 are merely illustrative. There can be any number of terminal devices, networks and servers according to implementation needs.
Referring now to FIG. 7, a schematic structural diagram of a computer system 700 suitable for implementing a terminal device according to an embodiment of the present disclosure is shown. The terminal device shown in FIG. 7 is only an example and should not impose any restriction on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 7, the computer system 700 includes a central processing unit (CPU) 701 that can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage portion 708 into a random access memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the system 700 are also stored. The CPU 701, the ROM 702 and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
The following components are connected to the I/O interface 705: an input portion 706 including a keyboard, a mouse, etc.; an output portion 707 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, etc.; a storage portion 708 including a hard disk, etc.; and a communication portion 709 including a network interface card such as a LAN card, a modem, etc. The communication portion 709 performs communication processing via a network such as the Internet. A drive 710 is also connected to the I/O interface 705 as needed. Removable media 711, such as magnetic disks, optical disks, magneto-optical disks, semiconductor memories, etc., are installed on the drive 710 as needed, so that a computer program read therefrom is installed into the storage portion 708 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts can be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program carried on a computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart. In such embodiments, the computer program may be downloaded and installed from the network via the communication portion 709, and/or installed from the removable media 711. When the computer program is executed by the central processing unit (CPU) 701, the above-described functions defined in the system of the present disclosure are performed.
It should be noted that the computer-readable medium shown in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code therein. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted using any suitable medium, including but not limited to: wireless, wire, optical cable, RF, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions and operations of systems, methods and computer programs according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions. It should also be noted that in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams or flowcharts, and combinations of blocks therein, can be implemented by special-purpose hardware-based systems that perform the specified functions or operations, or by a combination of special-purpose hardware and computer instructions.
The modules involved in the embodiments of the present disclosure can be implemented in software or in hardware. The described modules can also be provided in a processor; for example, it can be described as: a processor includes a screening module, a training module and a calculation module, where the names of these modules do not, in some cases, constitute a limitation on the modules themselves.
As another aspect, the present disclosure also provides a computer-readable medium, which may be included in the device described in the above embodiments, or may exist separately without being assembled into the device. The above computer-readable medium carries one or more programs; when the one or more programs are executed by the device, the device implements the following method: screening out sample time series data similar to the time series data to be measured, and performing feature extraction on the sample time series data to construct a data set, wherein the indicator corresponding to the time series data to be measured is different from the indicator corresponding to the sample time series data, and the number of entries of the time series data to be measured is less than the number of entries of the sample time series data; training the indicator calculation model using the data set to obtain the trained indicator calculation model; and extracting the features to be measured from the time series data to be measured, and inputting the features to be measured into the trained indicator calculation model to output the indicator data.
As another aspect, embodiments of the present disclosure also provide a computer program product, including a computer program that, when executed by a processor, implements the method described in any of the above embodiments.
According to the technical solutions of the embodiments of the present disclosure, because the technical means of screening out sample time series data similar to the time series data to be measured and performing feature extraction on the sample time series data to construct a data set is adopted, the technical problems of high manpower and time consumption and of data sparsity in the prior art are overcome. By extracting features from sample time series data similar to the time series data to be measured and constructing a data set, the embodiments of the present disclosure solve the problem of data sparsity, so that even complex machine learning or deep learning models can be used, which effectively improves the calculation accuracy of the model and also effectively reduces human resource and time costs.
The above specific embodiments do not limit the protection scope of the present disclosure. Those skilled in the art should understand that various modifications, combinations, sub-combinations and substitutions may occur depending on design requirements and other factors. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present disclosure shall be included in the protection scope of the present disclosure.

Claims (12)

  1. A method for calculating indicator data, comprising:
    screening out sample time series data similar to the time series data to be measured, and performing feature extraction on the sample time series data to construct a data set; wherein the indicator corresponding to the time series data to be measured is different from the indicator corresponding to the sample time series data, and the number of entries of the time series data to be measured is less than the number of entries of the sample time series data;
    training an indicator calculation model using the data set to obtain a trained indicator calculation model;
    extracting features to be measured from the time series data to be measured, and inputting the features to be measured into the trained indicator calculation model to output indicator data.
  2. The method according to claim 1, wherein screening out sample time series data similar to the time series data to be measured comprises:
    inputting the time series data to be measured and the respective sample time series data into a trained encoder, and outputting the encoding vector corresponding to the time series data to be measured and the encoding vectors corresponding to the respective sample time series data;
    clustering the encoding vectors corresponding to the respective sample time series data using a clustering algorithm to obtain a plurality of clusters and a feature center vector corresponding to each cluster;
    screening out several sample time series data similar to the time series data to be measured based on the encoding vector corresponding to the time series data to be measured and the feature center vectors corresponding to the respective clusters.
  3. The method according to claim 2, wherein the number of clusters is the square root of the total number of the sample time series data.
  4. The method according to claim 2, wherein screening out several sample time series data similar to the time series data to be measured based on the encoding vector corresponding to the time series data to be measured and the feature center vectors corresponding to the respective clusters comprises:
    calculating the similarity between the encoding vector corresponding to the time series data to be measured and the feature center vector corresponding to each cluster, and screening out the N clusters with the greatest similarity to the time series data to be measured;
    calculating the similarity between the encoding vector corresponding to the time series data to be measured and the encoding vector corresponding to each sample time series data in the N clusters, and screening out the M sample time series data with the greatest similarity to the time series data to be measured;
    wherein N is less than M, and N and M are both positive integers.
  5. The method according to claim 1, wherein performing feature extraction on the sample time series data to construct a data set comprises:
    for each sample time series data, using a sliding window to extract features at each moment in the sample time series data, obtaining the sample features and sample label corresponding to each moment; wherein the sample features include the moment and the indicator data before the moment, and the sample label includes the indicator data after the moment;
    constructing a data set based on the sample features and sample labels corresponding to the respective moments in the respective sample time series data.
  6. The method according to claim 5, wherein the sample features further include a time feature corresponding to the moment.
  7. The method according to claim 1, wherein training the indicator calculation model using the data set to obtain the trained indicator calculation model comprises:
    dividing the data set into a training data set, a validation data set and a test data set;
    using the training data set and the validation data set, and based on a grid search algorithm and a TPE search algorithm, calculating the optimal parameters of each model to obtain the respective indicator calculation models;
    adjusting the parameters of the respective indicator calculation models using the time series data to be measured to obtain the parameter-adjusted indicator calculation models;
    testing the parameter-adjusted indicator calculation models using the test set to screen out the indicator calculation model with the best test results.
  8. The method according to claim 7, wherein adjusting the parameters of the respective indicator calculation models using the time series data to be measured to obtain the parameter-adjusted indicator calculation models comprises:
    using a sliding window to extract features at each moment in the time series data to be measured, obtaining the sample features and sample label corresponding to each moment; wherein the sample features include the moment and the indicator data before the moment, and the sample label includes the indicator data after the moment;
    using the sample features and sample labels corresponding to the respective moments in the time series data to be measured, adjusting the parameters of the respective indicator calculation models to obtain the parameter-adjusted indicator calculation models.
  9. A device for calculating indicator data, comprising:
    a screening module, configured to screen out sample time series data similar to the time series data to be measured and to perform feature extraction on the sample time series data to construct a data set; wherein the indicator corresponding to the time series data to be measured is different from the indicator corresponding to the sample time series data, and the number of entries of the time series data to be measured is less than the number of entries of the sample time series data;
    a training module, configured to train the indicator calculation model using the data set to obtain the trained indicator calculation model;
    a calculation module, configured to extract the features to be measured from the time series data to be measured and to input the features to be measured into the trained indicator calculation model to output the indicator data.
  10. An electronic device, comprising:
    one or more processors;
    a storage device, configured to store one or more programs,
    wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method according to any one of claims 1-8.
  11. A computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-8.
  12. A computer program product, comprising a computer program that, when executed by a processor, implements the method according to any one of claims 1-8.
PCT/CN2023/081815 2022-07-27 2023-03-16 Method and device for calculating indicator data WO2024021630A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210895954.8 2022-07-27
CN202210895954.8A CN115828075A (zh) 2022-07-27 2022-07-27 Method and device for calculating indicator data

Publications (1)

Publication Number Publication Date
WO2024021630A1 true WO2024021630A1 (zh) 2024-02-01

Family

ID=85522934

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/081815 WO2024021630A1 (zh) 2022-07-27 2023-03-16 Method and device for calculating indicator data

Country Status (2)

Country Link
CN (1) CN115828075A (zh)
WO (1) WO2024021630A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107346464A (zh) * 2016-05-06 2017-11-14 腾讯科技(深圳)有限公司 业务指标预测方法及装置
CN109376936A (zh) * 2018-10-31 2019-02-22 平安直通咨询有限公司 房屋价值预测方法、装置、计算机设备和存储介质
CN109993205A (zh) * 2019-02-28 2019-07-09 东软集团股份有限公司 时间序列预测方法、装置、可读存储介质及电子设备
CN110009384A (zh) * 2019-01-07 2019-07-12 阿里巴巴集团控股有限公司 预测业务指标的方法及装置
US20200104174A1 (en) * 2018-09-30 2020-04-02 Ca, Inc. Application of natural language processing techniques for predicting resource consumption in a computing system
CN114202123A (zh) * 2021-12-14 2022-03-18 深圳壹账通智能科技有限公司 业务数据预测方法、装置、电子设备及存储介质


Also Published As

Publication number Publication date
CN115828075A (zh) 2023-03-21

Similar Documents

Publication Publication Date Title
US20200050968A1 (en) Interactive interfaces for machine learning model evaluations
Cape et al. Signal-plus-noise matrix models: eigenvector deviations and fluctuations
WO2020114022A1 (zh) 一种知识库对齐方法、装置、计算机设备及存储介质
CN103336790B (zh) 基于Hadoop的邻域粗糙集快速属性约简方法
US20190392258A1 (en) Method and apparatus for generating information
TWI705341B (zh) 特徵關係推薦方法及裝置、計算設備及儲存媒體
CN103336791B (zh) 基于Hadoop的粗糙集快速属性约简方法
CN107145485A (zh) 用于压缩主题模型的方法和装置
CN114329201A (zh) 深度学习模型的训练方法、内容推荐方法和装置
WO2023019933A1 (zh) 构建检索数据库的方法、装置、设备以及存储介质
CN111191825A (zh) 用户违约预测方法、装置及电子设备
CN114549874A (zh) 多目标图文匹配模型的训练方法、图文检索方法及装置
KR20210124109A (ko) 정보 처리, 정보 추천의 방법과 장치, 전자 기기, 저장 매체 및 컴퓨터 프로그램 제품
TW202217597A (zh) 圖像的增量聚類方法、電子設備、電腦儲存介質
CN115796310A (zh) 信息推荐及模型训练方法、装置、设备和存储介质
CN115410199A (zh) 图像内容检索方法、装置、设备及存储介质
CN113282433B (zh) 集群异常检测方法、装置和相关设备
US20220277031A1 (en) Guided exploration for conversational business intelligence
CN112418258A (zh) 一种特征离散化方法和装置
CN111930944B (zh) 文件标签分类方法及装置
CN110503117A (zh) 数据聚类的方法和装置
CN110807097A (zh) 分析数据的方法和装置
CN112231299A (zh) 一种特征库动态调整的方法和装置
WO2024021630A1 (zh) 一种计算指标数据的方法和装置
CN116467141A (zh) 日志识别模型训练、日志聚类方法和相关系统、设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23844848

Country of ref document: EP

Kind code of ref document: A1