CN111476629A - Data prediction method and device, electronic equipment and storage medium - Google Patents

Data prediction method and device, electronic equipment and storage medium

Info

Publication number
CN111476629A
CN111476629A (application number CN202010153163.9A)
Authority
CN
China
Prior art keywords
evaluation
model
target
parameter
evaluation parameter
Prior art date
Legal status
Pending
Application number
CN202010153163.9A
Other languages
Chinese (zh)
Inventor
孙慧楠
王兴星
王永康
Current Assignee
Beijing Sankuai Online Technology Co Ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sankuai Online Technology Co Ltd
Priority to CN202010153163.9A
Publication of CN111476629A
Legal status: Pending

Classifications

    • G06Q 30/0631 Item recommendations (electronic shopping)
    • G06N 20/00 Machine learning
    • G06N 3/048 Activation functions (neural network architectures)
    • G06N 3/08 Learning methods (neural networks)
    • G06Q 10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06Q 10/06393 Score-carding, benchmarking or key performance indicator [KPI] analysis
    • G06Q 10/067 Enterprise or organisation modelling
    • G06Q 30/0603 Catalogue ordering (electronic shopping)

Abstract

The application provides a data prediction method and apparatus, an electronic device, and a storage medium. The method comprises the following steps: for any optional model in a target scene, obtaining evaluation samples of the target scene, the real value corresponding to each evaluation sample, and the model's estimated value for each evaluation sample, the target scene including an expectation maximization scene for a target parameter; sorting the evaluation samples according to a first sorting mode that takes the estimated value of each evaluation sample as a reference, and obtaining a first evaluation parameter of the model according to the differences between the real values of any two adjacent evaluation samples under this sorting; sorting the evaluation samples according to a second sorting mode that takes the real value of each evaluation sample as a reference, and obtaining a second evaluation parameter of the model according to the differences between the real values of any two adjacent evaluation samples under that sorting; determining a target model adapted to the target scene according to the first evaluation parameter and the second evaluation parameter of each model; and obtaining prediction data of the target parameter of any object in the target scene through the target model. A more reasonable and accurate evaluation method is thereby provided for adapting models to different scenes, improving the accuracy of data prediction results.

Description

Data prediction method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of machine learning technologies, and in particular, to a data prediction method, an apparatus, an electronic device, and a storage medium.
Background
In machine learning, a trained model needs to be evaluated offline so that candidate models can be compared before they go online. In particular, in application scenarios that pursue expectation maximization, such as revenue maximization and click-through-rate maximization, the performance of the selected model is critical.
Taking the prediction of advertisement GMV (Gross Merchandise Volume, i.e. website transaction amount) as an example: to increase the website transaction amount, higher-priced commodities can be recommended to the user, and the model predicts the prices of commodities on the website. Currently, RMSE (Root Mean Square Error) is generally used as the measurement index of such a model, evaluating whether the values predicted by the model are close to the true values. On the other hand, when recommending commodities, in order to maximize revenue, objects with a high probability of being transacted need to be recommended preferentially, that is, placed at the front of the recommendation list. Currently, AUC is commonly used to evaluate whether the model distinguishes positive and negative samples accurately; the ROC (Receiver Operating Characteristic) curve and AUC are often used to evaluate the merits of a binary classifier, and AUC is the area under the ROC curve. However, RMSE cannot measure whether the model preferentially recommends the objects that satisfy expectation maximization, and AUC cannot measure the difference between different ranking results. In other words, the related technical solutions cannot accurately estimate the performance of different models in the same scene, which makes it difficult to adapt a suitable model, so the accuracy of the model prediction results is not high.
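For concreteness, the two conventional metrics mentioned above could be computed offline roughly as follows; this is only an illustrative sketch, and the use of NumPy and scikit-learn's roc_auc_score is an assumption of the illustration rather than part of this application.

import numpy as np
from sklearn.metrics import roc_auc_score

def rmse(y_true, y_pred):
    # Root Mean Square Error: how close the predicted values are to the true values.
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def auc(y_label, y_score):
    # AUC: area under the ROC curve, measuring how well the predicted scores
    # separate positive samples from negative samples.
    return float(roc_auc_score(y_label, y_score))

As discussed above, neither metric captures whether ranking by the predicted values actually places the objects that satisfy expectation maximization at the front, which is the gap the present scheme addresses.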
Disclosure of Invention
The embodiment of the application provides a data prediction method, a data prediction device, electronic equipment and a storage medium, and aims to solve the problem that the accuracy of a model prediction result is not high in the related technology.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a data prediction method, including:
aiming at any optional model in a target scene, obtaining evaluation samples of the target scene, a real value corresponding to each evaluation sample and a pre-estimated value of the model aiming at each evaluation sample, wherein the target scene comprises an expectation maximization scene aiming at target parameters, and the target parameters comprise at least one of quality parameters, attribute parameters and operation parameters;
sequencing the evaluation samples according to a first sequencing mode taking the estimated value of each evaluation sample as a reference, and acquiring a first evaluation parameter of the model according to the difference value of the corresponding real values of any two adjacent evaluation samples sequenced this time;
sorting the evaluation samples according to a second sorting mode taking the real value of each evaluation sample as a reference, and acquiring a second evaluation parameter of the model according to the difference value of the real values corresponding to any two adjacent evaluation samples after sorting;
determining a target model adapted to the target scene according to the first evaluation parameter and the second evaluation parameter of each model;
and acquiring the prediction data of the target parameters of any object in the target scene through the target model.
In a second aspect, an embodiment of the present application provides a data prediction apparatus, including:
the evaluation data acquisition module is used for acquiring evaluation samples of a target scene, a real value corresponding to each evaluation sample and a pre-evaluation value of the model for each evaluation sample aiming at any optional model in the target scene, wherein the target scene comprises an expectation maximization scene aiming at target parameters, and the target parameters comprise at least one of quality parameters, attribute parameters and operation parameters;
the first evaluation parameter acquisition module is used for sequencing the evaluation samples according to a first sequencing mode taking the estimated value of each evaluation sample as a reference, and acquiring a first evaluation parameter of the model according to the difference value of real values corresponding to any two adjacent evaluation samples sequenced this time;
the second evaluation parameter acquisition module is used for sequencing the evaluation samples according to a second sequencing mode taking the real value of each evaluation sample as a reference, and acquiring a second evaluation parameter of the model according to the difference value of the real values corresponding to any two adjacent evaluation samples sequenced this time;
the model adaptation module is used for determining a target model adapted to the target scene according to the first evaluation parameter and the second evaluation parameter of each model;
and the data prediction module is used for acquiring prediction data of target parameters of any object in the target scene through the target model.
In a third aspect, an embodiment of the present application additionally provides an electronic device, including: a memory, a processor and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the data prediction method as described above.
In a fourth aspect, the present embodiment provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the steps of the data prediction method as described above.
In the embodiment of the application, by aiming at any optional model in a target scene, the evaluation samples of the target scene, the real value corresponding to each evaluation sample and the estimated value of the model aiming at each evaluation sample are obtained; sequencing the evaluation samples according to a first sequencing mode taking the estimated value of each evaluation sample as a reference, and acquiring a first evaluation parameter of the model according to the difference value of the corresponding real values of any two adjacent evaluation samples sequenced this time; sorting the evaluation samples according to a second sorting mode taking the real value of each evaluation sample as a reference, and acquiring a second evaluation parameter of the model according to the difference value of the real values corresponding to any two adjacent evaluation samples after sorting; determining a target model adapted to the target scene according to the first evaluation parameter and the second evaluation parameter of each model; and acquiring the prediction data of the target parameters of any object in the target scene through the target model. Therefore, the method has the advantage of improving the accuracy of the prediction result.
The foregoing description is only an overview of the technical solutions of the present application, and the present application can be implemented according to the content of the description in order to make the technical means of the present application more clearly understood, and the following detailed description of the present application is given in order to make the above and other objects, features, and advantages of the present application more clearly understandable.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the description of the embodiments of the present application will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a flow chart of the steps of a data prediction method in an embodiment of the present application;
FIG. 2 is a flow chart of steps of another data prediction method in an embodiment of the present application;
FIG. 3 shows a schematic structural diagram of a model in an embodiment of the present application;
FIG. 4 is a schematic diagram showing a structure of a processing layer in a model in an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a data prediction apparatus in an embodiment of the present application;
FIG. 6 is a schematic structural diagram of another data prediction apparatus in the embodiment of the present application;
fig. 7 is a schematic hardware structure diagram of an electronic device in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, a flow chart illustrating steps of a data prediction method in an embodiment of the present application is shown.
Step 110, aiming at any optional model in a target scene, obtaining evaluation samples of the target scene, a real value corresponding to each evaluation sample, and a pre-estimated value of the model aiming at each evaluation sample, wherein the target scene comprises an expectation maximization scene aiming at target parameters, and the target parameters comprise at least one of quality parameters, attribute parameters, and operation parameters.
In practical applications, with the rapid development of machine learning, there are more and more types of models, so there can be multiple choices of model for the same application scene. Moreover, different models may perform differently well in the same application scene, and the same model may likewise perform differently well in different application scenes. Therefore, in application scenarios that involve models, in order to improve model performance across different application scenes, the best-performing model is adapted for each field, and the trained models can be subjected to performance evaluation and model adaptation. In the embodiment of the application, in order to improve the degree to which the model used in a scene is adapted to that scene, and thereby improve the accuracy of the prediction results, performance evaluation can be performed on multiple models in the same application scene so as to select a model adapted to the target scene for data prediction.
For example, for a profit maximization application scenario, assume there are two commodities as evaluation samples: the actual price of commodity a, that is, its true value, is 20, and the actual price of commodity b is 30. Suppose model A's estimated price for commodity a, that is, its estimated value, is 27 and its estimated price for commodity b is 26; model A will then preferentially recommend commodity a, which has the higher estimated price, to the user, but because the actual price of commodity a is lower than that of commodity b, the higher transaction amount is not achieved. If model B's estimated price for commodity a is 27 and its estimated price for commodity b is 33, model B preferentially recommends commodity b, which has the higher estimated price, to the user, and since the actual price of commodity b is higher, a higher transaction amount can be achieved. Thus, different models perform differently in different application scenes, the prediction results of a model that is better adapted to the target scene are more accurate, and selecting a model with suitable performance plays a crucial role.
Moreover, in order to improve the accuracy of the prediction result, evaluation samples of various selectable models in the target scene, a true value corresponding to each evaluation sample, and a predicted value of each evaluation sample by the model may be obtained. The model may be a trained model, and the training sample of the model may include an evaluation sample for evaluation, and of course, the training sample may not include the evaluation sample, and may be specifically set by user according to the requirement, which is not limited in the embodiment of the present application. The evaluation sample and the real value corresponding to the evaluation sample may be preset according to an application scenario of the model, and the like, and the embodiment of the present application is not limited. Furthermore, in the embodiments of the present application, the evaluation sample and the actual value of the evaluation sample may be obtained in any available manner, and the embodiments of the present application are not limited thereto.
For example, for an application scenario with maximized income, in order to increase the website deal amount, a commodity with a higher price is generally recommended to a user within an acceptable range of the user, then the evaluation sample may be a commodity, the true value corresponding to the evaluation sample may be the true price of the commodity, and the predicted value may be the estimated price of the model for the evaluation sample.
And 120, sequencing the evaluation samples according to a first sequencing mode taking the estimated value of each evaluation sample as a reference, and acquiring a first evaluation parameter of the model according to the difference value of the true values corresponding to any two adjacent evaluation samples sequenced this time.
Step 130, ranking the evaluation samples according to a second ranking mode taking the real value of each evaluation sample as a reference, and obtaining a second evaluation parameter of the model according to the difference value of the real values corresponding to any two adjacent evaluation samples ranked this time.
After obtaining the true value and the estimated value of each evaluation sample, the evaluation parameters for evaluating the model can be further obtained.
Moreover, in practical applications, in order to achieve expectation maximization there is generally an optimal mode, and the closer the prediction results of a model are to that optimal mode, the better the application performance of the model in the corresponding scene. In this embodiment, in order to evaluate the performance of the model, reference data for the optimal mode achievable under the corresponding model and reference data for what the model actually achieves may be obtained, and an adapted model may then be selected for the target scene according to these two kinds of reference data for each model.
In addition, for the model, the ranking reference is generally the estimated value of each evaluation sample, but the benefit actually generated is the real value of the evaluation sample. Therefore, in the embodiment of the present application, the evaluation samples may be sorted according to a first sorting mode that takes the estimated value of each evaluation sample as a reference, and a first evaluation parameter of the model may be obtained according to the difference between the real values corresponding to any two adjacent evaluation samples under this sorting; and the evaluation samples may be sorted according to a second sorting mode that takes the real value of each evaluation sample as a reference, and a second evaluation parameter of the model may be obtained according to the difference between the real values corresponding to any two adjacent evaluation samples under that sorting. The first evaluation parameter can be understood as the reference data the model actually achieves, and the second evaluation parameter can be understood as the reference data of the best mode achievable by the model. Specifically, the evaluation samples may be sorted according to the estimated value of each evaluation sample and a first evaluation parameter of the model obtained according to the difference between the true values of each group of two adjacent evaluation samples after that sorting; the evaluation samples may then be sorted according to the true value of each evaluation sample and a second evaluation parameter of the model obtained according to the difference between the true values of each group of two adjacent evaluation samples after that sorting. Moreover, the relationship between the first evaluation parameter and the differences between the true values of each group of two adjacent evaluation samples sorted by estimated value, and the relationship between the second evaluation parameter and the differences between the true values of each group of two adjacent evaluation samples sorted by true value, can be set by user according to requirements, and the embodiment of the application is not limited.
For example, the first evaluation parameter may be set as the sum of the differences between the true values of each group of two adjacent evaluation samples after the predicted samples are sorted according to the predicted values, and correspondingly, the sum of the difference between the second evaluation parameter and the true values of each group of two adjacent evaluation samples after the predicted samples are sorted according to the true values; or the first evaluation parameter may be an average value of the difference between the true values of each group of two adjacent evaluation samples after the predicted samples are sorted according to the predicted value, and correspondingly, the second evaluation parameter is an average value of the difference between the true values of each group of two adjacent evaluation samples after the predicted samples are sorted according to the true values; and so on.
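As an illustration only of the sum-of-differences variant just described (the function and variable names below are hypothetical, not terms used by this application), the two evaluation parameters could be computed as follows:

def adjacent_difference_sum(true_values, sort_keys, reverse=True):
    # Sort the evaluation samples by the given keys (estimated values for the
    # first evaluation parameter, true values for the second), then sum the
    # differences between the true values of each pair of adjacent samples.
    order = sorted(range(len(true_values)), key=lambda k: sort_keys[k], reverse=reverse)
    ranked = [true_values[k] for k in order]
    return sum(ranked[i] - ranked[i + 1] for i in range(len(ranked) - 1))

# First evaluation parameter: sort by the model's estimated values.
# first_param = adjacent_difference_sum(labels, predictions)
# Second evaluation parameter: sort by the true values themselves.
# second_param = adjacent_difference_sum(labels, labels)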
When the evaluation samples are sorted according to the estimated value and the real value of each evaluation sample, the sorting mode of each sorting can be set by user according to requirements, and the embodiment of the application is not limited.
For example, the sorting modes of the two sorts can be set to be in the order from big to small, that is, the reverse order; or the sorting modes of the two sorts can be set to be in the order from small to large, namely the positive order; or, the sorting mode of one time can be set to be in a descending order, namely a reverse order, and the sorting mode of the other time is in a descending order, namely a positive order; and so on.
Moreover, when calculating the difference between the real values of each set of two adjacent evaluation samples, the difference between the real value of each evaluation sample and the real value of the next evaluation sample may be obtained uniformly, or the difference between the real value of each evaluation sample and the real value of the previous evaluation sample may also be obtained uniformly, and so on. The user-defined setting can be specifically performed according to the requirement, and the embodiment of the application is not limited.
Step 140, determining a target model adapted to the target scene according to the first evaluation parameter and the second evaluation parameter of each model.
After obtaining the first evaluation parameter and the second evaluation parameter, the target model adapted to the target scene may be determined according to the first evaluation parameter and the second evaluation parameter of each model. The selection strategy of the target model and the corresponding relationship between the first evaluation parameter and the second evaluation parameter can be set by user according to requirements, and the embodiment of the application is not limited.
For example, assuming that the first evaluation parameter can be understood as the reference data of the model actually achieved and is proportional to the performance of the model actually achieved, and the second evaluation parameter can be understood as the reference data of the model in the best mode achievable and is also proportional to the best performance achievable by the model, the model with the smallest difference between the first evaluation parameter and the second evaluation parameter can be obtained as the target model adapted to the target scene. And the difference may be characterized by, but not limited to, the difference between the second evaluation parameter and the first evaluation parameter, the ratio of the difference between the second evaluation parameter and the first evaluation parameter to the second evaluation parameter, and so on.
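A minimal sketch of the "smallest difference" selection strategy described above, under the assumption that each candidate model has already been reduced to its pair of evaluation parameters (the mapping and model names below are hypothetical):

def select_target_model(models):
    # models: mapping from model name to (first_param, second_param).
    # Choose the model whose actually achieved parameter is closest to the
    # parameter of its best achievable ordering, i.e. smallest |second - first|.
    return min(models, key=lambda name: abs(models[name][1] - models[name][0]))

# Example: select_target_model({"model_A": (6.0, 10.0), "model_B": (9.0, 10.0)})
# returns "model_B", whose achieved parameter is closest to its best achievable one.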
Furthermore, the scheme is applicable to scenarios including, but not limited to, expectation maximization scenarios; any scenario in which the actual values concerned can be ordered can use it. In particular, it provides a more reasonable and accurate model evaluation method for expectation maximization scenes of a target parameter in regression problems. The target parameter may include, but is not limited to, at least one of a quality parameter (e.g., a video quality parameter, a text quality parameter, a call quality parameter, a network quality parameter, etc.), an attribute parameter (e.g., a category, an audio/video viewing duration, a profit value, etc.), and an operation parameter (e.g., a click-through rate, an order placement rate, etc.). For example, for a revenue maximization application scenario, compared with the RMSE index the scheme can measure whether large-order samples are ranked before small-order samples; compared with the AUC index it can measure the difference between ranking a large order at the front of the list and ranking a small order at the front of the list; and it can measure the real revenue obtained when the model's estimation ranks large orders ahead of small orders. This improves the accuracy of the prediction results and avoids the problem of offline AUC and RMSE being inconsistent with the actual online effect.
Optionally, in an embodiment, the attribute parameter may include, but is not limited to, at least one of a profit value and an audiovisual viewing duration; the operation parameter may include, but is not limited to, at least one of click rate, order placing rate, recommendation rate, and browsing rate.
And 150, acquiring prediction data of target parameters of any object in the target scene through the target model.
After determining the target model adapted to the target scene, the prediction data of the target parameter of any object in the target scene may be further obtained through the target model. The target parameters can be set in a user-defined manner according to the requirements of the target scene, and the embodiment of the application is not limited.
In the embodiment of the application, by aiming at any optional model in a target scene, the evaluation samples of the target scene, the real value corresponding to each evaluation sample and the estimated value of the model aiming at each evaluation sample are obtained; sequencing the evaluation samples according to a first sequencing mode taking the estimated value of each evaluation sample as a reference, and acquiring a first evaluation parameter of the model according to the difference value of the corresponding real values of any two adjacent evaluation samples sequenced this time; sorting the evaluation samples according to a second sorting mode taking the real value of each evaluation sample as a reference, and acquiring a second evaluation parameter of the model according to the difference value of the real values corresponding to any two adjacent evaluation samples after sorting; determining a target model adapted to the target scene according to the first evaluation parameter and the second evaluation parameter of each model; and acquiring the prediction data of the target parameters of any object in the target scene through the target model. Therefore, a proper target model can be accurately adapted to the target scene, and the accuracy of the prediction result is further improved.
Referring to fig. 2, in this embodiment of the present application, the step 120 may further include: according to a first sequencing mode taking the estimated value of each evaluation sample as a reference, sequencing the evaluation samples, and obtaining the sum of the differences of the real values corresponding to any two adjacent evaluation samples sequenced this time as the first evaluation parameter;
the step 130 may further include: and sequencing the evaluation samples according to a second sequencing mode taking the real value of each evaluation sample as a reference, and acquiring the sum of the difference values of the real values corresponding to any two adjacent evaluation samples sequenced this time as the second evaluation parameter.
In the embodiment of the present application, in order to obtain a first evaluation parameter and a second evaluation parameter quickly and accurately, the evaluation samples may be sorted according to a first sorting manner that takes the estimated value of each evaluation sample as a reference, and a sum of differences of real values corresponding to any two adjacent evaluation samples after the sorting is obtained as the first evaluation parameter; correspondingly, the evaluation samples are sorted according to a second sorting mode taking the real value of each evaluation sample as a reference, and the sum of the difference values of the real values corresponding to any two adjacent evaluation samples after the sorting is taken as the second evaluation parameter.
The first ordering manner may be the same as or different from the second ordering manner, and the embodiment of the present application is not limited thereto. For example, the first sorting manner and the second sorting manner may be set to be any one of a large-to-small order or a small-to-large order.
Referring to fig. 2, in the embodiment of the present application, the step 140 may further include:
step 141, obtaining the adaptation degree of each model according to the first evaluation parameter and the second evaluation parameter of each model.
And 142, selecting the model with the highest adaptation degree as the target model adapted to the target scene according to the adaptation degree of each model in the target scene.
In order to accurately select the target model most suitable for the target scene from the selectable models, the degree of adaptation of each model may be further obtained according to the first evaluation parameter and the second evaluation parameter of each model. The adaptation degree and the corresponding relation between the first evaluation parameter and the second evaluation parameter can be set by user according to requirements, and the embodiment of the application is not limited.
For example, assuming that the first evaluation parameter can be understood as the reference data of the model actually achieved, and the first evaluation parameter is proportional to the performance of the model actually achieved, and the second evaluation parameter can be understood as the reference data of the model in the best mode achievable, and the second evaluation parameter is also proportional to the best performance achievable by the model, the ratio between the first evaluation parameter and the second evaluation parameter can be obtained as the degree of adaptation of the model to the target scene. At this time, the higher the adaptation degree is, the more the corresponding model is adapted to the target scene; alternatively, a ratio of the second evaluation parameter to a first difference may also be obtained as the adaptation degree of the model to the target scene, where the first difference is an absolute value of a difference between the first evaluation parameter and the second evaluation parameter, and so on.
In the embodiment of the application, after performance evaluation is performed on a plurality of models in the same application scene and the adaptation degrees are obtained, in order to obtain the optimal effect in the target scene, the target model with the highest adaptation degree may be selected as the actual application model in the application scene according to the adaptation degrees of each model and the target scene which are alternative in the same target scene.
Optionally, in this embodiment of the present application, the step 141 further includes:
step 1411, in response to that the sorting order of the first sorting manner is the same as the sorting order of the second sorting manner, obtaining a ratio of the first evaluation parameter to the second evaluation parameter as the degree of adaptation of the model;
step 1412, in response to that the sorting order of the first sorting manner is opposite to that of the second sorting manner, obtaining a negative number of a ratio of the first evaluation parameter to the second evaluation parameter as the adaptation degree of the model.
If the sorting order of the first sorting mode is the same as that of the second sorting mode, then the first evaluation parameter and the second evaluation parameter are correlated with the actual performance of the model in the same way (for example, both positively or both negatively), so the ratio of the first evaluation parameter to the second evaluation parameter can be taken directly as the degree of adaptation between the corresponding model and the target scene.
If the sorting order of the first sorting mode is opposite to that of the second sorting mode, the negative of the ratio of the first evaluation parameter to the second evaluation parameter can be taken as the degree of adaptation between the model and the target scene.
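Steps 1411 and 1412 could be combined in a single helper, sketched here under the assumption that each sorting order is represented by a boolean flag (the function name is hypothetical):

def adaptation_degree(first_param, second_param, first_descending, second_descending):
    # Ratio of the actually achieved evaluation parameter to the best achievable one.
    ratio = first_param / second_param
    # Same sort order for both rankings: use the ratio directly (step 1411);
    # opposite sort orders: negate the ratio (step 1412).
    return ratio if first_descending == second_descending else -ratio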
For example, the evaluation samples may be arranged in reverse (descending) order of their estimated values, and then the sum of the differences between the true value of each evaluation sample under this ordering and the true value of every sample point after it is calculated to obtain the first evaluation parameter A, that is:
A = 0
for i in range(length):
    for j in range(i + 1, length):
        A += label[i] - label[j]
Correspondingly, the evaluation samples may be arranged in reverse (descending) order of their true values, and then the sum of the differences between the true value of each evaluation sample under this ordering and the true value of every sample point after it is calculated; this gives the maximum benefit the model could obtain, used as the denominator Z, that is:
Z = 0
for m in range(length):
    for n in range(m + 1, length):
        Z += label[m] - label[n]
where length is the total number of evaluation samples and label[i] denotes the real value of the i-th evaluation sample under the corresponding sorting.
For example, assume that there are two commodities as evaluation samples: the actual price of item 1, that is, its true value, is label_1 = 20, and the actual price of item 2 is label_2 = 30. Assume the model's estimated price for item 1, that is, its estimated value, is Pre_1 = 27 and its estimated price for item 2 is Pre_2 = 21. At this time, if the first sorting manner and the second sorting manner are both reverse (descending) orders, sorting the evaluation samples according to the first sorting manner, which takes the estimated value of each evaluation sample as a reference, gives the order (item 1, item 2), while sorting them according to the second sorting manner, which takes the real value of each evaluation sample as a reference, gives the order (item 2, item 1). The first evaluation parameter is then label_1 - label_2 = -10, the second evaluation parameter is label_2 - label_1 = 10, and the adaptation degree is -10/10, i.e., -1.
If instead the first sorting manner is positive (ascending) order and the second sorting manner is reverse (descending) order, sorting the evaluation samples according to the first sorting manner, based on the estimated values, gives the order (item 2, item 1), and sorting them according to the second sorting manner, based on the real values, also gives the order (item 2, item 1). The first evaluation parameter is then 30 - 20 = 10 and the second evaluation parameter is 30 - 20 = 10, so the adaptation degree is -(10/10), which is again -1.
Moreover, since the degree of adaptation at this time is a ratio, when the target model is selected according to the degree of adaptation, the model with the degree of adaptation closest to 1 may also be selected as the target model, and the embodiment of the present application is not limited thereto.
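Combining the two summations above into one self-contained sketch that reproduces this two-commodity example (the list representation and the reconstructed estimated price of item 2 are assumptions of the illustration):

def pairwise_gain(values):
    # Sum of (earlier - later) over every pair of positions in the given order;
    # this is the quantity the A and Z loops above accumulate.
    total = 0
    for i in range(len(values)):
        for j in range(i + 1, len(values)):
            total += values[i] - values[j]
    return total

labels = [20, 30]       # true prices of item 1 and item 2
estimates = [27, 21]    # estimated prices of item 1 and item 2 (reconstructed example values)

# Both sorting manners in reverse (descending) order:
by_estimate = [lbl for _, lbl in sorted(zip(estimates, labels), reverse=True)]
by_label = sorted(labels, reverse=True)
A = pairwise_gain(by_estimate)   # -10
Z = pairwise_gain(by_label)      # 10
print(A / Z)                     # -1.0, the adaptation degree of the example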
Referring to fig. 2, in this embodiment of the present application, the step 110 may further include:
step 111, obtaining an object which is successfully launched within a preset time period in the target scene and a real value of the object during launching, and taking the object as the evaluation sample;
step 112, obtaining the delivery information of each object, and according to the delivery information of each object, estimating and obtaining a pre-estimated value of each object through the model; the delivery information comprises at least one of delivery environment parameters of the object, attribute parameters of the object, user parameters of a receiving user of the object and operation parameters of the receiving user for the object.
In the embodiment of the application, in order to improve the accuracy of the adaptation model, the quality of the evaluation sample may be improved, and then, for the target scene, an object that is successfully delivered within a preset time period in the corresponding target scene may be selected as the evaluation sample, and accordingly, a real value of each object that is successfully delivered during delivery may be obtained. Moreover, in order to obtain the estimated value of each evaluation sample, the delivery information of each object may be further obtained, and the estimated value of each object may be output through the model estimation according to the delivery information of each object as the input of the model.
The delivery information may include, but is not limited to, at least one of a delivery environment parameter of the subject, an attribute parameter of the subject, a user parameter of a receiving user of the subject, and an operation parameter of the receiving user for the subject. The parameters of the delivery environment may include parameters related to the real environment and the virtual environment, such as the network environment (e.g. wireless network, broadband network, etc.) when the object is delivered, the scene environment (e.g. delivery time, delivery platform, etc.), and the like; the attribute parameters of the object may be any parameters related to the object itself, such as the name of the object, the category to which the object belongs, the description of the object, etc.; the user parameters of the receiving user of the object may include any user parameters of the receiving user of the corresponding object at the time of successful delivery, such as user identification, user gender, user occupation, user age, user preferences, and so on; receiving the user's operational parameters for the object may include receiving parameters of any operation of the user for the respective object after receiving the delivered content for the respective object, such as the operation specifically performed (e.g., selecting, viewing, collecting, placing an order, etc.), the duration of each operation, and so forth. The content specifically contained in the release information can be set by user according to the requirement, and the embodiment of the application is not limited. The preset time period can also be set by self according to requirements, and the embodiment of the application is not limited. For example, the preset time period may be set to one week, one month, or the like before performance evaluation is performed for each model in the same application scenario.
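Purely as an illustration of how one delivery record for an evaluation sample might be organized, with every field name being hypothetical rather than a term of this application:

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DeliveryRecord:
    # One successfully delivered object within the preset time period; the
    # fields correspond to the four kinds of delivery information listed above.
    object_id: str
    true_value: float                                           # real value at delivery time (e.g. deal price)
    environment: Dict[str, str] = field(default_factory=dict)   # network, platform, delivery time, ...
    attributes: Dict[str, str] = field(default_factory=dict)    # name, category, description, ...
    user: Dict[str, str] = field(default_factory=dict)          # authorized receiving-user parameters
    operations: List[str] = field(default_factory=list)         # click, view, order, ... by the receiving user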
In addition, it should be noted that, in the embodiment of the present application, when the user parameter in the delivery information of the object is obtained, authorization of the corresponding user may be applied in advance, and then the required user parameter is obtained under the condition of user authorization.
Secondly, in different application scenes, conditions which need to be met when the releasing is successful can be set in a user-defined mode according to requirements. For example, in an application scenario where the volume of interest of the website is maximized, the conditions that may need to be met for successful delivery may include that objects such as commodities are pushed to the user and the corresponding user successfully places an order for the corresponding object after receiving the push, that is, the volume of interest is also maximized; in the promoted application scene, the condition that the delivery success needs to be met can include that each object to be recommended is successfully rendered at any client; and so on.
Moreover, in the embodiment of the present application, in order to improve the accuracy of the model adaptation degree, a new evaluation object may be periodically obtained to evaluate the models in the same application scenario, so as to continuously update the adaptation degree of each model.
In addition, in the embodiment of the present application, in order to improve the balance of the adaptation degree of each model, the evaluation samples of each model in the same period in the same application scenario may be set to be the same, which may be different, and the embodiment of the present application is not limited herein.
For example, assuming that the evaluation sample and the real value and the estimated value of each evaluation sample in a certain application scene are obtained currently, the performance of each model in the corresponding application scene can be evaluated by using the currently obtained evaluation sample and the real value and the estimated value of each evaluation sample.
Optionally, in another embodiment of the present application, the model may include a processing layer and an output layer. The processing layer comprises a first normalization layer, a first fully connected layer, and a first activation layer: the input of the processing layer is the input of the model, the input of the first normalization layer is the input of the processing layer, the output of the first normalization layer serves as the input of the first fully connected layer, the output of the first fully connected layer serves as the input of the first activation layer, and the output of the first activation layer serves as the input of each output layer. Further, the output layer includes a second fully connected layer and a second activation layer: the input of the second fully connected layer is the input of the output layer, the output of the second fully connected layer is the input of the second activation layer, and the output of the second activation layer is the quality score of the model.
Referring to the structural schematic diagram of the model shown in fig. 3, BN (Batch Normalization) in the processing layer is the first normalization layer, FC (Fully Connected) in the processing layer is the first fully connected layer, and ELU in the processing layer is the first activation layer, which uses ELU as its activation function; the vector obtained by the processing layer is input to each output layer.
It should be noted that in fig. 3 the processing layer only includes one BN, one FC, and one ELU. In a specific implementation, the processing layer may include multiple BN, FC, and ELU layers, so that the depth of the processing layer can be increased, which helps to improve the computational accuracy of the processing layer.
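As a sketch of the structure described with reference to fig. 3, the processing layer and an output layer could take roughly the following form; the use of PyTorch, the layer sizes, and the choice of the second activation function are assumptions of this illustration and are not specified by this application.

import torch.nn as nn

class ProcessingLayer(nn.Module):
    # BN -> FC -> ELU; this block can be repeated to deepen the processing layer.
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.block = nn.Sequential(
            nn.BatchNorm1d(in_dim),         # first normalization layer (BN)
            nn.Linear(in_dim, hidden_dim),  # first fully connected layer (FC)
            nn.ELU(),                       # first activation layer using ELU
        )

    def forward(self, x):
        return self.block(x)

class OutputLayer(nn.Module):
    # Second fully connected layer followed by a second activation layer,
    # producing a quality score from the processing-layer vector.
    def __init__(self, hidden_dim):
        super().__init__()
        self.block = nn.Sequential(
            nn.Linear(hidden_dim, 1),
            nn.Sigmoid(),                   # illustrative choice of second activation
        )

    def forward(self, x):
        return self.block(x)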
In the embodiment of the application, the evaluation samples are sorted according to a first sorting mode taking the estimated value of each evaluation sample as a reference, and the sum of the differences of the real values corresponding to any two adjacent evaluation samples after the sorting is obtained as the first evaluation parameter; and sequencing the evaluation samples according to a second sequencing mode taking the real value of each evaluation sample as a reference, and taking the sum of the difference values of the real values corresponding to any two adjacent evaluation samples sequenced this time as the second evaluation parameter. Therefore, the evaluation parameters of the model can be rapidly and accurately obtained, the adaptation degree of the selected target model and the target scene is improved, and the accuracy of the prediction result is improved.
Moreover, in this embodiment of the application, a ratio of the first evaluation parameter to the second evaluation parameter may also be obtained as the adaptation degree of the model in response to that the sorting order of the first sorting manner is the same as the sorting order of the second sorting manner; and in response to the fact that the sorting order of the first sorting mode is opposite to that of the second sorting mode, acquiring the negative number of the ratio of the first evaluation parameter to the second evaluation parameter as the adaptation degree of the model. Therefore, the adaptation degree can be reasonably calculated according to the sorting sequence, and the accuracy of the adaptation degree is improved.
In addition, in the embodiment of the application, an object which is successfully delivered within a preset time period in the target scene and a real value of the object during delivery are obtained, and the object is used as the evaluation sample; obtaining the delivery information of each object, and estimating the pre-evaluation value of each object through the model according to the delivery information of each object; the delivery information comprises at least one of delivery environment parameters of the object, attribute parameters of the object, user parameters of a receiving user of the object and operation parameters of the receiving user for the object. The models are evaluated by screening evaluation samples which are successfully put in, and the evaluation samples of the models in the same period under the same application scene can be the same, so that the accuracy of the evaluation results of the models can be improved, and a proper target model is selected for the target scene to improve the accuracy of the prediction results.
Referring to fig. 5, a schematic structural diagram of a data prediction apparatus in an embodiment of the present application is shown.
The data prediction device of the embodiment of the application comprises: an evaluation data acquisition module 210, a first evaluation parameter acquisition module 220, a second evaluation parameter acquisition module 230, a model adaptation module 240, and a data prediction module 250.
The functions of the modules and the interaction relationship between the modules are described in detail below.
An evaluation data obtaining module 210, configured to obtain, for any optional model in a target scene, evaluation samples of the target scene, a true value corresponding to each evaluation sample, and an estimated value of the model for each evaluation sample, where the target scene includes an expectation maximization scene for a target parameter, and the target parameter includes at least one of a quality parameter, an attribute parameter, and an operation parameter.
The first evaluation parameter obtaining module 220 is configured to sort the evaluation samples according to a first sorting manner that takes the estimated value of each evaluation sample as a reference, and obtain a first evaluation parameter of the model according to a difference between real values corresponding to any two adjacent evaluation samples sorted this time.
The second evaluation parameter obtaining module 230 is configured to rank the evaluation samples according to a second ranking mode that takes the real value of each evaluation sample as a reference, and obtain a second evaluation parameter of the model according to a difference between the real values corresponding to any two adjacent evaluation samples ranked this time.
A model adapting module 240, configured to determine a target model adapted to the target scene according to the first evaluation parameter and the second evaluation parameter of each of the models.
A data prediction module 250, configured to obtain prediction data of a target parameter of any object in the target scene through the target model.
In the embodiment of the application, by aiming at any optional model in a target scene, the evaluation samples of the target scene, the real value corresponding to each evaluation sample and the estimated value of the model aiming at each evaluation sample are obtained; sequencing the evaluation samples according to a first sequencing mode taking the estimated value of each evaluation sample as a reference, and acquiring a first evaluation parameter of the model according to the difference value of the corresponding real values of any two adjacent evaluation samples sequenced this time; determining a target model adapted to the target scene according to the first evaluation parameter and the second evaluation parameter of each model; and acquiring the prediction data of the target parameters of any object in the target scene through the target model. Thereby improving the accuracy of the prediction result.
Referring to fig. 6, in the embodiment of the present application, the first evaluation parameter obtaining module 220 is further configured to rank the evaluation samples according to a first ranking mode taking the predicted value of each evaluation sample as a reference, and obtain a sum of differences of real values corresponding to any two adjacent evaluation samples ranked this time as the first evaluation parameter;
the second evaluation parameter obtaining module 230 is further configured to rank the evaluation samples according to a second ranking mode taking the real value of each evaluation sample as a reference, and obtain a sum of differences between the real values corresponding to any two adjacent evaluation samples ranked this time as the second evaluation parameter.
Referring to fig. 6, in the embodiment of the present application, the model adaptation module 240 may further include:
the fitness obtaining submodule 241 is configured to obtain the fitness of each model according to the first evaluation parameter and the second evaluation parameter of each model;
and the model adaptor module 242 is configured to select, according to the degree of adaptation of each model in the target scene, a model with the highest degree of adaptation as the target model adapted to the target scene.
Optionally, in this embodiment of the application, the adaptation degree obtaining sub-module 241 further includes:
a first fitness obtaining unit, configured to obtain, in response to that a sorting order of the first sorting manner is the same as a sorting order of the second sorting manner, a ratio of the first evaluation parameter to the second evaluation parameter as a fitness of the model;
and the second fitness obtaining unit is used for obtaining a negative number of a ratio of the first evaluation parameter and the second evaluation parameter as the fitness of the model in response to the fact that the sorting order of the first sorting mode is opposite to that of the second sorting mode.
Referring to fig. 6, in the embodiment of the present application, the evaluation data obtaining module 210 may further include:
an evaluation sample obtaining submodule 211, configured to obtain an object that is successfully delivered within a preset time period in the target scene and a real value of the object during delivery, and use the object as the evaluation sample;
a pre-evaluation value obtaining sub-module 212, configured to obtain delivery information of each object, and obtain a pre-evaluation value of each object through the model pre-evaluation according to the delivery information of each object;
the delivery information comprises at least one of delivery environment parameters of the object, attribute parameters of the object, user parameters of a receiving user of the object and operation parameters of the receiving user for the object.
Optionally, the attribute parameter includes at least one of a profit value and an audio/video watching duration; the operation parameters comprise at least one of click rate, order placing rate, recommendation rate and browsing rate.
In the embodiment of the application, the evaluation samples are sorted according to a first sorting mode taking the estimated value of each evaluation sample as a reference, and the sum of the differences of the real values corresponding to any two adjacent evaluation samples after the sorting is obtained as the first evaluation parameter; and sequencing the evaluation samples according to a second sequencing mode taking the real value of each evaluation sample as a reference, and taking the sum of the difference values of the real values corresponding to any two adjacent evaluation samples sequenced this time as the second evaluation parameter. Therefore, the evaluation parameters of the model can be rapidly and accurately obtained, the adaptation degree of the selected target model and the target scene is improved, and the accuracy of the prediction result is improved.
Moreover, in this embodiment of the application, the ratio of the first evaluation parameter to the second evaluation parameter may be obtained as the adaptation degree of the model when the sorting order of the first sorting manner is the same as that of the second sorting manner; and the negative of that ratio may be obtained as the adaptation degree of the model when the sorting order of the first sorting manner is opposite to that of the second sorting manner. The adaptation degree is therefore calculated in a way that accounts for the sorting orders, which improves its accuracy.
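Again for illustration only, the adaptation degree rule above can be sketched in Python as follows; the parameter same_order (whether the two sorting manners use the same sorting order) and the assumption that the second evaluation parameter is non-zero are introduced solely for the sketch.

def adaptation_degree(first_param, second_param, same_order):
    # Ratio of the first evaluation parameter to the second evaluation parameter;
    # assumes second_param is non-zero.
    ratio = first_param / second_param
    # Same sorting order: the ratio itself; opposite order: its negative.
    return ratio if same_order else -ratio

When the estimate-based ordering exactly reproduces the real-value ordering, the two parameters coincide and the adaptation degree equals 1, which is consistent with selecting the model with the highest adaptation degree as the target model.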
In addition, in the embodiment of the application, objects that were successfully delivered within a preset time period in the target scene, together with their real values at the time of delivery, are obtained and used as the evaluation samples; the delivery information of each object is obtained, and the estimated value of each object is obtained through estimation by the model according to its delivery information. The delivery information comprises at least one of a delivery environment parameter of the object, an attribute parameter of the object, a user parameter of a receiving user of the object, and an operation parameter of the receiving user for the object. Because the models are evaluated on evaluation samples screened from successfully delivered objects, and the models can share the same evaluation samples for the same period in the same application scene, the accuracy of the evaluation results of the models can be improved, and a suitable target model can be selected for the target scene to improve the accuracy of the prediction result.
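A minimal sketch of assembling the evaluation samples, for illustration only: the record fields (delivered_ok, timestamp, real_value, delivery_info) and the model.predict interface are hypothetical stand-ins, since the embodiment only requires collecting the successfully delivered objects within the preset time period, their real values at delivery, and the estimated values produced by the model from the delivery information.

def collect_evaluation_samples(delivery_records, model, start, end):
    # Returns a list of (estimated_value, real_value) pairs for one model.
    samples = []
    for record in delivery_records:
        # Keep only objects successfully delivered within the preset time period.
        if record["delivered_ok"] and start <= record["timestamp"] <= end:
            estimated = model.predict(record["delivery_info"])  # hypothetical interface
            samples.append((estimated, record["real_value"]))
    return samples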
The data prediction apparatus provided in the embodiment of the present application can implement each process implemented in the method embodiments of fig. 1 to fig. 2, and is not described here again to avoid repetition.
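Putting the previous sketches together, the overall selection of the target model and the subsequent prediction might look as follows. This reuses the illustrative evaluation_parameters, adaptation_degree, and collect_evaluation_samples functions sketched above, and candidate_models, delivery_records, and new_object_info are hypothetical inputs rather than elements defined in this application.

def select_target_model(candidate_models, delivery_records, start, end, same_order=True):
    # Evaluate every selectable model on evaluation samples drawn from the same
    # delivery records of the target scene and keep the model with the highest
    # adaptation degree as the target model.
    best_model, best_fitness = None, float("-inf")
    for model in candidate_models:
        samples = collect_evaluation_samples(delivery_records, model, start, end)
        first_param, second_param = evaluation_parameters(samples)
        fitness = adaptation_degree(first_param, second_param, same_order)
        if fitness > best_fitness:
            best_model, best_fitness = model, fitness
    return best_model

# Prediction data for the target parameter of any object in the target scene is
# then obtained through the selected target model, e.g.
# target_model.predict(new_object_info).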
Fig. 7 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the present application.
The electronic device 500 includes, but is not limited to: a radio frequency unit 501, a network module 502, an audio output unit 503, an input unit 504, a sensor 505, a display unit 506, a user input unit 507, an interface unit 508, a memory 509, a processor 510, and a power supply 511. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 7 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present application, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
It should be understood that, in the embodiment of the present application, the radio frequency unit 501 may be used for receiving and sending signals during a message sending and receiving process or a call process; specifically, it receives downlink data from a base station and forwards the received downlink data to the processor 510 for processing, and it transmits uplink data to the base station. In general, the radio frequency unit 501 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 501 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 502, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 503 may convert audio data received by the radio frequency unit 501 or the network module 502, or stored in the memory 509, into an audio signal and output it as sound. The audio output unit 503 may also provide audio output related to a specific function performed by the electronic device 500 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 503 includes a speaker, a buzzer, a receiver, and the like.
The input unit 504 is used to receive an audio or video signal. The input unit 504 may include a Graphics Processing Unit (GPU) 5041 and a microphone 5042; the graphics processor 5041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 506. The image frames processed by the graphics processor 5041 may be stored in the memory 509 (or other storage medium) or transmitted via the radio frequency unit 501 or the network module 502. The microphone 5042 may receive sounds and process them into audio data. In the case of the phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station and output via the radio frequency unit 501.
The electronic device 500 also includes at least one sensor 505, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 5061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 5061 and/or a backlight when the electronic device 500 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 505 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 506 may include a display panel 5061, and the display panel 5061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like.
The user input unit 507 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 507 includes a touch panel 5071 and other input devices 5072. The touch panel 5071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near the touch panel 5071 using a finger, a stylus, or any suitable object or attachment). The touch panel 5071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the user and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 510, and receives and executes commands sent by the processor 510. In addition, the touch panel 5071 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 5071, the user input unit 507 may include other input devices 5072. In particular, the other input devices 5072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 5071 may be overlaid on the display panel 5061. When the touch panel 5071 detects a touch operation on or near it, the touch operation is transmitted to the processor 510 to determine the type of the touch event, and the processor 510 then provides a corresponding visual output on the display panel 5061 according to the type of the touch event. Although in fig. 7 the touch panel 5071 and the display panel 5061 are two independent components used to implement the input and output functions of the electronic device, in some embodiments the touch panel 5071 and the display panel 5061 may be integrated to implement the input and output functions of the electronic device, which is not limited herein.
The interface unit 508 is an interface for connecting an external device to the electronic apparatus 500. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 508 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the electronic apparatus 500 or may be used to transmit data between the electronic apparatus 500 and external devices.
The memory 509 may be used to store software programs as well as various data. The memory 509 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data (such as audio data, a phonebook, etc.) created according to the use of the mobile phone, and the like. Further, the memory 509 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 510 is a control center of the electronic device, connects various parts of the whole electronic device by using various interfaces and lines, performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 509 and calling data stored in the memory 509, thereby performing overall monitoring of the electronic device. Processor 510 may include one or more processing units; preferably, the processor 510 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 510.
The electronic device 500 may further include a power supply 511 (e.g., a battery) for supplying power to various components, and preferably, the power supply 511 may be logically connected to the processor 510 via a power management system, so as to implement functions of managing charging, discharging, and power consumption via the power management system.
In addition, the electronic device 500 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present application further provides an electronic device, including: the processor 510, the memory 509, and a computer program stored in the memory 509 and capable of running on the processor 510, where the computer program, when executed by the processor 510, implements each process of the data prediction method embodiment described above, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
The embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when being executed by a processor, the computer program implements each process of the data prediction method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or as a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a U disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method of data prediction, comprising:
aiming at any optional model in a target scene, obtaining evaluation samples of the target scene, a real value corresponding to each evaluation sample and an estimated value of the model for each evaluation sample, wherein the target scene comprises an expectation maximization scene aiming at target parameters, and the target parameters comprise at least one of quality parameters, attribute parameters and operation parameters;
sequencing the evaluation samples according to a first sequencing mode taking the estimated value of each evaluation sample as a reference, and acquiring a first evaluation parameter of the model according to the difference value of the corresponding real values of any two adjacent evaluation samples sequenced this time;
sorting the evaluation samples according to a second sorting mode taking the real value of each evaluation sample as a reference, and acquiring a second evaluation parameter of the model according to the difference value of the real values corresponding to any two adjacent evaluation samples after sorting;
determining a target model adapted to the target scene according to the first evaluation parameter and the second evaluation parameter of each model;
and acquiring the prediction data of the target parameters of any object in the target scene through the target model.
2. The method according to claim 1, wherein the step of sorting the evaluation samples according to a first sorting manner with the estimated value of each evaluation sample as a reference, and obtaining the first evaluation parameter of the model according to the difference between the corresponding real values of any two adjacent evaluation samples after the sorting comprises:
according to a first sequencing mode taking the estimated value of each evaluation sample as a reference, sequencing the evaluation samples, and obtaining the sum of the differences of the real values corresponding to any two adjacent evaluation samples sequenced this time as the first evaluation parameter;
the step of sorting the evaluation samples according to a second sorting mode taking the real value of each evaluation sample as a reference and acquiring a second evaluation parameter of the model according to the difference value of the real values corresponding to any two adjacent evaluation samples after sorting this time includes:
and sequencing the evaluation samples according to a second sequencing mode taking the real value of each evaluation sample as a reference, and acquiring the sum of the difference values of the real values corresponding to any two adjacent evaluation samples sequenced this time as the second evaluation parameter.
3. The method according to claim 1 or 2, wherein the step of determining an object model adapted to the object scene from the first evaluation parameter and the second evaluation parameter of each of the models comprises:
obtaining the adaptation degree of each model according to the first evaluation parameter and the second evaluation parameter of each model;
and selecting the model with the highest adaptation degree as the target model adapted to the target scene according to the adaptation degree of each model in the target scene.
4. The method according to claim 3, wherein the step of obtaining the fitness of each model according to the first evaluation parameter and the second evaluation parameter of each model comprises:
in response to that the sorting order of the first sorting mode is the same as that of the second sorting mode, obtaining a ratio of the first evaluation parameter to the second evaluation parameter as the adaptation degree of the model;
and in response to the fact that the sorting order of the first sorting mode is opposite to that of the second sorting mode, acquiring the negative number of the ratio of the first evaluation parameter to the second evaluation parameter as the adaptation degree of the model.
5. The method of claim 1, wherein the step of obtaining, for any optional model in the target scene, the evaluation samples of the target scene, the real value corresponding to each evaluation sample, and the estimated value of the model for each evaluation sample comprises:
obtaining an object which is successfully launched within a preset time period in the target scene and a real value of the object during launching, and taking the object as the evaluation sample;
obtaining the delivery information of each object, and estimating the pre-evaluation value of each object through the model according to the delivery information of each object;
the delivery information comprises at least one of delivery environment parameters of the object, attribute parameters of the object, user parameters of a receiving user of the object and operation parameters of the receiving user for the object.
6. The method of claim 1, wherein the model comprises: a processing layer and an output layer, the processing layer comprising: a first normalization layer, a first fully-connected layer, and a first activation layer, wherein the input of the processing layer is the input of the model, the input of the first normalization layer is the input of the processing layer, the output of the first normalization layer is the input of the first fully-connected layer, the output of the first fully-connected layer is the input of the first activation layer, and the output of the first activation layer is the input of each of the output layers; and the output layer comprises a second fully-connected layer and a second activation layer, wherein the input of the second fully-connected layer is the input of the output layer, the output of the second fully-connected layer is used as the input of the second activation layer, and the output of the second activation layer is used as the output of the model.
7. The method of claim 1, wherein the attribute parameters comprise at least one of a revenue value, an audio-video viewing duration; the operation parameters comprise at least one of click rate, order placing rate, recommendation rate and browsing rate.
8. A data prediction apparatus, comprising:
the evaluation data acquisition module is used for acquiring evaluation samples of a target scene, a real value corresponding to each evaluation sample and a pre-evaluation value of the model for each evaluation sample aiming at any optional model in the target scene, wherein the target scene comprises an expectation maximization scene aiming at target parameters, and the target parameters comprise at least one of quality parameters, attribute parameters and operation parameters;
the first evaluation parameter acquisition module is used for sequencing the evaluation samples according to a first sequencing mode taking the estimated value of each evaluation sample as a reference, and acquiring a first evaluation parameter of the model according to the difference value of real values corresponding to any two adjacent evaluation samples sequenced this time;
the second evaluation parameter acquisition module is used for sequencing the evaluation samples according to a second sequencing mode taking the real value of each evaluation sample as a reference, and acquiring a second evaluation parameter of the model according to the difference value of the real values corresponding to any two adjacent evaluation samples sequenced this time;
the model adaptation module is used for determining a target model adapted to the target scene according to the first evaluation parameter and the second evaluation parameter of each model;
and the data prediction module is used for acquiring prediction data of target parameters of any object in the target scene through the target model.
9. An electronic device, comprising: memory, processor and computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, carries out the steps of the data prediction method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, carries out the steps of the data prediction method according to any one of claims 1 to 7.
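For illustration only (not part of the claims), the layer arrangement recited in claim 6 can be sketched in Python with PyTorch. The feature widths and the particular normalization and activation functions (batch normalization, ReLU, Sigmoid) are assumptions, since the claim fixes only the ordering and wiring of the layers, and a single output layer is shown although the output of the first activation layer may feed each of several output layers.

import torch.nn as nn

class PredictionModel(nn.Module):
    # Hypothetical sizes; claim 6 does not specify layer widths or activations.
    def __init__(self, in_features=32, hidden=64, out_features=1):
        super().__init__()
        # Processing layer: first normalization layer -> first fully-connected
        # layer -> first activation layer; its input is the input of the model.
        self.processing = nn.Sequential(
            nn.BatchNorm1d(in_features),
            nn.Linear(in_features, hidden),
            nn.ReLU(),
        )
        # Output layer: second fully-connected layer -> second activation layer;
        # its input is the output of the first activation layer.
        self.output = nn.Sequential(
            nn.Linear(hidden, out_features),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # The output of the second activation layer is the output of the model.
        return self.output(self.processing(x))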
CN202010153163.9A 2020-03-06 2020-03-06 Data prediction method and device, electronic equipment and storage medium Pending CN111476629A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010153163.9A CN111476629A (en) 2020-03-06 2020-03-06 Data prediction method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010153163.9A CN111476629A (en) 2020-03-06 2020-03-06 Data prediction method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111476629A true CN111476629A (en) 2020-07-31

Family

ID=71748038

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010153163.9A Pending CN111476629A (en) 2020-03-06 2020-03-06 Data prediction method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111476629A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113408221A (en) * 2021-07-06 2021-09-17 太仓比泰科自动化设备有限公司 Probe service life prediction method, system, device and storage medium
CN113326638A (en) * 2021-08-03 2021-08-31 北京赛目科技有限公司 Method and device for determining automatic driving test scene
CN113326638B (en) * 2021-08-03 2021-11-09 北京赛目科技有限公司 Method and device for determining automatic driving test scene
CN113627681A (en) * 2021-08-25 2021-11-09 平安国际智慧城市科技股份有限公司 Data prediction method and device based on prediction model, computer equipment and medium
CN115796556A (en) * 2023-02-01 2023-03-14 北京有竹居网络技术有限公司 Decoration scheme determination method and device, electronic equipment and readable storage medium
CN117113137A (en) * 2023-08-07 2023-11-24 国网冀北电力有限公司信息通信分公司 Power model matching method and device, storage medium and electronic equipment

Similar Documents

Publication Publication Date Title
CN108520058B (en) Merchant information recommendation method and mobile terminal
CN111476629A (en) Data prediction method and device, electronic equipment and storage medium
CN110472145B (en) Content recommendation method and electronic equipment
CN108632658B (en) Bullet screen display method and terminal
CN108255382B (en) Method and device for recommending floating menu content
CN109543099B (en) Content recommendation method and terminal equipment
CN109032719B (en) Object recommendation method and terminal
CN109558512A (en) A kind of personalized recommendation method based on audio, device and mobile terminal
CN111143697B (en) Content recommendation method and related device
CN111737573A (en) Resource recommendation method, device, equipment and storage medium
CN108289057B (en) Video editing method and device and intelligent mobile terminal
CN110162653B (en) Image-text sequencing recommendation method and terminal equipment
CN110458655B (en) Shop information recommendation method and mobile terminal
WO2021120875A1 (en) Search method and apparatus, terminal device and storage medium
CN108718389B (en) Shooting mode selection method and mobile terminal
CN109246474B (en) Video file editing method and mobile terminal
CN108307039B (en) Application information display method and mobile terminal
CN111444425A (en) Information pushing method, electronic equipment and medium
CN112597361A (en) Sorting processing method and device, electronic equipment and storage medium
CN110990679A (en) Information searching method and electronic equipment
CN109658198B (en) Commodity recommendation method and mobile terminal
CN112000264B (en) Dish information display method and device, computer equipment and storage medium
CN110378798B (en) Heterogeneous social network construction method, group recommendation method, device and equipment
CN110045892B (en) Display method and terminal equipment
CN112131473A (en) Information recommendation method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200731

WD01 Invention patent application deemed withdrawn after publication