CN114443896B - Data processing method and method for training predictive model - Google Patents


Info

Publication number
CN114443896B
Authority
CN
China
Prior art keywords
sample
prediction
processed
result
model
Prior art date
Legal status
Active
Application number
CN202210088356.XA
Other languages
Chinese (zh)
Other versions
CN114443896A (en)
Inventor
杨浩
郭宇
胡杏
刘文婷
余睿哲
赵子汉
苏东
郑宇航
彭志洺
秦首科
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202210088356.XA
Publication of CN114443896A
Priority to PCT/CN2022/107883 (WO2023142408A1)
Priority to JP2022581432A (JP2024507602A)
Application granted
Publication of CN114443896B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 - Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73 - Querying
    • G06F16/735 - Filtering based on additional data, e.g. user or group profiles
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 - Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/75 - Clustering; Classification

Abstract

The present disclosure provides a data processing method and a method for training a prediction model, and relates to the field of computer technology, in particular to artificial intelligence technology. The implementation scheme is as follows: determining an object to be processed; determining an object category to which the object to be processed belongs based on the classification attribute of the object to be processed; determining a predictive model for the object to be processed based on the object class; and processing at least one prediction feature of the object to be processed by using the prediction model to obtain a prediction result of the object to be processed, wherein the prediction result and the classification attribute are variables of the same type.

Description

Data processing method and method for training predictive model
Technical Field
The present disclosure relates to the field of computer technology, and in particular, to an artificial intelligence technique, and more particularly, to a data processing method and a method, apparatus, electronic device, computer readable storage medium, and computer program product for training a predictive model.
Background
Artificial intelligence is the discipline that studies how to make a computer mimic certain human mental processes and intelligent behaviors (e.g., learning, reasoning, thinking, planning), and it involves both hardware-level and software-level techniques. Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, and the like; artificial intelligence software technologies mainly include computer vision, speech recognition, natural language processing, machine learning/deep learning, big data processing, and knowledge graph technologies.
Prediction tasks play an important role in a variety of artificial intelligence application scenarios. For example, in a video recommendation scenario, predicting how long a user will stay on a recommended video asset has a critical effect on the video recommendation results.
The approaches described in this section are not necessarily approaches that have been previously conceived or pursued. Unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, the problems mentioned in this section should not be considered as having been recognized in any prior art unless otherwise indicated.
Disclosure of Invention
The present disclosure provides a data processing method and a method, apparatus, electronic device, computer readable storage medium and computer program product for training a predictive model.
According to an aspect of the present disclosure, there is provided a data processing method including: determining an object to be processed; determining an object category to which the object to be processed belongs based on the classification attribute of the object to be processed; determining a predictive model for the object to be processed based on the object class; and processing at least one prediction feature of the object to be processed by using the prediction model to obtain a prediction result of the object to be processed, wherein the prediction result and the classification attribute are variables of the same type.
According to another aspect of the present disclosure, there is provided a method for training a predictive model, comprising: determining a sample set comprising a plurality of sample objects; determining a first sample subset and a second sample subset in a sample set based on classification properties of the sample objects; training a first predictive model using first sample objects in a first subset of samples; training a second predictive model using a second sample object in a second subset of samples; and wherein the first prediction model and the second prediction model are used for processing at least one prediction feature of the object to be processed to obtain a prediction result of the object to be processed, wherein the prediction result and the classification attribute are variables of the same type.
According to another aspect of the present disclosure, there is provided a data processing apparatus including: a to-be-processed object determining unit configured to determine an object to be processed; an object category determination unit configured to determine an object category to which the object to be processed belongs based on a classification attribute of the object to be processed; a prediction model determination unit configured to determine a prediction model for the object to be processed based on the object class; and a prediction unit configured to process at least one prediction feature of the object to be processed using the prediction model to obtain a prediction result of the object to be processed, wherein the prediction result and the classification attribute are the same type of variable.
According to another aspect of the present disclosure, there is provided an apparatus for training a predictive model, comprising: a sample determination unit configured to determine a sample set including a plurality of sample objects; a classification unit configured to determine a first sample subset and a second sample subset of a sample set based on classification properties of the sample object; a first predictive model training unit configured to train a first predictive model using first sample objects in a first subset of samples; a second predictive model training unit configured to train a second predictive model using a second sample object in a second subset of samples; and wherein the first prediction model and the second prediction model are used for processing at least one prediction feature of the object to be processed to obtain a prediction result of the object to be processed, wherein the prediction result and the classification attribute are variables of the same type.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method as described above.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method as described above.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the method as described above.
According to one or more embodiments of the present disclosure, for a single-variable prediction problem, the object class to which an object to be processed belongs may be determined based on the value of a classification attribute that is a variable of the same type as the single variable to be predicted, and the prediction result may be obtained with a prediction model trained for objects belonging to that object class. In this way, prediction is performed with a prediction model trained specifically for the variable range of each interval, so that the characteristics of objects to be processed in different categories can be better captured.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The accompanying drawings illustrate exemplary embodiments and, together with the description, serve to explain exemplary implementations of the embodiments. The illustrated embodiments are for exemplary purposes only and do not limit the scope of the claims. Throughout the drawings, identical reference numerals designate similar, but not necessarily identical, elements.
FIG. 1 illustrates a schematic diagram of an exemplary system in which various methods described herein may be implemented, in accordance with an embodiment of the present disclosure;
FIG. 2 illustrates an exemplary flow chart of a data processing method according to an embodiment of the present disclosure;
FIG. 3 illustrates an exemplary flow chart of a method for training a predictive model in accordance with an embodiment of the disclosure;
FIG. 4A illustrates an example graph of regression loss calculated from a loss function according to an embodiment of the disclosure;
FIG. 4B illustrates an example graph of gradients of a loss function according to an embodiment of the disclosure;
FIG. 5 illustrates an example diagram of a multi-task training framework in accordance with an embodiment of the present disclosure;
FIG. 6 illustrates an exemplary block diagram of a data processing apparatus according to an embodiment of the present disclosure;
FIG. 7 illustrates an exemplary block diagram of an apparatus for training a predictive model in accordance with an embodiment of the disclosure; and
Fig. 8 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the present disclosure, the use of the terms "first," "second," and the like to describe various elements is not intended to limit the positional relationship, timing relationship, or importance relationship of the elements, unless otherwise indicated, and such terms are merely used to distinguish one element from another element. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, they may also refer to different instances based on the description of the context.
The terminology used in the description of the various illustrated examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, the elements may be one or more if the number of the elements is not specifically limited. Furthermore, the term "and/or" as used in this disclosure encompasses any and all possible combinations of the listed items.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 illustrates a schematic diagram of an exemplary system 100 in which various methods and apparatus described herein may be implemented, in accordance with an embodiment of the present disclosure. Referring to fig. 1, the system 100 includes one or more client devices 101, 102, 103, 104, 105, and 106, a server 120, and one or more communication networks 110 coupling the one or more client devices to the server 120. Client devices 101, 102, 103, 104, 105, and 106 may be configured to execute one or more applications.
In embodiments of the present disclosure, the server 120 may run one or more services or software applications that enable execution of methods according to embodiments of the present disclosure.
In some embodiments, server 120 may also provide other services or software applications that may include non-virtual environments and virtual environments. In some embodiments, these services may be provided as web-based services or cloud services, for example, provided to users of client devices 101, 102, 103, 104, 105, and/or 106 under a software as a service (SaaS) model.
In the configuration shown in fig. 1, server 120 may include one or more components that implement the functions performed by server 120. These components may include software components, hardware components, or a combination thereof that are executable by one or more processors. A user operating client devices 101, 102, 103, 104, 105, and/or 106 may in turn utilize one or more client applications to interact with server 120 to utilize the services provided by these components. It should be appreciated that a variety of different system configurations are possible, which may differ from system 100. Accordingly, FIG. 1 is one example of a system for implementing the various methods described herein and is not intended to be limiting.
The user may use client devices 101, 102, 103, 104, 105, and/or 106 to obtain user input and provide the user with processing results obtained by methods according to embodiments of the present disclosure. The client device may provide an interface that enables a user of the client device to interact with the client device. The client device may also output information to the user via the interface. Although fig. 1 depicts only six client devices, those skilled in the art will appreciate that the present disclosure may support any number of client devices.
Client devices 101, 102, 103, 104, 105, and/or 106 may include various types of computer devices, such as portable handheld devices, general purpose computers (such as personal computers and laptop computers), workstation computers, wearable devices, smart screen devices, self-service terminal devices, service robots, gaming systems, thin clients, various messaging devices, sensors or other sensing devices, and the like. These computer devices may run various types and versions of software applications and operating systems, such as MICROSOFT Windows, APPLE iOS, UNIX-like operating systems, Linux, or Linux-like operating systems (e.g., GOOGLE Chrome OS); or include various mobile operating systems such as MICROSOFT Windows Mobile OS, iOS, Windows Phone, and Android. Portable handheld devices may include cellular telephones, smart phones, tablet computers, Personal Digital Assistants (PDAs), and the like. Wearable devices may include head mounted displays (such as smart glasses) and other devices. The gaming system may include various handheld gaming devices, internet-enabled gaming devices, and the like. The client device is capable of executing a variety of different applications, such as various Internet-related applications, communication applications (e.g., email applications), and Short Message Service (SMS) applications, and may use a variety of communication protocols.
Network 110 may be any type of network known to those skilled in the art that may support data communications using any of a number of available protocols, including but not limited to TCP/IP, SNA, IPX, etc. For example only, the one or more networks 110 may be a Local Area Network (LAN), an ethernet-based network, a token ring, a Wide Area Network (WAN), the internet, a virtual network, a Virtual Private Network (VPN), an intranet, an extranet, a Public Switched Telephone Network (PSTN), an infrared network, a wireless network (e.g., bluetooth, WIFI), and/or any combination of these and/or other networks.
The server 120 may include one or more general purpose computers, special purpose server computers (e.g., PC (personal computer) servers, UNIX servers, mid-end servers), blade servers, mainframe computers, server clusters, or any other suitable arrangement and/or combination. The server 120 may include one or more virtual machines running a virtual operating system, or other computing architecture that involves virtualization (e.g., one or more flexible pools of logical storage devices that may be virtualized to maintain virtual storage devices of the server). In various embodiments, server 120 may run one or more services or software applications that provide the functionality described below.
The computing units in server 120 may run one or more operating systems including any of the operating systems described above as well as any commercially available server operating systems. Server 120 may also run any of a variety of additional server applications and/or middle tier applications, including HTTP servers, FTP servers, CGI servers, JAVA servers, database servers, etc.
In some implementations, server 120 may include one or more applications to analyze and consolidate data feeds and/or event updates received from users of client devices 101, 102, 103, 104, 105, and/or 106. Server 120 may also include one or more applications to display data feeds and/or real-time events via one or more display devices of client devices 101, 102, 103, 104, 105, and/or 106.
In some implementations, the server 120 may be a server of a distributed system or a server that incorporates a blockchain. The server 120 may also be a cloud server, or an intelligent cloud computing server or intelligent cloud host with artificial intelligence technology. A cloud server is a host product in a cloud computing service system that addresses the drawbacks of difficult management and weak service scalability found in traditional physical host and Virtual Private Server (VPS) services.
The system 100 may also include one or more databases 130. In some embodiments, these databases may be used to store data and other information. For example, one or more of databases 130 may be used to store information such as audio files and video files. Database 130 may reside in various locations. For example, the database used by the server 120 may be local to the server 120, or may be remote from the server 120 and may communicate with the server 120 via a network-based or dedicated connection. Database 130 may be of different types. In some embodiments, the database used by server 120 may be, for example, a relational database. One or more of these databases may store, update, and retrieve data to and from the database in response to commands.
In some embodiments, one or more of databases 130 may also be used by applications to store application data. The databases used by the application may be different types of databases, such as key value stores, object stores, or conventional stores supported by the file system.
The system 100 of fig. 1 may be configured and operated in various ways to enable application of the various methods and apparatus described in accordance with the present disclosure.
Fig. 2 shows an exemplary flowchart of a data processing method according to an embodiment of the present disclosure. The method 200 shown in fig. 2 may be performed with a client device or server shown in fig. 1.
As shown in fig. 2, in step S202, an object to be processed is determined.
In step S204, an object class to which the object to be processed belongs is determined based on the classification attribute of the object to be processed.
In step S206, a predictive model for the object to be processed is determined based on the object class.
In step S208, at least one prediction feature of the object to be processed is processed by using the prediction model to obtain a prediction result of the object to be processed, wherein the prediction result and the classification attribute are variables of the same type.
According to the method provided by one or more embodiments of the present disclosure, for a single-variable prediction problem, the object class to which the object to be processed belongs may be determined based on the value of a classification attribute that is a variable of the same type as the single variable to be predicted, and the prediction result may be obtained with a prediction model trained for objects belonging to that object class. In this way, prediction is performed with a prediction model trained specifically for the variable range of each interval, so that the characteristics of objects to be processed in different categories can be better captured.
The data processing method provided by the present disclosure will be described in detail below.
In step S202, an object to be processed may be determined.
In step S204, an object class to which the object to be processed belongs may be determined based on the classification attribute of the object to be processed.
In some embodiments, the object to be processed may be a video and the classification attribute may be the video length. The video may be processed using the methods provided by the present disclosure to obtain, as the prediction result, a predicted time indicating how long the user will view the video, i.e., how long a user stays on the video when it is recommended to that user. In other embodiments, the object to be processed is weather history data, and the classification attribute may be one of the meteorological parameters (e.g., temperature, humidity, wind, precipitation, etc.) included in the weather history data. The weather history data may be processed using the methods provided by the present disclosure to obtain a prediction result indicative of predicted weather, i.e., a prediction of a meteorological parameter. In other embodiments, any other form of univariate continuous value estimation task may also be implemented using the methods provided by embodiments of the present disclosure.
The principles of the present disclosure will be described hereinafter taking the example that the object to be processed is video.
In some embodiments, step S204 may include determining a video length interval to which the video belongs based on the video length of the video, and may determine the identification of the determined video length interval as the object class to which the video belongs.
For video resources included in the video platform, taking the number of video resources as 1000 and the longest video length as 100 seconds as an example, the video resources may be classified based on the number of video resources of different video lengths. For example, taking the number of video length intervals to be classified as 2 as an example, 1000 video resources may be ordered according to video lengths, where the video length interval to which the first half of video belongs is determined to be a first video category, and the video length interval to which the second half of video belongs is determined to be a second video category. For example, the first video category may include video assets having a video length in the interval 0-45 seconds, while the second video category may include video assets having a video length in the interval 46-100 seconds. Video assets on a platform may be divided into two or more categories using similar methods.
In some examples, if the video length intervals obtained by splitting evenly according to the count distribution differ greatly in width, the video classification result may be adjusted using various methods. For example, taking the longest video length as 100 seconds and two video categories as an example, if the first video category includes video resources with lengths between 0 and 10 seconds and the second video category includes video resources with lengths between 11 and 100 seconds, the video classification result may be adjusted. For example, the difference between the widths of the length intervals corresponding to the two video categories may be limited to not more than 50 seconds. The video classification result may then be adjusted based on such a restriction, without requiring the numbers of video resources in the two video categories to be the same (or substantially the same).
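The paragraphs above can be summarized in a minimal sketch, assuming a plain Python list of video lengths; the function name and the exact rule for moving the split point are illustrative assumptions, while the 50-second cap on the width difference follows the example above.

```python
import random


def split_video_length_intervals(video_lengths, max_width_gap=50):
    """Split videos into two length intervals: first by count (half of the
    sorted videos per interval), then adjusted so that the widths of the two
    intervals do not differ by more than max_width_gap seconds."""
    lengths = sorted(video_lengths)
    longest = lengths[-1]
    # Count-based split: the boundary is the length of the middle video.
    boundary = lengths[len(lengths) // 2]
    first_width, second_width = boundary, longest - boundary
    # Adjust when the interval widths are too uneven (e.g. 0-10 s vs 11-100 s).
    if abs(first_width - second_width) > max_width_gap:
        if first_width < second_width:
            boundary = (longest - max_width_gap) // 2
        else:
            boundary = (longest + max_width_gap) // 2
    return (0, boundary), (boundary + 1, longest)


# Example with the numbers used in the text: 1000 videos, longest length 100 s.
random.seed(0)
lengths = [random.randint(1, 100) for _ in range(1000)]
print(split_video_length_intervals(lengths))  # roughly ((0, 50), (51, 100))
```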
In step S206, a predictive model for the object to be processed may be determined based on the object class. Taking the example that the object class includes a first object class (e.g., a first video class) and a second object class (e.g., a second video class), in response to determining that the object class is the first object class, it may be determined that the first predictive model is to be used for the object to be processed, and in response to determining that the object class is the second object class, it may be determined that the second predictive model is to be used for the object to be processed. Wherein the first predictive model may be a predictive model trained using video assets belonging to a first video category and the second predictive model may be a predictive model trained using video assets belonging to a second video category. In this way, different predictive models can be trained to identify different characteristics (e.g., different characteristics of longer video and shorter video) that the objects in different object classes have, resulting in more accurate predictions for different objects to be processed that belong to different object classes.
In some examples, the first predictive model and the second predictive model may be homogeneous models with different parameters, such as polynomials, neural networks, or any other mathematical model that can be adapted.
In step S208, at least one prediction feature of the object to be processed is processed by using the prediction model to obtain a prediction result of the object to be processed, wherein the prediction result and the classification attribute are variables of the same type.
The variable value of the prediction result and the variable value of the classification attribute have the same variable unit. Taking video viewing duration prediction as an example, the classification attribute is the video duration and the prediction result is the video viewing duration, so both are expressed in units of time. In this way, a univariate continuous-value prediction problem can be conveniently handled by classifying objects according to that variable and predicting separately for each category.
In some embodiments, step S208 may include: processing at least one prediction feature of the object to be processed by using the prediction model to obtain a normalized prediction result of the object to be processed; and processing the normalized prediction result by using the normalization parameters for the object class to obtain the prediction result.
Taking the case where the object to be processed is a video and the classification attribute is the video length as an example, for videos in different length intervals, the real user viewing durations of the sample videos are normalized when training the model, so that the prediction effect of the model is not influenced by the absolute durations during training. That is, because normalized results are used when training on videos in different length intervals, the model parameters allow the model to learn the viewing characteristics of users for videos in each length interval.
In some embodiments, the normalization parameter corresponding to an object class may be an interval parameter of the classification variable corresponding to that object class. Taking the case where the classification variable is the video length as an example, the interval parameter may be the maximum value (i.e., the right endpoint) of the video length interval corresponding to the video category. In this way, the real user viewing durations of the sample videos within the video category can be normalized to between 0 and 1. In other examples, the interval parameter may also be set to any other value in the video length interval corresponding to the video category; in this case, the normalized real result may be greater than 1. For example, when the right endpoint of the interval corresponding to the classification variable is unbounded, the value of any intermediate point in the interval may be selected as the normalization parameter.
The prediction model trained in this way will output a normalized prediction result for the object to be processed. The normalized prediction result output by the prediction model is then denormalized using the normalization parameter of the object class corresponding to that model to obtain the prediction result. For example, for video objects with video lengths in the interval 0-45 seconds, the normalization parameter may be 45. The real user viewing durations of the sample videos are divided by 45 during training for normalization. The prediction model trained in this way will output a normalized prediction result, e.g., 0.4, when processing the video to be processed. The normalized prediction result is then multiplied by 45 to obtain the actual prediction result, i.e., 45 × 0.4 = 18 seconds. When the normalization parameter is set to another value, the normalized prediction result can be denormalized in a similar manner to obtain the actual prediction result.
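Putting steps S204 to S208 together, here is a minimal sketch under assumed names, with the trained prediction models replaced by constant stubs so the model selection and denormalization can be shown; the 0.4 normalized output reproduces the 18-second example above.

```python
def predict_watch_time(video_length, features, models, intervals):
    """Pick the prediction model for the video's length interval, run it on
    the prediction features, and denormalize its normalized output back to
    seconds using the interval's right endpoint as the normalization parameter."""
    for (low, high), model in zip(intervals, models):
        if low <= video_length <= high:
            normalized = model(features)   # model returns a normalized result
            return normalized * high       # inverse of dividing labels by high
    raise ValueError("video length falls outside all configured intervals")


# Stub models standing in for the trained first and second prediction models.
first_model = lambda feats: 0.4            # assumed normalized output
second_model = lambda feats: 0.6
intervals = [(0, 45), (46, 100)]

print(predict_watch_time(30, {"user_id": 1}, [first_model, second_model], intervals))
# prints 18.0, i.e. 45 * 0.4, matching the worked example above
```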
FIG. 3 illustrates an exemplary flow chart of a method for training a predictive model in accordance with an embodiment of the disclosure. The predictive model used in method 200 may be trained using the method shown in fig. 3. The method 300 shown in fig. 3 may be performed with a client device or server shown in fig. 1.
As shown in fig. 3, in step S302, a sample set including a plurality of sample objects is determined.
In step S304, a first sample subset and a second sample subset of the sample set are determined based on the classification properties of the sample objects.
In step S306, a first predictive model is trained using first sample objects in the first subset of samples.
In step S308, a second predictive model is trained using a second sample object in the second subset of samples.
The first prediction model and the second prediction model are used for processing at least one prediction feature of the object to be processed to obtain a prediction result of the object to be processed, wherein the prediction result and the classification attribute are variables of the same type.
By using the training method provided by the embodiments of the present disclosure, different prediction models can be trained for objects with different classification attributes, so that more accurate prediction results can be provided for objects with different characteristics.
The method for training the predictive model provided by the present disclosure will be described in detail below. The predictive models referred to in this disclosure may be implemented using polynomials, neural networks, or any other form of mathematical model.
In step S302, a sample set including a plurality of sample objects may be determined. It should be noted that the sample objects in this embodiment come from a public data set.
In step S304, a first sample subset and a second sample subset of the sample set may be determined based on the classification properties of the sample objects.
In some embodiments, the object to be processed for which the sample object and the predictive model are directed may be a video and the classification attribute may be a video length. In other embodiments, the sample object and the object to be processed for which the predictive model is directed may be weather history data, and the classification attribute may be one of the weather parameters (e.g., temperature, humidity, wind, precipitation, etc.) included in the weather history data. The principles of the present disclosure will be described below taking the example that the sample object is video. However, it is understood that in other embodiments, any other form of univariate continuous value estimation task may be implemented using the methods provided by embodiments of the present disclosure.
The first sample subset may comprise at least one first sample object belonging to a first video length interval, and the second sample subset may comprise at least one second sample object belonging to a second video length interval. As previously described, sample videos may be classified based on the video length of the video asset. The sample videos may be classified based on the longest video length in the sample set and the distribution of video resources across different video length intervals to obtain a first sample subset for the first video category and a second sample subset for the second video category. For example, taking the number of video resources as 1000 and the longest video length as 100 seconds, the first sample subset may include video resources with video lengths in the interval 0-45 seconds, and the second sample subset may include video resources with video lengths in the interval 46-100 seconds. In some examples, the number of sample objects in the first sample subset and the number of sample objects in the second sample subset may be the same. Here, "the same" may mean exactly equal, or may mean that the numbers of sample objects in the two subsets are substantially the same, i.e., their difference is less than a predetermined threshold. In other examples, when the interval widths obtained by splitting evenly according to the sample counts differ too much (for example, when the video length interval corresponding to the first sample subset is 0-10 seconds and that of the second sample subset is 11-100 seconds), the difference between the widths of the length intervals corresponding to the two video categories may be limited to not more than 50 seconds. The sample video classification result may be adjusted based on such a restriction, without requiring the numbers of video resources in the sample subsets of the two video categories to be the same (or substantially the same).
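A minimal sketch of determining the first and second sample subsets (steps S302 and S304), assuming each sample object is a dictionary with a video_length field; the field names and the example intervals are illustrative.

```python
def split_sample_set(samples, intervals):
    """Partition sample objects into one subset per video length interval,
    assuming each sample records its video length."""
    subsets = [[] for _ in intervals]
    for sample in samples:
        for i, (low, high) in enumerate(intervals):
            if low <= sample["video_length"] <= high:
                subsets[i].append(sample)
                break
    return subsets


samples = [
    {"video_length": 30, "watch_time": 12.0, "features": [0.1, 0.3]},
    {"video_length": 80, "watch_time": 55.0, "features": [0.7, 0.2]},
]
first_subset, second_subset = split_sample_set(samples, [(0, 45), (46, 100)])
```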
In step S306, a first predictive model may be trained using first sample objects in the first subset of samples.
In step S308, a second predictive model may be trained using a second sample object in the second subset of samples.
The first prediction model and the second prediction model are used for processing at least one prediction feature of the object to be processed to obtain a prediction result of the object to be processed, wherein the prediction result and the classification attribute are variables of the same type.
Wherein the sample objects used by the first predictive model and the second predictive model in training belong to different object classes, but the same method may be used to train parameters in the models. The training method according to the embodiment of the present disclosure will be described below taking the first predictive model as an example. A similar method may be applied to the second subset of samples to train to obtain a second predictive model.
The first predictive model may be trained by: determining a first current parameter of the first predictive model; processing at least one first sample feature of the first sample object with the first current parameter to obtain a first sample prediction result of the first sample object; determining a first real sample result of the first sample object; and adjusting the first current parameter based on the first sample prediction result and the first real sample result.
When training begins, the first current parameter of the first predictive model may be a preset first initial parameter. After each round of training is completed, the first current parameter used for the current training round may be updated. The current parameters may be updated using various optimization methods, such as first-order iterative methods, second-order iterative methods, gradient descent, Newton's method, and the like. For a prediction model implemented as a neural network, the first current parameters may also be updated using backpropagation.
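As a concrete illustration of this update loop, the following minimal sketch assumes the first prediction model is a simple linear model trained by plain gradient descent on one sample at a time, using the gradient of the piecewise loss given in formulas (1) and (2) below; a real implementation would more likely be a neural network updated by backpropagation.

```python
def training_step(params, sample_features, true_result, beta, lr=0.01):
    """One training round: predict with the current parameters, take the
    gradient of the piecewise loss of formulas (1)/(2), and adjust the
    parameters by gradient descent."""
    prediction = sum(w * x for w, x in zip(params, sample_features))
    error = prediction - true_result
    # d(loss)/d(prediction): error/beta inside the threshold, +/-1 outside it.
    grad = error / beta if abs(error) < beta else (1.0 if error > 0 else -1.0)
    return [w - lr * grad * x for w, x in zip(params, sample_features)]


params = [0.0, 0.0]                                  # first current parameters
params = training_step(params, [0.5, 1.0], true_result=0.4, beta=1.0)
```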
In some implementations, the current parameter may be adjusted based on a loss result from the loss function.
For example, a first loss may be determined based on the first sample prediction result and the first real sample result, and a first current parameter of the first prediction model may be adjusted based on the first loss. Wherein the first loss may be determined based on a preset loss function, wherein the preset loss function may be indicative of a difference between the first sample prediction result and the first real sample result.
In some examples, the optimization of the current parameters of the predictive model during training may be improved by applying different loss functions for sample points of different errors.
For example, the first loss is determined using a first loss function when the difference between the first sample prediction result and the first real sample result is less than a training threshold, and the first loss is determined using a second loss function when the difference between the first sample prediction result and the first real sample result is not less than the training threshold. The loss determined using the first loss function and the loss determined using the second loss function are the same when the difference between the first sample prediction result and the first real sample result is equal to the training threshold. A larger difference between the first sample prediction result and the first real sample result indicates a larger optimization error for that sample. Different loss functions may thus be applied to sample points with different optimization errors to improve the optimization efficiency of the model.
In some examples, the training threshold may vary as the number of training rounds increases. For example, the training threshold may decay as training proceeds. The training threshold may be decayed using various learning-rate decay methods used in machine learning, such as exponential decay, fixed-step decay, multi-step decay, cosine annealing decay, and the like. The larger the training threshold, the larger the influence and contribution of sample points with larger optimization errors to model optimization. Therefore, by decaying the training threshold as the number of training rounds increases, sample points with larger optimization errors have a greater influence on parameter adjustment in the early stage of training, enabling rapid parameter adjustment, while sample points with smaller optimization errors have a greater influence in the later stage of training, enabling fine-tuning of the parameters.
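A small sketch of one possible decay schedule for the training threshold, assuming exponential decay with an illustrative initial value of 1 and decay rate of 0.8; the fixed-step, multi-step, or cosine annealing schedules mentioned above could be substituted.

```python
def decayed_threshold(initial_beta, decay_rate, round_number):
    """Exponentially decay the training threshold as training proceeds, so
    early rounds weight large-error samples more and later rounds weight
    small-error samples more."""
    return initial_beta * (decay_rate ** round_number)


for r in range(5):
    print(r, round(decayed_threshold(1.0, 0.8, r), 3))  # 1.0, 0.8, 0.64, 0.512, 0.41
```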
In some examples, the first loss function L_1 can be represented by the following formula (1):

L_1 = 0.5 · |f(x) - y|^2 / β, if |f(x) - y| < β    (1)

The second loss function L_2 can be represented by the following formula (2):

L_2 = |f(x) - y| - β, if |f(x) - y| ≥ β    (2)

where f(x) represents the first sample prediction result, y represents the first real sample result, and β represents the training threshold. During the initial stage of training, β may take a value of 1 or any other suitable value.
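A minimal sketch transcribing formulas (1) and (2) directly; the threshold β is passed in explicitly so that it can be decayed between training rounds as described above.

```python
def piecewise_loss(prediction, target, beta):
    """Loss from formulas (1) and (2): quadratic below the training
    threshold beta, linear at or above it."""
    error = abs(prediction - target)
    if error < beta:
        return 0.5 * error ** 2 / beta     # formula (1)
    return error - beta                    # formula (2)


print(piecewise_loss(0.40, 0.45, beta=1.0))   # small error, quadratic branch: about 0.00125
print(piecewise_loss(0.40, 2.00, beta=1.0))   # large error, linear branch: about 0.6
```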
For videos in different length intervals, the real user viewing durations of the sample videos can be normalized when training the model, so that the prediction effect of the model is not influenced by the absolute durations during training. That is, because normalized results are used when training on videos in different length intervals, the model parameters allow the model to learn the viewing characteristics of users for videos in each length interval.
In some embodiments, the normalization parameter corresponding to an object class may be an interval parameter of the classification variable corresponding to that object class. Taking the case where the classification variable is the video length as an example, the interval parameter may be the maximum value of the video length interval corresponding to the video category. In this way, the real user viewing durations of the sample videos within the video category can be normalized to between 0 and 1. In other examples, the interval parameter may also be set to any other value in the video length interval corresponding to the video category; in this case, the normalized real result may be greater than 1. For example, when the right endpoint of the interval corresponding to the classification variable is unbounded, the value of any intermediate point in the interval may be selected as the normalization parameter. During training, the normalization parameter can also be decayed as the number of training rounds increases; such decay may be linear, exponential, or take any other possible form.
FIG. 4A illustrates an example graph of regression loss calculated from a loss function according to an embodiment of the disclosure. As can be seen from fig. 4A, regardless of the value of the training threshold β, the loss calculated using the loss function may be closer to the result of the L2 loss when the regression loss error is small (i.e., the difference between the predicted result and the actual result is small), and may be closer to the result of the L1 loss when the regression loss error is large (i.e., the difference between the predicted result and the actual result is large).
Fig. 4B shows an example graph of gradients of a loss function according to an embodiment of the disclosure. As can be seen from fig. 4B, as the value of the training threshold β is continuously attenuated, the gradient value of the loss function corresponding to the sample with smaller regression loss error (i.e., smaller difference between the predicted result and the real result) is continuously increased. That is, attenuating the training threshold β as training proceeds can increase the impact and contribution of sample points with smaller errors to the model parameters.
In some embodiments, after training the first and second prediction models using the first and second sample subsets, respectively, the method 300 may further include jointly training the first and second prediction models in a multi-task training framework to obtain final parameters of the first and second prediction models. The generalization capability of the first and second prediction models can be further improved by such multi-task training. The first and second prediction models may be jointly trained using any existing multi-task training framework. For example, the multi-task training framework used in embodiments of the present disclosure may be implemented using the MMoE (Multi-gate Mixture-of-Experts) or PLE (Progressive Layered Extraction) models.
Fig. 5 illustrates an example diagram of a multi-task training framework in accordance with an embodiment of the present disclosure.
As shown in fig. 5, the multi-task training framework 500 may include an input 501, an expert network 502, Gate networks 503-A and 503-B, a first model 504, a second model 505, and a first output 506 and a second output 507 corresponding to the first model 504 and the second model 505, respectively.
The first model 504 and the second model 505 may be a first prediction model and a second prediction model that are trained by using the training method described in connection with fig. 3. For example, the first model 504 may be used for user viewing duration predictions for videos of a first video category and the second model 505 may be used for user viewing duration predictions for videos of a second video category.
It will be appreciated that although the first model 504 and the second model 505 are two different models, the feature representations that need to be learned for the two tasks (user viewing duration prediction for videos of the first video category and user viewing duration prediction for videos of the second video category) are similar, so the commonality between the task predicted by the first model and the task predicted by the second model can be learned through the shared expert network 502 and the Gate networks 503-A and 503-B, thereby further enhancing the generalization capability of the models.
Although only two models are shown in fig. 5, it is to be understood that the multitasking training framework shown in fig. 5 may be used for more models of multitasking.
In some embodiments, the input 501 may correspond to at least one prediction feature of the object to be processed. Taking a video as the object to be processed as an example, the input 501 may include user features of the user who viewed the video, individual (item) features of the video, and so on. The individual features of the video may include video identifiers, video content categories, endorsements, comments, collections, historical click-through rates, etc. Further, the input 501 may also include a tag indicating the classification attribute of the corresponding object to be processed. With such a tag, the object class of the object to be processed corresponding to the input 501 can be determined, and it can further be determined which model (e.g., the first model or the second model) to use in subsequent processing to obtain the final output result.
In the multi-task training framework shown in fig. 5, for the kth task (k = 1 or 2), its output can be expressed as formula (3):

y_k = f_k( Σ_{i=1}^{n} g_k(x)_i · E_i(x) )    (3)

where n represents the number of sub-expert networks in the expert network 502, g_k(x)_i represents the output of the Gate network of the kth task for the ith sub-expert network when the input is x, E_i(x) represents the output of the ith sub-expert network when the input is x, and f_k represents the model of the kth task. When k = 1, f_k may represent the first model; when k = 2, f_k may represent the second model.
Each sub-expert network may be implemented as a fully connected network, and each Gate network may be implemented as a linear transformation followed by a softmax: the Gate network maps the input x to n dimensions, and applying a softmax over the n resulting values yields a weight for each sub-expert network.
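A minimal sketch of formula (3), assuming the sub-expert networks are reduced to single linear layers, the Gate networks are a linear map followed by a softmax as just described, and the two towers are trivial stand-ins for the first and second models; all dimensions and weights are illustrative.

```python
import math
import random

random.seed(0)


def softmax(values):
    exps = [math.exp(v) for v in values]
    total = sum(exps)
    return [e / total for e in exps]


def linear(x, weight_rows):
    """Apply a linear transformation given as a list of weight rows."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in weight_rows]


# Assumed sizes: input dimension 4, n = 3 sub-expert networks, expert output dim 2.
dim, n_experts, expert_dim = 4, 3, 2
expert_weights = [[[random.uniform(-1, 1) for _ in range(dim)]
                   for _ in range(expert_dim)] for _ in range(n_experts)]
gate_weights = [[[random.uniform(-1, 1) for _ in range(dim)]
                 for _ in range(n_experts)] for _ in range(2)]    # one gate per task
towers = [lambda h: sum(h), lambda h: sum(h) / 2]                 # stand-ins for f_1, f_2


def mmoe_forward(x, task_k):
    """Formula (3): weight each sub-expert's output E_i(x) by the task's
    gate output g_k(x)_i, sum them, then apply the task's model f_k."""
    expert_outputs = [linear(x, w) for w in expert_weights]       # E_i(x)
    gate = softmax(linear(x, gate_weights[task_k]))               # g_k(x)_i
    mixed = [sum(g * e[d] for g, e in zip(gate, expert_outputs))
             for d in range(expert_dim)]
    return towers[task_k](mixed)                                  # f_k(...)


x = [0.2, 0.5, 0.1, 0.9]
print(mmoe_forward(x, task_k=0), mmoe_forward(x, task_k=1))
```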
As previously described, since the input x includes a label indicating the object category, the final output result can be obtained using the first model or the second model based on the label determination.
In multi-task training, the loss function of the multiple tasks can be expressed as formula (4):

L = Σ_{i=1}^{N} α_i · l_i    (4)

where l_i represents the loss of the ith task, α_i represents the weight of the ith task, and N represents the total number of tasks.
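A one-function sketch of formula (4) with assumed task weights; in practice the weights α_i would come from the importance-based or heuristic selection described below.

```python
def multitask_loss(task_losses, task_weights):
    """Formula (4): weighted sum of the per-task losses."""
    return sum(a * l for a, l in zip(task_weights, task_losses))


print(multitask_loss([0.3, 0.5], [0.6, 0.4]))   # 0.3*0.6 + 0.5*0.4, about 0.38
```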
The weight of each task may be determined by the importance of the respective task. For example, heuristic algorithms (such as reinforcement learning or evolutionary learning) may be used to determine the weights of the various tasks.
Fig. 6 shows an exemplary block diagram of a data processing apparatus according to an embodiment of the present disclosure.
As shown in fig. 6, the data processing apparatus 600 may include a pending object determination unit 610, an object class determination unit 620, a prediction model determination unit 630, and a prediction unit 640.
Wherein the object to be processed determining unit 610 may be configured to determine the object to be processed. The object class determination unit 620 may be configured to determine an object class to which the object to be processed belongs based on the classification attribute of the object to be processed. The prediction model determination unit 630 may be configured to determine a prediction model for the object to be processed based on the object class. The prediction unit 640 may be configured to process at least one prediction feature of the object to be processed using the prediction model to obtain a prediction result of the object to be processed. Wherein the prediction result and the classification attribute are the same type of variable.
Steps S202 to S208 shown in fig. 2 may be performed by using units 610 to 640 shown in fig. 6, and will not be described again.
FIG. 7 illustrates an exemplary block diagram of an apparatus for training a predictive model in accordance with an embodiment of the disclosure.
As shown in fig. 7, the apparatus 700 may include a sample determination unit 710, a classification unit 720, a first prediction model training unit 730, and a second model training unit 740.
Wherein the sample determination unit 710 may be configured to determine a sample set comprising a plurality of sample objects. The classification unit 720 may be configured to determine a first sample subset and a second sample subset of the sample set based on the classification properties of the sample object. The first predictive model training unit 730 may be configured to train the first predictive model using the first sample objects in the first subset of samples. The second model training unit 740 may be configured to train the second predictive model with the second sample object in the second sample subset. The first prediction model and the second prediction model are used for processing at least one prediction feature of the object to be processed to obtain a prediction result of the object to be processed, wherein the prediction result and the classification attribute are variables of the same type.
Steps S302 to S308 shown in fig. 3 may be performed by using the units 710 to 740 shown in fig. 7, and will not be described again.
In the technical scheme of the disclosure, the related processes of collecting, storing, using, processing, transmitting, providing, disclosing and the like of the personal information of the user accord with the regulations of related laws and regulations, and the public order colloquial is not violated.
According to embodiments of the present disclosure, there is also provided an electronic device, a readable storage medium and a computer program product.
Referring to fig. 8, a block diagram of an electronic device 800 that may be a server or a client of the present disclosure, which is an example of a hardware device that may be applied to aspects of the present disclosure, will now be described. Electronic devices are intended to represent various forms of digital electronic computer devices, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the electronic device 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the electronic device 800 can also be stored. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Various components in electronic device 800 are connected to I/O interface 805, including: an input unit 806, an output unit 807, a storage unit 808, and a communication unit 809. The input unit 806 may be any type of device capable of inputting information to the electronic device 800, the input unit 806 may receive input numeric or character information and generate key signal inputs related to user settings and/or function control of the electronic device, and may include, but is not limited to, a mouse, a keyboard, a touch screen, a trackpad, a trackball, a joystick, a microphone, and/or a remote control. The output unit 807 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, video/audio output terminals, vibrators, and/or printers. The storage unit 808 may include, but is not limited to, magnetic disks, optical disks. The communication unit 809 allows the electronic device 800 to exchange information/data with other devices over computer networks, such as the internet, and/or various telecommunications networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers and/or chipsets, such as bluetooth (TM) devices, 802.11 devices, wiFi devices, wiMax devices, cellular communication devices, and/or the like.
The computing unit 801 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 801 performs the various methods and processes described above, such as methods 200, 300. For example, in some embodiments, the methods 200, 300 may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 800 via the ROM 802 and/or the communication unit 809. One or more of the steps of the methods 200, 300 described above may be performed when a computer program is loaded into the RAM 803 and executed by the computing unit 801. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the methods 200, 300 by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that steps may be reordered, added, or deleted using the various forms of flows shown above. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, as long as the desired results of the technical solutions of the present disclosure can be achieved; no limitation is imposed herein.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the foregoing methods, systems, and apparatus are merely exemplary embodiments or examples, and that the scope of the present disclosure is not limited by these embodiments or examples, but is defined only by the granted claims and their equivalents. Various elements of the embodiments or examples may be omitted or replaced with equivalent elements thereof. Furthermore, the steps may be performed in an order different from that described in the present disclosure. Further, various elements of the embodiments or examples may be combined in various ways. It should be noted that, as technology evolves, many of the elements described herein may be replaced by equivalent elements that appear after the present disclosure.

Claims (12)

1. A method for training a predictive model, comprising:
determining a sample set comprising a plurality of sample objects;
determining a first sample subset and a second sample subset in the sample set based on a classification attribute of the sample objects, wherein the first sample subset comprises at least one first sample object belonging to a first video length interval and the second sample subset comprises at least one second sample object belonging to a second video length interval;
training a first predictive model using first sample objects in a first subset of samples;
training a second predictive model with a second sample object in a second subset of samples, wherein the training process for either of the first predictive model and the second predictive model comprises:
determining current parameters of the model;
processing at least one sample characteristic of a sample object in a sample subset corresponding to the model by utilizing the current parameter to obtain a sample prediction result of the sample object;
determining a real sample result of the sample object; and
adjusting the current parameter based on the sample prediction result and the real sample result, including:
determining a first loss based on the sample prediction result and the real sample result, including:
when the difference between the sample prediction result and the real sample result is less than a training threshold, determining the first loss by using a first loss function, and
when the difference between the sample prediction result and the real sample result is not less than the training threshold, determining the first loss by using a second loss function, wherein the larger the difference between the sample prediction result and the real sample result, the larger the corresponding first loss, and the training threshold decays as the number of training iterations increases; and
adjusting the current parameter based on the first loss; and
the first prediction model and the second prediction model are used for processing at least one prediction feature of an object to be processed to obtain a prediction result of the object to be processed, wherein the prediction result and the classification attribute are variables of the same type, the sample object and the object to be processed are videos, and the classification attribute is a video length.
2. The method of claim 1, wherein the number of sample objects in the first sample subset is the same as the number of sample objects in the second sample subset.
3. The method of claim 1, wherein the loss determined using the first loss function and the loss determined using the second loss function are the same when the difference between the sample prediction result and the real sample result is equal to the training threshold.
4. The method of claim 1, wherein the first loss function is represented by the following formula (1):
L1 = 0.5|f(x)-y|²/β (1)
the second loss function is represented by the following formula (2):
L2 = |f(x)-y|-β (2)
where f(x) represents the sample prediction result, y represents the real sample result, and β represents the training threshold.
5. The method of claim 1, further comprising: performing multi-task training on the first prediction model and the second prediction model by using a multi-task training framework to obtain final parameters of the first prediction model and final parameters of the second prediction model.
6. A data processing method, comprising:
determining an object to be processed;
determining an object class to which the object to be processed belongs based on a classification attribute of the object to be processed, wherein the object to be processed is a video, the classification attribute is a video length, and determining the object class to which the object to be processed belongs based on the classification attribute of the object to be processed comprises:
determining a video length interval to which the video belongs based on the video length of the video; and
determining the identification of the video length interval as the object category;
determining a prediction model for the object to be processed from a first prediction model and a second prediction model based on the object class, wherein the first prediction model and the second prediction model are obtained by training based on the method according to any one of claims 1-5; and
processing at least one prediction feature of the object to be processed by using the prediction model to obtain a prediction result of the object to be processed, wherein the prediction result and the classification attribute are variables of the same type, and the prediction result is the predicted time for which the user watches the video.
7. The data processing method of claim 6, wherein the determining a prediction model for the object to be processed from the first prediction model and the second prediction model based on the object class comprises:
determining to use the first prediction model for the object to be processed in response to determining that the object class to which the object to be processed belongs is a first video length interval; and
in response to determining that the object class to which the object to be processed belongs is a second video length interval, determining to use the second predictive model for the object to be processed.
8. The data processing method according to claim 6, wherein the processing the at least one prediction feature of the object to be processed by using the prediction model to obtain the prediction result of the object to be processed includes:
processing at least one prediction feature of the object to be processed by using the prediction model to obtain a normalized prediction result of the object to be processed; and
processing the normalized prediction result by using a normalization parameter for the object class to obtain the prediction result.
9. An apparatus for training a predictive model, comprising:
a sample determination unit configured to determine a sample set including a plurality of sample objects;
a classification unit configured to determine a first sample subset and a second sample subset in the sample set based on a classification attribute of the sample objects, wherein the first sample subset comprises at least one first sample object belonging to a first video length interval and the second sample subset comprises at least one second sample object belonging to a second video length interval;
a first predictive model training unit configured to train a first predictive model using first sample objects in a first subset of samples;
a second prediction model training unit configured to train a second prediction model with a second sample object in a second subset of samples, the first and second prediction models being homogeneous models, wherein the training process for any one of the first and second prediction models comprises:
determining current parameters of the model;
processing at least one sample characteristic of a sample object in a sample subset corresponding to the model by utilizing the current parameter to obtain a sample prediction result of the sample object;
determining a real sample result of the sample object; and
adjusting the current parameter based on the sample prediction result and the real sample result, including:
determining a first loss based on the sample prediction result and the real sample result, including:
when the difference between the sample prediction result and the real sample result is less than a training threshold, determining the first loss by using a first loss function, and
when the difference between the sample prediction result and the real sample result is not less than the training threshold, determining the first loss by using a second loss function, wherein the larger the difference between the sample prediction result and the real sample result, the larger the corresponding first loss, and the training threshold decays as the number of training iterations increases; and
adjusting the current parameter based on the first loss; and
the first prediction model and the second prediction model are used for processing at least one prediction feature of an object to be processed to obtain a prediction result of the object to be processed, wherein the prediction result and the classification attribute are variables of the same type, the sample object and the object to be processed are videos, and the classification attribute is a video length.
10. A data processing apparatus comprising:
a to-be-processed object determining unit configured to determine an object to be processed;
an object class determining unit configured to determine an object class to which the object to be processed belongs based on a classification attribute of the object to be processed, wherein the object to be processed is a video, the classification attribute is a video length, and determining the object class to which the object to be processed belongs based on the classification attribute of the object to be processed includes:
determining a video length interval to which the video belongs based on the video length of the video; and
determining the identification of the video length interval as the object category;
a prediction model determination unit configured to determine a prediction model for the object to be processed from a first prediction model and a second prediction model based on the object class, wherein the first prediction model and the second prediction model are obtained by training based on the method according to any one of claims 1-5; and
a prediction unit configured to process at least one prediction feature of the object to be processed by using the prediction model to obtain a prediction result of the object to be processed, wherein the prediction result and the classification attribute are variables of the same type, and the prediction result is the predicted time for which a user watches the video.
11. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
12. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-6.
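The training flow recited in claims 1-5 can be pictured with a short, self-contained sketch. The sketch below is only an illustration under stated assumptions: the claims do not fix the boundary between the two video length intervals, the model family, or how the training threshold decays, so the 60-second boundary (LENGTH_BOUNDARY_S), the linear model fitted by gradient descent, and the exponential decay factor used here are hypothetical choices made for the example. Only the split into two length-based sample subsets, one model per subset, the piecewise loss of formulas (1) and (2), and a threshold that shrinks as training proceeds are taken from the claims.

```python
# Sketch of the claimed training flow under the assumptions stated above.
import numpy as np

LENGTH_BOUNDARY_S = 60.0  # assumed boundary between the two video length intervals


def piecewise_loss_and_grad(pred, target, beta):
    """Formulas (1)/(2): quadratic branch for small errors, linear branch otherwise."""
    diff = pred - target
    small = np.abs(diff) < beta
    # formula (1): L1 = 0.5|f(x)-y|^2 / beta     formula (2): L2 = |f(x)-y| - beta
    loss = np.where(small, 0.5 * diff ** 2 / beta, np.abs(diff) - beta)
    grad = np.where(small, diff / beta, np.sign(diff))  # dL/dpred per sample
    return loss.mean(), grad / len(pred)


def train_one_model(features, targets, epochs=50, lr=0.1, beta0=1.0, beta_decay=0.95):
    """Fit a linear predictor on one sample subset; the threshold beta decays each epoch."""
    w, b, beta = np.zeros(features.shape[1]), 0.0, beta0
    for _ in range(epochs):
        pred = features @ w + b                      # sample prediction result
        _, grad = piecewise_loss_and_grad(pred, targets, beta)
        w -= lr * (features.T @ grad)                # adjust current parameters
        b -= lr * grad.sum()
        beta *= beta_decay                           # training threshold decays over time
    return w, b


def split_by_length(video_lengths, features, targets):
    """First subset: videos in the first length interval; second subset: the rest."""
    first = video_lengths < LENGTH_BOUNDARY_S
    return (features[first], targets[first]), (features[~first], targets[~first])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 200, 8
    features = rng.normal(size=(n, d))           # prediction features of each sample video
    video_lengths = rng.uniform(5, 600, size=n)  # classification attribute: video length (s)
    targets = rng.uniform(0, 1, size=n)          # e.g. normalized watch time (real sample result)

    (f1, t1), (f2, t2) = split_by_length(video_lengths, features, targets)
    first_model = train_one_model(f1, t1)        # first prediction model
    second_model = train_one_model(f2, t2)       # second prediction model
```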
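Claims 6-8 and 10 describe the matching inference flow: identify the video length interval of the incoming video (its object class), select the prediction model trained for that interval, run it on the prediction features, and map the normalized output back to the original scale with the normalization parameter for that class. The sketch below is a hypothetical illustration only; the 60-second boundary, the placeholder model weights, and the (mean, std) form chosen for the normalization parameters are assumptions, not details fixed by the claims.

```python
# Sketch of the claimed inference flow under the assumptions stated above.
import numpy as np

LENGTH_BOUNDARY_S = 60.0  # assumed boundary between the two video length intervals

# Placeholder per-interval models and normalization parameters (would come from training).
MODELS = {
    "short": (np.full(8, 0.1), 0.0),   # (weights, bias) of the first prediction model
    "long":  (np.full(8, 0.2), 0.0),   # (weights, bias) of the second prediction model
}
NORM_PARAMS = {
    "short": (30.0, 10.0),    # assumed (mean, std) of watch time in the first interval, seconds
    "long":  (300.0, 120.0),  # assumed (mean, std) of watch time in the second interval, seconds
}


def object_class(video_length_s: float) -> str:
    """Map the classification attribute (video length) to its length interval, the object class."""
    return "short" if video_length_s < LENGTH_BOUNDARY_S else "long"


def predict_watch_time(video_length_s: float, features: np.ndarray) -> float:
    """Predicted watch time of one video, denormalized with its interval's parameters."""
    cls = object_class(video_length_s)
    w, b = MODELS[cls]                    # prediction model selected by object class
    normalized = float(features @ w + b)  # normalized prediction result
    mean, std = NORM_PARAMS[cls]
    return normalized * std + mean        # prediction result on the original scale


if __name__ == "__main__":
    video_features = np.random.default_rng(1).normal(size=8)  # prediction features of one video
    print(predict_watch_time(45.0, video_features))   # routed to the model for the first interval
    print(predict_watch_time(420.0, video_features))  # routed to the model for the second interval
```

Keeping each length interval's own normalization statistics is one way to realize the per-class normalization parameter of claim 8; other parameterizations would fit the claim equally well.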
CN202210088356.XA 2022-01-25 2022-01-25 Data processing method and method for training predictive model Active CN114443896B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202210088356.XA CN114443896B (en) 2022-01-25 2022-01-25 Data processing method and method for training predictive model
PCT/CN2022/107883 WO2023142408A1 (en) 2022-01-25 2022-07-26 Data processing method and method for training prediction model
JP2022581432A JP2024507602A (en) 2022-01-25 2022-07-26 Data processing methods and methods for training predictive models

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210088356.XA CN114443896B (en) 2022-01-25 2022-01-25 Data processing method and method for training predictive model

Publications (2)

Publication Number Publication Date
CN114443896A CN114443896A (en) 2022-05-06
CN114443896B true CN114443896B (en) 2023-09-15

Family

ID=81369719

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210088356.XA Active CN114443896B (en) 2022-01-25 2022-01-25 Data processing method and method for training predictive model

Country Status (3)

Country Link
JP (1) JP2024507602A (en)
CN (1) CN114443896B (en)
WO (1) WO2023142408A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114443896B (en) * 2022-01-25 2023-09-15 百度在线网络技术(北京)有限公司 Data processing method and method for training predictive model

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106331778A (en) * 2015-07-06 2017-01-11 腾讯科技(深圳)有限公司 Video recommendation method and device
CN111353631A (en) * 2019-11-26 2020-06-30 国网山东省电力公司电力科学研究院 Thermal power plant condenser vacuum degree prediction method based on multilayer LSTM
CN111428008A (en) * 2020-06-11 2020-07-17 北京百度网讯科技有限公司 Method, apparatus, device and storage medium for training a model
CN111523575A (en) * 2020-04-13 2020-08-11 中南大学 Short video recommendation model based on short video multi-modal features
CN111738441A (en) * 2020-07-31 2020-10-02 支付宝(杭州)信息技术有限公司 Prediction model training method and device considering prediction precision and privacy protection
CN111753863A (en) * 2019-04-12 2020-10-09 北京京东尚科信息技术有限公司 Image classification method and device, electronic equipment and storage medium
CN111898744A (en) * 2020-08-10 2020-11-06 维森视觉丹阳有限公司 TDLAS trace gas concentration detection method based on pooled LSTM
CN113065614A (en) * 2021-06-01 2021-07-02 北京百度网讯科技有限公司 Training method of classification model and method for classifying target object
CN113221689A (en) * 2021-04-27 2021-08-06 苏州工业职业技术学院 Video multi-target emotion prediction method and system
CN113554180A (en) * 2021-06-30 2021-10-26 北京百度网讯科技有限公司 Information prediction method, information prediction device, electronic equipment and storage medium
CN113569129A (en) * 2021-02-02 2021-10-29 腾讯科技(深圳)有限公司 Click rate prediction model processing method, content recommendation method, device and equipment
CN113723378A (en) * 2021-11-02 2021-11-30 腾讯科技(深圳)有限公司 Model training method and device, computer equipment and storage medium
CN113821682A (en) * 2021-09-27 2021-12-21 深圳市广联智通科技有限公司 Multi-target video recommendation method and device based on deep learning and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103744928B (en) * 2013-12-30 2017-10-03 北京理工大学 A kind of network video classification method based on history access record
CN104657468B (en) * 2015-02-12 2018-07-31 中国科学院自动化研究所 The rapid classification method of video based on image and text
CN110532996B (en) * 2017-09-15 2021-01-22 腾讯科技(深圳)有限公司 Video classification method, information processing method and server
US20200134734A1 (en) * 2018-10-26 2020-04-30 Cover Financial, Inc. Deep learning artificial intelligence for object classification
DE102019213547A1 (en) * 2019-09-05 2021-03-11 Robert Bosch Gmbh Device and method for training a model and vehicle
CN114443896B (en) * 2022-01-25 2023-09-15 百度在线网络技术(北京)有限公司 Data processing method and method for training predictive model

Also Published As

Publication number Publication date
CN114443896A (en) 2022-05-06
WO2023142408A1 (en) 2023-08-03
JP2024507602A (en) 2024-02-21

Similar Documents

Publication Publication Date Title
CN113807440B (en) Method, apparatus, and medium for processing multimodal data using neural networks
CN112579909A (en) Object recommendation method and device, computer equipment and medium
CN114004985B (en) Character interaction detection method, neural network, training method, training equipment and training medium thereof
CN114791982B (en) Object recommendation method and device
CN114445667A (en) Image detection method and method for training image detection model
CN114443896B (en) Data processing method and method for training predictive model
CN113642635B (en) Model training method and device, electronic equipment and medium
CN114443989A (en) Ranking method, training method and device of ranking model, electronic equipment and medium
CN115600646B (en) Language model training method, device, medium and equipment
CN113722594B (en) Training method and device of recommendation model, electronic equipment and medium
CN114881170B (en) Training method for neural network of dialogue task and dialogue task processing method
CN114219046B (en) Model training method, matching method, device, system, electronic equipment and medium
CN113596011B (en) Flow identification method and device, computing device and medium
CN114120416A (en) Model training method and device, electronic equipment and medium
CN116842156B (en) Data generation method, device, equipment and medium
CN116070711B (en) Data processing method, device, electronic equipment and storage medium
CN115033782B (en) Object recommendation method, training method, device and equipment of machine learning model
CN114117046B (en) Data processing method, device, electronic equipment and medium
CN114219079A (en) Feature selection method and device, model training method and device, equipment and medium
CN116306862A (en) Training method, device and medium for text processing neural network
CN117710504A (en) Image generation method, training method, device and equipment of image generation model
CN114692780A (en) Entity information classification method, classification model training method, device and electronic equipment
CN116842156A (en) Data generation method, device, equipment and medium
CN116739136A (en) Data prediction method, device, electronic equipment and medium
CN116579404A (en) Model training method, image processing device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant