CN113486982A - Model training method and device and electronic equipment - Google Patents

Model training method and device and electronic equipment

Info

Publication number
CN113486982A
CN113486982A (application number CN202110876129.9A)
Authority
CN
China
Prior art keywords
sample
target
target training
training
training task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110876129.9A
Other languages
Chinese (zh)
Inventor
衣建中
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN202110876129.9A priority Critical patent/CN113486982A/en
Publication of CN113486982A publication Critical patent/CN113486982A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/953 Querying, e.g. by the use of web search engines
    • G06F16/9535 Search customisation based on user profiles and personalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

Embodiments of the present disclosure disclose a model training method, a model training device, and electronic equipment. One embodiment of the method comprises: acquiring a reference sample, where the reference sample is used for training a push model for recommendation information; for a target training task among at least one training task, modifying the reference sample based on a sample modification rule of the target training task to generate the target training sample required by that task; and transmitting the required target training sample to the target training task, so that the target training task uses the received target training sample to train and generate a target push model. This embodiment can improve the model training efficiency of the target training task.

Description

Model training method and device and electronic equipment
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular to a model training method and device and electronic equipment.
Background
Before a model is trained, the training samples used to train it need to be generated. When the number of training samples is large, a model with good performance can usually be trained.
In practical application, different model training modes are often adopted for different model training scenes.
Disclosure of Invention
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
The embodiment of the disclosure provides a model training method and device and electronic equipment, which can improve the model training efficiency of a target training task.
In a first aspect, an embodiment of the present disclosure provides a model training method, including: acquiring a reference sample, where the reference sample is used for training a push model for recommendation information; for a target training task among at least one training task, modifying the reference sample based on a sample modification rule of the target training task to generate the target training sample required by that task; and transmitting the required target training sample to the target training task, so that the target training task uses the received target training sample to train and generate a target push model.
In a second aspect, an embodiment of the present disclosure provides a model training apparatus, including: an acquisition unit configured to acquire a reference sample, where the reference sample is used for training a push model for recommendation information; a generating unit configured to, for a target training task among at least one training task, modify the reference sample based on a sample modification rule of the target training task to generate the target training sample required by that task; and a transmission unit configured to transmit the required target training sample to the target training task, so that the target training task uses the received target training sample to train and generate a target push model.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: one or more processors; storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to carry out the model training method according to the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a computer readable medium, on which a computer program is stored, which when executed by a processor, performs the steps of the model training method according to the first aspect.
The model training method, model training device, and electronic device provided by the embodiments of the present disclosure first obtain a reference sample, where the reference sample is used for training a push model for recommendation information. Then, for a target training task among at least one training task, the reference sample is modified based on a sample modification rule of the target training task to generate the target training sample required by that task. Finally, the required target training sample is transmitted to the target training task, so that the target training task trains and generates a target push model using the received target training sample. On one hand, after the target training task receives the target training sample, it can directly use the sample to train and generate the target push model, which can improve the model training efficiency of the target training task. On the other hand, the target training task does not need to store the reference sample, which can reduce the waste of storage space on the device where the target training task is located.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
FIG. 1 is a flow diagram of some embodiments of a model training method of the present disclosure;
FIG. 2 is a flow chart of further embodiments of the model training method of the present disclosure;
FIG. 3 is a flow diagram of the generation of a second intermediate sample in some embodiments of the model training method of the present disclosure;
FIG. 4 is a schematic structural diagram of some embodiments of a model training apparatus of the present disclosure;
FIG. 5 is an exemplary system architecture to which the model training method of the present disclosure may be applied in some embodiments;
fig. 6 is a schematic diagram of a basic structure of an electronic device provided in accordance with some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a", "an", and "the" in this disclosure are intended to be illustrative rather than limiting, and those skilled in the art should understand them as meaning "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Referring to fig. 1, a flow diagram of some embodiments of a model training method according to the present disclosure is shown. As shown in fig. 1, the model training method includes the following steps:
step 101, a reference sample is obtained.
In this embodiment, the execution subject of the model training method may acquire the reference sample. The reference sample is used for training a push model for recommendation information.
The recommendation information may include various information recommended to the user. In some scenarios, the recommendation information may include advertisement information recommended to the user. At this time, the push model may be a model for pushing an advertisement to a user.
And 102, modifying the reference sample for a target training task in at least one training task based on a sample modification rule of the target training task to generate a target training sample required by the target training task.
In this embodiment, for a target training task among at least one training task, the execution subject may modify the reference sample based on the sample modification rule of the target training task, thereby generating the target training sample required by that task.
The at least one training task may be located on the same or different servers.
The target training task may include some or all of the at least one training task.
The target training task has corresponding sample modification rules. Wherein the sample modification rule is a rule for modifying the reference sample. In practical applications, the sample modification rules may be stored in various forms. For example, the sample modification rules may be in the form of a script file.
It should be noted that different target training tasks may have the same or different sample modification rules. Thus, the same modification or different modifications to the reference sample may be achieved for different target training tasks. Further, the same or different target training samples may be generated for different target training tasks.
It will be appreciated that the reference samples described above may include a plurality of samples used to train the push model. Accordingly, the target training samples may include a plurality of samples that the target training task uses to generate the target push model.
And 103, transmitting the required target training sample to the target training task so that the target training task utilizes the received target training sample to train and generate a target push model.
In this embodiment, the execution subject may transmit, to the target training task, the target training samples that the task requires. The target training task may then use the received target training samples to train and generate a target push model.
Optionally, each reference sample includes a sample user characteristic, a sample recommendation information characteristic, a sample recommendation information browsing characteristic, and sample identification information. The sample identification information is used to identify whether the sample user performed a target operation on the sample recommendation information.
The sample recommendation information features may include relevant features of the sample recommendation information. For example, the sample recommendation information features may include, but are not limited to, at least one of: the product information of the sample product presented by the sample recommendation information, the brand information of the sample product, and the display time of the sample recommendation information.
The sample recommendation information browsing characteristics may include relevant characteristics characterizing the sample user browsing the sample recommendation information. For example, the sample recommendation information browsing features may include, but are not limited to, at least one of: the browsing time of the sample user on the sample recommendation information and the browsing place of the sample user on the sample recommendation information.
As noted above, the sample identification information identifies whether the sample user performed the target operation on the sample recommendation information. The target operation may include, but is not limited to, at least one of: clicking the sample recommendation information, or purchasing the sample product described by the sample recommendation information.
In some scenarios, the target training task may train the initial model into the target push model by using the sample user characteristics, the sample recommendation information characteristics, and the sample recommendation information browsing characteristics included in the reference sample as inputs of the initial model, and using the sample identification information included in the reference sample as the expected output of the initial model.
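As a concrete illustration of this input/expected-output setup, the following is a minimal sketch. The numeric feature encoding and the logistic-regression model are assumptions for illustration only; the patent does not fix a model type or feature representation.

```python
# Minimal training sketch: concatenated sample features are the model
# inputs; sample identification information (1 = target operation
# performed, e.g. a click) is the expected output. Feature values,
# their encoding, and the model type are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [user features | recommendation info features | browsing features]
X = np.array([
    [0.3, 1.0, 0.7, 12.0],
    [0.9, 0.0, 0.1, 20.0],
    [0.5, 1.0, 0.4, 15.0],
])
y = np.array([1, 0, 1])  # sample identification information

initial_model = LogisticRegression()
target_push_model = initial_model.fit(X, y)  # trained push model
```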
It should be noted that the initial models used for different target training tasks may be the same or different. Accordingly, the target push models finally generated by different target training tasks may be the same or different.
In the related art, in order for at least one training task to train and generate a target push model, each training task first needs to read and store the reference sample, and then generates its required target training sample by modifying that reference sample. On one hand, after reading and storing the reference sample, each training task can only complete model training after a series of operations such as modifying the reference sample, so the model training efficiency of each training task may be low. On the other hand, each training task stores the reference sample, which may waste storage space on the device where that training task is located.
In this embodiment, for a target training task among at least one training task, the reference sample can be modified based on the sample modification rule of the target training task to generate the target training sample required by that task, and the required target training sample can then be transmitted to the target training task, so that the target training task uses the received target training sample to train and generate a target push model. On one hand, after the target training task receives the target training sample, it can directly use the sample to train and generate the target push model, which can improve the model training efficiency of the target training task. On the other hand, the target training task does not need to store the reference sample, which can reduce the waste of storage space on the device where the target training task is located.
In some scenarios, the target training samples required by each target training task differ only slightly from the reference sample. Therefore, the target training sample required by each target training task can be obtained through minor modifications of the reference sample. Target training samples for multiple target training tasks can thus be generated quickly by making minor modifications to the same reference sample, which can improve the efficiency of training models for multiple training tasks simultaneously.
In some embodiments, the sample modification rule is used to implement at least one of the following processes on the reference sample: at least one of addition, deletion, replacement, and renaming of sample user features; at least one of addition, deletion, replacement, and renaming of sample recommendation information features; and at least one of addition, deletion, replacement, and renaming of sample recommendation information browsing features.
In practical applications, the features that different training tasks add to, delete from, replace in, or rename within the sample user features may be the same or different. A sketch of how such a rule might be represented and applied follows.
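The sketch below assumes a dict-based rule format and plain-dict samples; the patent only says rules may be stored, e.g., as script files, so this representation is purely an assumption.

```python
# Hedged sketch of applying a sample modification rule to one sample.
# The rule format (dict with add/delete/replace/rename keys) is an
# assumption; the patent leaves the concrete rule representation open.
def apply_rule(sample: dict, rule: dict) -> dict:
    modified = dict(sample)  # leave the shared reference sample untouched
    for name in rule.get("delete", []):
        modified.pop(name, None)
    for old_name, new_name in rule.get("rename", {}).items():
        if old_name in modified:
            modified[new_name] = modified.pop(old_name)
    modified.update(rule.get("replace", {}))  # overwrite existing features
    modified.update(rule.get("add", {}))      # add task-specific features
    return modified

# Two tasks, two rules, same reference sample -> two target samples.
reference = {"browse_time": 1690000000, "user_city": "Beijing"}
rule_a = {"add": {"user_age_bucket": 3}}
rule_b = {"rename": {"browse_time": "ts"}, "delete": ["user_city"]}
sample_a = apply_rule(reference, rule_a)
sample_b = apply_rule(reference, rule_b)
```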
In some embodiments, the target training task may perform the following steps.
Specifically, the target push model generated by training is sent to the corresponding target service platform, so that the target service platform pushes recommendation information to the user by using the target push model.
Different target training tasks can send their respectively generated target push models to different target service platforms.
In some scenarios, the target service platform may input the user characteristics, the recommendation information characteristics, and the recommendation information browsing characteristics into the target push model to obtain an output result of the target push model. Further, if the output result indicates that the user performs the target operation on the recommendation information, the target service platform may push the recommendation information to the user.
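A minimal serving-side sketch of that decision follows; the predict_proba-style interface and the 0.5 threshold are illustrative assumptions not specified by the patent.

```python
# Decide whether to push recommendation info to a user, per the scenario
# above. The predict_proba interface and the 0.5 threshold are assumptions.
def should_push(push_model, user_feats, rec_feats, browse_feats, threshold=0.5):
    features = [list(user_feats) + list(rec_feats) + list(browse_feats)]
    prob = push_model.predict_proba(features)[0][1]  # P(target operation)
    return prob >= threshold  # push only if the target operation is likely
```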
Therefore, after the target training task generates the target push model, the target push model can be sent to the corresponding target service platform, so that the corresponding target service platform can push recommendation information to the user. Therefore, on the basis of improving the model training efficiency of the target training task, the efficiency of pushing recommendation information to the user by the target service platform can be improved.
In some embodiments, the target training task trains the generation of the target push model by the following means.
First, at least one initial model is trained into a push model using the target training samples.
Similar to the foregoing analysis, the target training task may use the sample user characteristics, the sample recommendation information characteristics, and the sample recommendation information browsing characteristics included in the reference sample as inputs of the initial model, and use the sample identification information included in the reference sample as the expected output of the initial model, thereby training the initial model into a push model.
Second, the push model with the best performance among the at least one push model generated by training is taken as the target push model.
In some scenarios, the target training task may determine a performance index value for each push model, and then take the push model with the highest performance index value as the target push model. The performance index may be, for example, the AUC (Area Under Curve) metric.
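For instance, selection by AUC could look like the following sketch. roc_auc_score is the standard scikit-learn AUC routine; the held-out evaluation set and the predict_proba interface are assumptions carried over from the earlier training sketch.

```python
# Pick the target push model: the candidate with the highest AUC on
# held-out evaluation samples. Candidates are assumed to expose
# predict_proba, as in the earlier training sketch.
from sklearn.metrics import roc_auc_score

def pick_target_push_model(push_models, X_eval, y_eval):
    return max(
        push_models,
        key=lambda m: roc_auc_score(y_eval, m.predict_proba(X_eval)[:, 1]),
    )
```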
In this way, the target training task can train at least one initial model into a push model using the same target training samples, and then select the push model with the best performance as the target push model. The target training task can thus generate a target push model with better performance.
On the basis, if the target service platform pushes recommendation information to the user by using the target pushing model sent by the target training task, the accuracy of pushing the recommendation information to the user by the target service platform can be improved.
In some embodiments, the reference samples are pre-serialized.
After serialization, the reference samples are samples in the form of a byte stream. In some scenarios, the reference sample may be obtained by serializing online data (i.e., data generated online).
It will be appreciated that the reference samples are serialized into a byte stream to facilitate storage and transmission of the reference samples.
In this case, the execution subject may perform step 102 according to the flow shown in fig. 2. The flow comprises the following steps:
step 201, a first intermediate sample is generated based on the deserialization of the reference sample. Through deserialization, the reference samples in the form of byte streams can be converted into real samples (i.e., samples that can be queried, deleted, or added with features).
Typically, by deserializing each reference sample, a corresponding first intermediate sample may be generated.
Step 202, for a target training task in the at least one training task, executing the following generation steps.
Step 2021, modifying the first intermediate sample based on the sample modification rule of the target training task to generate a second intermediate sample.
Step 2022, generating the target training samples required by the target training task based on the re-serialization of the second intermediate samples.
It can be seen that, in a scenario in which the reference sample is serialized in advance, a first intermediate sample is generated by deserializing the reference sample, and the features are then modified on the basis of that intermediate sample. Thus, even when the reference sample is serialized in advance, modification of the reference sample's features can still be achieved. This broadens the application scenarios in which the target training samples required by a target training task can be generated.
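Putting steps 201 through 2022 together, a compact sketch of the deserialize-modify-reserialize flow might look as follows, with pickle standing in for whatever byte-stream format the system actually uses (an assumption).

```python
# End-to-end sketch of fig. 2: byte stream -> first intermediate sample
# -> rule-modified second intermediate sample -> re-serialized target
# training sample. pickle is a stand-in serialization format.
import pickle

def make_target_sample(serialized_reference: bytes, rule: dict) -> bytes:
    first_intermediate = pickle.loads(serialized_reference)   # step 201
    # Step 2021: apply the task's rule (feature addition only, for brevity).
    second_intermediate = {**first_intermediate, **rule.get("add", {})}
    return pickle.dumps(second_intermediate)                  # step 2022
```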
In some embodiments, the sample modification rules are used to implement adding target features to the reference samples described above.
Optionally, the target feature includes at least one of a sample user feature, a sample recommendation information feature, and a sample recommendation information browsing feature that need to be added to the reference sample.
In this case, the execution subject may perform step 2021 according to the flow shown in fig. 3. The flow comprises the following steps:
step 301, obtaining target features that need to be added to the reference sample by the target training task based on the sample modification rule of the target training task.
Alternatively, the target feature may be obtained from online data (i.e., data generated online).
In some scenarios, the sample modification rule includes a storage address of the target feature. The execution subject may obtain the target feature from that storage address.
Step 302, adding the feature-coded target feature to the first intermediate sample to generate a second intermediate sample.
In practical applications, the execution subject may perform feature coding on the target feature in various ways. For example, the target feature may be encoded by hash coding.
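As one way hash coding could be realized, the sketch below buckets a feature value with a stable hash. MD5 and the bucket count of 2**20 are illustrative assumptions, not details taken from the patent.

```python
# Hash-based feature coding: map a raw feature value to a stable integer
# bucket id before it is added to the intermediate sample. MD5 and the
# bucket count are illustrative choices.
import hashlib

def hash_encode(feature_value: str, num_buckets: int = 2 ** 20) -> int:
    digest = hashlib.md5(feature_value.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_buckets
```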
Typically, by adding a target feature to each first intermediate sample, a corresponding second intermediate sample may be generated.
In practical applications, the execution subject may add the target feature to some or all of the first intermediate samples.
It can be seen that the target feature is first feature-encoded, and the encoded target feature is then added to the first intermediate sample. This can ensure the security of the target feature added to the first intermediate sample. On this basis, the security of the generated target training samples during transmission and storage can also be ensured.
With further reference to fig. 4, as an implementation of the methods shown in the above figures, the present disclosure provides some embodiments of a model training apparatus, which correspond to the method embodiment shown in fig. 1, and which can be applied in various electronic devices.
As shown in fig. 4, the model training apparatus of the present embodiment includes an acquisition unit 401, a generation unit 402, and a transmission unit 403. The obtaining unit 401 may be configured to obtain a reference sample, where the reference sample is used to train a push model of recommendation information. The generating unit 402 may be configured to, for a target training task of at least one training task, modify the reference sample based on a sample modification rule of the target training task to generate a target training sample required by the target training task. The transmitting unit 403 may be configured to transmit a required target training sample to the target training task, so that the target training task utilizes the received target training sample to train and generate a target push model.
In this embodiment, specific processes of the obtaining unit 401, the generating unit 402, and the transmitting unit 403 of the model training apparatus and technical effects brought by the specific processes may refer to related descriptions of step 101, step 102, and step 103 in the corresponding embodiment of fig. 1, which are not described herein again.
In some embodiments, each reference sample comprises a sample user characteristic, a sample recommendation information characteristic, a sample recommendation information browsing characteristic, and sample identification information, wherein the sample identification information is used to identify whether the sample user performed a target operation on the sample recommendation information.
In some embodiments, the sample modification rule is used to implement at least one of the following processes on the reference sample: at least one of addition, deletion, replacement, and renaming of sample user features; at least one of addition, deletion, replacement, and renaming of sample recommendation information features; and at least one of addition, deletion, replacement, and renaming of sample recommendation information browsing features.
In some embodiments, the reference samples are pre-serialized; the generating unit 402 is further configured to: generating a first intermediate sample based on the deserialization of the reference sample; for a target training task of the at least one training task, executing the following generating steps: modifying the first intermediate sample based on a sample modification rule of the target training task to generate a second intermediate sample; and generating the target training samples required by the target training task based on the re-serialization of the second intermediate samples.
In some embodiments, the sample modification rule is used to implement adding a target feature to the reference sample; the generating unit 402 is further configured to: acquiring target features of the target training task, which need to be added to the reference sample, based on a sample modification rule of the target training task; and adding the characteristic-coded target characteristic to the first intermediate sample to generate a second intermediate sample.
In some embodiments, the target feature includes at least one of a sample user feature, a sample recommendation information feature, and a sample recommendation information browsing feature that need to be added to the reference sample.
In some embodiments, the target training task may perform the following steps: and sending the target push model generated by training to a corresponding target service platform so that the target service platform pushes recommendation information to a user by using the target push model.
In some embodiments, the target training task is trained to generate the target push model by: training at least one initial model into a push model by using a target training sample; and taking the push model with the optimal performance in at least one push model generated by training as a target push model.
With further reference to fig. 5, fig. 5 illustrates an exemplary system architecture to which the model training methods of some embodiments of the present disclosure may be applied.
As shown in fig. 5, the system architecture may include a server 501, a network 502, and servers 503, 504. Network 502 serves as a medium for providing communication links between server 501 and servers 503 and 504, among other things. Network 502 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
Training tasks 5031 and 5041 are provided on server 503 and server 504, respectively. The training task 5031 and the training task 5041 each have a corresponding sample modification rule.
In some scenarios, for a target training task of the training tasks 5031 and 5041, the server 501 may modify the reference sample based on a sample modification rule of the target training task to generate a target training sample required for the target training task. Further, the server 501 may transmit the target training samples it needs to the target training task. Therefore, the target training task can utilize the received target training sample to train and generate the target push model.
The server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or may be implemented as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, multiple pieces of software or software modules for providing distributed services), or may be implemented as a single piece of software or software module, and is not limited in particular.
The server shown in fig. 5 may be a terminal device. The terminal device may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle-mounted terminal (e.g., a car navigation terminal), etc., and a fixed terminal such as a digital TV, a desktop computer, etc., among others.
It should be noted that the model training method provided by the embodiment of the present disclosure may be executed by the server 501, and accordingly, the model training apparatus may be disposed in the server 501.
It should be understood that the number of servers, networks, and training tasks in FIG. 5 is merely illustrative. There may be any number of servers, networks, and training tasks, as required by the implementation.
Referring now to fig. 6, a schematic diagram of an electronic device (e.g., server 501 in fig. 5) suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, the electronic device may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 601, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic apparatus are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 6 may represent one device or may represent multiple devices as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium described in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be included in the electronic device or may exist separately without being incorporated in the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring a reference sample, wherein the reference sample is used for training a push model of recommendation information; modifying the reference sample based on a sample modification rule of the target training task for a target training task in at least one training task to generate a target training sample required by the target training task; and transmitting the required target training sample to the target training task, so that the target training task utilizes the received target training sample to train and generate a target pushing model.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software, and may also be implemented by hardware. Where the names of these elements do not in some cases constitute a limitation of the elements themselves, for example, the acquisition element may also be described as an element for "acquiring a reference sample".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure in the embodiments of the present disclosure is not limited to the particular combination of the above-described features, but also encompasses other embodiments in which any combination of the above-described features or their equivalents is possible without departing from the scope of the present disclosure. For example, the above features may be interchanged with other features disclosed in this disclosure (but not limited to) those having similar functions.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (11)

1. A method of model training, comprising:
acquiring a reference sample, wherein the reference sample is used for training a push model of recommendation information;
for a target training task in at least one training task, modifying the reference sample based on a sample modification rule of the target training task to generate a target training sample required by the target training task;
and transmitting the required target training sample to the target training task, so that the target training task utilizes the received target training sample to train and generate a target push model.
2. The method of claim 1, wherein each reference sample comprises a sample user characteristic, a sample recommendation information characteristic, a sample recommendation information browsing characteristic, and sample identification information, wherein the sample identification information is used to identify whether the sample user performs a target operation on the sample recommendation information.
3. The method of claim 2, wherein a sample modification rule is used to implement at least one of the following processes for the reference sample:
at least one of addition, deletion, replacement, and renaming of sample user characteristics;
at least one of addition, deletion, replacement, and renaming of sample recommendation information features;
at least one of addition, deletion, replacement, and renaming of sample recommendation information browsing features.
4. The method of claim 1, wherein the reference samples are pre-serialized; and
the modifying, for a target training task of the at least one training task, the reference sample based on the sample modification rule of the target training task to generate a target training sample required by the target training task includes:
generating a first intermediate sample based on the deserialization of the reference sample;
for a target training task of the at least one training task, performing the following generating steps: modifying the first intermediate sample based on a sample modification rule of the target training task to generate a second intermediate sample; generating target training samples required for the target training task based on the re-serialization of the second intermediate samples.
5. The method of claim 4, wherein a sample modification rule is used to implement adding a target feature to the reference sample; and
the modifying the first intermediate sample based on the sample modification rule of the target training task to generate a second intermediate sample comprises:
acquiring target features of the target training task, which need to be added to the reference sample, based on a sample modification rule of the target training task;
adding feature-encoded target features to the first intermediate samples to generate second intermediate samples.
6. The method of claim 5, wherein target features comprise at least one of sample user features, sample recommendation information features, and sample recommendation information browsing features that need to be added to the reference sample.
7. The method according to any of claims 1-6, wherein the target training task performs the steps of:
and sending the target push model generated by training to a corresponding target service platform so that the target service platform pushes recommendation information to a user by using the target push model.
8. The method according to any one of claims 1-6, wherein the target training task is trained to generate a target push model by:
training at least one initial model into a push model by using a target training sample;
and taking the push model with the optimal performance in at least one push model generated by training as a target push model.
9. A model training apparatus, comprising:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring a reference sample, and the reference sample is used for training a push model of recommendation information;
the generating unit is used for modifying the reference sample based on a sample modification rule of the target training task for a target training task in at least one training task to generate a target training sample required by the target training task;
and the transmission unit is used for transmitting the required target training sample to the target training task so that the target training task utilizes the received target training sample to train and generate a target push model.
10. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
which, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-8.
11. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-8.
CN202110876129.9A 2021-07-30 2021-07-30 Model training method and device and electronic equipment Pending CN113486982A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110876129.9A CN113486982A (en) 2021-07-30 2021-07-30 Model training method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110876129.9A CN113486982A (en) 2021-07-30 2021-07-30 Model training method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN113486982A (en) 2021-10-08

Family

ID=77944966

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110876129.9A Pending CN113486982A (en) 2021-07-30 2021-07-30 Model training method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113486982A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111291253A (en) * 2018-12-06 2020-06-16 北京嘀嘀无限科技发展有限公司 Model training method, consultation recommendation method, device and electronic equipment
CN112182359A (en) * 2019-07-05 2021-01-05 腾讯科技(深圳)有限公司 Feature management method and system of recommendation model
US20210081788A1 (en) * 2019-09-17 2021-03-18 Ricoh Company, Ltd. Method and apparatus for generating sample data, and non-transitory computer-readable recording medium
CN112836128A (en) * 2021-02-10 2021-05-25 脸萌有限公司 Information recommendation method, device, equipment and storage medium
CN112905839A (en) * 2021-02-10 2021-06-04 北京有竹居网络技术有限公司 Model training method, model using device, storage medium and equipment

Similar Documents

Publication Publication Date Title
CN110781373B (en) List updating method and device, readable medium and electronic equipment
CN111596991A (en) Interactive operation execution method and device and electronic equipment
CN111857720A (en) Method and device for generating user interface state information, electronic equipment and medium
CN111597107A (en) Information output method and device and electronic equipment
CN111309304A (en) Method, device, medium and electronic equipment for generating IDL file
CN113191257B (en) Order of strokes detection method and device and electronic equipment
CN114170342A (en) Image processing method, device, equipment and storage medium
CN113850890A (en) Method, device, equipment and storage medium for generating animal image
CN111858381B (en) Application fault tolerance capability test method, electronic device and medium
CN111262907B (en) Service instance access method and device and electronic equipment
CN113220281A (en) Information generation method and device, terminal equipment and storage medium
CN111752644A (en) Interface simulation method, device, equipment and storage medium
CN111756953A (en) Video processing method, device, equipment and computer readable medium
CN111815508A (en) Image generation method, device, equipment and computer readable medium
CN111754600A (en) Poster image generation method and device and electronic equipment
CN113628097A (en) Image special effect configuration method, image recognition method, image special effect configuration device and electronic equipment
CN110705536A (en) Chinese character recognition error correction method and device, computer readable medium and electronic equipment
CN112507676B (en) Method and device for generating energy report, electronic equipment and computer readable medium
CN115134254A (en) Network simulation method, device, equipment and storage medium
CN113240108A (en) Model training method and device and electronic equipment
CN113486982A (en) Model training method and device and electronic equipment
CN113435528A (en) Object classification method and device, readable medium and electronic equipment
CN112230986A (en) Project file generation method and device, electronic equipment and computer readable medium
CN112488947A (en) Model training and image processing method, device, equipment and computer readable medium
CN111680754A (en) Image classification method and device, electronic equipment and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant after: Douyin Vision Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20211008