CN112862516B - Resource release method and device, electronic equipment and storage medium
- Publication number: CN112862516B
- Application number: CN202110050768.XA
- Authority: CN (China)
- Prior art keywords: resource, candidate, prediction result, materials, resources
- Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Classifications
- G06Q30/0241 — Advertisements (G06Q30/02 — Marketing; price estimation or determination; fundraising)
- G06Q30/0242 — Determining effectiveness of advertisements
- G06N20/00 — Machine learning
- Y02P90/30 — Computing systems specially adapted for manufacturing
Abstract
The disclosure relates to a resource delivery method and device, an electronic device, and a storage medium, and belongs to the field of computer technology. The method comprises the following steps: inputting a plurality of candidate materials into a first machine learning model for processing to obtain a first prediction result, wherein the first prediction result describes the delivery effect of the plurality of candidate materials as predicted by the first machine learning model; selecting a plurality of target materials from the plurality of candidate materials according to the first prediction result; combining the plurality of target materials to obtain a plurality of candidate resources; inputting the plurality of candidate resources into a second machine learning model for processing to obtain a second prediction result, wherein the second prediction result describes the delivery effect of the plurality of candidate resources as predicted by the second machine learning model; selecting a target resource from the plurality of candidate resources according to the second prediction result; and delivering the target resource. Because the scheme generates resources automatically and intelligently, it improves both the efficiency and the effect of resource delivery.
Description
Technical Field
The disclosure relates to the field of computer technology, and in particular to a resource delivery method, a resource delivery device, an electronic device, and a storage medium.
Background
At present, many advertisers place resources, such as advertisements, on platforms where users can view them.
In the related art, resources are delivered manually. First, operations personnel produce the creative of a resource by hand. They then organize one or more creatives into a resource and set its configuration parameters. Finally, they organize one or more resources into a resource plan and deliver the plan through a platform tool.
This approach relies on a large number of manual operations, resulting in low delivery efficiency.
Disclosure of Invention
The disclosure provides a resource delivery method and device, an electronic device, and a storage medium, so as to at least solve the problem of low delivery efficiency in the related art. The technical solution of the present disclosure is as follows:
According to a first aspect of the embodiments of the present disclosure, there is provided a resource delivery method, including:
inputting a plurality of candidate materials into a first machine learning model for processing to obtain a first prediction result, wherein the first prediction result is used for describing the delivery effect of the plurality of candidate materials as predicted by the first machine learning model;
selecting a plurality of target materials from the plurality of candidate materials according to the first prediction result;
combining the plurality of target materials to obtain a plurality of candidate resources;
inputting the plurality of candidate resources into a second machine learning model for processing to obtain a second prediction result, wherein the second prediction result is used for describing the delivery effect of the plurality of candidate resources as predicted by the second machine learning model;
selecting a target resource from the plurality of candidate resources according to the second prediction result;
and delivering the target resource.
In some embodiments, the inputting the plurality of candidate materials into the first machine learning model for processing to obtain a first prediction result includes:
combining intrinsic features of a first candidate material and performance features of the first candidate material to obtain a feature combination, wherein the first candidate material is one of the plurality of candidate materials, the intrinsic features describe characteristics of the first candidate material itself, and the performance features describe the delivery effect of the first candidate material in a historical time period;
and processing the feature combination through the first machine learning model to obtain a first prediction result of the first candidate material.
In some embodiments, the first candidate material comprises a video, and the intrinsic features of the first candidate material comprise at least one of an embedding vector of video key frames, a video duration, a video style, or a video category; or,
the first candidate material comprises a cover image of a video, and the intrinsic features of the first candidate material comprise at least one of an embedding vector of the image, an image label, and an image tone; or,
the first candidate material includes text, and the intrinsic features of the first candidate material include word embedding vectors or one-hot codes.
In some embodiments, the performance features of the first candidate material include one or more of:
a click-through rate, which is the ratio of the number of clicks to the number of impressions;
a play completion rate, which is the proportion of plays that are completed;
an exposure count, which is the number of times the material is displayed;
and an activation count, which is the total number of activated users in the historical time period, where an activated user is a user who triggers the first candidate material and thereby activates an account.
In some embodiments, the first machine learning model is trained by:
acquiring a training sample, wherein the training sample comprises intrinsic features of a sample material and performance features of the sample material, and the label of the training sample is delivery effect data of the sample material in a historical time period;
and inputting the training sample into an initial machine learning model for processing, and adjusting parameters of the initial machine learning model according to the deviation between the output of the initial machine learning model and the label, to obtain the first machine learning model.
In some embodiments, the selecting a plurality of target materials from the plurality of candidate materials according to the first prediction result includes:
selecting, according to the first prediction result, materials whose predicted delivery effect is higher than a first threshold from the plurality of candidate materials as the target materials; or,
selecting, according to the first prediction result, materials whose predicted delivery effect ranks within a first preset number of positions from the plurality of candidate materials as the target materials.
In some embodiments, the plurality of candidate resources includes a first resource that has already been delivered, and the inputting the plurality of candidate resources into a second machine learning model for processing to obtain a second prediction result includes:
determining a prediction result of the first resource according to delivery effect data of the first resource in a historical time period.
In some embodiments, the selecting a target resource from the plurality of candidate resources according to the second prediction result includes:
selecting, according to the second prediction result, resources whose predicted delivery effect is higher than a second threshold from the plurality of candidate resources as the target resource; or,
selecting, according to the second prediction result, a resource whose predicted delivery effect ranks within a second preset number of positions from the plurality of candidate resources as the target resource.
According to a second aspect of the embodiments of the present disclosure, there is provided a resource delivery device, including:
a processing unit configured to input a plurality of candidate materials into a first machine learning model for processing to obtain a first prediction result, wherein the first prediction result is used for describing the delivery effect of the plurality of candidate materials as predicted by the first machine learning model;
a selecting unit configured to select a plurality of target materials from the plurality of candidate materials according to the first prediction result;
a combining unit configured to combine the plurality of target materials to obtain a plurality of candidate resources;
the processing unit being further configured to input the plurality of candidate resources into a second machine learning model for processing to obtain a second prediction result, wherein the second prediction result is used for describing the delivery effect of the plurality of candidate resources as predicted by the second machine learning model;
the selecting unit being further configured to select a target resource from the plurality of candidate resources according to the second prediction result;
and a delivery unit configured to deliver the target resource.
In some embodiments, the processing unit is configured to: combine intrinsic features of a first candidate material and performance features of the first candidate material to obtain a feature combination, wherein the first candidate material is one of the plurality of candidate materials, the intrinsic features describe characteristics of the first candidate material itself, and the performance features describe the delivery effect of the first candidate material in a historical time period; and process the feature combination through the first machine learning model to obtain a first prediction result of the first candidate material.
In some embodiments, the first candidate material comprises a video, and the intrinsic features of the first candidate material comprise at least one of an embedding vector of video key frames, a video duration, a video style, or a video category; or,
the first candidate material comprises a cover image of a video, and the intrinsic features of the first candidate material comprise at least one of an embedding vector of the image, an image label, and an image tone; or,
the first candidate material includes text, and the intrinsic features of the first candidate material include word embedding vectors or one-hot codes.
In some embodiments, the performance features of the first candidate material include one or more of:
a click-through rate, which is the ratio of the number of clicks to the number of impressions;
a play completion rate, which is the proportion of plays that are completed;
an exposure count, which is the number of times the material is displayed;
and an activation count, which is the total number of activated users in the historical time period, where an activated user is a user who triggers the first candidate material and thereby activates an account.
In some embodiments, the first machine learning model is trained by:
acquiring a training sample, wherein the training sample comprises intrinsic features of a sample material and performance features of the sample material, and the label of the training sample is delivery effect data of the sample material in a historical time period;
and inputting the training sample into an initial machine learning model for processing, and adjusting parameters of the initial machine learning model according to the deviation between the output of the initial machine learning model and the label, to obtain the first machine learning model.
In some embodiments, the selecting unit is configured to: select, according to the first prediction result, materials whose predicted delivery effect is higher than a first threshold from the plurality of candidate materials as the target materials; or select, according to the first prediction result, materials whose predicted delivery effect ranks within a first preset number of positions from the plurality of candidate materials as the target materials.
In some embodiments, the plurality of candidate resources includes a first resource that has already been delivered, and the processing unit is configured to: determine a prediction result of the first resource according to delivery effect data of the first resource in a historical time period.
In some embodiments, the selecting unit is configured to: select, according to the second prediction result, resources whose predicted delivery effect is higher than a second threshold from the plurality of candidate resources as the target resource; or select, according to the second prediction result, a resource whose predicted delivery effect ranks within a second preset number of positions from the plurality of candidate resources as the target resource.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device, comprising:
One or more processors;
one or more memories for storing the processor-executable program code;
wherein the one or more processors are configured to execute the program code to implement the resource delivery method described above.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium storing instructions which, when executed by a processor of an electronic device, cause the electronic device to perform the above-described resource delivery method.
According to a fifth aspect of the embodiments of the present disclosure, there is provided a computer program product comprising one or more pieces of program code which, when executed by a processor of an electronic device, enable the electronic device to perform the above-described resource delivery method.
The technical solution provided by the embodiments of the present disclosure brings at least the following beneficial effects:
This machine-learning-based resource delivery method uses one model to predict the delivery effect of materials, so as to select target materials that are expected to perform well after delivery, and uses another model to predict the delivery effect of the resources formed by combining those materials, so as to select target resources that are expected to perform well after delivery; the selected target resources are then delivered. On one hand, because resources are generated automatically and intelligently, the method largely removes the dependence on manual operation and thereby improves delivery efficiency. On the other hand, because the resources that are delivered are the ones predicted to perform well, the method also improves the delivery effect.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
FIG. 1 is a schematic diagram illustrating interaction of an advertiser, a user, and a platform according to an example embodiment;
FIG. 2 is a schematic diagram illustrating a relationship of an ad group, an ad campaign, and an ad creative in accordance with an exemplary embodiment;
FIG. 3 is a block diagram of an advertisement delivery system, according to an example embodiment;
FIG. 4 is a flowchart illustrating an advertisement delivery method according to an exemplary embodiment;
FIG. 5 is a schematic diagram illustrating intelligent advertisement delivery according to an exemplary embodiment;
FIG. 6 is a block diagram of an advertisement delivery device, according to an exemplary embodiment;
FIG. 7 is a block diagram of a terminal shown in accordance with an exemplary embodiment;
fig. 8 is a block diagram of a server, according to an example embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
The resource in the embodiments of the present disclosure is, for example, an advertisement resource. An advertisement resource is a multimedia resource. Multimedia resources include any one of, or a combination of, images, text, audio, and video. For example, the advertisement is a video advertisement, i.e., one advertisement is one video clip. For example, the advertisement is an image-text advertisement, and one advertisement comprises an image and text. For example, the advertisement is a text advertisement (a plain advertising slogan), and one advertisement is a piece of text. For example, an advertisement includes a combination of multiple types of multimedia material; for instance, one advertisement is the content obtained by adding text and a cover image to a video clip.
Cover image of a video: an image used to describe the video content. The cover image of a video is the image displayed when the terminal shows the video in a non-playing state. For example, the terminal displays a video list that includes the cover image of video 1; the cover image of video 1 describes the key content of video 1, and after the user clicks on the cover image of video 1, the terminal jumps to the playback page of video 1.
Referring to FIG. 1, FIG. 1 shows a schematic diagram of interactions among three parties: an advertiser, a user, and a platform. The advertiser is the party that delivers the advertisement, for example an application (APP) vendor or a game vendor. The platform is, for example, a short-video platform. During the three-party interaction, the advertiser purchases traffic on the traffic platform. In addition, the advertiser constructs advertisement creatives from raw materials (short videos, cover images, advertising copy). The advertiser then packages the advertisement creatives and sets the corresponding delivery parameters, finally forming an advertisement plan. The advertiser places the advertisement plan through an interface provided by the platform. The platform is responsible for promoting the advertisement creatives on the platform and collecting a fee (the advertising fee) from the advertiser. A user who watches an advertisement and clicks the link in it downloads and activates the APP, becoming a user of the advertiser. The user consumes directly or indirectly inside the APP, and the advertiser obtains revenue.
Referring to FIG. 2, for example, an advertiser establishes an account on the platform, and a plurality of advertisement plans are established under the account. An advertisement plan contains one or more ad groups. Each ad group corresponds to one or more ad creatives and one set of delivery configuration parameters (the "set configuration" in FIG. 2). Advertisements in the same ad group share the same delivery configuration parameters. The delivery configuration parameters include delivery scope, target application, target population, budget and schedule, optimization objective, charging mode, price, and so on. The ad creative is the basic unit of delivery; one ad creative is a single advertisement. An ad creative mainly comprises raw materials (a short video, a cover, advertising copy, and the like). Each ad creative corresponds to creative configuration parameters, which include, but are not limited to, the ad placement position, creative production, ad monitoring, and the like.
Traditional advertisement delivery mainly relies on manually creating ad creatives and manually generating advertisement plans.
In some practices, advertisements are delivered manually through an interface provided by the traffic platform; the specific procedure comprises the following three steps.
Step (1): manually produce the ad creative, i.e., construct the ad creative from the raw materials.
Step (2): organize one or more creatives into an ad group and set its configuration parameters.
Step (3): organize one or more ad groups into an advertisement plan, which is delivered via the platform tool.
However, this approach has the following two drawbacks.
Drawback (1): manual delivery requires a large amount of manpower for repetitive operations.
Drawback (2): the effect of manual delivery depends mostly on the intuition and experience of the optimizer (the person doing the delivery), and there is no general method that can be reproduced.
The embodiments of the present disclosure are mainly based on machine learning models and generate ad creatives and advertisement plans automatically and intelligently, thereby improving the efficiency and effect of advertisement delivery.
The system operating environment provided by the embodiments of the present disclosure is described below.
Fig. 3 is a block diagram illustrating a resource delivery system according to an exemplary embodiment. The resource delivery system includes a terminal 101 and a traffic platform 110.
The terminal 101 is connected to the traffic platform 110 through a wireless or wired network. The terminal 101 may be at least one of a smartphone, a game console, a desktop computer, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, or a laptop computer. The terminal 101 installs and runs an application that supports displaying or delivering resources, such as a live-streaming application, a multimedia application, or a short-video application. The terminal 101 is exemplarily a terminal used by a user, and a user account is logged into the application running on the terminal 101.
The traffic platform 110 includes at least one of a server, a plurality of servers, a cloud computing platform, or a virtualization center. The traffic platform 110 provides background services for applications that support displaying or delivering resources. Optionally, the traffic platform 110 and the terminal 101 work cooperatively during resource delivery. For example, the traffic platform 110 takes on the primary work and the terminal 101 the secondary work; alternatively, the traffic platform 110 takes on the secondary work and the terminal 101 the primary work; alternatively, the traffic platform 110 or the terminal 101 takes on the work alone.
Optionally, the traffic platform 110 includes a resource generation server 1101, a database 1102, and a resource delivery server 1103. The resource generation server 1101 and the resource delivery server 1103 may be provided on separate computers or on the same computer. The resource generation server 1101 provides background services related to generating resources. The resource delivery server 1103 delivers the generated resources. There may be one or more resource generation servers 1101 and resource delivery servers 1103. For example, when there are multiple resource generation servers 1101, at least two of them provide different services, and/or at least two of them provide the same service, for example in a load-balanced manner; this is not limited by the embodiments of the present disclosure.
The terminal 101 may refer broadly to one of a plurality of terminals; this embodiment is illustrated with the terminal 101 only.
Those skilled in the art will appreciate that the number of terminals 101 may be greater or smaller. For example, there may be only one terminal 101, or tens or hundreds of terminals or more, in which case the resource delivery system further includes other terminals. The embodiments of the present disclosure do not limit the number of terminals or the types of devices.
FIG. 4 is a flow chart illustrating a resource delivery method according to an exemplary embodiment. The method shown in FIG. 4 includes the following steps S401 to S406 and is performed by an electronic device, such as a server.
The method shown in FIG. 4 involves a plurality of machine learning models. To distinguish between them, the machine learning model used when selecting materials is called the "first machine learning model", and the machine learning model used when selecting resources is called the "second machine learning model". The first or second machine learning model is, for example, a regression model or a classification model, and includes, but is not limited to, a support vector machine, a linear model, a tree model (e.g., a random forest), a neural network (e.g., a deep neural network), and the like. The input data of the first or second machine learning model may take the form of a vector, a matrix, a tensor, or the like.
In step S401, a plurality of candidate materials are input into a first machine learning model for processing to obtain a first prediction result.
For example, there are n candidate materials. Candidate material 1 is input into the first machine learning model for processing, and the model outputs the prediction result of candidate material 1; candidate material 2 is input into the first machine learning model for processing, and the model outputs the prediction result of candidate material 2; and so on, until candidate material n is input into the first machine learning model for processing, and the model outputs the prediction result of candidate material n. The first prediction result includes the prediction result of candidate material 1, the prediction result of candidate material 2, ..., and the prediction result of candidate material n.
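As an illustrative sketch only (the patent does not specify an implementation), the per-material scoring described above could look like the following Python loop; the model object, the feature-builder helper, and the field names are assumptions introduced here for illustration:

```python
from typing import Any, Dict, List

def score_candidate_materials(materials: List[Dict[str, Any]], first_model) -> List[Dict[str, Any]]:
    """Run each candidate material through the first machine learning model
    and collect its predicted delivery effect (the first prediction result)."""
    predictions = []
    for material in materials:
        features = build_feature_combination(material)  # hypothetical helper; see the concatenation sketch below
        score = first_model.predict([features])[0]      # predicted delivery effect for this material
        predictions.append({"material_id": material["id"], "predicted_effect": score})
    return predictions
```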
A candidate material is sometimes referred to as a raw material. Candidate materials include, but are not limited to, at least one of a short video, a cover image, or text (e.g., resource words). The candidate materials are, for example, materials stored in a material library.
The first machine learning model is used to predict how a material will perform after it is delivered, so that promising materials (target materials) can be screened out according to the prediction result of the first machine learning model.
The first prediction result describes the expected performance of a candidate material. Specifically, the first prediction result describes the delivery effect of at least one candidate material as predicted by the first machine learning model, and is sometimes also referred to as the model's score for the candidate material. For example, the first prediction result takes the form of a number; the larger the number, the higher the model scores the candidate material, that is, the better the delivery effect the model predicts for the candidate material.
In some embodiments, the first prediction result includes one or more of: a predicted return on investment, a predicted activation count, and a predicted comprehensive revenue. Because parameters such as return on investment, activation count, and comprehensive revenue can serve as the optimization targets of a resource, predicting these parameters with the model and delivering resources according to the model's prediction optimizes the delivery effect, for example by improving the return on investment, activation count, and comprehensive revenue of the delivered resources.
Return on investment (ROI) is the ratio between revenue and investment. Here, it is the ratio between the revenue that delivering the resource brings to the deliverer and the fee that the deliverer pays to the target platform. The target platform is the platform that displays the resource, for example a short-video platform, a life-sharing platform, a question-and-answer community, or a social platform.
For example, where the resource is an advertisement, the investment in the return on investment is the fee paid to the platform on which the resource is displayed, and the revenue is the revenue brought to the owner of the resource by platform users who download the APP after viewing the resource.
The activation count is the total number of activated users within a period of time; an activated user is a user who activates an account after viewing the resource. For example, where the resource is an advertisement, an activated user is a user who downloads the APP after viewing the resource and registers and activates an account.
The comprehensive revenue is the difference between the total revenue that all users bring from the resource within a period of time and the fee charged by the target platform. For example, where the resource is an advertisement, the comprehensive revenue is the total revenue from all users of the resource creative within a period of time minus the fee charged by the platform during that period.
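Written as formulas, these definitions are simply the following restatement (the notation is ours, not the patent's):

```latex
\mathrm{ROI} = \frac{\text{revenue brought by the delivered resource}}{\text{fee paid to the target platform}}, \qquad
\text{comprehensive revenue} = \text{total user revenue in the period} - \text{platform fee in the period}
```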
In some embodiments, the first machine learning model is trained as follows: acquire a training sample, where the training sample comprises the intrinsic features of a sample material and the performance features of the sample material, and the label of the training sample is the delivery effect data of the sample material in a historical time period; input the training sample into an initial machine learning model for processing, and adjust the parameters of the initial machine learning model according to the deviation between the output of the initial machine learning model and the label, to obtain the first machine learning model.
The intrinsic features describe the characteristics of the sample material itself. For example, if the sample material is a video (e.g., a short video), its intrinsic features may include at least one of an embedding vector of video key frames, the video duration, the video style, or the video category. As another example, if the sample material is a cover image, its intrinsic features include at least one of an embedding vector of the image, an image label, and the image tone. As another example, if the sample material is text (e.g., resource words), its intrinsic features are word2vec vectors or one-hot codes.
In this way, the features of various sample materials such as short videos, cover images, and text can be accurately quantified through concrete data such as embedding vectors, durations, and labels, which improves the accuracy of the features.
Performance features are sometimes also referred to as effect features. They describe the delivery effect of the sample material in a historical time period and may include at least one of the click-through rate, the play completion rate, the exposure count, and the activation count. The click-through rate is the ratio of the number of clicks to the number of impressions. The play completion rate is the proportion of plays that are completed. The exposure count is the number of times the material is displayed. The activation count is the total number of activated users in the historical time period, where an activated user is a user who triggers the material and thereby activates an account. For example, if the sample material is a short video, its performance features include the click-through rate over the past 24 hours, the 3-second play rate, and the like.
In this way, the performance of the sample material can be accurately quantified through concrete data such as the click-through rate, play completion rate, exposure count, and activation count, which improves the accuracy of the performance features.
Training the model with the intrinsic features of sample materials and their performance over a period of time as training samples helps the model learn the mapping between a material's features and its performance, so that in the prediction stage, once the model receives the features of a given material, it can accurately predict how well that material will perform.
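A minimal training sketch consistent with the description above, assuming scikit-learn and a tree model (the patent lists tree models only as one of several options); the feature matrix X, in which each row is the concatenation of a sample material's intrinsic and performance features, and the label vector y of historical delivery effect are assumed inputs, filled with placeholder data here:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# X: one row per sample material (intrinsic features + performance features, already concatenated)
# y: delivery effect data of each sample material in the historical time period (the label)
X = np.random.rand(1000, 64)          # placeholder data for illustration only
y = np.random.rand(1000)

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

first_model = RandomForestRegressor(n_estimators=100, random_state=0)
first_model.fit(X_train, y_train)     # parameters are adjusted to reduce the deviation from the labels

print("validation R^2:", first_model.score(X_val, y_val))
```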
In the embodiments of the present disclosure, the first machine learning model is used to process a plurality of candidate materials. For the reader's convenience, the following takes the processing of a first candidate material as an example. The first candidate material is one of the plurality of candidate materials; the other candidate materials are processed in the same way. In other words, each candidate material is processed in the same manner as the first candidate material.
In some embodiments, the first machine learning model includes a video prediction network, an image prediction network, and a text prediction network, and processing at least one candidate material through the first machine learning model to output the first prediction result includes: if the first candidate material is a video, inputting it into the video prediction network for processing; if the first candidate material is an image, inputting it into the image prediction network for processing; and if the first candidate material is text, inputting it into the text prediction network for processing.
In this way, a separate model can be built for each type of material, which improves the accuracy of the prediction results for the different material types.
In some embodiments, processing the at least one candidate material through the first machine learning model to output the first prediction result includes: combining the intrinsic features of the first candidate material and the performance features of the first candidate material to obtain a feature combination, and processing the feature combination through the first machine learning model to obtain the first prediction result of the first candidate material.
The feature combination includes the intrinsic features of the first candidate material and the performance features of the first candidate material. In other words, the feature combination is feature data constructed in the dimension of a single material. The intrinsic features describe the characteristics of the first candidate material itself, and the performance features describe the delivery effect of the first candidate material in a historical time period.
By predicting with the combination of intrinsic and performance features, the model considers both the characteristics of the material and its historical performance, which improves the accuracy of the prediction.
In some embodiments, concatenation is used to combine the intrinsic features and the performance features. For example, if the intrinsic features form an m-dimensional vector and the performance features form an n-dimensional vector, concatenating them yields an (m+n)-dimensional vector, and this (m+n)-dimensional vector is the feature combination, where m and n are positive integers.
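For illustration, the concatenation described above might be implemented as follows (a sketch only; the dictionary keys are assumptions, and this is the hypothetical helper referenced in the earlier scoring sketch):

```python
import numpy as np

def build_feature_combination(material) -> np.ndarray:
    """Concatenate the material's intrinsic features (m dimensions)
    with its performance features (n dimensions) into one (m+n)-dimensional vector."""
    intrinsic = np.asarray(material["intrinsic_features"], dtype=np.float32)      # e.g. key-frame embedding, duration
    performance = np.asarray(material["performance_features"], dtype=np.float32)  # e.g. CTR, completion rate, exposures
    return np.concatenate([intrinsic, performance])
```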
In some embodiments, the first candidate material comprises a video (e.g., a short video), and the intrinsic features of the first candidate material comprise at least one of an embedding vector of video key frames, the video duration, the video style, or the video category.
In some embodiments, the first candidate material comprises the cover image of a video, and the intrinsic features of the first candidate material comprise at least one of an embedding vector of the image, an image label, and the image tone. The embedding vector of the image is the vector output by a neural network after the image passes through that network. Image labels indicate the type of object (e.g., a person or an item) contained in the image, such as real person, anime, or scenery.
In some embodiments, the first candidate material comprises text, and its intrinsic features are word embedding vectors or one-hot codes.
In some embodiments, the performance features of the first candidate material include one or more of the click-through rate, the play completion rate, the exposure count, the activation count, and the like.
The click-through rate (CTR) is the ratio of the number of clicks to the number of impressions; for example, if the first candidate material is a cover image, its performance feature is the click-through rate of the cover image over a period of time (e.g., one day or one week). The play completion rate is the proportion of plays that are completed. The exposure count is the number of times the material is displayed. The activation count is the total number of activated users in the historical time period, where an activated user is a user who triggers the first candidate material and thereby activates an account.
In some embodiments, the intrinsic features or performance features of the first candidate material are extracted by a feature extraction network. For example, the first candidate material is input into the feature extraction network, which performs feature extraction on it and outputs its intrinsic features or performance features. The feature extraction network includes, but is not limited to, at least one of a word vector model, a multi-layer perceptron (MLP, also known as an artificial neural network), a deep neural network (DNN), or a convolutional neural network (CNN). Feature extraction includes, for example, at least one convolution operation, linear mapping operation, or nonlinear mapping operation. The feature extraction network may be separate from the first machine learning model (i.e., two independent models) or integrated with it, in which case the feature extraction network is part of the first machine learning model.
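As a hedged sketch of what such a feature extraction network could look like (the patent only names MLP/DNN/CNN as options and gives no architecture; the layer sizes here are arbitrary assumptions), using PyTorch:

```python
import torch
import torch.nn as nn

class MaterialFeatureExtractor(nn.Module):
    """A small MLP that maps raw material attributes to an intrinsic-feature vector."""
    def __init__(self, in_dim: int = 128, out_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64),   # linear mapping operation
            nn.ReLU(),               # nonlinear mapping operation
            nn.Linear(64, out_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# usage: embed a batch of 4 raw material attribute vectors
extractor = MaterialFeatureExtractor()
intrinsic_features = extractor(torch.randn(4, 128))
print(intrinsic_features.shape)  # torch.Size([4, 32])
```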
In step S402, a plurality of target materials are selected from the plurality of candidate materials according to the first prediction result.
A target material is sometimes also referred to as a potential material. A target material is a material that the model predicts will perform well after delivery. There may be one or more target materials, and their types include, but are not limited to, short videos, cover images, text (e.g., resource words), and the like. In other words, the selected target materials include at least one of target short videos, target cover images, or target texts.
In some embodiments, selecting at least one target material from the at least one candidate material according to the first prediction result includes: selecting, according to the first prediction result, materials whose predicted delivery effect is higher than a first threshold from the at least one candidate material as target materials. For example, the candidate materials are short videos, the first prediction result is the ROI, and the ROI threshold is 0.5; the target materials are then all short videos whose predicted first-day ROI is greater than 0.5, where 0.5 is an example of the first threshold.
In some embodiments, selecting at least one target material from the at least one candidate material according to the first prediction result includes: selecting, according to the first prediction result, materials whose predicted delivery effect ranks within a first preset number of positions from the at least one candidate material as target materials. For example, the candidate materials are cover images, the first prediction result is the click-through rate, and the first preset number of positions is 10% of the total number of cover images; the target materials are then the top 10% of covers with the highest predicted click-through rate.
By selecting target materials in these ways, materials with a good delivery effect can be screened out accurately.
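As an illustrative sketch only, the two selection strategies (threshold-based and top-N ranking) could be implemented as follows; the field names follow the earlier scoring sketch and are assumptions:

```python
from typing import Any, Dict, List

def select_by_threshold(predictions: List[Dict[str, Any]], first_threshold: float) -> List[Dict[str, Any]]:
    """Keep every material whose predicted delivery effect exceeds the first threshold (e.g. ROI > 0.5)."""
    return [p for p in predictions if p["predicted_effect"] > first_threshold]

def select_top_fraction(predictions: List[Dict[str, Any]], fraction: float = 0.1) -> List[Dict[str, Any]]:
    """Keep the materials ranked within the first preset number of positions (e.g. top 10% by predicted CTR)."""
    ranked = sorted(predictions, key=lambda p: p["predicted_effect"], reverse=True)
    top_n = max(1, int(len(ranked) * fraction))
    return ranked[:top_n]
```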
In step S403, the plurality of target materials are combined to obtain a plurality of candidate resources.
There are a number of ways to combine the target materials into resources; two combination ways are illustrated below.
Combination way one: one or more target materials of the same type are combined into a resource.
For example, one or more texts (target materials) are combined into one resource; as another example, one or more pictures (target materials) are combined into one resource; as another example, one or more videos (target materials) are combined into one resource.
Combination way two: one or more target materials of different types are combined into a resource.
For example, a video and a cover image are combined into a resource. As another example, a video, a cover image, and text are combined into a resource. As another example, a cover image and text are combined into a resource.
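One straightforward way to enumerate cross-type combinations (combination way two) is a Cartesian product of the selected target materials of each type; this is an assumed sketch, not a construction method stated in the patent:

```python
from itertools import product
from typing import Any, Dict, List

def combine_target_materials(videos: List[Any], covers: List[Any], texts: List[Any]) -> List[Dict[str, Any]]:
    """Build candidate resources by pairing every target video with every target cover image and text."""
    candidate_resources = []
    for video, cover, text in product(videos, covers, texts):
        candidate_resources.append({"video": video, "cover_image": cover, "text": text})
    return candidate_resources

# usage: 3 videos x 2 covers x 2 texts -> 12 candidate resources
resources = combine_target_materials(["v1", "v2", "v3"], ["c1", "c2"], ["t1", "t2"])
print(len(resources))  # 12
```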
In step S404, the plurality of candidate resources are input into a second machine learning model for processing to obtain a second prediction result.
The second machine learning model is used to predict how a resource will perform after it is delivered, so that promising resources (target resources) can be screened out.
The second prediction result is sometimes also referred to as the score of the creative. It describes the delivery effect of at least one candidate resource as predicted by the second machine learning model, and is sometimes also referred to as the model's score for the candidate resource. For example, the second prediction result takes the form of a number; the larger the number, the higher the model scores the candidate resource, that is, the better the delivery effect the model predicts for the candidate resource.
In some embodiments, the second prediction result includes one or more of a predicted return on investment, a predicted activation count, and a predicted comprehensive revenue. The return on investment is the ratio between the revenue that the resource brings to the deliverer and the fee that the deliverer pays to the target platform, where the target platform displays the resource. The activation count is the total number of activated users within a period of time, where an activated user is a user who activates an account after viewing the resource. The comprehensive revenue is the difference between the total revenue that all users bring from the resource within a period of time and the fee charged by the target platform. Using the return on investment, activation count, comprehensive revenue, and the like as the model's prediction targets helps the prediction result describe the quality of the resource's performance more accurately.
Optionally, the second prediction result and the first prediction result are of different dimensions. For example, the first prediction result is the predicted ROI of a candidate material, while the second prediction result is the predicted activation count of a candidate resource.
In this way, since the index of interest when recalling materials may be the ROI while the index of interest when ranking resources may be the activation count, the overall accuracy of recall and ranking is improved.
In some embodiments, the at least one candidate resource includes a first resource that has already been delivered, and processing the at least one candidate resource through the second machine learning model to output the second prediction result includes: determining the prediction result of the first resource according to the delivery effect data of the first resource in a historical time period. The larger the delivery effect data of the first resource, the larger its prediction result. For example, if a resource creative (the first resource) has already been delivered, the model (the second machine learning model) predicts its performance over a coming period of time (the second prediction result) based on its performance over a past period of time. For example, if the current time is November 27, then at 0:00 in the early morning of November 28 the delivery effect data of the resource creative on November 27 is input into the model, the model predicts the creative's performance on the next day, i.e., November 28, and the prediction result output by the model determines which resource creatives are delivered on November 28.
In one possible implementation, the delivery effect data of the first resource in the historical time period is taken directly as the prediction result of the first resource; in this case the two are identical. In another possible implementation, the delivery effect data of the first resource in the historical time period is converted based on a preset algorithm, for example multiplied by preset coefficients and summed, to obtain the prediction result of the first resource.
In this way, for resources that have already appeared, their future performance is predicted from their past delivery effect data, which helps to improve the accuracy of the prediction result and to reduce implementation complexity.
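A hedged sketch of the second implementation mentioned above (a weighted sum of historical effect metrics); the metric names and coefficient values are assumptions, not values given in the patent:

```python
from typing import Dict

def predict_effect_of_delivered_resource(history: Dict[str, float],
                                         coefficients: Dict[str, float] = None) -> float:
    """Convert yesterday's delivery effect data into a prediction for the next day
    by multiplying each metric by a preset coefficient and summing."""
    if coefficients is None:
        coefficients = {"roi": 1.0, "activation_count": 0.01, "comprehensive_revenue": 0.001}
    return sum(coefficients[k] * history.get(k, 0.0) for k in coefficients)

# usage: effect data of the first resource on November 27 -> predicted score for November 28
score = predict_effect_of_delivered_resource({"roi": 0.8, "activation_count": 120, "comprehensive_revenue": 5000})
print(round(score, 3))  # 7.0
```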
In some embodiments, the at least one candidate resource includes a second resource that has not been delivered, and processing the at least one candidate resource through the second machine learning model and outputting the second prediction result includes: determining a third resource, among the candidate resources that have been delivered, whose similarity to the second resource is higher than a threshold;
and determining the prediction result of the second resource according to the delivery effect data of the third resource in the historical time period, where the larger the delivery effect data of the third resource, the larger the prediction result of the second resource. For example, if one resource creative (the second resource) has not been delivered, another, similar resource creative (the third resource) is selected, and the model (the second machine learning model) predicts the performance of the undelivered resource creative over a future period of time (the second prediction result) from the performance of the similar resource creative over a past period of time.
In this way, for resources that have not been delivered, the effect data of resources that have been delivered is used to fill in the missing effect data, which improves the accuracy of the prediction result for resources that have not yet appeared.
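The similarity lookup for an undelivered resource might look like the sketch below, where the feature vectors, the cosine-similarity measure and the 0.8 threshold are assumptions used only for illustration.

```python
# Minimal sketch: for a creative that has never been delivered, borrow the
# historical effect score of the most similar delivered creative (cosine
# similarity over feature vectors, above a threshold). Vectors are assumptions.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def predict_undelivered(new_vec, delivered, threshold=0.8):
    """delivered: list of (feature_vector, historical_effect_score) pairs."""
    best_sim, best_score = -1.0, None
    for vec, score in delivered:
        sim = cosine(new_vec, vec)
        if sim > threshold and sim > best_sim:
            best_sim, best_score = sim, score
    return best_score  # None if no delivered creative is similar enough

delivered = [(np.array([0.9, 0.1, 0.3]), 1.6), (np.array([0.2, 0.8, 0.5]), 0.7)]
print(predict_undelivered(np.array([0.88, 0.15, 0.28]), delivered))
```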
As can be seen from the above material and resource generation process, in this embodiment the materials are selected by a model and then combined, and the resulting resources are again selected by a model, which helps to produce better resources.
In step S405, selecting a target resource from the plurality of candidate resources according to the second prediction result;
the target resource is sometimes also referred to as a potential resource. The target resource refers to a well-behaved resource after model prediction is put in. For example, the model predicts that a certain resource creative will become the head resource creative after being delivered, and then the resource creative will play the role of the target resource in the embodiment, and the delivering effect is improved by selecting the resource creative.
In some embodiments, selecting the target resource from the at least one candidate resource according to the second prediction result includes: selecting, from the plurality of candidate resources according to the second prediction result, a resource whose delivery effect is higher than a second threshold as the target resource.
In some embodiments, selecting the target resource from the at least one candidate resource according to the second prediction result includes: selecting, from the plurality of candidate resources according to the second prediction result, a resource whose delivery effect is ranked within a second preset number as the target resource.
These methods of selecting target resources make it possible to accurately screen out resources with a good delivery effect.
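Both selection rules amount to a threshold filter or a top-N cut over the predicted effects; the creative identifiers and predicted values in the sketch below are placeholders.

```python
# Minimal sketch of the two selection rules: keep creatives whose predicted
# effect exceeds a threshold, or keep the top-N by predicted effect.
predictions = {"creative_a": 1.8, "creative_b": 0.6, "creative_c": 1.2}

def select_by_threshold(preds: dict, threshold: float) -> list:
    return [cid for cid, p in preds.items() if p > threshold]

def select_top_n(preds: dict, n: int) -> list:
    return sorted(preds, key=preds.get, reverse=True)[:n]

print(select_by_threshold(predictions, 1.0))  # ['creative_a', 'creative_c']
print(select_top_n(predictions, 2))           # ['creative_a', 'creative_c']
```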
In step S406, the target resource is released.
In one exemplary scenario, the target resource is delivered through terminal and server interactions. For example, the terminal generates and transmits a resource acquisition request to the server in response to an operation triggered by the user to the resource presentation application. The server receives a resource acquisition request sent by the terminal, and responds to the resource acquisition request to send target resources to the terminal. And the terminal receives the target resource, and displays the target resource in the resource display application, so that the target resource is exposed to the user.
In some embodiments, before combining the at least one target material to obtain the at least one candidate resource, the method further comprises: selecting new materials from the at least one candidate material as target materials. New material refers to material that has not yet been delivered. For example, if a material in the material library has never been delivered as part of a resource, that material is new.
By selecting new material as material to be delivered, new material gets more chances of being exposed as part of a resource, which facilitates the cold start of new material.
In some embodiments, before combining the at least one target material to obtain the at least one candidate resource, the method further comprises: at least one target material is randomly selected from the at least one candidate material.
Randomly selecting materials to combine into resources for delivery allows more resources to be explored automatically when there is no manual intervention, and supplements resources that people would not have explored when there is manual participation, which helps to explore a larger creative space.
In some embodiments, before combining the at least one target material to obtain the at least one candidate resource, the method further comprises: selecting, from the at least one candidate material, materials whose effect data in the historical time period meets a condition, as target materials.
In some embodiments, after combining the at least one target material to obtain the at least one candidate resource, the method further comprises: a target resource is randomly selected from the at least one candidate resource.
Randomly selecting resources for delivery allows more resources to be explored automatically when there is no manual intervention, and supplements resources that people would not have explored when there is manual participation, which helps to explore a larger creative space.
In some embodiments, after combining the at least one target material to obtain the at least one candidate resource, the method further comprises: selecting, from the at least one candidate resource, a resource whose delivery effect data in the historical time period is ranked within a preset number as the target resource. For example, if the delivery effect data is the activation number, the at least one candidate resource is sorted in descending order of activation number and the first few resources with high activation numbers are selected. For another example, if the delivery effect data is the activation price, the at least one candidate resource is sorted in ascending order of activation price and the first few resources with low activation prices are selected.
In some embodiments, after the target resource is delivered, the method further comprises: stopping delivery of the target resource if the delivery effect data of the target resource meets a stop condition.
In this way a shutdown strategy is provided: resources can be shut down according to how the delivered resources perform, preventing poorly performing resources from wasting the platform's delivery resources over a long period of time.
In some embodiments, the delivery effect data satisfies the stop condition when one or more of the following (1) to (5) holds (a minimal check over these conditions is sketched after this list):
(1) The return on investment is less than a threshold. For example, if the return-on-investment threshold is 1, delivery is stopped for resources with a return on investment of less than 1.
This helps to avoid losses.
(2) The activation price is higher than the average price. For example, if the threshold for the activation price is the average price, delivery is stopped for resources whose activation price is higher than the average price.
In this way, economic waste is avoided.
(3) The cost is less than a threshold in each of a plurality of consecutive time periods. For example, if the consecutive time periods are 3 days and the cost is the daily cost, delivery is stopped for resources whose cost is less than the threshold for 3 consecutive days.
In this way, resources that are trending toward inactivity can be shut down in a more timely manner, preventing them from wasting the platform's delivery resources over a long period of time.
(4) The active duration is less than a threshold.
(5) The balance trend accords with a preset trend.
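A minimal check over conditions (1) to (3) could look like the following sketch; the thresholds, window and field names are illustrative assumptions rather than values prescribed by this disclosure.

```python
# Minimal sketch of a stop check over conditions (1)-(3) above.
def should_stop(roi: float, activation_price: float, avg_price: float,
                daily_costs: list, cost_threshold: float = 5.0,
                window: int = 3) -> bool:
    if roi < 1.0:                                    # (1) losing money
        return True
    if activation_price > avg_price:                 # (2) paying above average
        return True
    recent = daily_costs[-window:]                   # (3) resource trending toward inactivity
    if len(recent) == window and all(c < cost_threshold for c in recent):
        return True
    return False

print(should_stop(roi=1.3, activation_price=2.1, avg_price=2.5,
                  daily_costs=[4.0, 3.2, 1.5]))  # True: 3 days under the cost threshold
```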
The machine-learning-based resource delivery method described above uses one model to predict the delivery effect of materials so as to select target materials that will perform well after delivery, uses another model to predict the delivery effect of resources formed by combining those materials so as to select target resources that will perform well after delivery, and then delivers the selected target resources. On the one hand, because resources are generated automatically and intelligently, the method reduces the dependence on manual operation to a certain extent, thereby improving the efficiency of resource delivery. On the other hand, because the resources delivered are those predicted to perform well, the method improves the delivery effect.
The technical solution is described below in connection with an example.
The advertisement creatives in the following examples correspond to the resources in the method shown in FIG. 4. The optimization objectives in the following examples correspond to the prediction results in the method shown in FIG. 4.
In the following examples, a closed loop over the full delivery link can be formed without human intervention. The functions of the following examples include, but are not limited to, the following (a) to (e).
(a) A new material cold start can be performed.
(b) Independent of human experience, a larger creative space can be explored.
(c) The advertising creative can be made according to the materials.
(d) Advertisement planning and delivery cycles can be automatically organized.
(e) Ad campaigns and creatives can be shut down automatically.
The present example provides several sub-schemes to achieve the above functionality. The sub-schemes provided in this example include the following (a) to (E).
(A) New material cold start
(B) Creative space exploration
(C) Intelligent creative production
(D) Advertisement plan organization/delivery strategy
(E) Shutdown strategy
The first three sub-schemes, namely (A) through (C), are essentially based on the recall and combined-ordering framework.
Before the specific schemes are described, the basic framework of this example is introduced: the recall and combined-ordering framework.
Recall refers to recalling the original materials. Recall can essentially be understood as selecting some of the materials from a material library. Recall means include, but are not limited to, one or more of the following (1) to (4).
(1) New material full recall: i.e. selecting new material.
(2) Random recall of materials: i.e. randomly selecting a portion of the material.
(3) Rule recall: historical data is analyzed and human experience is incorporated to form recall rules, and materials are recalled according to these rules.
(4) Model recall: a machine learning model is designed and developed. Specifically, features are built in the single-material dimension, and the optimization targets of advertisement delivery (ROI, activation number, comprehensive benefit and the like) are taken as the prediction targets. The model is trained using these features and optimization targets, the trained model predicts the performance of individual materials, and potential materials are selected according to the predicted performance. Optionally, a separate machine learning model is built for each material dimension.
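Taken together, the four recall channels can be merged into one candidate pool before combination and ordering. The sketch below is only an illustration: the material records, the 30-second rule and the stand-in scoring model are assumptions.

```python
# Minimal sketch: merge the four recall channels into a deduplicated pool.
import random

def recall_new(materials):                 # (1) all never-delivered materials
    return [m for m in materials if not m["delivered"]]

def recall_random(materials, k):           # (2) random sample
    return random.sample(materials, min(k, len(materials)))

def recall_rule(materials):                # (3) a hand-written rule, e.g. short videos
    return [m for m in materials if m.get("duration", 0) <= 30]

def recall_model(materials, model, k):     # (4) top-k by model-predicted effect
    return sorted(materials, key=model, reverse=True)[:k]

def merged_recall(materials, model, k=10):
    pool = {}
    for m in (recall_new(materials) + recall_random(materials, k)
              + recall_rule(materials) + recall_model(materials, model, k)):
        pool[m["id"]] = m                  # deduplicate by material id
    return list(pool.values())

materials = [{"id": i, "delivered": i % 2 == 0, "duration": 15 + i} for i in range(20)]
print(len(merged_recall(materials, model=lambda m: m["duration"], k=5)))
```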
A specific implementation of training the machine learning model is, for example: the electronic device builds training samples, then trains the model using the training samples, and finally uses the trained model for prediction.
The training samples include features and labels. In some embodiments, the training samples are generated as follows: the electronic device combines the material's own features and its performance features over a past period of time into a feature vector x; the electronic device determines the target y to be predicted; the electronic device concatenates the features of the previous time period and the target of the next time period into a sample (x, y). After generating training samples in this manner, the electronic device trains the model with the samples (x, y). When predicting the performance of a material, the electronic device obtains a new feature vector x, takes it as the input of the model, and the model outputs a prediction result y after processing. Here y is, for example, the ROI (return on investment), the activation number, or the comprehensive benefit.
For example, the electronic device divides time into periods, with the granularity of the divided periods being, for example, one day. Every day, the electronic device counts the features x of each dimension of the material and the optimization target y of advertisement delivery. The electronic device concatenates yesterday's features of the material and today's optimization target into a sample. After the electronic device trains the model with such samples, the feature vector of the material on the current day is input into the model, and the output of the model is the predicted performance for tomorrow (such as the ROI). The electronic device then makes a decision according to the prediction result output by the model.
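The day-granularity sample construction described above can be sketched as follows. The synthetic data and the choice of a gradient-boosting regressor are assumptions for illustration; the disclosure does not mandate a particular model family.

```python
# Minimal sketch: yesterday's feature vector x paired with today's target y,
# a regressor fit on such pairs, then used to predict tomorrow's performance.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# One row per material per day: features observed on day t ...
X_day_t = rng.normal(size=(200, 8))
# ... paired with the delivery target (e.g. ROI) observed on day t+1.
y_day_t_plus_1 = X_day_t[:, 0] * 0.5 + rng.normal(scale=0.1, size=200)

model = GradientBoostingRegressor().fit(X_day_t, y_day_t_plus_1)

# At serving time: today's feature vector in, tomorrow's predicted ROI out.
todays_features = rng.normal(size=(1, 8))
print("predicted tomorrow ROI:", model.predict(todays_features)[0])
```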
Combining and ordering refers to combining the recalled raw materials to construct advertisement creatives and ordering the constructed advertisement creatives to obtain an ordering result. After the electronic device obtains the ordering result, the head creatives can be accurately selected according to it.
The ordering means include, but are not limited to, at least one of the following modes (1) to (3).
Mode (1): random/no ordering. Random ordering refers to randomly picking some or all ad creatives from the combined, constructed ad creatives.
Mode (2): rule ordering. Ordering rules are formed from certain features that make up the creatives, the creatives are ordered according to these rules, and the head creatives are taken out according to the ordering result.
Mode (3): model ordering. The creatives are scored by a machine learning model to obtain a score for each creative. The creatives are then ordered by score, and the head creatives are selected according to the ordering result.
In some embodiments, the raw materials are combined based on a Cartesian product. For example, the raw materials are short videos and covers; m short videos and n covers can compose m x n ad creatives. Some of these ad creatives may have appeared before, and their historical delivery effects (e.g., ROI data) can therefore serve as scores. Other combinations have not appeared before, and the scores of those ad creatives are unknown. This corresponds to a matrix completion problem, which can be solved with mature industry methods such as collaborative filtering, matrix factorization, and various recommendation algorithms.
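For illustration, the sketch below builds the m x n score matrix from a few delivered combinations and fills the unseen entries with a tiny rank-2 factorization. The numbers and the plain-SGD factorization are assumptions standing in for whichever mature completion method is actually used.

```python
# Minimal sketch: m short videos x n covers give m*n creatives; delivered
# creatives carry a historical score (e.g. ROI), unseen ones are NaN and are
# filled by a small matrix factorization.
import numpy as np

rng = np.random.default_rng(1)
m, n, rank = 5, 4, 2
scores = np.full((m, n), np.nan)
# A handful of creatives that have already been delivered:
scores[0, 1], scores[1, 0], scores[2, 3], scores[4, 2] = 1.8, 0.9, 1.2, 0.5

U = rng.normal(scale=0.1, size=(m, rank))
V = rng.normal(scale=0.1, size=(n, rank))
known = np.argwhere(~np.isnan(scores))

for _ in range(500):                       # plain SGD on the observed entries
    for i, j in known:
        err = scores[i, j] - U[i] @ V[j]
        U[i] += 0.05 * err * V[j]
        V[j] += 0.05 * err * U[i]

completed = U @ V.T                        # a predicted score for every combination
print(np.round(completed, 2))
```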
Cold start of new materials: based on the recall and combined-ordering framework described above, new-material cold start can be summarized as full recall of new materials plus random ordering. In addition, when cold-starting a new material (e.g., a short video), the other materials that go with it (e.g., the cover and the advertising copy) may optionally be recalled in other ways.
The creative space exploration method is equivalent to producing creatives at random. It can be summarized as: random recall/rule recall + random ordering/rule ordering. Without human intervention, the method can form seed creatives for exploration. With human participation, the creative space exploration method can supplement creatives that people would not have explored and thus expand the creative space.
The goal of the intelligent creative production method is to produce "excellent" ad creatives. The flow of the intelligent creative production method can be summarized as follows: model recall + model ordering.
The ad organization policy indicates how ad creatives are organized into an ad campaign. For example, given 60 ad creatives, an ad campaign optionally contains 60 ad groups, each including one ad creative; alternatively, an ad campaign contains 6 ad groups, each including 10 ad creatives. The advertisement organization policy includes a random combination policy or a ranking-score policy. The random combination policy indicates that ad creatives are randomly partitioned into ad groups; for example, 80 ad creatives are split, without regard to order, into 8 groups of 10 ad creatives each. The ranking-score policy indicates that ad creatives are divided into ad groups in order of their ranking score; for example, the 80 ad creatives are sorted by score and then cut, in score order, into 8 groups of 10 ad creatives each, or into 16 groups of 5 ad creatives each.
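Both grouping policies reduce to splitting a scored list of creatives; the identifiers, scores and group sizes in the sketch below are placeholders.

```python
# Minimal sketch of the two organization policies: random grouping versus
# grouping in ranking-score order.
import random

creatives = [f"creative_{i}" for i in range(80)]
scores = {c: random.random() for c in creatives}

def random_groups(items, group_size):
    shuffled = random.sample(items, len(items))
    return [shuffled[i:i + group_size] for i in range(0, len(shuffled), group_size)]

def ranked_groups(items, group_size, score):
    ordered = sorted(items, key=score.get, reverse=True)
    return [ordered[i:i + group_size] for i in range(0, len(ordered), group_size)]

print(len(random_groups(creatives, 10)))       # 8 groups of 10, in random order
print(ranked_groups(creatives, 5, scores)[0])  # the 5 highest-scoring creatives together
```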
The delivery strategy indicates how the advertisement plan is delivered. For example, the delivery strategy indicates the number of accounts used to deliver the ad campaigns and the number of ad campaigns per account per time period, e.g., how many accounts are used and how many ad campaigns are delivered per account per day.
Adjusting these two strategies, namely the advertisement organization strategy and the delivery strategy, is equivalent to adjusting hyperparameters. These two strategies are optionally determined manually, based on experience.
Optionally, the above organization policy, ranking-score policy and delivery policy are applied to scenes other than advertisement delivery; in that case the advertisement creative is replaced by another type of resource, the advertisement group by a resource group, and the advertisement plan by a resource delivery plan, and the corresponding implementation is otherwise the same as that of the organization policy, ranking-score policy and delivery policy described above.
The shutdown strategy is optionally performed based on the performance of a dimension of the advertising creative. The shutdown strategy includes, but is not limited to, the following modes (1) to (3).
Mode (1) is based on the ROI: an ad campaign or ad group with an ROI of less than 1 is turned off, so that money is not lost.
Mode (2) is based on the activation price: an ad campaign or ad group whose activation price is higher than the average price is turned off, so that money is not wasted.
Mode (3) is based on cost: an ad campaign that spends less than a certain threshold (e.g., 5 yuan) per day over a continuous period of time (e.g., 3 days) is shut down, i.e., ad campaigns that are trending toward inactivity are shut down.
Fig. 6 is a block diagram illustrating a resource delivery device according to an example embodiment. Referring to fig. 6, the apparatus includes:
The processing unit 601 is configured to perform inputting the plurality of candidate materials into the first machine learning model for processing, so as to obtain a first prediction result, where the first prediction result is used for describing the throwing effect of the plurality of candidate materials predicted by the first machine learning model;
a selecting unit 602 configured to perform selecting a plurality of target materials from a plurality of candidate materials according to the first prediction result;
a combining unit 603 configured to perform combining of the plurality of target materials to obtain a plurality of candidate resources;
the processing unit 601 is further configured to perform inputting the plurality of candidate resources into the second machine learning model for processing, so as to obtain a second prediction result, where the second prediction result is used for describing the throwing effect of the plurality of candidate resources predicted by the second machine learning model;
a selecting unit 602 further configured to perform selecting a target resource from the plurality of candidate resources according to the second prediction result;
and a release unit 604 configured to perform release of the target resource.
In some embodiments, the processing unit 601 is configured to perform: combining the self characteristics of the first candidate materials and the performance characteristics of the first candidate materials to obtain a characteristic combination, wherein the first candidate materials are one candidate material in a plurality of candidate materials, the self characteristics are used for describing the characteristics of the first candidate materials, and the performance characteristics are used for describing the throwing effect of the first candidate materials in a historical time period; and processing the feature combination through a first machine learning model to obtain a first prediction result of the first candidate material.
In some embodiments, the first candidate material comprises video, and the native features of the first candidate material comprise at least one of an embedded vector of video key frames, a video duration, a video style, or a video category; or,
the first candidate material comprises a cover image of the video, and the self characteristics of the first candidate material comprise at least one of an embedded vector of the image, an image label and an image tone; or,
the first candidate material includes text, and the first candidate material itself is characterized by a word embedding vector or a one-hot encoding.
In some embodiments, the performance characteristics of the first candidate material include one or more of:
click rate;
a play completion rate, which refers to the rate at which the material is played to completion;
exposure number, which refers to the number of times displayed;
the activation number is the total number of the activation users in the historical time period, and the activation users are the users for triggering the first candidate materials so as to activate the account.
In some embodiments, the first machine learning model is trained by:
acquiring a training sample, wherein the training sample comprises the self characteristics of sample materials and the performance characteristics of the sample materials, and the labels of the training sample are the throwing effect data of the sample materials in a historical time period;
And inputting the training sample into the initial machine learning model for processing, and adjusting parameters of the initial machine learning model according to the deviation between the output result of the initial machine learning model and the label to obtain a first machine learning model.
In some embodiments, the selection unit 602 is configured to perform: selecting, from the plurality of candidate materials according to the first prediction result, materials whose throwing effect is higher than a first threshold as the target materials; or selecting, from the plurality of candidate materials according to the first prediction result, materials whose throwing effect is ranked within a first preset number as the target materials.
In some embodiments, the plurality of candidate resources includes a first resource that has been cast, the processing unit 601 configured to perform: and determining a prediction result of the first resource according to the throwing effect data of the first resource in the historical time period.
In some embodiments, the selection unit 602 is configured to perform: selecting, from the plurality of candidate resources according to the second prediction result, a resource whose release effect is higher than a second threshold as the target resource; or selecting, from the plurality of candidate resources according to the second prediction result, a resource whose release effect is ranked within a second preset number as the target resource.
The specific manner in which the various modules perform their operations in the apparatus of the above embodiments has been described in detail in connection with the embodiments of the method, and will not be described again here.
The electronic device in the above-described method embodiments may be implemented as a terminal or a server. For example, fig. 7 shows a block diagram of a terminal 700 provided in an exemplary embodiment of the present disclosure. The terminal 700 may be a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 700 may also be referred to by other names such as user device, portable terminal, laptop terminal, or desktop terminal.
In general, the terminal 700 includes: one or more processors 701, and one or more memories 702.
The processor 701 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 701 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 701 may also include a main processor and a coprocessor. The main processor is a processor for processing data in the awake state, also referred to as a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 701 may be integrated with a GPU (Graphics Processing Unit) responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 701 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 702 may include one or more computer-readable storage media, which may be non-transitory. The memory 702 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 702 is used to store at least one program code for execution by processor 701 to implement the resource provisioning method provided by the method embodiments in the present disclosure.
In some embodiments, the terminal 700 may further optionally include: a peripheral interface 703 and at least one peripheral. The processor 701, the memory 702, and the peripheral interface 703 may be connected by a bus or signal lines. The individual peripheral devices may be connected to the peripheral device interface 703 via buses, signal lines or a circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 704, a display 705, a camera assembly 706, audio circuitry 707, and a power supply 709.
The peripheral interface 703 may be used to connect at least one I/O (Input/Output)-related peripheral device to the processor 701 and the memory 702. In some embodiments, the processor 701, the memory 702, and the peripheral interface 703 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 701, the memory 702, and the peripheral interface 703 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 704 is configured to receive and transmit RF (Radio Frequency) signals, also referred to as electromagnetic signals. The radio frequency circuit 704 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 704 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 704 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 704 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: the World Wide Web, metropolitan area networks, intranets, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 704 may also include NFC (Near Field Communication) related circuitry, which is not limited by the present disclosure.
The display screen 705 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 705 is a touch display, the display 705 also has the ability to collect touch signals at or above the surface of the display 705. The touch signal may be input to the processor 701 as a control signal for processing. At this time, the display 705 may also be used to provide virtual buttons and/or virtual keyboards, also referred to as soft buttons and/or soft keyboards. In some embodiments, the display 705 may be one, providing a front panel of the terminal 700; in other embodiments, the display 705 may be at least two, respectively disposed on different surfaces of the terminal 700 or in a folded design; in other embodiments, the display 705 may be a flexible display disposed on a curved surface or a folded surface of the terminal 700. Even more, the display 705 may be arranged in a non-rectangular irregular pattern, i.e. a shaped screen. The display 705 may be made of LCD (Liquid Crystal Display ), OLED (Organic Light-Emitting Diode) or other materials.
The camera assembly 706 is used to capture images or video. Optionally, the camera assembly 706 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting and virtual reality (VR) shooting or other fused shooting functions. In some embodiments, the camera assembly 706 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash refers to a combination of a warm-light flash and a cold-light flash, and can be used for light compensation at different color temperatures.
The audio circuit 707 may include a microphone and a speaker. The microphone is used for collecting sound waves of users and environments, converting the sound waves into electric signals, and inputting the electric signals to the processor 701 for processing, or inputting the electric signals to the radio frequency circuit 704 for voice communication. For the purpose of stereo acquisition or noise reduction, a plurality of microphones may be respectively disposed at different portions of the terminal 700. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is used to convert electrical signals from the processor 701 or the radio frequency circuit 704 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, the audio circuit 707 may also include a headphone jack.
A power supply 709 is used to power the various components in the terminal 700. The power supply 709 may be an alternating current, a direct current, a disposable battery, or a rechargeable battery. When the power supply 709 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal 700 further includes one or more sensors 710. The one or more sensors 710 include, but are not limited to: acceleration sensor 711, gyro sensor 712, pressure sensor 713, optical sensor 715, and proximity sensor 716.
The acceleration sensor 711 can detect the magnitudes of accelerations on three coordinate axes of the coordinate system established with the terminal 700. For example, the acceleration sensor 711 may be used to detect the components of the gravitational acceleration in three coordinate axes. The processor 701 may control the display screen 705 to display a user interface in a landscape view or a portrait view based on the gravitational acceleration signal acquired by the acceleration sensor 711. The acceleration sensor 711 may also be used for the acquisition of motion data of a game or a user.
The gyro sensor 712 may detect a body direction and a rotation angle of the terminal 700, and the gyro sensor 712 may collect a 3D motion of the user to the terminal 700 in cooperation with the acceleration sensor 711. The processor 701 may implement the following functions based on the data collected by the gyro sensor 712: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
The pressure sensor 713 may be disposed at a side frame of the terminal 700 and/or at a lower layer of the display screen 705. When the pressure sensor 713 is disposed at a side frame of the terminal 700, a grip signal of the user to the terminal 700 may be detected, and the processor 701 performs left-right hand recognition or quick operation according to the grip signal collected by the pressure sensor 713. When the pressure sensor 713 is disposed at the lower layer of the display screen 705, the processor 701 controls the operability control on the UI interface according to the pressure operation of the user on the display screen 705. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The optical sensor 715 is used to collect the ambient light intensity. In one embodiment, the processor 701 may control the display brightness of the display screen 705 based on the ambient light intensity collected by the optical sensor 715. Specifically, when the intensity of the ambient light is high, the display brightness of the display screen 705 is turned up; when the ambient light intensity is low, the display brightness of the display screen 705 is turned down. In another embodiment, the processor 701 may also dynamically adjust the shooting parameters of the camera assembly 706 based on the ambient light intensity collected by the optical sensor 715.
A proximity sensor 716, also referred to as a distance sensor, is typically provided on the front panel of the terminal 700. The proximity sensor 716 is used to collect the distance between the user and the front of the terminal 700. In one embodiment, when the proximity sensor 716 detects that the distance between the user and the front face of the terminal 700 gradually decreases, the processor 701 controls the display 705 to switch from the bright screen state to the off screen state; when the proximity sensor 716 detects that the distance between the user and the front surface of the terminal 700 gradually increases, the processor 701 controls the display screen 705 to switch from the off-screen state to the on-screen state.
Those skilled in the art will appreciate that the structure shown in fig. 7 is not limiting of the terminal 700 and may include more or fewer components than shown, or may combine certain components, or may employ a different arrangement of components.
The electronic device in the above method embodiment may be implemented as a server, for example, fig. 8 is a schematic structural diagram of a server provided in the embodiment of the present disclosure, where the server 800 may have a relatively large difference due to different configurations or performances, and may include one or more processors (central processing units, CPU) 801 and one or more memories 802, where at least one program code is stored in the memory 802, and the at least one program code is loaded and executed by the processor 801 to implement the resource allocation method provided in the above method embodiments. Of course, the server may also have a wired or wireless network interface, an input/output interface, and other components for implementing the functions of the device, which are not described herein.
In an exemplary embodiment, a storage medium is also provided, comprising program code, for example a memory comprising program code, executable by a processor of an electronic device to perform the above-described resource allocation method. Alternatively, the storage medium may be a non-transitory computer readable storage medium, for example, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a Read-Only optical disk (Compact Disc Read-Only Memory, CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
The user information referred to in the present disclosure may be information authorized by the user or sufficiently authorized by each party.
Resources in embodiments of the present disclosure are sometimes also referred to as content items. A content item refers to an item for presenting content, which is any one or a combination of multiple items of image, text, audio, video.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following the general principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (17)
1. A method of resource delivery, the method comprising:
inputting a plurality of candidate materials into a first machine learning model for processing to obtain a first prediction result, wherein the first prediction result is used for describing the throwing effect of the plurality of candidate materials predicted by the first machine learning model;
selecting a plurality of target materials from the plurality of candidate materials according to the first prediction result;
combining the plurality of target materials to obtain a plurality of candidate resources, wherein the plurality of candidate resources comprise first resources and second resources, the first resources are resources which are already put in the plurality of candidate resources, and the second resources are resources which are not put in the plurality of candidate resources;
processing the release effect data of the first resource in the historical time period through a second machine learning model to obtain a second prediction result of the first resource, wherein the second prediction result is used for describing the release effect of the resource predicted by the second machine learning model;
processing, through the second machine learning model, the release effect data of a third resource in the historical time period to obtain a second prediction result of the second resource, wherein the third resource is a resource, in the first resource, whose similarity to the second resource is higher than a threshold value;
selecting a target resource from the plurality of candidate resources according to the second prediction result of the first resource and the second prediction result of the second resource;
and throwing the target resource.
2. The method for resource allocation according to claim 1, wherein the inputting the plurality of candidate materials into the first machine learning model for processing to obtain the first prediction result includes:
combining the self characteristics of the first candidate materials and the performance characteristics of the first candidate materials to obtain a characteristic combination, wherein the first candidate materials are one candidate material in the plurality of candidate materials, the self characteristics are used for describing the characteristics of the first candidate materials, and the performance characteristics are used for describing the throwing effect of the first candidate materials in a historical time period;
and processing the feature combination through the first machine learning model to obtain a first prediction result of the first candidate material.
3. The resource delivery method of claim 2, wherein the first candidate material comprises a video, and the self-feature of the first candidate material comprises at least one of an embedded vector of a video key frame, a video duration, a video style, or a video category; or,
the first candidate material comprises a cover image of a video, and the self characteristics of the first candidate material comprise at least one of an embedded vector of the image, an image label and an image tone; or,
the first candidate material includes text, and the first candidate material is characterized by word embedded vectors or one-hot codes.
4. The resource delivery method of claim 2, wherein the performance characteristics of the first candidate material include one or more of:
click rate;
a play completion rate, which refers to the rate at which the material is played to completion;
an exposure number, which refers to the number of times displayed;
and the activation number is the total number of the activation users in the historical time period, and the activation users are users for triggering the first candidate material so as to activate the account.
5. The resource delivery method of claim 1, wherein the first machine learning model is trained by:
Acquiring a training sample, wherein the training sample comprises the self characteristics of sample materials and the performance characteristics of the sample materials, and the label of the training sample is the throwing effect data of the sample materials in a historical time period;
and inputting the training sample into an initial machine learning model for processing, and adjusting parameters of the initial machine learning model according to the deviation between the output result of the initial machine learning model and the label to obtain the first machine learning model.
6. The resource allocation method according to claim 1, wherein selecting a plurality of target materials from the plurality of candidate materials according to the first prediction result includes:
selecting materials with the throwing effect higher than a first threshold value from the plurality of candidate materials as the target materials according to the first prediction result; or,
and selecting, from the plurality of candidate materials according to the first prediction result, a material whose throwing effect is ranked within a first preset number, and taking that material as the target material.
7. The resource allocation method according to claim 1, wherein selecting the target resource from the plurality of candidate resources according to the second prediction result of the first resource and the second prediction result of the second resource comprises:
Selecting a resource with a release effect higher than a second threshold value from the plurality of candidate resources as the target resource according to the second prediction result of the first resource and the second prediction result of the second resource; or,
and selecting, from the plurality of candidate resources according to the second prediction result of the first resource and the second prediction result of the second resource, a resource whose release effect is ranked within a second preset number as the target resource.
8. A resource delivery device, characterized in that the resource delivery device comprises:
the processing unit is configured to input a plurality of candidate materials into a first machine learning model for processing to obtain a first prediction result, wherein the first prediction result is used for describing the throwing effect of the plurality of candidate materials predicted by the first machine learning model;
a selecting unit configured to perform selecting a plurality of target materials from the plurality of candidate materials according to the first prediction result;
the combining unit is configured to perform combination on the plurality of target materials to obtain a plurality of candidate resources, wherein the plurality of candidate resources comprise first resources and second resources, the first resources are the resources which are already put in the plurality of candidate resources, and the second resources are the resources which are not put in the plurality of candidate resources;
The processing unit is further configured to execute a second machine learning model to process the release effect data of the first resource in the historical time period to obtain a second prediction result of the first resource, wherein the second prediction result is used for describing the release effect of the resource predicted by the second machine learning model;
the processing unit is further configured to execute processing, through the second machine learning model, the effect data of the third resource in the historical time period to obtain a second prediction result of the second resource, where the third resource is a resource, in the first resource, with similarity between the third resource and the second resource being higher than a threshold value;
the selecting unit is further configured to perform selecting a target resource from the plurality of candidate resources according to the second prediction result of the first resource and the second prediction result of the second resource;
and the throwing unit is configured to execute throwing of the target resource.
9. The resource delivery device of claim 8, wherein the processing unit is configured to perform: combining the self characteristics of the first candidate materials and the performance characteristics of the first candidate materials to obtain a characteristic combination, wherein the first candidate materials are one candidate material in the plurality of candidate materials, the self characteristics are used for describing the characteristics of the first candidate materials, and the performance characteristics are used for describing the throwing effect of the first candidate materials in a historical time period; and processing the feature combination through the first machine learning model to obtain a first prediction result of the first candidate material.
10. The resource delivery device of claim 9, wherein the first candidate material comprises a video, and the native features of the first candidate material comprise at least one of an embedded vector of video keyframes, a video duration, a video style, or a video category; or,
the first candidate material comprises a cover image of a video, and the self characteristics of the first candidate material comprise at least one of an embedded vector of the image, an image label and an image tone; or,
the first candidate material includes text, and the first candidate material is characterized by word embedded vectors or one-hot codes.
11. The resource delivery device of claim 9, wherein the performance characteristics of the first candidate material include one or more of:
click rate;
a play completion rate, which refers to the rate at which the material is played to completion;
an exposure number, which refers to the number of times displayed;
and the activation number is the total number of the activation users in the historical time period, and the activation users are users for triggering the first candidate material so as to activate the account.
12. The resource delivery device of claim 8, wherein the first machine learning model is trained by:
Acquiring a training sample, wherein the training sample comprises the self characteristics of sample materials and the performance characteristics of the sample materials, and the label of the training sample is the throwing effect data of the sample materials in a historical time period;
and inputting the training sample into an initial machine learning model for processing, and adjusting parameters of the initial machine learning model according to the deviation between the output result of the initial machine learning model and the label to obtain the first machine learning model.
13. The resource delivery device according to claim 8, wherein the selection unit is configured to perform: selecting, from the plurality of candidate materials according to the first prediction result, materials whose throwing effect is higher than a first threshold as the target materials; or selecting, from the plurality of candidate materials according to the first prediction result, a material whose throwing effect is ranked within a first preset number, and taking that material as the target material.
14. The resource delivery device according to claim 8, wherein the selection unit is configured to perform: selecting, from the plurality of candidate resources according to the second prediction result of the first resource and the second prediction result of the second resource, a resource whose release effect is higher than a second threshold as the target resource; or selecting, from the plurality of candidate resources according to the second prediction result of the first resource and the second prediction result of the second resource, a resource whose release effect is ranked within a second preset number as the target resource.
15. An electronic device, comprising:
one or more processors;
one or more memories for storing the one or more processor-executable program codes;
wherein the one or more processors are configured to execute the program code to implement the resource delivery method of any of claims 1 to 7.
16. A computer readable storage medium, characterized in that program code in the computer readable storage medium, when executed by a processor of an electronic device, enables the electronic device to perform the resource allocation method according to any one of claims 1 to 7.
17. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the resource allocation method of any of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110050768.XA CN112862516B (en) | 2021-01-14 | 2021-01-14 | Resource release method and device, electronic equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110050768.XA CN112862516B (en) | 2021-01-14 | 2021-01-14 | Resource release method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112862516A CN112862516A (en) | 2021-05-28 |
CN112862516B true CN112862516B (en) | 2024-03-12 |
Family
ID=76006319
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110050768.XA Active CN112862516B (en) | 2021-01-14 | 2021-01-14 | Resource release method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112862516B (en) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113344623B (en) * | 2021-05-31 | 2024-02-27 | 北京百度网讯科技有限公司 | Information processing method, apparatus, electronic device and storage medium |
CN113570416B (en) * | 2021-07-30 | 2022-04-01 | 北京达佳互联信息技术有限公司 | Method and device for determining delivered content, electronic equipment and storage medium |
CN113627979B (en) * | 2021-07-30 | 2024-07-12 | 北京达佳互联信息技术有限公司 | Method, device, server, system and medium for processing resource release data |
CN113784173B (en) * | 2021-07-30 | 2023-04-28 | 北京达佳互联信息技术有限公司 | Video playing method and device and electronic equipment |
CN113378070B (en) * | 2021-08-11 | 2022-03-25 | 北京达佳互联信息技术有限公司 | Information delivery method, device, server and storage medium |
CN113723994A (en) * | 2021-08-18 | 2021-11-30 | 广州迈量科技有限公司 | Information promotion plan processing method, system and computer readable storage medium |
CN114222155B (en) * | 2021-12-13 | 2023-12-26 | 北京达佳互联信息技术有限公司 | Resource recommendation method, device, electronic equipment and storage medium |
CN113935554B (en) * | 2021-12-15 | 2022-05-13 | 北京达佳互联信息技术有限公司 | Model training method in delivery system, resource delivery method and device |
CN116564442A (en) * | 2022-01-24 | 2023-08-08 | 腾讯科技(深圳)有限公司 | Material screening method, material screening device, computer equipment and storage medium |
CN114418651A (en) * | 2022-01-26 | 2022-04-29 | 北京数智新天信息技术咨询有限公司 | Intelligent popularization decision-making method and device and electronic equipment |
CN115564469A (en) * | 2022-09-09 | 2023-01-03 | 北京沃东天骏信息技术有限公司 | Advertisement creative selection and model training method, device, equipment and storage medium |
CN116974652A (en) * | 2023-09-22 | 2023-10-31 | 星河视效科技(北京)有限公司 | Intelligent interaction method, device, equipment and storage medium based on SAAS platform |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105631711A (en) * | 2015-12-30 | 2016-06-01 | 合一网络技术(北京)有限公司 | Advertisement putting method and apparatus |
CN105654198A (en) * | 2015-12-30 | 2016-06-08 | 合网络技术(北京)有限公司 | Brand advertisement effect optimization method capable of realizing optimal threshold value selection |
CN107330715A (en) * | 2017-05-31 | 2017-11-07 | 北京京东尚科信息技术有限公司 | The method and apparatus for selecting display advertising material |
CN109493116A (en) * | 2018-10-15 | 2019-03-19 | 上海基分文化传播有限公司 | A kind of method and system that advertisement automatically generates |
CN110288375A (en) * | 2019-05-23 | 2019-09-27 | 北京鑫宇创世科技有限公司 | A kind of ad material confidence level determines method and device |
CN110189173A (en) * | 2019-05-28 | 2019-08-30 | 北京百度网讯科技有限公司 | Advertisement generation method and device |
CN111144937A (en) * | 2019-12-20 | 2020-05-12 | 北京达佳互联信息技术有限公司 | Advertisement material determination method, device, equipment and storage medium |
CN111833099A (en) * | 2020-06-24 | 2020-10-27 | 广州筷子信息科技有限公司 | Method and system for generating creative advertisement |
Also Published As
Publication number | Publication date |
---|---|
CN112862516A (en) | 2021-05-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112862516B (en) | Resource release method and device, electronic equipment and storage medium | |
CN110585726B (en) | User recall method, device, server and computer readable storage medium | |
CN111897996B (en) | Topic label recommendation method, device, equipment and storage medium | |
CN112995691B (en) | Live broadcast data processing method and device, electronic equipment and storage medium | |
CN111291200B (en) | Multimedia resource display method and device, computer equipment and storage medium | |
WO2021135212A1 (en) | Order processing | |
CN109086742A (en) | scene recognition method, scene recognition device and mobile terminal | |
CN105453070A (en) | Machine learning-based user behavior characterization | |
CN110246110A (en) | Image evaluation method, device and storage medium | |
CN112416207A (en) | Information content display method, device, equipment and medium | |
CN110493635B (en) | Video playing method and device and terminal | |
CN108401167A (en) | Electronic equipment and server for video playback | |
KR20150034925A (en) | Method for searching image and recording-medium recorded program thereof | |
CN110929159A (en) | Resource delivery method, device, equipment and medium | |
WO2022057764A1 (en) | Advertisement display method and electronic device | |
CN112990964B (en) | Recommended content resource acquisition method, device, equipment and medium | |
CN110909184A (en) | Multimedia resource display method, device, equipment and medium | |
CN116128571B (en) | Advertisement exposure analysis method and related device | |
CN107807940B (en) | Information recommendation method and device | |
CN114065056B (en) | Learning scheme recommendation method, server and system | |
CN113762585B (en) | Data processing method, account type identification method and device | |
CN112230822B (en) | Comment information display method and device, terminal and storage medium | |
CN113947418A (en) | Feedback information acquisition method and device, electronic equipment and storage medium | |
CN113592198B (en) | Method, server and terminal for determining demand reference information | |
CN113591958B (en) | Method, device and equipment for fusing internet of things data and information network data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||