CN115564469A - Advertisement creative selection and model training method, device, equipment and storage medium - Google Patents


Info

Publication number
CN115564469A
CN115564469A
Authority
CN
China
Prior art keywords
advertisement
picture
creative
data
feature vector
Prior art date
Legal status
Pending
Application number
CN202211104745.3A
Other languages
Chinese (zh)
Inventor
刘银星
阮涛
张政
吕晶晶
詹科
Current Assignee
Beijing Wodong Tianjun Information Technology Co Ltd
Original Assignee
Beijing Wodong Tianjun Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Wodong Tianjun Information Technology Co Ltd
Priority to CN202211104745.3A
Publication of CN115564469A
Priority to PCT/CN2023/116575 (published as WO2024051609A1)

Classifications

    • G Physics
    • G06 Computing; calculating or counting
    • G06Q Information and communication technology specially adapted for administrative, commercial, financial, managerial or supervisory purposes
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; price estimation or determination; fundraising
    • G06Q30/0241 Advertisements
    • G06Q30/0276 Advertisement creation
    • G06Q30/0201 Market modelling; market analysis; collecting market data
    • G06Q30/0202 Market predictions or forecasting for commercial activities
    • G06V Image or video recognition or understanding
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements using pattern recognition or machine learning
    • G06V10/764 Classification, e.g. of video objects
    • G06V10/765 Classification using rules for partitioning the feature space
    • G06V10/77 Processing image or video features in feature spaces, e.g. PCA, ICA or SOM
    • G06V10/774 Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor, preprocessing, feature extraction or classification level
    • G06V10/82 Recognition using neural networks


Abstract

Embodiments of the invention disclose a method, apparatus, device, and storage medium for advertisement creative selection and model training, relating to the field of artificial intelligence. The method comprises: acquiring candidate advertisement creative data corresponding to a target item, wherein the candidate advertisement creative data comprise an advertisement picture and advertisement copy; acquiring a sparse feature vector and a picture feature vector corresponding to the candidate advertisement creative data, and obtaining a recommendation probability value for the candidate advertisement creative data based on the sparse feature vector, the picture feature vector, and a pre-trained creative selection model; and selecting target advertisement creative data according to the recommendation probability value. In other words, the creative selection model automatically selects target advertisement creative data from the collected candidates and fuses image and text features, thereby solving the problem of multi-modal creative optimization in an advertisement system with good universality.

Description

Advertisement creative selection and model training method, device, equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of artificial intelligence, in particular to a method, a device, equipment and a storage medium for selecting advertisement creatives and training models.
Background
With the continuous development of artificial intelligence technology, image processing technology and natural language processing technology have been applied to the field of advertisement industry.
Existing methods for selecting advertisement creatives generally use image processing technology to process the image elements in advertisement materials and a natural language processing model to recognize the text elements. These methods are tailored to specific domains and generalize poorly. Moreover, they cannot directly select the optimal advertisement creative elements for the user, which degrades the user experience.
Disclosure of Invention
Embodiments of the invention provide an advertisement creative selection and model training method, apparatus, medium, and electronic device, which aim to automatically and accurately select optimal target advertisement creative data from candidate advertisement creative data.
In a first aspect, an embodiment of the present invention provides a multi-modal advertisement creative selection method, comprising:
acquiring candidate advertisement creative data corresponding to a target item, wherein the candidate advertisement creative data comprise an advertisement picture and advertisement copy;
acquiring a sparse feature vector and a picture feature vector corresponding to the candidate advertisement creative data, and obtaining a recommendation probability value for the candidate advertisement creative data based on the sparse feature vector, the picture feature vector, and a pre-trained creative selection model;
selecting target advertisement creative data according to the recommendation probability value;
wherein the creative selection model is configured to fuse the sparse feature vector and the picture feature vector using a self-attention mechanism and to output the recommendation probability value based on the fusion result.
In a second aspect, an embodiment of the present invention further provides a model training method, where the method includes:
obtaining training sample data, wherein the training sample data comprise sample advertisement creative data corresponding to a sample item and a standard recommendation probability value corresponding to the sample advertisement creative data, and the sample advertisement creative data comprise an advertisement picture and advertisement copy;
acquiring a sparse feature vector and a picture feature vector corresponding to the sample advertisement creative data, and obtaining a predicted recommendation probability value for the sample advertisement creative data based on the sparse feature vector, the picture feature vector, and a creative selection model to be trained;
determining a loss function according to the standard recommendation probability value and the predicted recommendation probability value, adjusting network parameters of the creative selection model based on the loss function, and stopping training when a preset iteration-stopping condition is met;
wherein the creative selection model is configured to fuse the sparse feature vector and the picture feature vector using a self-attention mechanism and to output the predicted recommendation probability value based on the fusion result.
In a third aspect, an embodiment of the present invention further provides an advertisement creative data selection apparatus, comprising:
a data acquisition module, configured to acquire candidate advertisement creative data corresponding to a target item, wherein the candidate advertisement creative data comprise an advertisement picture and advertisement copy;
a probability value acquisition module, configured to acquire a sparse feature vector and a picture feature vector corresponding to the candidate advertisement creative data, and to obtain a recommendation probability value for the candidate advertisement creative data based on the sparse feature vector, the picture feature vector, and a pre-trained creative selection model;
a data selection module, configured to select target advertisement creative data according to the recommendation probability value;
wherein the creative selection model is configured to fuse the sparse feature vector and the picture feature vector using a self-attention mechanism and to output the recommendation probability value based on the fusion result.
In a fourth aspect, an embodiment of the present invention further provides a model training apparatus, comprising:
a sample data acquisition module, configured to obtain training sample data, wherein the training sample data comprise sample advertisement creative data corresponding to a sample item and a standard recommendation probability value corresponding to the sample advertisement creative data, and the sample advertisement creative data comprise an advertisement picture and advertisement copy;
a vector acquisition module, configured to acquire a sparse feature vector and a picture feature vector corresponding to the sample advertisement creative data, and to obtain a predicted recommendation probability value for the sample advertisement creative data based on the sparse feature vector, the picture feature vector, and a creative selection model to be trained;
a model training module, configured to determine a loss function according to the standard recommendation probability value and the predicted recommendation probability value, adjust network parameters of the creative selection model based on the loss function, and stop training when a preset iteration-stopping condition is met;
wherein the creative selection model is configured to fuse the sparse feature vector and the picture feature vector using a self-attention mechanism and to output the predicted recommendation probability value based on the fusion result.
In a fifth aspect, an embodiment of the present invention further provides a computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, the multi-modal advertisement creative selection method or the model training method according to any embodiment of the present invention is implemented.
In a sixth aspect, the present invention further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the multi-modal advertisement creative selection method or the model training method according to any embodiment of the present invention.
In an embodiment of the invention, candidate advertisement creative data corresponding to a target item are obtained, the candidate advertisement creative data comprising an advertisement picture and advertisement copy; a sparse feature vector and a picture feature vector corresponding to the candidate advertisement creative data are acquired, and a recommendation probability value for the candidate advertisement creative data is obtained based on the sparse feature vector, the picture feature vector, and a pre-trained creative selection model; and target advertisement creative data are selected according to the recommendation probability value. The creative selection model fuses the sparse feature vector and the picture feature vector using a self-attention mechanism and outputs the recommendation probability value based on the fusion result. In other words, the creative selection model automatically selects target advertisement creative data from the collected candidates, and because it fuses the sparse feature vector (ID-class features) with the picture feature vector through self-attention, pictures and text are fused together. This solves the problem of multi-modal creative optimization in an advertisement system and has good universality.
Drawings
To illustrate the technical solutions of the embodiments more clearly, the drawings used in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and should not be considered limiting of its scope; those skilled in the art can derive other related drawings from them without inventive effort.
FIG. 1 is a flowchart of a multi-modal-based ad creative selection method according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a creative selection model provided in an embodiment of the present invention;
FIG. 3 is a flowchart of a method for obtaining feature vectors of a picture according to an embodiment of the present invention;
FIG. 4 is a flowchart of a method for obtaining candidate ad creative data according to an embodiment of the present invention;
FIG. 5 is a flowchart of another method for obtaining candidate ad creative data, according to an embodiment of the present invention;
FIG. 6 is a flowchart of an optimization process for candidate ad creative data, according to an embodiment of the present invention;
FIG. 7 is a flowchart illustrating an advertisement creative data selection and optimization process, according to an embodiment of the present invention;
FIG. 8 is a flowchart of a model training method according to an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of an advertisement creative data selection apparatus according to an embodiment of the present invention;
FIG. 10 is a schematic structural diagram of a model training apparatus according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
It should be noted that like reference numbers and letters refer to like items in the following figures; thus, once an item is defined in one figure, it need not be defined or explained again in subsequent figures. In the description of the present invention, the terms "first", "second", and the like are used only to distinguish descriptions and are not to be construed as indicating or implying relative importance.
Fig. 1 is a flowchart of a multi-modal advertisement creative selection method provided in an embodiment of the present invention. This embodiment can automatically and accurately select optimal target advertisement creative data from candidate advertisement creative data. The method can be executed by the advertisement creative data selection apparatus of an embodiment of the present invention, which can be implemented in software and/or hardware. As shown in fig. 1, the method specifically includes the following steps:
s110, obtaining candidate advertisement creative data corresponding to the target object.
Here, the target item is an item for which corresponding advertisement creative data need to be generated or selected. There may be multiple pieces of candidate advertisement creative data, each describing an advertisement creative scheme for the target item. The candidate advertisement creative data comprise an advertisement picture and advertisement copy.
Specifically, for item pages displayed on item websites and in mobile phone apps, advertisement material data (advertisement copy and advertisement pictures) corresponding to an item can be acquired from the item detail page. The advertisement copy of the target item can be obtained by performing text recognition, character extraction, and the like on the content of the detail page; the advertisement picture of the target item can be obtained by performing image recognition, target detection, and the like on that content. Alternatively, the advertisement material data corresponding to the target item may be acquired from advertisement creative materials provided by an advertising company or the like. The advertisement material data are then screened as required to obtain the candidate advertisement creative data corresponding to the target item. For example, suppose the target item is a mobile phone, and the detail pages of various phones on a phone website are examined. The upper part of a detail page shows display pictures of the phone from various angles, with the corresponding advertisement copy below the pictures. Text recognition and character extraction are performed on the content of the detail page to extract the advertisement copy, and the phone pictures at each angle are located to obtain the advertisement pictures. When the size of an extracted advertisement image is not appropriate, the image can be intelligently cropped to obtain the final advertisement picture. After the advertisement material data are obtained, they are screened according to requirements (such as click-through rate and sensitive words) to obtain the candidate advertisement creative data.
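The acquisition and screening steps above can be sketched in a few lines of Python. The data-structure fields, click-rate threshold, and sensitive-word list below are illustrative assumptions for this sketch, not part of the patent.

```python
from dataclasses import dataclass

# Illustrative container for one piece of candidate advertisement creative
# data; the field names are assumptions for this sketch.
@dataclass
class CandidateCreative:
    item_id: str        # identifier of the target item
    ad_copy: str        # advertisement copy extracted from the detail page
    ad_picture: str     # path or URL of the advertisement picture
    click_rate: float = 0.0

# Placeholder sensitive-word list; a real system would load a curated list.
SENSITIVE_WORDS = {"guaranteed", "miracle"}

def screen_candidates(materials, min_click_rate=0.01):
    """Screen raw advertisement material data by click-through rate and
    sensitive words to obtain candidate advertisement creative data."""
    return [
        m for m in materials
        if m.click_rate >= min_click_rate
        and not any(w in m.ad_copy.lower() for w in SENSITIVE_WORDS)
    ]
```

A material whose copy contains a sensitive word or whose click-through rate falls below the threshold is dropped; the survivors become the candidate set.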
And S120, acquiring sparse feature vectors and picture feature vectors corresponding to the candidate advertisement creative data, and acquiring recommendation probability values corresponding to the candidate advertisement creative data based on the sparse feature vectors, the picture feature vectors and a pre-trained creative selection model.
The creative selection model is configured to output a first feature vector based on the sparse feature vector, to output a second feature vector based on the sparse feature vector and the picture feature vector using a self-attention mechanism, and to output the recommendation probability value based on the fusion result. In this scheme, the creative selection model comprises a multilayer perceptron (MLP) module, a self-attention module, and an output module; the MLP module is configured to output the first feature vector based on the sparse feature vector; the self-attention module is configured to output the second feature vector based on the sparse feature vector and the picture feature vector; and the output module is configured to output the recommendation probability value based on the first feature vector and the second feature vector.
Here, the sparse feature vector reflects several types of sparse features. The sparse features in this scheme include item features, user features, and creative features. The item features include the item identifier, advertisement slot identifier, brand identifier, item category identifier, and similar information. The user features include the user's age, gender, and preferences. The creative features include background template features, copy features, and picture features. The background template features include the background template identifier, template style, template layout, and template main color. The copy features include the master copy, slave copy, and bubble copy (copy used to indicate that the item is being promoted or is hot-selling), all of which can be obtained from the advertisement copy in the advertisement creative data. The picture features include whether text appears on the advertisement picture, whether people appear, the creative type, and so on. The picture feature vector reflects the picture features of the advertisement picture and can be obtained from the advertisement picture in the advertisement creative data. The MLP module maps the input feature vectors onto a single output feature vector, while the self-attention module quickly extracts the important features in the sparse feature vector. In this embodiment, optionally, the self-attention module is the multi-head self-attention module of a Transformer model. The Transformer is a neural network model that learns the context of data by tracking relationships in sequence data, and it contains a multi-head self-attention module. The multi-head self-attention module extracts feature information across multiple dimensions, is highly parallel, and can combine information from different dimensions to capture dependencies of various ranges in the sequence.
The sparse feature matrix and the picture feature vector are input into the creative selection model to obtain the recommendation probability value corresponding to the candidate advertisement creative data. Fig. 2 is a schematic structural diagram of the creative selection model provided in an embodiment of the present invention. As shown in fig. 2, the sparse feature matrix and the picture feature vector are input into the creative selection model, and a vector conversion table converts the sparse feature matrix into the sparse feature vector. The sparse feature vector and the picture feature vector are input into the multi-head self-attention module, and the sparse feature vector is also input into the MLP module. Finally, the output module predicts the recommendation probability value for the candidate advertisement creative data from the second feature vector produced by the multi-head self-attention module and the first feature vector produced by the MLP module.
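The forward pass just described (self-attention over the sparse feature tokens together with the picture feature vector, an MLP branch over the sparse features, and a sigmoid output over both) can be sketched as follows. This is a minimal single-head NumPy illustration with random weights, not the patented model: the dimension D, the mean-pooling, and the parameter layout are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # embedding dimension (illustrative)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(tokens, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of feature tokens."""
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return scores @ v

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def creative_score(sparse_vecs, picture_vec, params):
    """Fuse sparse feature tokens with the picture feature vector and
    output a recommendation probability (sketch of the model in Fig. 2)."""
    tokens = np.vstack([sparse_vecs, picture_vec])              # attention input
    second = self_attention(tokens, *params["attn"]).mean(axis=0)  # second feature vector
    first = np.tanh(sparse_vecs.mean(axis=0) @ params["mlp"])      # MLP branch (first vector)
    logit = np.concatenate([first, second]) @ params["out"]        # output module
    return sigmoid(logit)

params = {
    "attn": [rng.normal(size=(D, D)) for _ in range(3)],  # Wq, Wk, Wv
    "mlp": rng.normal(size=(D, D)),
    "out": rng.normal(size=(2 * D,)),
}
prob = creative_score(rng.normal(size=(4, D)), rng.normal(size=(D,)), params)
```

The returned value lies in (0, 1) and plays the role of the recommendation probability value; a trained model would use learned rather than random weights and multi-head rather than single-head attention.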
Before the creative selection model is used, it needs to be trained. Specifically, a large number of existing advertisement creative schemes are collected, and advertisement creative data (background template information, item information, copy information, picture information, etc.) are collated from them. The labeled sparse feature vectors and picture feature vectors serve as sample data, and the labeled recommendation probability value (1 or 0) serves as the sample label. The sample data are input into the creative selection model to obtain the recommendation probability value the model predicts for them. A loss function is then calculated from the sample labels and the predicted recommendation probability values, and the model parameters of the creative selection model are continuously adjusted according to the result until a trained creative selection model is obtained.
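The loss computation in this training step can be illustrated with binary cross-entropy, which compares 1/0 recommendation labels against predicted probabilities. The patent does not name a specific loss function, so this particular choice is an assumption for the sketch.

```python
import numpy as np

def bce_loss(y_true, y_pred, eps=1e-7):
    """Binary cross-entropy between standard recommendation labels (1 or 0)
    and the probability values predicted by the creative selection model."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)  # avoid log(0)
    return float(-np.mean(y_true * np.log(y_pred)
                          + (1.0 - y_true) * np.log(1.0 - y_pred)))

labels = np.array([1.0, 0.0, 1.0])   # sample labels
preds = np.array([0.9, 0.2, 0.7])    # predicted recommendation probabilities
loss = bce_loss(labels, preds)
```

During training this loss would be minimized by gradient descent over the network parameters until the preset iteration-stopping condition is met.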
In this technical solution, the acquisition, storage, use, and processing of data are authorized by the user and comply with the relevant provisions of national laws and regulations.
And S130, selecting target advertisement creative data according to the recommendation probability value.
Here, the target advertisement creative data are the optimal advertisement creative data selected from the multiple candidates. The target advertisement creative data include the advertisement copy, advertisement picture, background template (template style, background color, layout, etc.), and other data related to the advertisement scheme of the target item. The recommendation probability value is the probability, output by the creative selection model, of recommending the corresponding candidate advertisement creative data to the user: the greater the recommendation probability value, the better the creative selection model considers the corresponding candidate to be. Specifically, a preset probability value may be set according to specific requirements, and a candidate whose recommendation probability value is greater than the preset value is determined to be the target advertisement creative data. Alternatively, the candidate with the maximum recommendation probability value is selected directly as the target advertisement creative data.
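Both selection strategies described above (preset threshold and maximum probability) can be sketched as one small helper; the function name and signature are illustrative, not from the patent.

```python
def select_target_creative(candidates, probs, threshold=None):
    """Select target advertisement creative data by recommendation probability.

    If a preset threshold is given, return every candidate whose probability
    exceeds it; otherwise return the single candidate with the maximum
    recommendation probability value.
    """
    if threshold is not None:
        return [c for c, p in zip(candidates, probs) if p > threshold]
    return max(zip(candidates, probs), key=lambda pair: pair[1])[0]
```

With a threshold the caller gets every sufficiently good candidate; without one, exactly the best-scoring creative.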
After the target advertisement creative data is selected, the scheme optionally further includes the following steps A1-A2:
step A1: and acquiring first coding information of an advertisement picture and second coding information of an advertisement pattern in the target advertisement creative data, and generating a Uniform Resource Locator (URL) corresponding to the target advertisement creative data according to the first coding information and the second coding information.
The uniform resource identifier (URL) is a compact representation of the location and access method of a resource obtained from the internet, and is an address of a standard resource on the internet. Each file on the internet has a unique URL that contains information indicating the location of the file and how the browser should handle it. The uniform resource identifier of the advertisement picture is subjected to URL coding to obtain first coding information, and the URL coding of the advertisement file is carried out to obtain second coding information. A picture URL, i.e., a URL corresponding to the targeted advertising creative data, may be generated using the first encoded information and the second encoded information. The address of the ad creative data can be directly accessed through the URL corresponding to the targeted ad creative data.
Step A2: upon receiving an access request from a client for the URL, acquire the advertisement picture and advertisement copy in the target advertisement creative data according to the URL, and perform an image-composition operation on them to obtain the target advertisement creative image; then send the target advertisement creative image to the client for display.
Specifically, the address of the target advertisement creative data may be accessed directly via the corresponding URL. When an access request for the URL is received from a client, the advertisement picture and advertisement copy to which the URL points are obtained. The advertisement picture and advertisement copy are then composed into a more concrete advertisement image, which serves as the target advertisement creative image. For example, image processing software fuses the advertisement copy and the advertisement picture together, in combination with the category information of the target item and the like. During composition, the positions and sizes of the advertisement copy and picture can be adjusted appropriately according to specific requirements and the actual environment to obtain the final target advertisement creative image.
The URL corresponding to the target advertisement creative data is generated through the steps, so that resources occupied by picture storage can be saved, the advertisement image can be updated at any time along with the change of the URL code, and the efficiency of providing the advertisement creative data for the user is improved.
According to the technical solution of this embodiment, candidate advertisement creative data corresponding to a target item is obtained, where the candidate advertisement creative data comprises an advertisement picture and advertising copy. A sparse feature vector and a picture feature vector corresponding to the candidate advertisement creative data are acquired, and a recommendation probability value corresponding to the candidate advertisement creative data is obtained based on the sparse feature vector, the picture feature vector, and a pre-trained creative selection model. The creative selection model comprises a multi-layer perceptron (MLP) module, a self-attention module, and an output module: the MLP module outputs a first feature vector based on the sparse feature vector, the self-attention module outputs a second feature vector based on the sparse feature vector and the picture feature vector, and the output module outputs the recommendation probability value based on the first and second feature vectors. The target advertisement creative data is then selected according to the recommendation probability value. With this scheme, the target advertisement creative data can be selected automatically from the collected candidate advertisement creative data by the creative selection model, and the self-attention module can fuse the ID-class features of the sparse feature vector with the picture feature vector, so that picture and text are fused together. This addresses multi-modal creative optimization in an advertising system and gives the model better generality.
Fig. 3 is a flowchart of a method for obtaining a picture feature vector according to an embodiment of the present invention, which is based on the foregoing embodiment. As shown in fig. 3, the method of this embodiment specifically includes the following steps:
S210: inputting the advertisement pictures in the candidate advertisement creative data into a pre-trained residual neural network model.
The candidate advertisement creative data comprises advertising copy and an advertisement picture. As shown in fig. 2, the picture feature vector needs to be input into the multi-head self-attention module. Therefore, the advertisement pictures in the candidate advertisement creative data need to be input into a pre-trained residual neural network model to obtain the picture feature vectors. In this embodiment, optionally, training the residual neural network model includes the following steps B1 to B3:
Step B1: acquiring the sample picture and the classification label corresponding to the sample picture.
The classification label is the product word of the sample item contained in the sample picture; a product word is a word that denotes the category of the item and contains no brand information. When an existing residual neural network model is trained, the category words of items are used as classification labels. A category word includes the item's brand information, while a product word indicates only the item's category. For example, if the sample item is a brand-A cell phone, the corresponding category words include "cell phone" and "brand A", while the corresponding product word is only "cell phone". The sample picture is an advertisement picture from an existing advertising scheme for the sample item.
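The category-word-to-product-word relationship can be illustrated with a small sketch that strips known brand tokens from a category word. The brand vocabulary and whitespace tokenization are illustrative assumptions, not a method specified by the patent:

```python
def product_word(category_word, brand_vocab):
    # Illustrative sketch: drop any token that appears in a
    # hypothetical set of known brand names, keeping the category.
    tokens = category_word.split()
    kept = [t for t in tokens if t not in brand_vocab]
    return " ".join(kept)

label = product_word("brand-A cell phone", {"brand-A"})
```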
Specifically, the sample picture and its classification label may be obtained from item detail pages on websites or in mobile phone apps, or from an advertisement material library provided by an advertiser or a professional. For example, all items may be ranked by exposure from high to low according to their detail pages or the advertisement material library, the item category information corresponding to the top 10,000 items by exposure selected, and that category information used as the sample labels. Using product words as sample labels suits large-scale, multi-class tasks and improves the generalization of the picture feature vectors generated by the residual neural network model, avoiding the under-expression of picture content caused by overly specific item category information. In practice, the category of an item often matters more to the advertisement data than its brand. For example, advertising schemes for cell phones and clothing differ greatly: a cell-phone advertisement may need to highlight the advertising copy (describing the phone's performance and so on), while a clothing advertisement may need to highlight the pictures. For items of the same category, however, the advertising scheme is largely the same even across brands; for different brands of cell phones, the schemes may differ only in the content of the copy.
Therefore, the product words of the sample articles can be closer to the actual condition by being used as the classification labels, and the accuracy of the recommendation probability value of the creative selection model is further improved.
Step B2: inputting the sample picture into the residual neural network model, and obtaining the prediction classification output by the residual neural network model.
The residual neural network model is a kind of convolutional neural network model, such as the ResNet model. Residual neural networks are well suited to image classification and object recognition: they are easy to optimize, and accuracy can be improved by increasing the network depth, because the skip connections in the residual blocks alleviate the vanishing-gradient problem that otherwise comes with depth. Specifically, the sample picture is input into the residual neural network model, which predicts the corresponding category of the sample picture through computational inference. In this embodiment, optionally, the tail of the residual neural network model comprises three fully-connected layers outputting 32-dimensional, 128-dimensional, and 256-dimensional vectors respectively; that is, the three fully-connected layers are appended to the tail of the existing ResNet model.
Specifically, the residual neural network model includes convolutional layers, pooling layers, activation functions, fully-connected layers, and the like. The convolutional layers, pooling layers, and activation functions map the original data into the hidden feature space to obtain feature vectors, while the fully-connected layers map the distributed feature representation into the sample label space: they extract features from the feature vectors and classify the sample pictures accordingly. Output vectors of different dimensions can be configured for the residual neural network model according to the size of the sample data and the classification requirements. With a residual neural network model whose tail comprises three fully-connected layers outputting 32-, 128-, and 256-dimensional vectors, the category of a sample picture can be predicted flexibly and accurately according to the business requirements and the amount of sample data.
Step B3: determining a loss function based on the prediction classification and the classification label, adjusting the network parameters in the residual neural network model based on the loss function, and stopping training when a preset iteration-stop condition is met.
And the prediction classification is the category of the sample picture predicted by the residual neural network model through calculation after the sample picture is input into the residual neural network model. The classification label is the true category of the labeled sample picture.
The 'gap' between the prediction classification and the sample label can be calculated through the loss function, and the network parameters in the residual neural network model can be continuously adjusted according to the loss function determined based on the prediction classification and the classification label, so that the prediction classification and the classification label are closer and closer, and the training is stopped until the preset iteration stop condition is met. The preset iteration stop condition comprises that the prediction accuracy of the residual error neural network model reaches a preset accuracy range, and in the embodiment of the scheme, the preset accuracy range can be selected to be [75%,80% ]. Specifically, the higher the prediction accuracy of the residual neural network model is, the more accurate the category of the predicted sample picture is. Meanwhile, the greater the computational complexity of the residual neural network model, the slower the computation speed of the residual neural network model in practical application, and the possibility of overfitting may occur. Therefore, on the basis of improving the prediction accuracy of the residual neural network model, in order to avoid excessive calculation complexity of the residual neural network model, the preset accuracy range can be set to [75%,80% ]. When the prediction accuracy of the residual neural network model is not less than 75% and not more than 80%, the training of the residual neural network model may be stopped.
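The accuracy-range stopping rule described above can be sketched as a simple check applied after each evaluation; the accuracy history below is simulated, and the [75%, 80%] bounds come from the text:

```python
def should_stop(accuracy, low=0.75, high=0.80):
    # Stop training once validation accuracy enters the preset range
    # [75%, 80%], balancing accuracy against model complexity and
    # overfitting risk (range values per the embodiment).
    return low <= accuracy <= high

# Simulated accuracy curve over training epochs.
history = [0.52, 0.63, 0.71, 0.74, 0.77]
stop_epoch = next(i for i, acc in enumerate(history) if should_stop(acc))
```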
In the steps, the product words of the sample articles are used as the classification labels, so that the accuracy of the recommendation probability value of the creative selection model can be improved. By utilizing the residual neural network model with the tail part comprising three full-connection layers of output 32-dimensional vectors, 128-dimensional vectors and 256-dimensional vectors, the category of the sample picture can be flexibly and accurately predicted according to the service requirement and the size of the sample data volume, and the accuracy of the recommendation probability value of the creative selection model is further improved.
S220: obtaining the picture feature vector output by the residual neural network model.
A picture feature vector expresses features of a picture in vector form. In this scheme, the picture feature vector output by the residual neural network model can be used to represent the category of the picture. For example, when a picture of a cell phone is input into the residual neural network model, the model predicts the picture's feature as "category: cell phone picture" and outputs a picture feature vector carrying the cell phone's characteristics. The picture feature vector output by the residual neural network model is then obtained.
According to the technical scheme of the embodiment, advertisement pictures in candidate advertisement creative data are input into a residual error neural network model trained in advance; and obtaining the picture characteristic vector output by the residual error neural network model. According to the scheme of the embodiment, the category of the sample picture can be predicted flexibly and accurately according to the service requirement and the sample data volume, and the product words of the sample articles are further used as the classification labels, so that the accuracy of the recommendation probability value of the creative selection model is improved.
Fig. 4 is a flowchart of a method for obtaining candidate advertisement creative data according to an embodiment of the present invention, which is detailed based on the above embodiments. As shown in fig. 4, the method of this embodiment specifically includes the following steps:
S310: acquiring a plurality of advertisement material data corresponding to the target item, wherein the advertisement material data comprise advertising copy and advertisement pictures.
Specifically, for item pages displayed on item websites and in mobile phone apps, advertisement material data (advertising copy and advertisement pictures) corresponding to an item can be acquired through the item detail page: text recognition, character extraction, and the like are performed on the content of the detail page to obtain the advertising copy, and image recognition, image cropping, and the like are performed to obtain the advertisement pictures. Alternatively, the advertisement material data corresponding to the item may be acquired from advertising creative material provided by an advertising company or the like.
S320: selecting at least one piece of advertising copy and at least one advertisement picture from the advertisement material data according to the on-line click data corresponding to each advertisement material datum.
The on-line click data represents the click count of the advertisement material data, and the click count reflects how much users like the advertisement material: the more clicks a piece of advertisement material receives, the more likely it is to be liked by more people. Therefore, at least one piece of advertising copy and at least one advertisement picture can be selected from the advertisement material data according to the on-line click data corresponding to each advertisement material datum. In this embodiment, optionally, selecting at least one piece of advertising copy and at least one advertisement picture from the advertisement material data includes the following steps C1 to C2:
Step C1: determining the score of the advertisement material data according to the mean on-line click count of the advertisement material data and the cumulative number of times the material has been selected.
The on-line click count of each advertisement material datum can be determined from how the detail page of the target item is browsed and clicked. Specifically, a multi-armed bandit (MAB) model with the Upper Confidence Bound (UCB) algorithm can be used to score the advertisement material data of the last month offline as:

score_j = x̄_j + sqrt(2·ln(n) / n_j)

where x̄_j denotes the mean on-line click count of advertisement material datum j, n_j is the number of times the current advertisement material datum has been cumulatively selected, and n denotes the number of advertisement material data. A larger mean click count raises the score directly, while the confidence term gives material that has been selected fewer times a larger exploration bonus, so that promising but under-exposed material still has a chance to be shown.

Further, the scores of the advertising copy and the advertisement pictures in the advertisement material are calculated separately with this formula. Specifically, after at least one piece of advertising copy and at least one advertisement picture are selected from the advertisement material data, the selected copies form a copy group, and the score of each copy in the group is calculated; when scoring copy, x̄_j is the mean on-line click count of the copy in the copy group, n_j is the number of times the current copy has been cumulatively selected, and n is the number of copies in the copy group. Likewise, the selected advertisement pictures form a picture group, and the score of each picture in the group is calculated; when scoring a picture, x̄_j is the mean on-line click count of the picture in the picture group, n_j is the number of times the current picture has been cumulatively selected, and n is the number of pictures in the picture group.
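A minimal sketch of the UCB scoring just described, applied to a hypothetical copy group; the click means and selection counts below are made up for illustration:

```python
import math

def ucb_score(mean_clicks, times_selected, n_items):
    # UCB score for one material datum: mean on-line click count plus
    # an exploration bonus that shrinks as the material is selected
    # more often.
    return mean_clicks + math.sqrt(2 * math.log(n_items) / times_selected)

# Three copies in a copy group: (mean clicks, cumulative selections).
group = {"copy A": (0.30, 50), "copy B": (0.28, 5), "copy C": (0.10, 50)}
scores = {k: ucb_score(m, t, len(group)) for k, (m, t) in group.items()}
best = max(scores, key=scores.get)
```

Note that copy B wins despite a slightly lower click mean than copy A, because it has been selected far fewer times and so receives a larger exploration bonus.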
Step C2: selecting at least one piece of advertising copy and at least one advertisement picture from the advertisement material data according to the scores of the advertisement material data.
Specifically, after the scores of the advertisement pictures and the advertising copy are obtained, at least one advertisement picture is selected from the advertisement material data according to the picture scores, and at least one piece of advertising copy is selected according to the copy scores.
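Selecting by score then reduces to a top-k pick over each group; a minimal illustration with hypothetical scores:

```python
def select_top(materials, k):
    # Pick the k highest-scoring materials. `materials` maps a
    # material id to its UCB-style score; purely illustrative.
    ranked = sorted(materials, key=materials.get, reverse=True)
    return ranked[:k]

copies = {"copy A": 0.51, "copy B": 0.94, "copy C": 0.31}
pictures = {"pic C": 0.62, "pic D": 0.40}
chosen_copies = select_top(copies, 2)
chosen_pictures = select_top(pictures, 1)
```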
With the above steps, the scores of the advertisement material data can be calculated accurately, and suitable advertisement material can be selected accurately and quickly on the basis of those scores, avoiding the combinatorial explosion that arises when advertisement material data are combined with one another.
S330: combining the selected advertising copy and the selected advertisement pictures to obtain at least one candidate advertisement creative datum.
After at least one piece of advertising copy and at least one advertisement picture are selected, the selected copy and pictures are further combined pairwise to obtain at least one candidate advertisement creative datum. For example, suppose the copy selected from the advertisement material data according to the scores is copy A and copy B, and the selected advertisement picture is picture C; then the candidate advertisement creative data AC and BC can be obtained from the copy and the picture.
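The pairwise combination can be sketched with a Cartesian product, reproducing the AC/BC example above:

```python
from itertools import product

# Pairwise combination of selected copy and pictures: copies A and B
# with picture C yield the candidate creatives AC and BC.
copies = ["A", "B"]
pictures = ["C"]
candidates = [c + p for c, p in product(copies, pictures)]
```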
According to the technical solution of this embodiment, a plurality of advertisement material data corresponding to the target item are obtained, the advertisement material data comprising advertising copy and advertisement pictures; at least one piece of advertising copy and at least one advertisement picture are selected from the advertisement material data according to the on-line click data corresponding to each advertisement material datum; and the selected copy and pictures are combined to obtain at least one candidate advertisement creative datum. With this scheme, the scores of the advertisement material data can be calculated accurately, suitable advertisement material can be selected accurately and quickly on the basis of those scores, and the combinatorial explosion that arises when advertisement material data are combined with one another is avoided. The candidate advertisement creative data obtained by combining the selected copy and pictures is therefore more accurate and better matches user preferences.
Fig. 5 is a flowchart of another method for obtaining candidate ad creative data according to an embodiment of the present invention, which is detailed based on the above embodiments. As shown in fig. 5, the method of this embodiment specifically includes the following steps:
S410: identifying and extracting advertising copy from the item detail page and/or the advertisement creative material of the target item.
Specifically, for item pages displayed on item websites and in mobile phone apps, advertisement material data corresponding to an item can be acquired through the item detail page, and the advertising copy can then be extracted from the advertisement material data. In this embodiment, optionally, identifying and extracting the advertising copy from the item detail page and/or the advertisement creative material of the target item includes the following steps D1 to D3:
step D1: and identifying a to-be-selected file from the item detail page and/or the advertisement creative material of the target item based on a preset character identification model.
Wherein the Character Recognition model comprises an Optical Character Recognition (OCR) model. Specifically, in the article detail page and/or the advertisement creative material of the target article, the documents in the article detail page and/or the advertisement creative material can be identified and extracted through the OCR model to obtain the advertisement documents, and the identified advertisement documents are used as the documents to be selected.
Step D2: screening the benefit-point copy out of the candidate copy based on a first vocabulary containing preset benefit-point words.
A benefit-point word is a word used in advertising copy to convey item characteristics, item benefits/advantages, consumer benefits, emotional/value appeals, and the like to the user. The preset benefit-point vocabulary is a table recording the benefit-point words of the target item according to specific requirements and the actual environment. For example, if the target item is a camera, the first vocabulary of benefit-point words may include:
Item characteristics: small size, high pixel count
Item benefits/advantages: easily shoots clear, beautiful photos
Consumer benefits: convenient to carry and operate
Emotional/value appeal: record life, show the world as it really is
Further, after the candidate copy is obtained, the benefit-point copy is screened out of the candidate copy according to the first vocabulary.
Step D3: screening the selling-point copy out of the copy remaining after the benefit-point copy is removed from the candidate copy, based on a preset word-count limit and/or a second vocabulary containing preset non-selling-point words.
Selling-point copy is valuable copy that can raise the user's interest in purchasing and promote product sales. Because it describes the selling points of a product in simple language, it is subject to a word-count limit. Specifically, selling-point copy can be screened from the remaining copy based on the preset word-count limit. However, copy that satisfies the word-count limit is not necessarily selling-point copy; in that case, selling-point copy can be screened from the remaining copy based on the second vocabulary of preset non-selling-point words. To screen selling-point copy more accurately, the word-count limit and the second vocabulary can be applied together. For example, if the preset word-count limit is 5, copy with more than 5 words is deleted from the remaining copy, the rest is then screened against the second vocabulary, copy containing non-selling-point words is deleted, and the selling-point copy is finally obtained.
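The two-stage screening (word-count limit, then non-selling-point vocabulary) can be sketched as follows; the vocabulary contents, sample copy, and whitespace tokenization are illustrative assumptions:

```python
def selling_point_copies(copies, max_words=5, non_selling_vocab=()):
    # Drop copy over the preset word limit, then drop copy containing
    # any non-selling-point word. Vocabulary contents are illustrative.
    kept = [c for c in copies if len(c.split()) <= max_words]
    return [c for c in kept
            if not any(w in c.split() for w in non_selling_vocab)]

remaining = ["sharp photos in one tap",
             "a very long description of everything this camera does",
             "warranty terms apply here"]
result = selling_point_copies(remaining, 5, non_selling_vocab={"warranty"})
```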
With the above steps, the advertising copy in the item detail page can be mined and identified accurately and quickly using the OCR model, and the final benefit-point copy and selling-point copy are obtained through the first and second vocabularies, alleviating the shortage of on-line copy material.
S420: locating and cropping the item picture in the item detail page of the target item to obtain the advertisement picture.
That is, the position of the item picture is located precisely within the item detail page. Specifically, since the position of the target item's picture within the detail page is not fixed, a saliency algorithm can be used to divide the detail page into several regions of distinct type (such as text regions and picture regions). The item picture is identified within the divided picture regions, its size is analyzed, and it is cropped intelligently. For example, when the item subject in the picture is too small for the user to see it clearly, the picture can be cropped to an appropriate size. The cropped item picture is then used as the advertisement picture.
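The cropping step can be illustrated by computing a crop window around a detected item bounding box. The padding ratio is an assumed default, and the detection itself (e.g. via a saliency or object-detection model) is out of scope here:

```python
def crop_box(pic_w, pic_h, item_box, pad_ratio=0.15):
    # Compute a crop around the detected item region so the item fills
    # more of the frame. `item_box` is (x, y, w, h); the padding ratio
    # is an assumed default, not specified by the patent.
    x, y, w, h = item_box
    pad_x, pad_y = int(w * pad_ratio), int(h * pad_ratio)
    left = max(0, x - pad_x)
    top = max(0, y - pad_y)
    right = min(pic_w, x + w + pad_x)
    bottom = min(pic_h, y + h + pad_y)
    return left, top, right, bottom

box = crop_box(1000, 800, (400, 300, 100, 100))
```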
S430: selecting at least one piece of advertising copy and at least one advertisement picture from the advertisement material data according to the on-line click data corresponding to each advertisement material datum.
S440, combining the selected advertisement file and the selected advertisement picture to obtain at least one candidate advertisement creative data.
According to the technical scheme of the embodiment, the advertisement documents are identified and extracted from the article detail pages and/or the advertisement creative materials of the target articles; positioning and cutting the article picture in the article detail page of the target article to obtain an advertisement picture; selecting at least one advertisement file and at least one advertisement picture from each advertisement material data according to the on-line click data corresponding to each advertisement material data; and combining the selected advertisement file and the selected advertisement picture to obtain at least one candidate advertisement creative data. The technical scheme of the embodiment can accurately and quickly mine and identify the advertisement case in the article detail page, and solves the problem of insufficient material of the online case through the selling point case. By intelligently cutting the advertisement picture, the target object picture which can highlight the target object is obtained, so that the advertisement picture can fully show the target object.
Fig. 6 is a flowchart of optimization processing on candidate advertisement creative data according to an embodiment of the present invention, and the method for optimizing candidate advertisement creative data according to the above embodiments is further detailed in the present embodiment. As shown in fig. 6, the method of this embodiment specifically includes the following steps:
S510: combining the selected advertising copy and advertisement pictures to obtain at least one copy-picture combination.
Specifically, after at least one piece of advertising copy and at least one advertisement picture are selected from the advertisement material data, the selected copy and pictures are combined pairwise to obtain at least one copy-picture combination. For example, suppose the copy selected from the advertisement material data according to the scores is copy A and copy B, and the selected advertisement picture is picture C; then the copy-picture combinations AC and BC can be obtained.
S520, combining at least one file and picture combination with at least one preset background template to obtain at least one creative combination.
The preset background template is a typesetting template which is set in advance according to article characteristics, specific requirements and the like and has a fixed style of combination of the file and the picture. Specifically, after the combination of the document and the picture is obtained, at least one background template can be selected for the combination of the document and the picture according to the category information of the article, the characteristics of the article and the like, and the combination of the document and the picture and at least one preset background template are combined in pairs to obtain at least one creative combination.
S530, based on preset screening factors, at least one creative combination is screened out from all creative combinations to serve as candidate advertisement creative data.
The preset screening factors comprise the category information of the target item and/or the color information of the advertisement picture and the background template in each creative combination. Specifically, to obtain advertising creative data better suited to the target item, the candidate advertising creative data needs to be filtered according to the category information of the item and/or the color information, which may include the main color, of the advertisement picture and the background template in each creative combination. For example, the main color of a picture can be identified by cluster analysis and main-color extraction over the picture's colors using the K-Means clustering algorithm.
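A minimal pure-Python sketch of K-Means-style main-color extraction follows; a production system would use a library implementation, and the deterministic initialization here is an illustrative simplification:

```python
def dominant_color(pixels, k=2, iters=10):
    # Tiny K-Means over RGB triples. Deterministic init: spread the
    # initial centers across the pixel list (illustrative only).
    centers = [pixels[i * (len(pixels) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pixels:
            nearest = min(range(k), key=lambda i: sum(
                (a - b) ** 2 for a, b in zip(p, centers[i])))
            clusters[nearest].append(p)
        centers = [tuple(sum(ch) / len(cl) for ch in zip(*cl)) if cl
                   else centers[i] for i, cl in enumerate(clusters)]
    sizes = [len(cl) for cl in clusters]
    # The main color is the center of the largest cluster.
    return centers[sizes.index(max(sizes))]

# Mostly red pixels with a few blue outliers.
pixels = [(250, 10, 10)] * 8 + [(10, 10, 250)] * 2
main = dominant_color(pixels)
```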
When at least one creative combination is screened out from all creative combinations based on the category information of the target article to serve as candidate advertisement creative data, the background template style corresponding to the target article is determined according to the category information of the target article and the preset corresponding relationship between the category of the article and the background template style, and then the creative combination matched with the background template style is selected from all creative combinations.
When at least one creative combination is screened out from each creative combination as candidate advertising creative data based on the color information of the advertising picture and the background template in each creative combination, specifically, at least one creative combination is screened out from each creative combination as candidate advertising creative data by utilizing an HSV color model and according to the main color of the advertising picture and the main color of the background template and a mode of preferentially using adjacent color matching and contrast color matching.
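Adjacent versus contrast color matching on the HSV wheel can be sketched with the standard-library colorsys module; the 60-degree adjacency threshold is an assumed cutoff for illustration, not a value specified by the patent:

```python
import colorsys

def hue_relation(rgb_a, rgb_b):
    # Classify two main colors as 'adjacent' or 'contrast' by their
    # hue distance on the HSV wheel. The 60-degree threshold is an
    # assumed illustrative cutoff.
    h_a = colorsys.rgb_to_hsv(*[c / 255 for c in rgb_a])[0] * 360
    h_b = colorsys.rgb_to_hsv(*[c / 255 for c in rgb_b])[0] * 360
    diff = abs(h_a - h_b)
    diff = min(diff, 360 - diff)  # wrap around the hue circle
    return "adjacent" if diff <= 60 else "contrast"

rel1 = hue_relation((255, 0, 0), (255, 128, 0))   # red vs orange
rel2 = hue_relation((255, 0, 0), (0, 255, 255))   # red vs cyan
```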
S540: when the size of the target item region in the advertisement picture contained in the candidate advertisement creative data is smaller than a preset threshold, cropping the target item region and updating the advertisement picture contained in the candidate advertisement creative data with the cropped target item picture.
The preset threshold may be set in advance according to specific requirements. Specifically, when the size of the target article area in the advertisement picture is smaller than the preset threshold, the article may occupy too little of the picture for the user to see it clearly. The target article area in the advertisement picture can be identified with an object detection algorithm; the advertisement picture is then intelligently cropped to obtain a target article picture that highlights the article, and the advertisement picture contained in the candidate advertisement creative data is updated based on this picture.
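As a hypothetical sketch of the cropping decision, assuming the object detector has already returned a bounding box (the 20% area threshold and 10% margin below are illustrative values, not from the embodiment):

```python
# Hypothetical sketch: if the detected article box covers less than a preset
# fraction of the picture, crop around the box (with a margin) so the
# article fills more of the frame. Names and thresholds are illustrative.
def crop_if_small(width, height, box, threshold=0.2, margin=0.1):
    """box = (x0, y0, x1, y1). Returns the crop window, or None if no crop."""
    x0, y0, x1, y1 = box
    area_ratio = ((x1 - x0) * (y1 - y0)) / float(width * height)
    if area_ratio >= threshold:
        return None                      # article already prominent enough
    mx = int((x1 - x0) * margin)         # keep some context around the article
    my = int((y1 - y0) * margin)
    return (max(0, x0 - mx), max(0, y0 - my),
            min(width, x1 + mx), min(height, y1 + my))
```

The returned window would then be used to crop the picture and replace the advertisement picture in the candidate advertisement creative data.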
And S550, performing color rendering processing on the target object area in the advertisement picture according to the color information of the advertisement picture in the candidate advertisement creative data.
Wherein the color-rendering processing includes at least one of adjusting brightness, adjusting contrast, and adjusting saturation. Specifically, after the updated advertisement pictures in the candidate advertisement creative data are obtained, image analysis is performed based on the colors of each advertisement picture and of its target object area. When the color of the target object area is too dark, the area may not stand out; its brightness can then be raised to highlight the object and attract the user to click or otherwise trigger it. Similarly, when the color contrast between the target object area and the rest of the advertisement picture is weak, the area may blend into the background, and the contrast can be increased to make it stand out. When the color saturation of the target object area and the advertisement picture is weak, the saturation can likewise be increased. In this embodiment, optionally, performing color-rendering processing on the target object area in the advertisement picture according to the color information of the advertisement picture in the candidate advertisement creative data includes the following steps E1 to E3:
Step E1: determining whether the advertisement picture is a color picture, a white picture, or a black picture according to the pixel values of the pixel points contained in the advertisement picture in the candidate advertisement creative data.
The pixel information in a picture reflects its color information. Specifically, the pixel values of the pixel points in the advertisement picture are counted (the pixel value range is 0-255). When pixel points with values exceeding 190 exist and their number exceeds 50% of the total pixel points, the advertisement picture is determined to be a white picture. When the number of pixel points with values exceeding 190 does not exceed 15% of the total, and the number of pixel points with values not exceeding 55 exceeds 50% of the total, the advertisement picture is determined to be a black picture. In all other cases, the advertisement picture is determined to be a color picture. When the advertisement picture is determined to be a white picture, color-rendering processing is performed on it.
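The white/black/color rule above translates directly into code. The thresholds (190, 55, 50%, 15%) follow the description; the function name and the use of per-pixel brightness values are implementation assumptions.

```python
# Classification rule from the description above; `classify_picture` and the
# brightness-value input are illustrative implementation choices.
def classify_picture(values):
    """Classify a picture as 'white', 'black', or 'color' from its
    per-pixel values (0-255)."""
    n = len(values)
    bright = sum(1 for v in values if v > 190)   # pixels exceeding 190
    dark = sum(1 for v in values if v <= 55)     # pixels not exceeding 55
    if bright > 0.5 * n:
        return "white"
    if bright <= 0.15 * n and dark > 0.5 * n:
        return "black"
    return "color"
```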
Step E2: when the advertisement picture is a color picture, performing color-rendering processing on the target object area in the advertisement picture based on a first preset brightness parameter value, a first preset contrast parameter value and a first preset saturation parameter value.
The first preset brightness parameter value, the first preset contrast parameter value and the first preset saturation parameter value can be set in advance according to specific requirements. For example, when the advertisement picture is a color picture, the first preset brightness parameter value may be set to 15, the first preset contrast parameter value may be set to 10, and the first preset saturation parameter value may be set to 10. Further, according to the first preset brightness parameter value, the first preset contrast parameter value and the first preset saturation parameter value, the target object area in the advertisement picture is subjected to color-rendering processing.
Step E3: when the advertisement picture is a black picture, performing color-rendering processing on the target object area in the advertisement picture based on a second preset brightness parameter value, a second preset contrast parameter value and a second preset saturation parameter value.
The second preset brightness parameter value is larger than the first preset brightness parameter value, the second preset contrast parameter value is larger than the first preset contrast parameter value, and the second preset saturation parameter value is larger than the first preset saturation parameter value. These second preset parameter values may likewise be set in advance according to specific requirements. When the advertisement picture is a black picture, color-rendering has a less visible effect, so a stronger adjustment can be applied. For example, the second preset brightness parameter value may be set to 20, the second preset contrast parameter value to 15, and the second preset saturation parameter value to 15. Color-rendering processing is then performed on the target object area in the advertisement picture according to these second preset parameter values.
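One way to interpret the brightness, contrast, and saturation parameter values as concrete per-pixel operations is sketched below; the additive-brightness, mid-gray-contrast, and gray-axis-saturation interpretations are assumptions, since the embodiment only names the parameters.

```python
# Illustrative interpretation of the retouching parameters as per-pixel ops:
# brightness is an additive shift, contrast scales around mid-gray (128),
# and saturation pushes channels away from the pixel's gray level.
def retouch(rgb, brightness, contrast, saturation):
    """Apply the three adjustments to one RGB pixel; results clamped to 0-255."""
    gray = sum(rgb) / 3.0
    out = []
    for c in rgb:
        c = c + brightness                                # brightness shift
        c = 128 + (c - 128) * (1 + contrast / 100.0)      # contrast around mid-gray
        c = gray + (c - gray) * (1 + saturation / 100.0)  # saturation vs. gray axis
        out.append(max(0, min(255, int(round(c)))))
    return tuple(out)
```

Under the example values above, a color picture would use `retouch(pixel, 15, 10, 10)` and a black picture the stronger `retouch(pixel, 20, 15, 15)`.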
Through the above steps, the advertisement picture can be color-rendered according to its own colors, so that the advertised article appears more attractive and the picture displays the article to full effect.
Fig. 7 is a flow diagram of the advertisement creative data selection and optimization process provided by an embodiment of the present invention. As shown in fig. 7, the advertisement copy of the target item is identified and extracted from the item detail page and/or the advertisement material library by the OCR model in the material mining module, and the advertisement picture of the target item is identified through the image segmentation model and the object detection model in the same module. The scores of each advertisement copy and each advertisement picture are calculated with a multi-armed bandit (MAB) algorithm, and at least one advertisement copy and at least one advertisement picture are selected from the advertisement material data according to these scores. Candidate advertisement creative data are obtained through the creative element combination module and input into the creative optimization module, and the target advertisement creative data are obtained from the output of the creative selection model in that module.
According to the scheme of this embodiment, the selected advertisement copy and advertisement pictures are combined to obtain at least one copy-and-picture combination; each such combination is combined with at least one preset background template to obtain at least one creative combination; at least one creative combination is screened out as candidate advertisement creative data based on preset screening factors; when the size of the target article area in an advertisement picture contained in the candidate advertisement creative data is smaller than a preset threshold, the target article area is cropped and the advertisement picture is updated with the cropped target article picture; and color-rendering processing is performed on the target article area according to the color information of the advertisement picture. In this way, creative combinations can be pre-screened according to the category information, color information, and the like of the target article, yielding candidate advertisement creative data whose colors are closer to manual designs and therefore more attractive. Finally, color-rendering the advertisement pictures according to their own colors makes the advertised articles more appealing and lets the pictures display the articles to full effect.
Fig. 8 is a flowchart of a model training method according to an embodiment of the present invention. This embodiment trains an initial model to obtain a creative selection model. The method may be executed by the model training apparatus provided by an embodiment of the present invention, which may be implemented in software and/or hardware. As shown in fig. 8, the method specifically includes the following steps:
S610, training sample data is obtained.
The training sample data comprises sample advertisement creative data corresponding to a sample article and the standard recommendation probability values corresponding to the sample advertisement creative data, and the sample advertisement creative data comprises advertisement pictures and advertisement copy. In an alternative embodiment, sample data may be obtained from a historical materials database storing sample advertisement creative data and the corresponding standard recommendation probability values. For example, based on big data and a data analysis algorithm, some advertisement creative data and their corresponding standard recommendation probability values can be determined and stored in the historical materials database, from which the sample data may then be obtained.
S620, obtaining a sparse feature vector and a picture feature vector corresponding to the sample advertisement creative data, and obtaining a prediction recommendation probability value corresponding to the sample advertisement creative data based on the sparse feature vector, the picture feature vector and a creative selection model to be trained.
Wherein the sparse feature vector reflects several types of sparse features, and the picture feature vector reflects the picture features of the advertisement picture, which can be obtained from the advertisement picture in the sample advertisement creative data. The predicted recommendation probability value is the recommendation probability value output by the creative selection model to be trained after calculation on the sparse feature vector and the picture feature vector. Specifically, the sparse feature vector and the picture feature vector are input into the creative selection model to be trained, which outputs the predicted recommendation probability value corresponding to the sample advertisement creative data through model calculation.
S630, determining a loss function according to the standard recommendation probability value and the prediction recommendation probability value, adjusting network parameters in the creative selection model based on the loss function, and stopping training when a preset iteration stopping condition is met.
The loss function maps the value of a random event, or of a related random variable, to a non-negative real number representing the "risk" or "loss" of that event. The network parameters are configuration variables inside the model, whose values can be adjusted according to the loss function. The creative selection model is used to fuse the sparse feature vector and the picture feature vector with a self-attention mechanism and to output the predicted recommendation probability value based on the fusion result. In this scheme, optionally, the creative selection model comprises a multilayer perceptron (MLP) module, a self-attention module, and an output module. The MLP module outputs a first feature vector based on the sparse feature vector; the self-attention module outputs a second feature vector based on the sparse feature vector and the picture feature vector; and the output module outputs the predicted recommendation probability value based on the first and second feature vectors.
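A minimal numpy sketch of this forward pass follows. The random weights, single attention head, and all dimensions are illustrative assumptions; a trained model would learn these weights by minimizing the loss function.

```python
# Sketch of the described architecture: an MLP branch over the sparse
# features, a self-attention branch fusing sparse and picture tokens,
# and a sigmoid output head. Dimensions and weights are illustrative.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def forward(sparse_vec, pic_vec, rng):
    """One forward pass returning a recommendation probability in (0, 1)."""
    d = pic_vec.shape[0]
    # MLP module: first feature vector from the sparse features
    W_mlp = rng.standard_normal((sparse_vec.shape[0], d)) * 0.1
    first = np.tanh(sparse_vec @ W_mlp)
    # Self-attention module: fuse a projected sparse token with the picture token
    W_in = rng.standard_normal((sparse_vec.shape[0], d)) * 0.1
    tokens = np.stack([sparse_vec @ W_in, pic_vec])        # shape (2, d)
    Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    attn = softmax(q @ k.T / np.sqrt(d))                   # (2, 2) attention weights
    second = (attn @ v).mean(axis=0)                       # fused second feature vector
    # Output module: concatenate both branches and squash to a probability
    w_out = rng.standard_normal(2 * d) * 0.1
    logit = np.concatenate([first, second]) @ w_out
    return 1.0 / (1.0 + np.exp(-logit))
```

Training would compare this output against the standard recommendation probability value and adjust the weight matrices by gradient descent on the loss.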
Specifically, the sparse feature vector and the picture feature vector are input into the creative selection model to be trained, and the corresponding predicted recommendation probability value is obtained. At this point there is a large gap between the predicted recommendation probability value and the standard recommendation probability value, and the creative selection model to be trained is continuously optimized according to the loss function and this gap. The network parameters of the creative selection model are adjusted so that the difference between the predicted and standard recommendation probability values keeps decreasing, and the trained creative selection model is obtained when the preset iteration stop condition is met.
According to the technical scheme, training sample data can be obtained, sparse feature vectors and picture feature vectors corresponding to the sample advertisement creative data are obtained, and the prediction recommendation probability value corresponding to the sample advertisement creative data is obtained based on the sparse feature vectors, the picture feature vectors and the creative selection model to be trained. And determining a loss function according to the standard recommendation probability value and the prediction recommendation probability value, adjusting network parameters in the creative selection model based on the loss function, and stopping training when a preset iteration stopping condition is met. According to the technical scheme, the creative selection model can be continuously optimized, so that the predicted recommendation probability value output by the creative selection model is closer to the standard recommendation probability value, and the accuracy of the predicted recommendation probability value is improved.
According to the technical scheme, the data acquisition, storage, use, processing and the like meet the relevant regulations of national laws and regulations.
Fig. 9 is a schematic structural diagram of an advertisement creative data selection device according to an embodiment of the present invention. This embodiment can automatically and accurately select the optimal target advertisement creative data from the candidate advertisement creative data. The device may be implemented in software and/or hardware and may be integrated in any equipment that provides the advertisement creative data selection function. As shown in fig. 9, the advertisement creative data selection device specifically comprises:
a data obtaining module 910, configured to obtain candidate advertisement creative data corresponding to a target item; wherein the candidate advertisement creative data comprises an advertisement picture and an advertisement copy;
a probability value obtaining module 920, configured to obtain a sparse feature vector and a picture feature vector corresponding to the candidate advertisement creative data, and obtain a recommendation probability value corresponding to the candidate advertisement creative data based on the sparse feature vector, the picture feature vector, and a creative selection model trained in advance;
a data selecting module 930, configured to select target advertisement creative data according to the recommendation probability value;
wherein the creative selection model is to: and fusing based on the sparse feature vector and the picture feature vector by adopting a self-attention mechanism, and outputting the recommendation probability value based on a fusion result.
The creative selection model comprises a multi-layer perceptron neural network (MLP) module, a self-attention module and an output module; wherein:
the MLP module to output a first feature vector based on the sparse feature vector;
the self-attention module is used for outputting a second feature vector based on the sparse feature vector and the picture feature vector;
the output module is configured to output the recommendation probability value based on the first feature vector and the second feature vector.
Optionally, the probability value obtaining module 920 is specifically configured to:
inputting the advertisement pictures in the candidate advertisement creative data into a residual neural network model trained in advance;
and obtaining the picture characteristic vector output by the residual error neural network model.
Optionally, the probability value obtaining module 920 is further configured to:
acquiring a sample picture and a classification label corresponding to the sample picture; wherein the classification label is a product word of a sample item contained in the sample picture, the product word being a vocabulary for characterizing a category of the sample item and not containing brand information;
inputting the sample picture into the residual error neural network model to obtain the prediction classification output by the residual error neural network model;
determining a loss function based on the prediction classification and the classification label, adjusting network parameters in the residual error neural network model based on the loss function, and stopping training when a preset iteration stopping condition is met.
Optionally, the preset iteration stop condition includes that the prediction accuracy of the residual neural network model reaches a preset accuracy range, and the preset accuracy range may include [75%,90% ].
Optionally, the tail of the residual neural network model comprises three fully-connected layers for outputting 32-dimensional, 128-dimensional and 256-dimensional vectors, respectively.
Optionally, the self-attention module comprises a multi-head self-attention module in a Transformer model.
Optionally, the data obtaining module 910 is specifically configured to:
acquiring a plurality of advertisement material data corresponding to the target object, wherein the advertisement material data comprise advertisement copy and advertisement pictures;

selecting at least one advertisement copy and at least one advertisement picture from the advertisement material data according to the on-line click data corresponding to each advertisement material data;

and combining the selected advertisement copy and the selected advertisement pictures to obtain at least one candidate advertisement creative data.
Optionally, the data obtaining module 910 is further configured to:
identifying and extracting advertisement copy from the item detail page and/or the advertisement creative material of the target item;
and positioning and cutting the article picture in the article detail page of the target article to obtain the advertisement picture.
Optionally, the data obtaining module 910 is further configured to:
identifying candidate copy from the article detail page and/or the advertisement creative material of the target article based on a preset character recognition model;

screening out benefit-point copy from the candidate copy based on a first vocabulary containing preset benefit-point words;

and screening out selling-point copy from the candidate copy remaining after the benefit-point copy is removed, based on a preset word-count limit and/or a second vocabulary containing preset non-selling-point words.
Optionally, the data obtaining module 910 is further configured to:
for each advertisement material data, determining its score according to the average on-line click value of the advertisement material data and the cumulative number of times the advertisement material data has been selected;
and selecting at least one advertisement file and at least one advertisement picture from each advertisement material data according to the score of each advertisement material data.
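The embodiment does not give the exact scoring formula. A UCB1-style multi-armed bandit score is one common way to combine an average click value with a cumulative selection count; the sketch below is an assumption, not the claimed formula.

```python
# Hedged sketch: UCB1-style score combining mean click value with an
# exploration bonus that shrinks as a material is selected more often.
import math

def material_score(mean_clicks, times_selected, total_selections):
    """Mean reward plus a UCB1-style exploration bonus."""
    if times_selected == 0:
        return float("inf")        # always try unexplored materials first
    bonus = math.sqrt(2.0 * math.log(total_selections) / times_selected)
    return mean_clicks + bonus

def pick_top(materials, k=2):
    """materials: list of (name, mean_clicks, times_selected) tuples."""
    total = sum(t for _, _, t in materials) or 1
    ranked = sorted(materials,
                    key=lambda m: material_score(m[1], m[2], total),
                    reverse=True)
    return [name for name, _, _ in ranked[:k]]
```

Under this rule, a never-selected material is always tried first, after which high-click materials dominate.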
Optionally, the data obtaining module 910 is further configured to:
combining the selected advertisement copy and the advertisement pictures to obtain at least one copy-and-picture combination;

combining the at least one copy-and-picture combination with at least one preset background template to obtain at least one creative combination;

screening at least one creative combination from the creative combinations as candidate advertisement creative data based on preset screening factors, where the preset screening factors comprise the category information of the target object and/or the color information of the advertisement picture and the background template in each creative combination.
Optionally, the data obtaining module 910 is further configured to:
and under the condition that the size of a target article area in the advertisement picture contained in the candidate advertisement creative data is smaller than a preset threshold value, cutting the target article area, and updating the advertisement picture contained in the candidate advertisement creative data by using the cut target article picture.
Optionally, the data obtaining module 910 is further configured to:
according to the color information of the advertisement picture in the candidate advertisement creative data, performing color-rendering processing on a target article area in the advertisement picture; wherein the rendering process includes at least one of adjusting brightness, adjusting contrast, and adjusting saturation.
Optionally, the data obtaining module 910 is further configured to:
determining whether the advertisement picture is a color picture or a black picture according to pixel values of pixel points contained in the advertisement picture in the candidate advertisement creative data;
under the condition that the advertisement picture is a color picture, performing color-rendering processing on a target object area in the advertisement picture based on a first preset brightness parameter value, a first preset contrast parameter value and a first preset saturation parameter value;
under the condition that the advertisement picture is a black picture, performing color-rendering processing on a target object area in the advertisement picture based on a second preset brightness parameter value, a second preset contrast parameter value and a second preset saturation parameter value;
the second preset brightness parameter value is greater than the first preset brightness parameter value, the second preset contrast parameter value is greater than the first preset contrast parameter value, and the second preset saturation parameter value is greater than the first preset saturation parameter value.
Optionally, the apparatus is further configured to:
acquiring first coding information of the advertisement picture and second coding information of the advertisement copy in the target advertisement creative data, and generating a Uniform Resource Locator (URL) corresponding to the target advertisement creative data according to the first coding information and the second coding information;
under the condition that an access request aiming at the URL sent by a client is received, obtaining an advertisement picture and an advertisement document in the target advertisement creative data according to the URL, and executing a drawing operation on the obtained advertisement picture and the obtained advertisement document to obtain a target advertisement creative image; and sending the target advertisement creative image to the client for display.
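Purely as an illustration of carrying the two encodings in a URL and recovering them on the server side, the sketch below uses invented host, path, and parameter names; none of them come from the embodiment.

```python
# Illustrative sketch: encode picture/copy identifiers into a URL's query
# string and decode them when an access request arrives. The base URL and
# the "pic"/"copy" parameter names are assumptions.
from urllib.parse import urlencode, urlparse, parse_qs

def creative_url(pic_code, copy_code, base="https://ad.example.com/creative"):
    """Build the URL for a target advertisement creative."""
    return base + "?" + urlencode({"pic": pic_code, "copy": copy_code})

def decode_creative_url(url):
    """Server side: recover the two encodings so the advertisement picture
    and copy can be fetched before drawing the final creative image."""
    q = parse_qs(urlparse(url).query)
    return q["pic"][0], q["copy"][0]
```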
The product can execute the method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
Fig. 10 is a schematic structural diagram of a model training apparatus provided in an embodiment of the present invention, where the apparatus may be implemented in software and/or hardware, and the apparatus may be integrated in any device that provides a function of model training, as shown in fig. 10, the model training apparatus specifically includes:
a sample data obtaining module 1010, configured to obtain training sample data, where the training sample data includes sample advertisement creative data corresponding to a sample article and a standard recommendation probability value corresponding to the sample advertisement creative data, and the sample advertisement creative data includes an advertisement picture and an advertisement copy;
a vector obtaining module 1020, configured to obtain a sparse feature vector and a picture feature vector corresponding to the sample ad creative data, and obtain a prediction recommendation probability value corresponding to the sample ad creative data based on the sparse feature vector, the picture feature vector, and a creative selection model to be trained;
the model training module 1030 is configured to determine a loss function according to the standard recommendation probability value and the predicted recommendation probability value, adjust a network parameter in the creative selection model based on the loss function, and stop training when a preset iteration stop condition is met;
wherein the creative selection model is to: and fusing the sparse feature vector and the picture feature vector by adopting a self-attention mechanism, and outputting the prediction recommendation probability value based on a fusion result.
The creative selection model comprises a multi-layer perceptron neural network (MLP) module, a self-attention module and an output module; wherein:
the MLP module is used for outputting a first feature vector based on the sparse feature vector;
the self-attention module is used for outputting a second feature vector based on the sparse feature vector and the picture feature vector;
the output module is configured to output the predicted recommendation probability value based on the first feature vector and the second feature vector.
The product can execute the method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
Fig. 11 is a schematic structural diagram of a computer device according to an embodiment of the present invention, showing a block diagram of an exemplary computer device 12 suitable for implementing embodiments of the present invention. The computer device 12 shown in fig. 11 is only an example and imposes no limitation on the functions or scope of use of the embodiments of the present invention.
As shown in FIG. 11, computer device 12 is embodied in the form of a general purpose computing device. The components of computer device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Computer device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 30 and/or cache memory 32. Computer device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 11 and commonly referred to as a "hard drive"). Although not shown in FIG. 11, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of the described embodiments of the invention.
Computer device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with computer device 12, and/or with any devices (e.g., network card, modem, etc.) that enable computer device 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Also, computer device 12 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via network adapter 20. As shown, network adapter 20 communicates with the other modules of computer device 12 via bus 18. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with computer device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 16 executes programs stored in the system memory 28 to perform various functional applications and data processing, such as implementing the multi-modal-based advertisement creative selection method provided by an embodiment of the present invention: acquiring candidate advertisement creative data corresponding to a target article, wherein the candidate advertisement creative data comprises an advertisement picture and an advertisement copy; acquiring a sparse feature vector and a picture feature vector corresponding to the candidate advertisement creative data, and acquiring a recommendation probability value corresponding to the candidate advertisement creative data based on the sparse feature vector, the picture feature vector and a pre-trained creative selection model; wherein the creative selection model is used to fuse the sparse feature vector and the picture feature vector by a self-attention mechanism and output the recommendation probability value based on the fusion result.
An embodiment of the invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the multi-modal advertisement creative selection method provided by any embodiment of the invention: acquiring candidate advertisement creative data corresponding to a target item, wherein the candidate advertisement creative data comprises an advertisement picture and an advertisement copy; acquiring a sparse feature vector and a picture feature vector corresponding to the candidate advertisement creative data, and obtaining a recommendation probability value corresponding to the candidate advertisement creative data based on the sparse feature vector, the picture feature vector, and a pre-trained creative selection model; wherein the creative selection model is configured to fuse the sparse feature vector and the picture feature vector using a self-attention mechanism and output the recommendation probability value based on the fusion result.
Any combination of one or more computer-readable media may be employed. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing description covers merely preferred embodiments of the invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements, and substitutions may be made without departing from the scope of the invention. Therefore, although the present invention has been described in some detail through the above embodiments, it is not limited to those embodiments and may include other equivalent embodiments without departing from the spirit of the invention; the scope of the invention is determined by the appended claims.

Claims (18)

1. A multi-modal advertisement creative extraction method, characterized by comprising the following steps:
acquiring candidate advertisement creative data corresponding to a target item, wherein the candidate advertisement creative data comprises an advertisement picture and an advertisement copy;
acquiring a sparse feature vector and a picture feature vector corresponding to the candidate advertisement creative data, and acquiring a recommendation probability value corresponding to the candidate advertisement creative data based on the sparse feature vector, the picture feature vector and a pre-trained creative selection model;
selecting target advertisement creative data according to the recommendation probability value;
wherein the creative selection model is configured to: fuse the sparse feature vector and the picture feature vector using an attention mechanism, and output the recommendation probability value based on the fusion result.
2. The method of claim 1, wherein the creative selection model includes a multi-layer perceptron neural network (MLP) module, a self-attention module, and an output module; wherein:
the MLP module is used for outputting a first feature vector based on the sparse feature vector;
the self-attention module is used for outputting a second feature vector based on the sparse feature vector and the picture feature vector;
the output module is configured to output the recommendation probability value based on the first feature vector and the second feature vector.
3. The method of claim 1, wherein the acquiring of the picture feature vector corresponding to the candidate advertisement creative data comprises:
inputting the advertisement picture in the candidate advertisement creative data into a pre-trained residual neural network model;
and obtaining the picture feature vector output by the residual neural network model.
4. The method of claim 3, wherein the training of the residual neural network model comprises:
acquiring a sample picture and a classification label corresponding to the sample picture, wherein the classification label is a product word of a sample item contained in the sample picture, the product word being a term that characterizes the category of the sample item and contains no brand information;
inputting the sample picture into a residual neural network model to be trained, and obtaining the predicted classification output by the residual neural network model;
determining a loss function based on the predicted classification and the classification label, adjusting network parameters of the residual neural network model based on the loss function, and stopping training when a preset iteration-stopping condition is met.
5. The method of claim 2, wherein the tail of the residual neural network model comprises three fully-connected layers that output 32-dimensional, 128-dimensional, and 256-dimensional vectors, respectively.
6. The method of any one of claims 1-5, wherein the acquiring of candidate advertisement creative data corresponding to the target item comprises:
acquiring a plurality of advertisement material data corresponding to the target item, wherein each advertisement material data comprises an advertisement copy and an advertisement picture;
selecting at least one advertisement copy and at least one advertisement picture from the advertisement material data according to on-line click data corresponding to each advertisement material data;
and combining the selected advertisement copy and advertisement pictures to obtain at least one candidate advertisement creative data.
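The combination step above is a cross-product of the selected copy and pictures. A minimal sketch, with hypothetical copy strings and file names:

```python
from itertools import product

def build_candidates(copies, pictures, max_candidates=None):
    """Pair every selected advertisement copy with every selected
    advertisement picture to form candidate creative data."""
    candidates = [{"copy": c, "picture": p} for c, p in product(copies, pictures)]
    return candidates[:max_candidates] if max_candidates else candidates

# Illustrative inputs: 2 copies x 2 pictures -> 4 candidate creatives.
cands = build_candidates(["50% off today", "Free shipping"],
                         ["img_a.jpg", "img_b.jpg"])
```

A real system would then score and screen these candidates (claims 9 and 10) rather than serving the full cross-product.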
7. The method of claim 6, wherein the acquiring of the plurality of advertisement material data corresponding to the target item comprises:
identifying and extracting advertisement copy from the item detail page and/or the advertisement creative materials of the target item;
and locating and cropping the item picture in the item detail page of the target item to obtain the advertisement picture.
8. The method of claim 7, wherein the advertisement copy comprises benefit-point copy and selling-point copy, and the identifying and extracting of advertisement copy from the item detail page and/or the advertisement creative materials of the target item comprises:
identifying candidate copy from the item detail page and/or the advertisement creative materials of the target item based on a preset character recognition model;
screening out the benefit-point copy from the candidate copy based on a first vocabulary containing preset benefit-point words;
and screening out the selling-point copy from the copy remaining after the benefit-point copy is removed from the candidate copy, based on a preset word-count limit and/or a second vocabulary containing preset non-selling-point words.
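The two-stage screening in claim 8 can be sketched as follows. The word lists, the length limit, and the substring matching are illustrative placeholders; the claim only requires a benefit-point vocabulary, then a length limit and/or non-selling-point vocabulary applied to the remainder.

```python
def screen_copy(candidate_copy, benefit_words, non_selling_words, max_len=30):
    """Split OCR-extracted candidate copy into benefit-point copy and
    selling-point copy via the two-stage screening described in the claim."""
    # Stage 1: copy containing any benefit-point word is benefit-point copy.
    benefit = [c for c in candidate_copy if any(w in c for w in benefit_words)]
    # Stage 2: from the remainder, keep copy under the length limit that
    # contains no non-selling-point word.
    rest = [c for c in candidate_copy if c not in benefit]
    selling = [c for c in rest
               if len(c) <= max_len and not any(w in c for w in non_selling_words)]
    return benefit, selling

benefit, selling = screen_copy(
    ["Save 20% at checkout", "Durable stainless steel", "See store for details"],
    benefit_words=["Save", "off", "Free"],
    non_selling_words=["store for details"],
)
```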
9. The method of claim 6, wherein the selecting of at least one advertisement copy and at least one advertisement picture from the advertisement material data according to the on-line click data corresponding to each advertisement material data comprises:
for each advertisement material data, determining a score of the advertisement material data according to the average on-line click volume of the advertisement material data and the cumulative number of times the advertisement material data has been selected;
and selecting at least one advertisement copy and at least one advertisement picture from the advertisement material data according to the scores.
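Claim 9 scores material from its average clicks and how often it has been selected, but does not fix the formula. One plausible reading is an exploration bonus that decays with selection count (a UCB-style term); that trade-off is an assumption, not the patented scoring function:

```python
import math

def material_score(avg_clicks, times_selected, exploration_weight=1.0):
    """Score advertisement material by its average on-line clicks, plus a
    bonus that shrinks as the material is selected more often, so fresh
    material still gets exposure. The bonus form is an assumption."""
    return avg_clicks + exploration_weight / math.sqrt(1 + times_selected)

# A heavily-used material needs higher average clicks to keep outranking
# fresh material with comparable performance.
fresh = material_score(avg_clicks=0.9, times_selected=0)
stale = material_score(avg_clicks=1.0, times_selected=100)
```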
10. The method of claim 6, wherein the combining of the selected advertisement copy and advertisement pictures to obtain at least one candidate advertisement creative data comprises:
combining the selected advertisement copy and advertisement pictures to obtain at least one copy-and-picture combination;
combining the at least one copy-and-picture combination with at least one preset background template to obtain at least one creative combination;
and screening at least one creative combination from the creative combinations as candidate advertisement creative data based on preset screening factors, wherein the preset screening factors comprise category information of the target item and/or color information of the advertisement picture and the background template in each creative combination.
11. The method of claim 6, wherein after the candidate advertisement creative data is obtained, the method further comprises:
performing color-rendering processing on a target item area in the advertisement picture according to color information of the advertisement picture in the candidate advertisement creative data, wherein the color-rendering processing includes at least one of adjusting brightness, adjusting contrast, and adjusting saturation.
12. The method of claim 11, wherein the performing of color-rendering processing on the target item area in the advertisement picture according to the color information of the advertisement picture in the candidate advertisement creative data comprises:
determining whether the advertisement picture is a color picture or a black-and-white picture according to the pixel values of the pixel points contained in the advertisement picture in the candidate advertisement creative data;
when the advertisement picture is a color picture, performing color-rendering processing on the target item area in the advertisement picture based on a first preset brightness parameter value, a first preset contrast parameter value, and a first preset saturation parameter value;
when the advertisement picture is a black-and-white picture, performing color-rendering processing on the target item area in the advertisement picture based on a second preset brightness parameter value, a second preset contrast parameter value, and a second preset saturation parameter value;
wherein the second preset brightness parameter value is greater than the first preset brightness parameter value, the second preset contrast parameter value is greater than the first preset contrast parameter value, and the second preset saturation parameter value is greater than the first preset saturation parameter value.
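The pixel-value test and the two parameter sets in claim 12 can be sketched as below. The grayscale heuristic (near-equal RGB channels), the tolerance, and the concrete parameter values are illustrative assumptions; the claim only requires that the second set exceed the first in all three parameters.

```python
def is_black_and_white(pixels, tol=8):
    """Treat a picture as black-and-white when every pixel's RGB channels
    are (nearly) equal. `pixels` is a list of (r, g, b) tuples; `tol` is
    an illustrative channel-spread tolerance."""
    return all(max(r, g, b) - min(r, g, b) <= tol for r, g, b in pixels)

# Milder first parameter set for color pictures; stronger second set for
# black-and-white pictures, each value strictly larger as claim 12 requires.
COLOR_PARAMS = {"brightness": 1.05, "contrast": 1.10, "saturation": 1.10}
BW_PARAMS = {"brightness": 1.20, "contrast": 1.30, "saturation": 1.40}

def render_params(pixels):
    """Pick the color-rendering parameter set for a picture's pixels."""
    return BW_PARAMS if is_black_and_white(pixels) else COLOR_PARAMS

gray = [(10, 10, 10), (200, 198, 199)]   # channels nearly equal
color = [(255, 0, 0), (12, 200, 96)]     # strongly chromatic
```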
13. The method of any of claims 1-5, wherein after the target advertisement creative data is selected, the method further comprises:
acquiring first encoding information of the advertisement picture and second encoding information of the advertisement copy in the target advertisement creative data, and generating a uniform resource locator (URL) corresponding to the target advertisement creative data according to the first encoding information and the second encoding information;
upon receiving an access request for the URL from a client, obtaining the advertisement picture and the advertisement copy in the target advertisement creative data according to the URL, performing a drawing operation on the obtained advertisement picture and advertisement copy to obtain a target advertisement creative image, and sending the target advertisement creative image to the client for display.
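A minimal sketch of the URL-generation step in claim 13. The domain, query-parameter names, and the base64 encoding are hypothetical; the claim only requires that the URL carry both pieces of encoding information so the server can reconstruct and draw the creative on request.

```python
import base64

def creative_url(picture_id, ad_copy, base="https://ads.example.com/creative"):
    """Build a URL embedding the picture's and the copy's encoding
    information (URL-safe base64, padding stripped, as one illustrative
    encoding choice)."""
    pic_code = base64.urlsafe_b64encode(picture_id.encode()).decode().rstrip("=")
    copy_code = base64.urlsafe_b64encode(ad_copy.encode()).decode().rstrip("=")
    return f"{base}?pic={pic_code}&copy={copy_code}"

url = creative_url("sku123.jpg", "50% off today")
```

Deferring the drawing operation to request time, as the claim does, means only two short codes are stored per creative instead of a rendered image.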
14. A method of model training, the method comprising:
acquiring training sample data, wherein the training sample data comprises sample advertisement creative data corresponding to a sample item and a standard recommendation probability value corresponding to the sample advertisement creative data, and the sample advertisement creative data comprises an advertisement picture and an advertisement copy;
acquiring a sparse feature vector and a picture feature vector corresponding to the sample advertisement creative data, and obtaining a predicted recommendation probability value corresponding to the sample advertisement creative data based on the sparse feature vector, the picture feature vector, and a creative selection model to be trained;
determining a loss function according to the standard recommendation probability value and the predicted recommendation probability value, adjusting network parameters of the creative selection model based on the loss function, and stopping training when a preset iteration-stopping condition is met;
wherein the creative selection model is configured to: fuse the sparse feature vector and the picture feature vector using an attention mechanism, and output the predicted recommendation probability value based on the fusion result.
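Claim 14 names a loss between the standard and predicted recommendation probabilities and a preset iteration-stopping condition without fixing either. Binary cross-entropy and an epoch-budget-or-loss-threshold stop are plausible choices, shown here as assumptions:

```python
import math

def bce_loss(standard_p, predicted_p, eps=1e-7):
    """Binary cross-entropy between the standard (label) recommendation
    probability and the model's predicted probability; one plausible form
    for the loss function named in the claim."""
    p = min(max(predicted_p, eps), 1 - eps)  # clamp away from log(0)
    return -(standard_p * math.log(p) + (1 - standard_p) * math.log(1 - p))

def should_stop(epoch, loss, max_epochs=100, loss_threshold=0.01):
    """An illustrative preset iteration-stopping condition: stop when the
    epoch budget is exhausted or the loss falls below a threshold."""
    return epoch >= max_epochs or loss <= loss_threshold

good = bce_loss(1.0, 0.9)  # near-correct prediction -> small loss
bad = bce_loss(1.0, 0.1)   # confidently wrong prediction -> large loss
```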
15. An advertisement creative data selection apparatus, characterized by comprising:
a data acquisition module, configured to acquire candidate advertisement creative data corresponding to a target item, wherein the candidate advertisement creative data comprises an advertisement picture and an advertisement copy;
a probability value acquisition module, configured to acquire a sparse feature vector and a picture feature vector corresponding to the candidate advertisement creative data, and to obtain a recommendation probability value corresponding to the candidate advertisement creative data based on the sparse feature vector, the picture feature vector, and a pre-trained creative selection model;
a data selection module, configured to select target advertisement creative data according to the recommendation probability value;
wherein the creative selection model is configured to: fuse the sparse feature vector and the picture feature vector using an attention mechanism, and output the recommendation probability value based on the fusion result.
16. A model training apparatus, comprising:
a sample data acquisition module, configured to acquire training sample data, wherein the training sample data comprises sample advertisement creative data corresponding to a sample item and a standard recommendation probability value corresponding to the sample advertisement creative data, and the sample advertisement creative data comprises an advertisement picture and an advertisement copy;
a vector acquisition module, configured to acquire a sparse feature vector and a picture feature vector corresponding to the sample advertisement creative data, and to obtain a predicted recommendation probability value corresponding to the sample advertisement creative data based on the sparse feature vector, the picture feature vector, and a creative selection model to be trained;
a model training module, configured to determine a loss function according to the standard recommendation probability value and the predicted recommendation probability value, adjust network parameters of the creative selection model based on the loss function, and stop training when a preset iteration-stopping condition is met;
wherein the creative selection model is configured to: fuse the sparse feature vector and the picture feature vector using an attention mechanism, and output the predicted recommendation probability value based on the fusion result.
17. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the multi-modal advertisement creative extraction method of any of claims 1-13, or the model training method of claim 14.
18. A computer-readable storage medium having stored thereon computer instructions for causing a processor to implement the multi-modal advertisement creative extraction method of any of claims 1-13, or the model training method of claim 14.
CN202211104745.3A 2022-09-09 2022-09-09 Advertisement creative selection and model training method, device, equipment and storage medium Pending CN115564469A (en)

Publications (1)

Publication Number Publication Date
CN115564469A (en) 2023-01-03


Country Status (2)

Country Link
CN (1) CN115564469A (en)
WO (1) WO2024051609A1 (en)






Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination