CN110489582B - Method and device for generating personalized display image and electronic equipment


Info

Publication number
CN110489582B
CN110489582B (application CN201910765901.2A)
Authority
CN
China
Prior art keywords: features, feature, fusion, model, personalized
Prior art date
Legal status
Active
Application number
CN201910765901.2A
Other languages
Chinese (zh)
Other versions
CN110489582A (en)
Inventor
赵胜林
陈锡显
苏玉鑫
沈小勇
戴宇荣
贾佳亞
赵奕涵
张仁寿
梁志杰
陈俊标
蔡韶曼
邓向阳
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201910765901.2A
Publication of CN110489582A
Application granted
Publication of CN110489582B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50: Information retrieval of still image data
    • G06F16/54: Browsing; Visualisation therefor
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90: Details of database functions independent of the retrieved data types
    • G06F16/95: Retrieval from the web
    • G06F16/953: Querying, e.g. by the use of web search engines
    • G06F16/9535: Search customisation based on user profiles and personalisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods


Abstract

The disclosure provides a method and a device for generating personalized display images, an electronic device for implementing the method, and a computer storage medium, and relates to the technical field of machine learning. The method for generating a personalized display image comprises the following steps: encoding user data of a target user, display object data related to at least one display image, and document data of the at least one display image to obtain a first feature; extracting features of the at least one display image to obtain a second feature; fusing the first feature and the second feature through a trained ranking model to obtain a fusion feature; and predicting, through the ranking model, click information of the target user on the fusion feature so as to determine a personalized display image for the target user. This technical scheme improves the personalization of display images and the click-through rate of the display objects in the display images. Meanwhile, personalized display of images helps improve the user's image-browsing experience.

Description

Method and device for generating personalized display image and electronic equipment
Technical Field
The disclosure relates to the technical field of machine learning, and in particular to a method and a device for generating a personalized display image, and an electronic device for implementing the method and device.
Background
Images are widely used to present information to users; in particular, image advertisements present advertising content to users by way of images. Compared with text advertising, image advertising conveys the information of a display object (e.g., merchandise) more vividly. Image advertising has therefore become the main form of current internet advertising and is widely used in various scenarios (e.g., e-commerce platforms, social scenes, news feeds, etc.).
In the related art, display images (i.e., the image advertisements mentioned above, also referred to as banner advertisements or banners) adopt a unified design for different users; that is, different users receive the same display image for the same commodity. For example, the banner for the same pair of shoes looks the same to every user.
However, the display images provided by the related art lack personalization, which is unfavorable for increasing the click-through rate of the display object in the display image.
It should be noted that the information disclosed in the above background section is only for enhancing understanding of the background of the present disclosure and thus may include information that does not constitute a related art known to those of ordinary skill in the art.
Disclosure of Invention
The disclosure aims to provide a method and a device for generating personalized display images, an electronic device for implementing the method, and a computer storage medium, so as to improve the personalization of display images to a certain extent and thereby improve the click-through rate of the display objects in the display images.
According to a first aspect of the present disclosure, there is provided a method for generating a personalized display image, the method comprising: encoding user data of a target user, display object data related to at least one display image, and document data of the at least one display image to obtain a first feature; extracting features of the at least one display image to obtain a second feature; fusing the first feature and the second feature through a trained ranking model to obtain a fusion feature; and predicting click information of the target user on the fusion feature through the ranking model so as to determine a personalized display image for the target user.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiment, the user data includes: user portrait data and a device identifier corresponding to the user; the display object data includes: classification data, identification, and modification technique data; the document data includes: a document style and a document layout; and the trained ranking model is obtained based on at least one of a neural network model, a decision tree model, and an extreme gradient boosting model.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiment, the performing feature extraction on the at least one display image to obtain a second feature includes:
inputting the display image into a pre-trained neural network model; determining image fully connected layer features according to the fully connected layer of the pre-trained neural network model; and determining image style features according to the inner product of the output vectors of a hidden layer of the pre-trained neural network model.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiment, the method for generating a personalized presentation image further includes:
obtaining a plurality of sets of training samples, wherein each set of training samples comprises: user features, attribute features of at least one display image, and target click information about those attribute features; inputting the user features and the attribute features into a ranking model; fusing the user features and the attribute features through a feature fusion layer of the ranking model to obtain fusion features; and determining predicted click information according to the fusion features, and determining a loss function based on the predicted click information and the target click information, so as to optimize model parameters of the ranking model according to the loss function.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiment, the determining a loss function based on the predicted click information and the target click information includes: and determining a cross entropy function of the predicted click information and the target click information as the loss function.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiment, the fusing of the first feature and the second feature by the trained ranking model to obtain a fusion feature includes:
performing embedding learning on the first feature through an embedding layer of the ranking model to obtain an embedded feature; fusing the embedded feature through a first fusion layer of the ranking model to obtain a first fusion feature; and fusing the first fusion feature and the second feature through a second fusion layer of the ranking model to obtain a second fusion feature.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiment, predicting, by the ranking model, click information of the target user on the fusion feature to determine a personalized display image for the target user includes:
predicting, through a classification layer of the ranking model, click information of the target user on the fusion features; determining a target fusion feature according to the prediction result, wherein the target fusion feature comprises a set of personalized features of the target user; and determining the personalized display image according to the personalized features.
According to a second aspect of the present disclosure, there is provided a device for generating personalized display images, the device comprising: a first feature determining module configured to encode user data of a target user, display object data related to at least one display image, and document data of the at least one display image to obtain a first feature; a second feature determining module configured to perform feature extraction on the at least one display image to obtain a second feature; a feature fusion module configured to fuse the first feature and the second feature through a trained ranking model to obtain a fusion feature; and a personalized display image determining module configured to predict, through the ranking model, click information of the target user on the fusion feature so as to determine a personalized display image for the target user.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiment, the user data includes: user portrait data and a device identifier corresponding to the user; the display object data includes: classification data, identification, and modification technique data; the document data includes: a document style and a document layout; and the trained ranking model is obtained based on at least one of a neural network model, a decision tree model, and an extreme gradient boosting model.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiment, the second feature determining module includes: an input unit and a feature extraction unit.
The input unit is configured to: input the display image into a pre-trained neural network model. The feature extraction unit is configured to: determine image fully connected layer features according to the fully connected layer of the pre-trained neural network model, and determine image style features according to the inner product of the output vectors of a hidden layer of the pre-trained neural network model.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiment, the device for generating a personalized display image further includes a model training module, wherein:
The model training module comprises: the device comprises a sample acquisition unit, a characteristic input unit, a characteristic fusion unit and a model parameter optimization unit.
The sample acquisition unit is configured to: obtain a plurality of sets of training samples, wherein each set of training samples comprises: user features, attribute features of at least one display image, and target click information about those attribute features;
the feature input unit is configured to: input the user features and the attribute features into a ranking model;
the feature fusion unit is configured to: fuse the user features and the attribute features through a feature fusion layer of the ranking model to obtain fusion features; and
the model parameter optimizing unit is configured to: determine predicted click information according to the fusion features, and determine a loss function based on the predicted click information and the target click information, so as to optimize model parameters of the ranking model according to the loss function.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiment, the model parameter optimizing unit is specifically configured to: determine a cross entropy function of the predicted click information and the target click information as the loss function.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiment, the feature fusion module is specifically configured to:
performing embedding learning on the first feature through an embedding layer of the ranking model to obtain an embedded feature; fusing the embedded feature through a first fusion layer of the ranking model to obtain a first fusion feature; and fusing the first fusion feature and the second feature through a second fusion layer of the ranking model to obtain a second fusion feature.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiment, the personalized presentation image determining module is specifically configured to:
the ranking model predicts click information of the target user on the fusion features; determines a target fusion feature according to the prediction result, wherein the target fusion feature comprises a set of personalized features of the target user; and determines the personalized display image according to the personalized features.
According to a third aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method for generating a personalized presentation image according to any of the embodiments of the first aspect.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to execute the method for generating a personalized presentation image according to any embodiment of the first aspect via execution of the executable instructions.
Exemplary embodiments of the present disclosure may have some or all of the following advantages:
the personalized display image generation method provided in an example embodiment of the present disclosure determines personalized display images for different users based on a trained ranking model. Specifically, user data of a target user, display object data related to at least one display image, and document data of the at least one display image are encoded to obtain a first feature; features of the at least one display image are extracted to obtain a second feature; further, the first feature and the second feature are fused through the trained ranking model to obtain a fusion feature; and the ranking model predicts click information of the target user on the fusion feature, finally determining the personalized display image for the target user. Because the model input features comprise both user features and attribute features of the display image, the trained ranking model can predict a set of target attribute features with a high click probability for the target user. These target attribute features reflect the personalization of the target user; determining the personalized display image from them improves the personalization of the display image and thereby the corresponding click-through rate. Meanwhile, personalized display of images helps improve the user's image-browsing experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without undue effort.
FIG. 1 illustrates a schematic diagram of an exemplary system architecture to which the methods and apparatus for generating personalized presentation images of embodiments of the present disclosure may be applied;
FIG. 2 schematically illustrates a flow chart of a method of generating a personalized presentation image according to an embodiment of the disclosure;
FIG. 3 schematically illustrates a flow chart of a training method of a ranking model according to an embodiment of the disclosure;
FIG. 4 schematically illustrates a flow chart of a method of determining a second feature 520 in accordance with an embodiment of the present disclosure;
FIG. 5 schematically illustrates a block diagram of a ranking model according to an embodiment of the disclosure;
FIG. 6 schematically illustrates a flow chart of a feature processing method in a ranking model according to an embodiment of the disclosure;
FIG. 7 schematically illustrates a ranking model training and ranking model application relationship diagram according to an embodiment of the present disclosure;
FIG. 8 schematically illustrates a flow diagram of a classification method in a ranking model according to an embodiment of the disclosure;
FIG. 9 schematically illustrates a block diagram of a personalized presentation image generation apparatus according to an embodiment of the disclosure;
fig. 10 shows a schematic diagram of a computer system suitable for use in implementing embodiments of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the disclosed aspects may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known methods, devices, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the disclosure.
The block diagrams depicted in the figures are merely functional entities and do not necessarily correspond to physically separate entities. That is, the functional entities may be implemented in software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only, and do not necessarily include all of the elements and operations/steps, nor must they be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations.
Artificial intelligence (AI) is a theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, sense the environment, acquire knowledge, and use knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a way similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines can perceive, reason, and make decisions.
Computer vision (CV) is a science that studies how to make machines "see"; more specifically, it replaces human eyes with cameras and computers to recognize, track, and measure targets, and further processes the resulting graphics into images more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies theories and technologies for building artificial intelligence systems that can acquire information from images or multidimensional data. Computer vision techniques typically include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D techniques, virtual reality, augmented reality, and simultaneous localization and mapping, as well as common biometric recognition techniques such as face recognition and fingerprint recognition.
Machine learning (ML) is a multi-field interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and other disciplines. It studies how a computer can simulate or implement human learning behavior to acquire new knowledge or skills, and how to reorganize existing knowledge structures to continuously improve its own performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent; it is applied throughout all areas of artificial intelligence. Machine learning and deep learning typically include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from demonstrations.
With the research and advancement of artificial intelligence technology, it has been studied and applied in many fields, such as smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, autonomous driving, unmanned aerial vehicles, robots, smart medical care, and smart customer service. It is believed that with further development, artificial intelligence will be applied in more fields and deliver increasing value.
The scheme provided by the embodiments of the present disclosure relates to artificial intelligence technologies such as computer vision and machine learning, and is specifically described by the following embodiments:
fig. 1 illustrates a schematic diagram of a system architecture of an exemplary application environment to which the method and apparatus for generating a personalized presentation image according to embodiments of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include one or more of the terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others. The terminal devices 101, 102, 103 may be various electronic devices with display screens including, but not limited to, desktop computers, portable computers, smart phones, tablet computers, and the like. It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. For example, the server 105 may be a server cluster formed by a plurality of servers.
The method for generating the personalized display image provided by the embodiment of the disclosure is generally executed by the server 105, and accordingly, the device for generating the personalized display image is generally disposed in the server 105. However, it is easily understood by those skilled in the art that the method for generating the personalized presentation image provided in the embodiment of the present disclosure may be performed by the terminal devices 101, 102, 103, and accordingly, the generating device of the personalized presentation image may be provided in the terminal devices 101, 102, 103, which is not particularly limited in the present exemplary embodiment.
For example, in one exemplary embodiment, the terminal devices 101, 102, 103 may transmit to the server 105 the user data of the target user, at least one display image, the display object data related to the at least one display image, and the document data of the at least one display image. The server 105 then encodes the user data of the target user, the display object data related to the at least one display image, and the document data of the at least one display image to obtain a first feature; the server 105 performs feature extraction on the at least one display image to obtain a second feature; next, the server 105 fuses the first feature and the second feature through the ranking model to obtain a fusion feature. Further, the server 105 predicts, through the ranking model, click information of the target user on the fusion feature to determine a personalized display image for the target user. The server 105 transmits the personalized display image to the terminal devices 101, 102, 103, so that the target user can conveniently view it there. The personalized display image meets the personalized needs of the target user and helps increase the click-through rate of the display object in the display image. Meanwhile, personalized display of images helps improve the user's image-browsing experience.
By way of example, one usage scenario may be as follows: for banners released by an e-commerce platform, different users have different preferences regarding the display effect of the display image. For example, some users prefer banners with a simple and elegant style, while others prefer banners with a graceful style. Since the related art presents the same display object in the same style to all users, the personalized needs of different users cannot be met.
The following describes the technical scheme of the embodiments of the present disclosure in detail:
fig. 2 schematically illustrates a flowchart of a method of generating a personalized presentation image according to an embodiment of the disclosure. Specifically, referring to fig. 2, the embodiment shown in this figure includes:
step S210, encoding user data of a target user, display object data related to at least one display image and document data of the at least one display image to obtain a first feature;
step S220, extracting features of the at least one display image to obtain second features;
step S230, fusing the first feature and the second feature through the trained sequencing model to obtain a fused feature; the method comprises the steps of,
Step S240: predicting, through the ranking model, click information of the target user on the fusion feature so as to determine a personalized display image for the target user.
The technical solution provided in the embodiment shown in fig. 2 determines personalized display images for different users based on the trained ranking model. Because the model input features comprise both user features and attribute features of the display image, the trained ranking model can predict a set of target attribute features with a high click probability for the target user. These target attribute features reflect the personalization of the target user; determining the personalized display image from them improves the personalization of the display image and thereby the corresponding click-through rate. Meanwhile, personalized display of images helps improve the user's image-browsing experience.
In an exemplary embodiment, since the method for generating the personalized display image provided in the above embodiment is implemented based on the trained ranking model, the method for training the ranking model is described first. The ranking model may be based on at least one of a neural network model, a decision tree model, and an extreme gradient boosting model. Training a neural network model to obtain the ranking model is described below as an example.
Illustratively, FIG. 3 schematically illustrates a flow chart of a training method of the ranking model according to an embodiment of the present disclosure. Specifically, referring to fig. 3, the embodiment shown in this figure includes steps S310-S340. Illustratively, FIG. 5 schematically illustrates a block diagram of a ranking model according to an embodiment of the present disclosure. The various steps of fig. 3 are explained below in conjunction with fig. 5:
in step S310, a plurality of sets of training samples are acquired, wherein each set of training samples comprises: user features, attribute features of at least one display image, and target click information about those attribute features.
In an exemplary embodiment, the display image includes at least the following attribute features: display object features 102 (e.g., features of the merchandise in the banner), document features, and the image's own features. The training objective of the ranking model is: based on a large amount of historical data generated by users about display images, predict the user's preferences for different features in the display image so as to determine a set of target features, and determine a personalized display image meeting the user's preferences based on that set of target features. A user's preference is embodied in click information: a click indicates that the image matches the preference, and no click indicates that it does not.
In an exemplary embodiment, referring to FIG. 5, a training sample set of ranking model 500 contains features relating to textual data, namely first features 510, and features relating to images, namely second features 520.
In the following examples, specific implementations for determining the first feature 510 and the second feature 520, respectively, will be described.
Illustratively, the data sources associated with the first feature 510 include, on the one hand, user data and, on the other hand, data about the display image, including display object data, document data, and the user's click information on the display object and the document. Illustratively, the data source associated with the second feature 520 is the display image itself. Taking the image advertisement banner as the example of a display image, the display object is the commodity. Table 1 below lists the data sources associated with the banner, together with their classification information and preprocessing information.
TABLE 1
Referring to Table 1, the user data about the banner includes: user portrait data and a device identifier corresponding to the user. The user portrait data include age, gender, occupation, address, etc.; the device identifier corresponding to the user serves as a user identifier that uniquely determines the user, which makes it convenient to count user preferences.
The classification learning algorithm of the ranking model relies on computing distances or similarities between features, and the distance or similarity is generally computed in Euclidean space (e.g., cosine similarity). Therefore, in this embodiment, the data sources need to be mapped into Euclidean space.
Discrete data can be mapped into Euclidean space by one-hot encoding. One-hot encoding frees variable values that have no partial-order relation from any imposed ordering and makes them equidistant from one another: each value of the discrete data corresponds to a point in Euclidean space. This rationalizes the feature-distance computation involved in the classification algorithm.
Therefore, the preprocessing approach for discrete categorical data may employ one-hot encoding.
In an exemplary embodiment, the data in the data sources described above are preprocessed to determine the data features.
Illustratively, referring to FIG. 5, the user features 101 may be determined by preprocessing the user data in Table 1 above. For the specific data processing, refer to Table 1: the user identification (user id) and the user gender (male, female, or unknown) are preprocessed by one-hot encoding, and the user age is preprocessed by quantized segment encoding (binning). This determines the identification feature, age feature, and gender feature of the user features 101 in FIG. 5, as illustrated in the sketch below.
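As a concrete illustration of this preprocessing, the following Python sketch one-hot encodes the device identifier and gender and bins the age into quantized segments. The field names, bin edges, and vocabularies here are assumptions for illustration, not values taken from the patent.

```python
import numpy as np

GENDERS = ["male", "female", "unknown"]
AGE_BIN_EDGES = [0, 18, 25, 35, 50, 120]   # assumed quantization segments

def one_hot(value, vocabulary):
    # All one-hot vectors are pairwise equidistant in Euclidean space,
    # which is the property the distance-based classifier relies on.
    vec = np.zeros(len(vocabulary), dtype=np.float32)
    vec[vocabulary.index(value)] = 1.0
    return vec

def encode_user(device_id, gender, age, id_vocabulary):
    id_feat = one_hot(device_id, id_vocabulary)         # user identification
    gender_feat = one_hot(gender, GENDERS)              # user gender
    age_bin = int(np.digitize(age, AGE_BIN_EDGES)) - 1  # quantized segment
    age_feat = one_hot(age_bin, list(range(len(AGE_BIN_EDGES) - 1)))
    return np.concatenate([id_feat, gender_feat, age_feat])

ids = ["dev-001", "dev-002", "dev-003"]                 # toy id vocabulary
print(encode_user("dev-002", "female", 28, ids))        # 3 + 3 + 5 = 11 dims
```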
Illustratively, referring again to FIG. 5 and taking the banner as an example, the display object features 102 may be determined by preprocessing the commodity data in Table 1 above. For the specific data processing, refer to Table 1: the commodity identification (item id), commodity category (such as clothing, food, or cosmetics), and commodity modification technique data (such as shading, decorative layers, etc.) are preprocessed by one-hot encoding. This determines the identification features, category features, and technique features of the display object features 102 in FIG. 5.
For example, in the case of a banner, to attract the user's attention and increase the banner's click-through rate, some auxiliary images associated with the commodity may also be arranged in the banner besides the commodity image (these are used to determine the auxiliary object feature 103), with continued reference to FIG. 5. For example, in the banner for basketball shoe A, a basketball star wearing basketball shoe A may be placed to draw the user's attention to the shoe. The image of that basketball star is the auxiliary image of the commodity in Table 1 and may be used to determine the auxiliary object feature 103. Illustratively, the auxiliary object features 103 may be determined by preprocessing the auxiliary image data of the commodity in Table 1 above. The data preprocessing for determining the auxiliary object feature 103 is the same as that for determining the display object feature 102 in the above embodiment and is not repeated here.
For example, still referring to FIG. 5, the document features 104 may be determined by preprocessing the document data in the banner. The document data include the style of the document in the banner (such as minimalist, European, or Chinese style) and the layout of the document (i.e., the typesetting of the document in the banner, such as layout data of the main document, subsidiary document, and decorative document). The document style features and document layout features of the document features 104 shown in FIG. 5 may likewise be determined by one-hot encoding.
In an exemplary embodiment, fig. 4 schematically shows a flow chart of a method of determining a second feature 520 according to an embodiment of the disclosure. Referring to fig. 4, the method provided by this embodiment includes:
Step S410: inputting the display image into a pre-trained neural network model; and step S420: determining image fully connected layer features according to the fully connected layer of the pre-trained neural network model, and determining image style features according to the inner product of the output vectors of a hidden layer of the pre-trained neural network model.
In an exemplary embodiment, the second feature of the display image is acquired by employing a pre-trained model. Referring to Table 1, for banner pictures, fully connected layer features (FC features) are extracted from the fully connected layers of a pre-trained VGG-16, and style features are extracted from the Gram matrices of the feature maps of the pre-trained VGG-16, as sketched below.
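A minimal sketch of this extraction using torchvision's pre-trained VGG-16 follows. The patent does not pin down which fully connected layer or which feature map is used, so the layer choices below are assumptions.

```python
import torch
import torchvision.models as models

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()

def fc_features(image):
    # Fully connected layer features: the 4096-d activation of the
    # second FC layer of VGG-16 (an assumed choice of layer).
    with torch.no_grad():
        x = vgg.features(image)
        x = vgg.avgpool(x).flatten(1)
        return vgg.classifier[:4](x)                  # (1, 4096)

def style_features(image):
    # Style features: Gram matrix (inner products between channel
    # activations) of an intermediate feature map, as in style transfer.
    with torch.no_grad():
        fmap = vgg.features[:16](image)               # 256-channel map
        b, c, h, w = fmap.shape
        flat = fmap.view(b, c, h * w)
        gram = flat @ flat.transpose(1, 2) / (c * h * w)
        return gram.flatten(1)                        # (1, 256 * 256)

banner = torch.randn(1, 3, 224, 224)                  # stand-in banner image
second_feature = torch.cat([fc_features(banner), style_features(banner)], dim=1)
```

The Gram matrix computes inner products between channel activations of a hidden layer, which is the hidden-layer inner product mentioned in step S420.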
With continued reference to fig. 3, after determining sample data from the first feature 510 and the second feature 520 described above, in step S320, the user feature and the attribute feature are input to a ranking model; and in step S330, the user features and the attribute features are fused according to the feature fusion layer of the ranking model, so as to obtain fusion features.
In an exemplary embodiment, FIG. 6 schematically illustrates a flow chart of a feature processing method in a ranking model according to an embodiment of the disclosure. Specifically, it may be a specific embodiment of step S320 and step S330. Referring to fig. 6, the method provided by this embodiment includes steps S610-S630. Wherein:
in step S610, the first feature is subjected to embedded learning through an embedding layer of the ranking model, so as to obtain an embedded feature.
In an exemplary embodiment, starting from the first feature 510 determined by encoding, embedding learning is used to reduce dimensionality, determining the embedded features of the user feature 101 and of the attribute features of the display image.
Specifically, index tags may first be created for the first feature 510; referring to FIG. 5, the first feature 510 is then input into the embedding layer 501 of the ranking model 500, and the embedding layer 501 performs feature dimension reduction by multiplying the index tags with a learned weight matrix. Feature dimension reduction can also be realized by algorithms such as principal component analysis (PCA) and singular value decomposition (SVD). A minimal sketch of the embedding layer follows.
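In the sketch below, each one-hot field is represented by its index tag, and a learned weight matrix (an nn.Embedding table) maps it to a dense low-dimensional vector. The vocabulary sizes and embedding dimension are assumptions.

```python
import torch
import torch.nn as nn

class EmbeddingLayer(nn.Module):
    def __init__(self, vocab_sizes, dim=16):
        super().__init__()
        # One table per categorical field (id, gender, age bin, ...);
        # each table is the weight matrix referred to in the text.
        self.tables = nn.ModuleList(nn.Embedding(v, dim) for v in vocab_sizes)

    def forward(self, index_tags):          # (batch, n_fields) long tensor
        parts = [t(index_tags[:, i]) for i, t in enumerate(self.tables)]
        return torch.cat(parts, dim=1)      # (batch, n_fields * dim)

layer = EmbeddingLayer(vocab_sizes=[10000, 3, 5])
tags = torch.tensor([[42, 1, 2]])           # id 42, gender 1, age bin 2
print(layer(tags).shape)                    # torch.Size([1, 48])
```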
In step S620, the embedded features are fused by the first fusion layer of the ranking model, so as to obtain first fusion features.
In an exemplary embodiment, referring to FIG. 5, the embedded features corresponding to the first feature 510 are input into the first fusion layer 502 (i.e., the perceptron layer) of the ranking model. The first fusion layer 502 fuses the user feature 101, the display object feature 102, and the auxiliary object feature 103 to obtain the first fusion feature.
With continued reference to fig. 6, in step S630, the first fused feature and the second feature are fused by the second fusion layer of the ranking model, so as to obtain a second fused feature.
In an exemplary embodiment, referring to FIG. 5, the output features of the first fusion layer 502 (i.e., the first fusion feature described above) and the second feature 520 are input into the second fusion layer 503, which splices the first fusion feature with the VGG-16 fully connected layer features (FC features) and the style features (vectors mapped from the Gram matrices of the VGG-16 feature maps), as sketched below.
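The two fusion layers and the classification layer of FIG. 5 can be sketched as follows; the layer widths and the use of simple fully connected (perceptron) blocks are assumptions, since the patent fixes only the roles of the layers.

```python
import torch
import torch.nn as nn

class RankingModel(nn.Module):
    def __init__(self, embed_dim, image_dim, hidden=256):
        super().__init__()
        self.first_fusion = nn.Sequential(          # perceptron layer 502
            nn.Linear(embed_dim, hidden), nn.ReLU())
        self.second_fusion = nn.Sequential(         # fusion layer 503
            nn.Linear(hidden + image_dim, hidden), nn.ReLU())
        self.classifier = nn.Linear(hidden, 1)      # classification layer 504

    def forward(self, embedded_first, second_feature):
        fused1 = self.first_fusion(embedded_first)
        fused2 = self.second_fusion(
            torch.cat([fused1, second_feature], dim=1))   # splice features
        return torch.sigmoid(self.classifier(fused2)).squeeze(1)  # P(click)
```

The tree-based variants named in the claims (decision tree, extreme gradient boosting) would replace this network while keeping the same fused inputs.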
In an exemplary embodiment, with continued reference to fig. 3, after determining the second fusion feature, in step S340, predicted click information is determined from the fusion feature (i.e., the second fusion feature described above), and a loss function is determined based on the predicted click information and the target click information to optimize model parameters of the ranking model according to the loss function.
In an exemplary embodiment, referring to FIG. 5, the second fusion feature described above is input into the classification layer 504 of the ranking model 500. Illustratively, the classification layer 504 applies a classification function (e.g., sigmoid) to the second fusion feature to obtain the predicted click information.
In an exemplary embodiment, the cross entropy of the predicted click information and the corresponding target click information is computed and used as the loss function of the ranking model; the model parameters of the ranking model are then optimized through this loss function, realizing model training. A sketch of one such training step follows.
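One optimization step with this cross-entropy loss might look as follows, reusing the RankingModel sketch above; the batch contents, feature dimensions, and choice of the Adam optimizer are assumptions.

```python
import torch

model = RankingModel(embed_dim=48, image_dim=69632)  # dims match sketches above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.BCELoss()                  # binary cross entropy on clicks

embedded = torch.randn(32, 48)                # embedded first features
images = torch.randn(32, 69632)               # second (image) features
clicks = torch.randint(0, 2, (32,)).float()   # target click information (0/1)

predicted = model(embedded, images)           # predicted click probabilities
loss = loss_fn(predicted, clicks)             # cross-entropy loss
optimizer.zero_grad()
loss.backward()
optimizer.step()
```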
In an exemplary embodiment, if the test index obtained by testing the ranking model meets the preset requirement, a trained ranking model is obtained. Illustratively, FIG. 7 schematically illustrates a ranking model training and ranking model application relationship diagram in accordance with an embodiment of the present disclosure.
Referring to fig. 7, a training stage 710 of the ranking model is elaborated according to the technical solution provided in the embodiments shown in fig. 3 to 6, including obtaining training samples 711, training the model 712 based on the training samples, and finally obtaining a trained ranking model 713.
In the following embodiment, based on the trained ranking model 713 described above, the model application stage 720 is performed, including: acquiring user data and display image data 721 related to a target user, and extracting features 722 from the data; further, inputting the extracted features into the trained ranking model 713; and finally, determining the personalized display image from the model output.
Based on the trained ranking model 713, a detailed description of the steps of the embodiment shown in fig. 2 is as follows:
in step S210, encoding processing is performed on user data of a target user, display object data about at least one display image, and document data of the at least one display image, so as to obtain a first feature.
In an exemplary embodiment, the preference of the target user (i.e., a set of attribute features related to the display image) is predicted by the trained ranking model, and the personalized display image is then determined based on the preference. Because the attribute features of the personalized image reflect the target user's preference, the click-through rate of the display image is improved along with the target user's image-browsing experience.
Illustratively, the user data includes: user portrait data and a device identifier corresponding to the user. And encoding the user data to determine the user characteristics. Since the specific implementation manner of this embodiment is the same as that of the above embodiment for obtaining the user feature, the details are not repeated here.
Illustratively, at least one presentation image is also acquired in step S210. For each presentation image, presentation object data and document data are acquired. And if the display image contains the auxiliary object, acquiring auxiliary object data. Further, the data are processed to obtain a group of attribute features corresponding to each display image. Since the specific implementation manner of this embodiment is the same as the specific implementation manner of acquiring the attribute features of the display image in the foregoing embodiment, a detailed description thereof is omitted herein.
In step S220, feature extraction is performed on the at least one display image to obtain a second feature.
In an exemplary embodiment, for each display image in step S210, its fully connected features and style features are acquired through the pre-trained network. Since the detailed implementation is the same as that provided in the embodiment shown in fig. 4, a detailed description thereof will be omitted.
In step S230, the first feature and the second feature are fused by the trained ranking model, so as to obtain a fused feature.
In the exemplary embodiment, the specific implementation of step S230 is the same as that provided in the embodiment shown in fig. 5, and thus will not be described in detail herein.
In step S240, click information of the target user on the fusion feature is predicted by the ranking model, so as to determine a personalized presentation image of the target user.
In an exemplary embodiment, FIG. 8 schematically illustrates a flow chart of a classification method in a ranking model according to an embodiment of the disclosure. Specifically, this may serve as a specific embodiment of step S240. Referring to fig. 8, the method provided by this embodiment includes steps S810-S830. Wherein,
In step S810, click information of the target user on the fusion feature is predicted by the classification layer of the ranking model.
In an exemplary embodiment, the ranking model can be expressed mathematically as L(X) = P(click), where X denotes the input features and P(click) denotes the probability that the target user clicks. In the training labels, P(click) is 1 if the target user clicked and 0 if not.
In an exemplary embodiment, for the input features of the target user, the ranking model predicts click information for the input features.
In step S820, determining a target fusion feature according to the prediction result, wherein the target fusion feature comprises a set of personalized features about the target user; and, in step S830, determining the personalized presentation image according to the personalized features.
In an exemplary embodiment, the fusion feature corresponding to the maximum value of the click prediction P(click) is taken as the target fusion feature, and the features about the display image contained in that target fusion feature are taken as a set of personalized features of the target user. Further, a display image is generated according to the personalized features, i.e., the personalized display image, as sketched below.
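At serving time, steps S810-S830 amount to scoring each candidate banner's fused features for the target user and keeping the one with the highest predicted P(click). A sketch, reusing the RankingModel above with an assumed list of candidates, follows.

```python
import torch

def select_personalized_banner(model, embedded_first, candidate_image_feats,
                               candidate_banners):
    # embedded_first: (1, embed_dim) user/text features of the target user;
    # candidate_image_feats: (n, image_dim) second features, one per banner.
    with torch.no_grad():
        n = candidate_image_feats.shape[0]
        scores = model(embedded_first.expand(n, -1), candidate_image_feats)
    best = int(torch.argmax(scores))    # fusion feature with maximal P(click)
    return candidate_banners[best], float(scores[best])
```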
The above technical scheme addresses the problem that display images provided by the related art lack personalization, which is unfavorable for improving the click-through rate of the display object in the display image. Taking the image advertisement banner as an example, the technical scheme personalizes the banner, thereby improving the click-through rate and increasing the effectiveness of advertisement delivery.
Those skilled in the art will appreciate that all or part of the steps implementing the above embodiments are implemented as a computer program executed by a processor (including a CPU and GPU). The computer program, when executed by a processor, performs the functions defined by the above-described methods provided by the present disclosure. The program may be stored in a computer readable storage medium, which may be a read-only memory, a magnetic disk or an optical disk, etc.
Furthermore, it should be noted that the above-described figures are merely illustrative of the processes involved in the method according to the exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
Further, in this example embodiment, a device for generating a personalized presentation image is also provided. Referring to fig. 9, the personalized presentation image generating apparatus 900 includes: a first feature determination module 901, a second feature determination module 902, a feature fusion module 903, and a personalized presentation image determination module 904. Wherein:
the first feature determining module 901 is configured to encode user data of a target user, display object data related to at least one display image, and document data of the at least one display image, so as to obtain a first feature;
the second feature determining module 902 is configured to perform feature extraction on the at least one display image to obtain a second feature;
the feature fusion module 903 is configured to fuse the first feature and the second feature through the trained ranking model to obtain a fused feature;
the personalized presentation image determining module 904 is configured to predict click information of the target user on the fusion feature by the ranking model to determine a personalized presentation image of the target user.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiment, the user data includes: user portrait data and a device identifier corresponding to the user; the display object data includes: classification data, identification, and modification technique data; the document data includes: a document style and a document layout; and the trained ranking model is obtained based on at least one of a neural network model, a decision tree model, and an extreme gradient boosting model.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiment, the second feature determining module 902 includes: an input unit 9021 and a feature extraction unit 9022.
The input unit 9021 is configured to: input the display image into a pre-trained neural network model. The feature extraction unit 9022 is configured to: determine image fully connected layer features according to the fully connected layer of the pre-trained neural network model, and determine image style features according to the inner product of the output vectors of a hidden layer of the pre-trained neural network model.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiment, the generating apparatus 900 of the personalized presentation image further includes: model training module 905. Wherein:
the model training module 905 includes: a sample acquisition unit 9051, a feature input unit 9052, a feature fusion unit 9053, and a model parameter optimization unit 9054.
The sample acquiring unit 9051 is configured to: obtain a plurality of sets of training samples, wherein each set of training samples comprises: user features, attribute features of at least one display image, and target click information about those attribute features;
the feature input unit 9052 is configured to: input the user features and the attribute features into a ranking model;
the feature fusion unit 9053 is configured to: fuse the user features and the attribute features through a feature fusion layer of the ranking model to obtain fusion features; and
the model parameter optimizing unit 9054 is configured to: determine predicted click information according to the fusion features, and determine a loss function based on the predicted click information and the target click information, so as to optimize model parameters of the ranking model according to the loss function.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiment, the model parameter optimizing unit 9054 is specifically configured to: determine a cross entropy function of the predicted click information and the target click information as the loss function.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiment, the feature fusion module 903 is specifically configured to:
performing embedding learning on the first feature through an embedding layer of the ranking model to obtain an embedded feature; fusing the embedded feature through a first fusion layer of the ranking model to obtain a first fusion feature; and fusing the first fusion feature and the second feature through a second fusion layer of the ranking model to obtain a second fusion feature.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiment, the personalized presentation image determining module 904 is specifically configured to:
the classification layer of the ranking model predicts click information of the target user on the fusion features; a target fusion feature is determined according to the prediction result, wherein the target fusion feature comprises a set of personalized features of the target user; and the personalized display image is determined according to the personalized features.
The specific details of each module or unit in the above personalized display image generating device are described in detail in the corresponding personalized display image generating method, so that the details are not repeated here.
Fig. 10 shows a schematic diagram of a computer system suitable for use in implementing an embodiment of the invention.
It should be noted that, the computer system 1000 of the electronic device shown in fig. 10 is only an example, and should not impose any limitation on the functions and the application scope of the embodiments of the present invention.
As shown in fig. 10, computer system 1000 includes a processor 1001, wherein processor 1001 may include: a graphics processing unit (Graphics Processing Unit, GPU), a central processing unit (Central Processing Unit, CPU), which can perform various appropriate actions and processes according to a program stored in a Read-Only Memory (ROM) 1002 or a program loaded from a storage section 1008 into a random access Memory (Random Access Memory, RAM) 1003. In the RAM 1003, various programs and data required for system operation are also stored. A processor (GPU/CPU) 1001, a ROM 1002, and a RAM 1003 are connected to each other by a bus 1004. An Input/Output (I/O) interface 1005 is also connected to bus 1004.
The following components are connected to the I/O interface 1005: an input section 1006 including a keyboard, a mouse, and the like; an output section 1007 including a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD), a speaker, and the like; a storage section 1008 including a hard disk or the like; and a communication section 1009 including a network interface card such as a Local Area Network (LAN) card or a modem. The communication section 1009 performs communication processing via a network such as the Internet. A drive 1010 is also connected to the I/O interface 1005 as needed. A removable medium 1011, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 1010 as needed, so that a computer program read out therefrom can be installed into the storage section 1008 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the methods shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section 1009, and/or installed from the removable medium 1011. When executed by the processor (GPU/CPU) 1001, the computer program performs the various functions defined in the system of the present application. In some embodiments, the computer system 1000 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
It should be noted that the computer readable medium shown in the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. A computer readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable signal medium, by contrast, may include a data signal propagated in baseband or as part of a carrier wave, with computer readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium, other than a computer readable storage medium, that can send, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless or wired media, or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by software or by hardware, and the described units may also be provided in a processor, where in some cases the names of the units do not constitute a limitation of the units themselves.
As another aspect, the present application also provides a computer-readable medium that may be contained in the electronic device described in the above embodiment; or may exist alone without being incorporated into the electronic device. The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to implement the methods described in the above embodiments.
For example, the electronic device may implement the method shown in fig. 2: step S210, encoding user data of a target user, display object data related to at least one display image, and document data of the at least one display image to obtain first features; step S220, performing feature extraction on the at least one display image to obtain second features; step S230, fusing the first features and the second features through a trained ranking model to obtain fusion features; and step S240, predicting click-through rates based on the fusion features through the ranking model to determine a personalized display image for the target user.
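For step S210, one plausible reading is a simple vocabulary lookup that turns the categorical user, display-object, and document fields into integer IDs for the embedding layer; every field name below is a hypothetical example, not taken from the patent.

```python
# Hypothetical encoding for step S210: map raw categorical fields from the
# user data, display object data and document data to integer IDs via a
# shared vocabulary (a plain dict). Field names are assumptions.
def encode_first_features(user_data, object_data, doc_data, vocab):
    fields = [
        user_data.get("portrait_tag"), user_data.get("device_id"),
        object_data.get("category"), object_data.get("decoration"),
        doc_data.get("style"), doc_data.get("format"),
    ]
    # Unseen values fall back to a reserved ID 0
    return [vocab.get(value, 0) for value in fields]
```

These integer IDs would then feed the embedding layer of the ranking model as the first features.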
As another example, the electronic device may implement the various steps as shown in fig. 3-8.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
From the above description of the embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or in software combined with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (such as a CD-ROM, a USB flash drive, or a removable hard disk) or on a network, and which includes several instructions to cause a computing device (such as a personal computer, a server, a touch terminal, or a network device) to perform the method according to the embodiments of the present disclosure.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following its general principles, including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (8)

1. A method for generating a personalized display image, the method comprising:
encoding user data of a target user, display object data related to at least one display image, and document data of the at least one display image to obtain first features;
extracting features of the at least one display image to obtain second features;
fusing the first features and the second features through a trained ranking model in the following manner to obtain fusion features: performing embedding learning on the first features through an embedding layer of the ranking model to obtain embedded features; fusing the embedded features through a first fusion layer of the ranking model to obtain first fusion features; and fusing the first fusion features and the second features through a second fusion layer of the ranking model to obtain second fusion features;
predicting click information of the target user on the fusion features through a classification layer of the ranking model;
determining target fusion features according to the prediction result, wherein the target fusion features comprise a group of personalized features related to the target user;
and determining a personalized display image according to the personalized features.
2. The method for generating a personalized display image according to claim 1, wherein:
the user data includes: user portrait data and a device identifier corresponding to a user;
the display object data includes: classification data, identification data, and modification operation data;
The document data includes: a document style and a document format;
the trained ranking model is derived based on at least one of a neural network model, a decision tree model, and an extreme gradient boosting model.
3. The method for generating a personalized display image according to claim 1, wherein the performing feature extraction on the at least one display image to obtain second features comprises:
inputting the display image into a pre-trained neural network model;
and determining image fully connected layer features according to a fully connected layer of the pre-trained neural network model, and determining image style features according to the inner product of the output vectors of a hidden layer of the pre-trained neural network model.
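As a non-limiting illustration of claim 3, the following sketch assumes that the "inner product of the output vectors of the hidden layer" refers to a Gram matrix over convolutional feature maps (as in neural style transfer), with VGG16 standing in for the unspecified pre-trained network; both choices are assumptions, not specified by the disclosure (requires torchvision >= 0.13 for the weights enum).

```python
import torch
import torchvision.models as models

# VGG16 stands in here for the unspecified pre-trained neural network model.
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()

def extract_second_features(image_batch):                # image_batch: (B, 3, 224, 224)
    with torch.no_grad():
        maps = vgg.features(image_batch)                 # hidden-layer maps, (B, 512, 7, 7)
        # Image fully connected layer features
        fc = vgg.classifier[0](vgg.avgpool(maps).flatten(1))     # (B, 4096)
        # Image style features: Gram matrix of channel vectors (inner products)
        b, c, h, w = maps.shape
        flat = maps.view(b, c, h * w)
        gram = torch.bmm(flat, flat.transpose(1, 2)) / (h * w)   # (B, 512, 512)
    return fc, gram.flatten(1)
```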
4. The method for generating a personalized display image according to any one of claims 1 to 3, the method further comprising:
obtaining a plurality of sets of training samples, wherein each set of training samples comprises: user features, attribute features of at least one presentation image, and target click information about the attribute features;
inputting the user features and the attribute features into a ranking model;
fusing the user features and the attribute features through a feature fusion layer of the ranking model to obtain fusion features;
and determining predicted click information according to the fusion features, and determining a loss function based on the predicted click information and the target click information, so as to optimize model parameters of the ranking model according to the loss function.
5. The method for generating a personalized display image according to claim 4, wherein the determining a loss function based on the predicted click information and the target click information comprises:
determining a cross entropy function of the predicted click information and the target click information as the loss function.
6. A device for generating a personalized display image, the device comprising:
a first feature determining module, configured to encode user data of a target user, display object data related to at least one display image, and document data of the at least one display image to obtain first features;
the second feature determining module is configured to perform feature extraction on the at least one display image to obtain second features;
a feature fusion module, configured to fuse the first features and the second features through a trained ranking model in the following manner to obtain fusion features: performing embedding learning on the first features through an embedding layer of the ranking model to obtain embedded features; fusing the embedded features through a first fusion layer of the ranking model to obtain first fusion features; and fusing the first fusion features and the second features through a second fusion layer of the ranking model to obtain second fusion features;
a personalized display image determining module, configured to predict click information of the target user on the fusion features through a classification layer of the ranking model, determine target fusion features according to the prediction result, where the target fusion features comprise a group of personalized features of the target user, and determine a personalized display image according to the personalized features.
7. A computer storage medium having a computer program stored thereon;
wherein the computer program, when executed by a processor, implements the method for generating a personalized display image according to any one of claims 1 to 5.
8. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method for generating a personalized display image according to any one of claims 1 to 5 via execution of the executable instructions.
CN201910765901.2A 2019-08-19 2019-08-19 Method and device for generating personalized display image and electronic equipment Active CN110489582B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910765901.2A CN110489582B (en) 2019-08-19 2019-08-19 Method and device for generating personalized display image and electronic equipment

Publications (2)

Publication Number Publication Date
CN110489582A (en) 2019-11-22
CN110489582B (en) 2023-11-07

Family

ID=68552093

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910765901.2A Active CN110489582B (en) 2019-08-19 2019-08-19 Method and device for generating personalized display image and electronic equipment

Country Status (1)

Country Link
CN (1) CN110489582B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112862557B (en) * 2019-11-28 2024-06-04 北京金山云网络技术有限公司 Information display method, display device, server and storage medium
CN110909182B (en) * 2019-11-29 2023-05-09 北京达佳互联信息技术有限公司 Multimedia resource searching method, device, computer equipment and storage medium
CN113450433A (en) * 2020-03-26 2021-09-28 阿里巴巴集团控股有限公司 Picture generation method and device, computer equipment and medium
CN111506378B (en) * 2020-04-17 2021-09-28 腾讯科技(深圳)有限公司 Method, device and equipment for previewing text display effect and storage medium
CN111581926B (en) * 2020-05-15 2023-09-01 抖音视界有限公司 Document generation method, device, equipment and computer readable storage medium
CN112767038B (en) * 2021-01-25 2021-08-27 特赞(上海)信息科技有限公司 Poster CTR prediction method and device based on aesthetic characteristics
CN113111243A (en) * 2021-03-29 2021-07-13 北京达佳互联信息技术有限公司 Display object sharing method and device and storage medium
CN113342868B (en) * 2021-08-05 2021-11-02 腾讯科技(深圳)有限公司 Information recommendation method, device, equipment and computer readable storage medium
CN114003806A (en) * 2021-09-27 2022-02-01 五八有限公司 Content display method and device, electronic equipment and readable medium
CN117786193A (en) * 2022-09-19 2024-03-29 北京沃东天骏信息技术有限公司 Method and device for generating multimedia information and computer readable storage medium
CN116433800B (en) * 2023-06-14 2023-10-20 中国科学技术大学 Image generation method based on social scene user preference and text joint guidance
CN117611953A (en) * 2024-01-18 2024-02-27 深圳思谋信息科技有限公司 Graphic code generation method, graphic code generation device, computer equipment and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105046515A (en) * 2015-06-26 2015-11-11 深圳市腾讯计算机系统有限公司 Advertisement ordering method and device
US20170330054A1 (en) * 2016-05-10 2017-11-16 Baidu Online Network Technology (Beijing) Co., Ltd. Method And Apparatus Of Establishing Image Search Relevance Prediction Model, And Image Search Method And Apparatus
CN109460513A (en) * 2018-10-31 2019-03-12 北京字节跳动网络技术有限公司 Method and apparatus for generating clicking rate prediction model
CN109495552A (en) * 2018-10-31 2019-03-19 北京字节跳动网络技术有限公司 Method and apparatus for updating clicking rate prediction model

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research Progress and Prospects of Learning to Rank; Li Jinzhong; Acta Automatica Sinica; Vol. 44, No. 8; pp. 1345-1363 *

Also Published As

Publication number Publication date
CN110489582A (en) 2019-11-22

Similar Documents

Publication Publication Date Title
CN110489582B (en) Method and device for generating personalized display image and electronic equipment
CN110737783B (en) Method and device for recommending multimedia content and computing equipment
CN111046275B (en) User label determining method and device based on artificial intelligence and storage medium
CN111784455A (en) Article recommendation method and recommendation equipment
US9939272B1 (en) Method and system for building personalized knowledge base of semantic image segmentation via a selective random field approach
CN113254785B (en) Recommendation model training method, recommendation method and related equipment
CN114332680A (en) Image processing method, video searching method, image processing device, video searching device, computer equipment and storage medium
CN111078940B (en) Image processing method, device, computer storage medium and electronic equipment
CN114298122B (en) Data classification method, apparatus, device, storage medium and computer program product
CN115131698B (en) Video attribute determining method, device, equipment and storage medium
CN113761153A (en) Question and answer processing method and device based on picture, readable medium and electronic equipment
CN114201516B (en) User portrait construction method, information recommendation method and related devices
CN113254684A (en) Content aging determination method, related device, equipment and storage medium
CN112765387A (en) Image retrieval method, image retrieval device and electronic equipment
CN113392179A (en) Text labeling method and device, electronic equipment and storage medium
CN111897950A (en) Method and apparatus for generating information
CN109377284B (en) Method and electronic equipment for pushing information
CN112862538A (en) Method, apparatus, electronic device, and medium for predicting user preference
CN109299378B (en) Search result display method and device, terminal and storage medium
CN112989174A (en) Information recommendation method and device, medium and equipment
CN113343664B (en) Method and device for determining matching degree between image texts
US11989939B2 (en) System and method for enhancing machine learning model for audio/video understanding using gated multi-level attention and temporal adversarial training
CN114330519A (en) Data determination method and device, electronic equipment and storage medium
CN113822065A (en) Keyword recall method and device, electronic equipment and storage medium
CN113822324A (en) Image processing method and device based on multitask model and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant