CN110489582A - Method and apparatus for generating personalized display images, and electronic device - Google Patents

Method and apparatus for generating personalized display images, and electronic device

Info

Publication number
CN110489582A
CN110489582A (application CN201910765901.2A)
Authority
CN
China
Prior art keywords
feature
image
fusion
ranking model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910765901.2A
Other languages
Chinese (zh)
Other versions
CN110489582B (en)
Inventor
赵胜林
陈锡显
苏玉鑫
沈小勇
戴宇荣
贾佳亞
赵奕涵
张仁寿
梁志杰
陈俊标
蔡韶曼
邓向阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201910765901.2A priority Critical patent/CN110489582B/en
Publication of CN110489582A publication Critical patent/CN110489582A/en
Application granted granted Critical
Publication of CN110489582B publication Critical patent/CN110489582B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/54Browsing; Visualisation therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9535Search customisation based on user profiles and personalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The disclosure provides a method and apparatus for generating personalized display images, together with an electronic device and a computer storage medium implementing the method; it relates to the field of machine learning. The method for generating a personalized display image includes: encoding user data of a target user, display-object data of at least one display image, and copy data of the at least one display image to obtain a first feature; performing feature extraction on the at least one display image to obtain a second feature; fusing the first feature and the second feature through a trained ranking model to obtain a fused feature; and predicting, through the ranking model, the target user's click information for the fused feature so as to determine a personalized display image for the target user. This technical solution improves the personalization of display images, which helps raise the click-through rate of the objects shown in them, and the personalized display images also improve the user's image-browsing experience.

Description

Method and apparatus for generating personalized display images, and electronic device
Technical field
This disclosure relates to the field of machine learning, and in particular to a method for generating personalized display images, an apparatus for generating personalized display images, and an electronic device and computer storage medium implementing the method.
Background technique
Images are widely used to present information to users; in particular, image advertisements present advertising content to users in the form of images. Compared with plain copy, an image advertisement conveys information about the displayed object (e.g., a product) to the user more vividly. Image advertisements have therefore become the principal form of Internet advertising and are widely used in various scenarios (e.g., e-commerce platforms, social media, news feeds).
In the related art, display images (i.e., the image advertisements above, also called banners) use a uniform design for all users; that is, different users receive identical display images for the same product. For example, a banner for a pair of shoes has the same visual effect for every user.
However, the display images so provided lack personalization, which works against improving the click-through rate of the objects shown in them.
It should be noted that the information disclosed in this Background section is provided only to enhance understanding of the background of the disclosure, and may therefore include information that does not constitute prior art already known to a person of ordinary skill in the art.
Summary of the invention
An object of the disclosure is to provide a method for generating personalized display images, an apparatus for generating personalized display images, and an electronic device and computer storage medium implementing the method, thereby improving, at least to some extent, the personalization of display images and in turn helping to raise the click-through rate of the objects shown in them.
According to a first aspect of the disclosure, a method for generating a personalized display image is provided. The method includes: encoding user data of a target user, display-object data of at least one display image, and copy data of the at least one display image to obtain a first feature; performing feature extraction on the at least one display image to obtain a second feature; fusing the first feature and the second feature through a trained ranking model to obtain a fused feature; and predicting, through the ranking model, the target user's click information for the fused feature so as to determine a personalized display image for the target user.
In an exemplary embodiment of the disclosure, based on the foregoing embodiment, the user data includes user-profile data and the device identifier corresponding to the user; the display-object data includes category data, an identifier, and embellishment-technique data; the copy data includes copy style and copy format; and the trained ranking model is obtained based on at least one of a neural network model, a decision-tree model, and an extreme gradient boosting model.
In an exemplary embodiment of the disclosure, based on the foregoing embodiment, performing feature extraction on the at least one display image to obtain the second feature includes:
inputting the display image into a pre-trained neural network model; and determining fully-connected-layer features of the image from the fully connected layer of the pre-trained neural network model, and determining image style features from the inner products of the output vectors of a hidden layer of the pre-trained neural network model.
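The "inner product of the output vectors of a hidden layer" describes a Gram-matrix style feature, familiar from neural style transfer. The following is a minimal sketch, not the patent's implementation: the CNN itself is omitted, and the activation shapes and the use of the Gram matrix's upper triangle are assumptions for illustration.

```python
import numpy as np

def fc_features(fc_output: np.ndarray) -> np.ndarray:
    """The fully-connected-layer activations are used directly
    as the image's content feature."""
    return fc_output.ravel()

def style_features(hidden: np.ndarray) -> np.ndarray:
    """Gram-matrix style feature: inner products between the
    channel-wise output vectors of a hidden (conv) layer.

    hidden: activations of shape (C, H, W)."""
    c, h, w = hidden.shape
    flat = hidden.reshape(c, h * w)          # one vector per channel
    gram = flat @ flat.T / (h * w)           # (C, C) matrix of inner products
    return gram[np.triu_indices(c)]          # upper triangle as a vector

# Toy example: a 4-channel 8x8 feature map and a 16-dim FC output,
# both standing in for real activations of a pre-trained CNN.
hidden = np.random.default_rng(0).normal(size=(4, 8, 8))
second_feature = np.concatenate([fc_features(np.ones(16)),
                                 style_features(hidden)])
print(second_feature.shape)  # prints (26,)  -> 16 FC dims + 10 Gram entries
```

Dividing the Gram matrix by `h * w` keeps the style feature's scale independent of the spatial resolution of the hidden layer.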
In an exemplary embodiment of the disclosure, based on the foregoing embodiment, the method for generating personalized display images further includes:
obtaining multiple groups of training samples, wherein each group of training samples includes user features, attribute features of at least one display image, and target click information for the attribute features; inputting the user features and the attribute features into the ranking model; fusing the user features and the attribute features through the feature-fusion layer of the ranking model to obtain a fused feature; and determining predicted click information from the fused feature, determining a loss function based on the predicted click information and the target click information, and optimizing the model parameters of the ranking model according to the loss function.
In an exemplary embodiment of the disclosure, based on the foregoing embodiment, determining the loss function based on the predicted click information and the target click information includes: determining the cross-entropy of the predicted click information and the target click information as the loss function.
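Assuming click information is a binary clicked/not-clicked label and the model outputs a click probability (the patent does not fix these details), the cross-entropy loss above can be sketched as:

```python
import numpy as np

def cross_entropy(p_pred: np.ndarray, y_true: np.ndarray,
                  eps: float = 1e-12) -> float:
    """Binary cross-entropy between predicted click probabilities
    and target click labels (1 = clicked, 0 = not clicked)."""
    p = np.clip(p_pred, eps, 1.0 - eps)  # avoid log(0)
    return float(np.mean(-(y_true * np.log(p)
                           + (1 - y_true) * np.log(1 - p))))

y_true = np.array([1.0, 0.0, 1.0])   # target click information
p_pred = np.array([0.9, 0.2, 0.8])   # predicted click information
loss = cross_entropy(p_pred, y_true)
```

Minimizing this loss pushes the predicted click probabilities toward the observed click labels, which is what lets the trained model rank candidate features by expected click-through rate.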
In an exemplary embodiment of the disclosure, based on the foregoing embodiment, fusing the first feature and the second feature through the trained ranking model to obtain the fused feature includes:
performing embedding learning on the first feature through the embedding layer of the ranking model to obtain an embedded feature; fusing the embedded feature through the first fusion layer of the ranking model to obtain a first fused feature; and fusing the first fused feature and the second feature through the second fusion layer of the ranking model to obtain a second fused feature.
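A minimal sketch of this two-stage fusion, with hypothetical layer sizes and randomly initialized weights standing in for the trained embedding layer, first fusion layer, and second fusion layer (concatenation followed by a ReLU-activated linear map is an assumption; the patent does not specify the fusion operation):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical sizes: 3 categorical fields, vocabulary 100, embedding dim 8.
EMB = rng.normal(size=(100, 8))    # embedding layer (lookup table)
W1 = rng.normal(size=(3 * 8, 16))  # first fusion layer
W2 = rng.normal(size=(16 + 26, 32))  # second fusion layer

def fuse(first_feature_ids: np.ndarray,
         second_feature: np.ndarray) -> np.ndarray:
    # Embedding learning: look up each encoded (categorical) id.
    embedded = EMB[first_feature_ids].ravel()     # (24,)
    # First fusion layer: fuse the embedded features.
    first_fused = np.maximum(embedded @ W1, 0.0)  # (16,), ReLU
    # Second fusion layer: fuse with the image (second) feature.
    joint = np.concatenate([first_fused, second_feature])
    return np.maximum(joint @ W2, 0.0)            # (32,) second fused feature

fused = fuse(np.array([5, 17, 42]), rng.normal(size=26))
```

The point of the two stages is that the sparse, encoded first feature is densified by the embedding and first fusion layers before it meets the already-dense image feature in the second fusion layer.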
In an exemplary embodiment of the disclosure, based on the foregoing embodiment, predicting, by the ranking model, the target user's click information for the fused feature to determine a personalized display image for the target user includes:
predicting, by the classification layer of the ranking model, the target user's click information for the fused feature; determining a target fused feature according to the prediction result, wherein the target fused feature includes a group of personalized features for the target user; and determining the personalized display image according to the personalized features.
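Assuming the classification layer is a logistic output over the fused feature (an assumption; the patent leaves the layer unspecified), determining the personalized display image can be sketched as scoring each candidate's fused feature and keeping the highest-scoring one as the target fused feature:

```python
import numpy as np

rng = np.random.default_rng(7)
w_cls = rng.normal(size=32)  # hypothetical classification-layer weights

def click_probability(fused: np.ndarray) -> float:
    """Logistic classification layer predicting click probability."""
    return float(1.0 / (1.0 + np.exp(-fused @ w_cls)))

# Each row stands in for the fused feature of one candidate display image.
candidates = rng.normal(size=(5, 32))
scores = [click_probability(f) for f in candidates]
best = int(np.argmax(scores))  # index of the target fused feature
```

The candidate whose fused feature scores highest carries the group of personalized features from which the personalized display image is then assembled.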
According to a second aspect of the disclosure, an apparatus for generating a personalized display image is provided. The apparatus includes: a first-feature determination module, configured to encode user data of a target user, display-object data of at least one display image, and copy data of the at least one display image to obtain a first feature; a second-feature determination module, configured to perform feature extraction on the at least one display image to obtain a second feature; a feature-fusion module, configured to fuse the first feature and the second feature through a trained ranking model to obtain a fused feature; and a personalized-display-image determination module, configured to predict, through the ranking model, the target user's click information for the fused feature to determine a personalized display image for the target user.
In an exemplary embodiment of the disclosure, based on the foregoing embodiment, the user data includes user-profile data and the device identifier corresponding to the user; the display-object data includes category data, an identifier, and embellishment-technique data; the copy data includes copy style and copy format; and the trained ranking model is obtained based on at least one of a neural network model, a decision-tree model, and an extreme gradient boosting model.
In an exemplary embodiment of the disclosure, based on the foregoing embodiment, the second-feature determination module includes an input unit and a feature-extraction unit.
The input unit is configured to input the display image into a pre-trained neural network model; and the feature-extraction unit is configured to determine fully-connected-layer features of the image from the fully connected layer of the pre-trained neural network model, and to determine image style features from the inner products of the output vectors of a hidden layer of the pre-trained neural network model.
In an exemplary embodiment of the disclosure, based on the foregoing embodiment, the apparatus for generating personalized display images further includes a model-training module, wherein:
the model-training module includes a sample-acquisition unit, a feature-input unit, a feature-fusion unit, and a model-parameter-optimization unit.
The sample-acquisition unit is configured to obtain multiple groups of training samples, wherein each group of training samples includes user features, attribute features of at least one display image, and target click information for the attribute features;
the feature-input unit is configured to input the user features and the attribute features into the ranking model;
the feature-fusion unit is configured to fuse the user features and the attribute features through the feature-fusion layer of the ranking model to obtain a fused feature; and
the model-parameter-optimization unit is configured to determine predicted click information from the fused feature, determine a loss function based on the predicted click information and the target click information, and optimize the model parameters of the ranking model according to the loss function.
In an exemplary embodiment of the disclosure, based on the foregoing embodiment, the model-parameter-optimization unit is specifically configured to determine the cross-entropy of the predicted click information and the target click information as the loss function.
In an exemplary embodiment of the disclosure, based on the foregoing embodiment, the feature-fusion module is specifically configured to:
perform embedding learning on the first feature through the embedding layer of the ranking model to obtain an embedded feature; fuse the embedded feature through the first fusion layer of the ranking model to obtain a first fused feature; and fuse the first fused feature and the second feature through the second fusion layer of the ranking model to obtain a second fused feature.
In an exemplary embodiment of the disclosure, based on the foregoing embodiment, the personalized-display-image determination module is specifically configured such that:
the ranking model predicts the target user's click information for the fused feature; a target fused feature is determined according to the prediction result, wherein the target fused feature includes a group of personalized features for the target user; and the personalized display image is determined according to the personalized features.
According to a third aspect of the disclosure, a computer-readable storage medium is provided, on which a computer program is stored; when executed by a processor, the computer program implements the method for generating personalized display images described in any embodiment of the first aspect.
According to a fourth aspect of the disclosure, an electronic device is provided, including a processor and a memory for storing instructions executable by the processor, wherein the processor is configured to execute, via the executable instructions, the method for generating personalized display images described in any embodiment of the first aspect.
Exemplary embodiments of the disclosure may have some or all of the following beneficial effects:
The method for generating personalized display images provided by an example embodiment of the disclosure determines personalized display images for different users based on a trained ranking model. Specifically, the user data of the target user, the display-object data of at least one display image, and the copy data of the display image are encoded to obtain a first feature; feature extraction is performed on the at least one display image to obtain a second feature; further, the trained ranking model fuses the first feature and the second feature to obtain a fused feature; and the ranking model predicts the target user's click information for the fused feature, finally determining the personalized display image for the target user. Because the model's input features include both user features and the attribute features of display images, the trained ranking model can predict the group of target attribute features for which the target user's click-through rate is highest. These target attribute features are a group of features embodying the target user's personalization; the personalized display image is determined from this group of target attribute features, which improves the personalization of display images, helps raise the click-through rate of the objects shown in them, and improves the user's image-browsing experience.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the disclosure.
Detailed description of the invention
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and, together with the description, serve to explain the principles of the disclosure. The drawings described below are clearly only some embodiments of the disclosure; a person of ordinary skill in the art may obtain other drawings from them without creative effort.
Fig. 1 shows a schematic diagram of an exemplary system architecture to which the method and apparatus for generating personalized display images of an embodiment of the disclosure can be applied;
Fig. 2 schematically shows a flowchart of a method for generating personalized display images according to an embodiment of the disclosure;
Fig. 3 schematically shows a flowchart of a training method for the ranking model according to an embodiment of the disclosure;
Fig. 4 schematically shows a flowchart of a method for determining the second feature 520 according to an embodiment of the disclosure;
Fig. 5 schematically shows a structural diagram of the ranking model according to an embodiment of the disclosure;
Fig. 6 schematically shows a flowchart of feature processing in the ranking model according to an embodiment of the disclosure;
Fig. 7 schematically shows the relationship between ranking-model training and ranking-model application according to an embodiment of the disclosure;
Fig. 8 schematically shows a flowchart of classification in the ranking model according to an embodiment of the disclosure;
Fig. 9 schematically shows a structural diagram of an apparatus for generating personalized display images according to an embodiment of the disclosure;
Fig. 10 shows a structural schematic diagram of a computer system suitable for implementing the electronic device of an embodiment of the disclosure.
Specific embodiment
Example embodiments will now be described more fully with reference to the accompanying drawings. However, example embodiments can be implemented in various forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that the disclosure will be thorough and complete, and will fully convey the concepts of the example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of the embodiments of the disclosure. However, those skilled in the art will appreciate that the technical solutions of the disclosure may be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so on. In other instances, well-known methods, devices, implementations, or operations are not shown or described in detail so as not to obscure aspects of the disclosure.
The block diagrams shown in the drawings are merely functional entities and do not necessarily correspond to physically separate entities; that is, these functional entities may be implemented in software, in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flowcharts shown in the drawings are merely illustrative: they need not include all contents and operations/steps, nor must they be executed in the order described. For example, some operations/steps may be decomposed, and some may be merged wholly or in part, so the order of actual execution may change according to the actual situation.
Artificial intelligence (AI) is the theory, method, technology, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive discipline of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can respond in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that machines have the functions of perception, reasoning, and decision-making.
Computer vision (CV) is the science of how to make machines "see": it uses cameras and computers, instead of human eyes, to identify, track, and measure targets, and further performs graphics processing so that the results are better suited for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies related theories and technologies in an attempt to build artificial intelligence systems that can obtain information from images or multidimensional data. Computer vision technologies generally include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technologies, virtual reality, augmented reality, and simultaneous localization and mapping, as well as common biometric technologies such as face recognition and fingerprint recognition.
Machine learning (ML) is a multidisciplinary field involving probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory, and other subjects. It specializes in how computers can simulate or realize human learning behavior in order to acquire new knowledge or skills and to reorganize existing knowledge structures so as to continuously improve their own performance. Machine learning is the core of artificial intelligence and the fundamental way to endow computers with intelligence; it is applied throughout every field of artificial intelligence. Machine learning and deep learning generally include technologies such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from demonstration.
With the research and progress of artificial intelligence technology, AI has been researched and applied in many fields, such as smart homes, intelligent wearable devices, virtual assistants, smart speakers, intelligent marketing, unmanned driving, autonomous driving, drones, robots, intelligent medical care, and intelligent customer service. It is believed that, as the technology develops, artificial intelligence will be applied in more fields and play an increasingly important role.
The solutions provided by the embodiments of the disclosure relate to the computer vision and machine learning technologies of artificial intelligence, and are specifically described by the following embodiments:
Fig. 1 shows a schematic diagram of the system architecture of an exemplary application environment to which the method and apparatus for generating personalized display images of an embodiment of the disclosure can be applied.
As shown in Fig. 1, the system architecture 100 may include one or more of terminal devices 101, 102, and 103, a network 104, and a server 105. The network 104 is the medium providing communication links between the terminal devices 101, 102, 103 and the server 105, and may include various connection types, such as wired or wireless communication links or fiber-optic cables. The terminal devices 101, 102, 103 may be various electronic devices with display screens, including but not limited to desktop computers, portable computers, smartphones, and tablet computers. It should be understood that the numbers of terminal devices, networks, and servers in Fig. 1 are merely illustrative; there may be any number of terminal devices, networks, and servers as required. For example, the server 105 may be a server cluster composed of multiple servers.
The method for generating personalized display images provided by the embodiments of the disclosure is generally executed by the server 105, and accordingly the apparatus for generating personalized display images is generally arranged in the server 105. However, those skilled in the art will readily understand that the method provided by the embodiments of the disclosure may also be executed by the terminal devices 101, 102, 103, and correspondingly the apparatus for generating personalized display images may also be arranged in the terminal devices 101, 102, 103; this exemplary embodiment imposes no particular limitation in this respect.
For example, in an exemplary embodiment, the terminal devices 101, 102, 103 may send the user data of the target user, the display-object data of at least one display image, and the copy data of the at least one display image to the server 105. The server 105 then encodes the user data of the target user, the display-object data of the at least one display image, and the copy data of the at least one display image to obtain a first feature; the server 105 performs feature extraction on the at least one display image to obtain a second feature; and the server 105 fuses the first feature and the second feature through the ranking model to obtain a fused feature. Further, the server 105 predicts, through the ranking model, the target user's click information for the fused feature to determine a personalized display image for the target user. Illustratively, the server 105 sends the personalized display image to the terminal devices 101, 102, 103, so that the target user can conveniently view it through these devices. The personalized display image meets the individual needs of the target user, which helps raise the click-through rate of the objects shown in it and improves the user's image-browsing experience.
Illustratively, a kind of usage scenario may is that the banner for the publication of electric business platform, and different user is for showing The bandwagon effect of image has different preferences.Such as, some users prefer the banner of simplicity generosity style, and some users are more Like the banner of elegant and poised style.When the displaying style for showing object same in the displaying image that the relevant technologies provide is phase With, it is not able to satisfy the individual demand of different user.
The technical solutions of the embodiments of the present disclosure are described in detail below:
Fig. 2 schematically shows a flowchart of a method for generating a personalized display image according to an embodiment of the present disclosure. Specifically, with reference to Fig. 2, the method includes:
Step S210: encoding the user data of a target user, display object data about at least one display image, and copy data about the at least one display image, to obtain a first feature;

Step S220: performing feature extraction on the at least one display image, to obtain a second feature;

Step S230: fusing the first feature and the second feature through a trained ranking model, to obtain a fused feature; and

Step S240: predicting, through the ranking model, click information of the target user for the fused feature, to determine a personalized display image for the target user.
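The four steps above can be sketched end to end. The function bodies below are stand-ins (random encodings, a sigmoid score) and not the disclosed model; only the data flow S210 → S240 follows the embodiment, and all names and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_first_feature(user_data, object_data, copy_data):
    # S210: stand-in for encoding user/object/copy data into a first feature
    return rng.normal(size=8)

def extract_second_feature(image):
    # S220: stand-in for image feature extraction (second feature)
    return rng.normal(size=4)

def fuse(first, second):
    # S230: stand-in fusion of the first and second features
    return np.concatenate([first, second])

def predict_click(fused):
    # S240: stand-in click-probability prediction, squashed into [0, 1]
    return float(1.0 / (1.0 + np.exp(-fused.sum())))

candidates = ["banner_a", "banner_b", "banner_c"]
first = encode_first_feature("user", "object", "copy")
scores = {c: predict_click(fuse(first, extract_second_feature(c))) for c in candidates}
personalized = max(scores, key=scores.get)  # candidate with the highest P(click)
```

The only load-bearing choice here is the last line: the candidate whose fused features score the highest predicted click probability is kept as the personalized display image.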
The technical solution of the embodiment shown in Fig. 2 determines personalized display images for different users based on a trained ranking model. Since the model input includes both user features and attribute features of the display image, the trained ranking model can predict the group of target attribute features with the highest click-through rate for the target user. This group of target attribute features embodies the personalization of the target user; the personalized display image is then determined from it, which improves the personalization of the display image and helps raise the click-through rate of the objects shown in the display image. Meanwhile, the personalized display image improves the user's image-browsing experience.
In an exemplary embodiment, since the method for generating a personalized display image provided by the above embodiment is implemented based on a trained ranking model, the training method of the ranking model is introduced first. The ranking model may be obtained based on at least one of a neural network model, a decision-tree model, and an extreme gradient boosting model. The following description takes training a neural network model to obtain the ranking model as an example.
Illustratively, Fig. 3 schematically shows a flowchart of a training method for the ranking model according to an embodiment of the present disclosure. Specifically, with reference to Fig. 3, the illustrated embodiment includes steps S310 to S340. Illustratively, Fig. 5 schematically shows a structural diagram of the ranking model according to an embodiment of the present disclosure. Each step of Fig. 3 is explained below in conjunction with Fig. 5:
In step S310, multiple groups of training samples are obtained, where each group of training samples includes: user features, attribute features of at least one display image, and target click information about those attribute features.
In an exemplary embodiment, a display image includes at least the following attribute features: display object features 102 (e.g., features of the product in a banner), copy features, and features of the image itself. The training objective of the ranking model is: according to a large amount of historical data about display images generated by users, predict a user's preference for the different features of a display image, determine a group of target features accordingly, and then determine a personalized display image that meets the user's preference from that group of target features. The user's preference is embodied in click information: a click indicates that the image meets the user's preference, and no click indicates that it does not.
In an exemplary embodiment, with reference to Fig. 5, the training sample set of the ranking model 500 contains features about textual data, namely the first feature 510, and features about the image, namely the second feature 520.

The following embodiments describe the specific manners of determining the first feature 510 and the second feature 520 respectively.
Illustratively, the data sources relevant to the first feature 510 cover two aspects: on the one hand, user data; on the other hand, data about the display image, including display object data, copy data, and the users' respective click information on the display object, the copy, and so on. Illustratively, the data source relevant to the second feature 520 is the image features of the display image. Taking an advertising banner as an example of the display image, the display object is a product. Table 1 below lists the data sources relevant to a banner, together with their category information and preprocessing information.
Table 1
With reference to Table 1, the user data about the banner includes: user profile data and the device identifier corresponding to the user. The user profile data includes information such as age, gender, occupation, and address; the device identifier corresponding to the user serves as the user identifier to uniquely determine the user, which facilitates collecting statistics on that user's preferences.
The classification learning algorithm of the ranking model is based on calculating distances or similarities between features. The distance or similarity calculation is usually a similarity calculation in Euclidean space (e.g., computing cosine similarity). Therefore, in this embodiment, the data sources need to be mapped into Euclidean space.
One-hot encoding can map discrete categorical data into Euclidean space. Under one-hot encoding, the values of a variable with no partial-order relation carry no artificial partial order and are equidistant from the origin: each value of the discrete categorical data corresponds to one point in Euclidean space. This makes the feature-distance calculations involved in the ranking algorithm well founded.

Therefore, one-hot encoding can be used as the data preprocessing manner for the above discrete categorical data.
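A minimal sketch of this preprocessing (the vocabulary and values are made up for illustration): under one-hot encoding, any two distinct category values land at the same Euclidean distance from each other, so the encoding imposes no spurious ordering.

```python
def one_hot(value, vocabulary):
    """Map a discrete categorical value to a point in Euclidean space."""
    vec = [0.0] * len(vocabulary)
    vec[vocabulary.index(value)] = 1.0
    return vec

def dist(a, b):
    """Plain Euclidean distance between two encoded vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

genders = ["male", "female", "unknown"]
codes = [one_hot(g, genders) for g in genders]
# Any two distinct codes are sqrt(2) apart: no partial order among values.
```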
In an exemplary embodiment, data preprocessing is performed on the data in the above data sources to determine the data features.

Illustratively, with reference to Fig. 5, the user features 101 can be determined by preprocessing the user data in Table 1. The specific processing, with reference to Table 1, is as follows: the user identifier (user id) and the user's gender (e.g., male, female, or unknown) are preprocessed through one-hot encoding, while the user's age is preprocessed through quantized segment encoding. This determines the identifier feature, age feature, and gender feature of the user features 101 in Fig. 5.
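The quantized segment encoding of age could look like the following; the bucket boundaries are illustrative assumptions, not values from the disclosure.

```python
AGE_BOUNDARIES = (18, 25, 35, 45, 60)  # assumed segment boundaries

def quantize_age(age, boundaries=AGE_BOUNDARIES):
    """Quantized segment encoding: map a raw age to its segment, then one-hot it."""
    bucket = sum(age >= b for b in boundaries)  # number of boundaries passed
    vec = [0.0] * (len(boundaries) + 1)
    vec[bucket] = 1.0
    return vec
```

Unlike one-hot encoding the raw age directly, bucketing keeps nearby ages in the same segment, so users of similar ages share the same age feature.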
Illustratively, referring again to Fig. 5, in the case of a banner, the display object features 102 can be determined by preprocessing the product data in Table 1. The specific processing, with reference to Table 1, is as follows: the product identifier, the product category (e.g., clothing, food, or cosmetics), and the product decoration technique data (e.g., shadows, decorative layers) are preprocessed through one-hot encoding. This determines the identifier feature, category feature, and technique feature of the display object features 102 in Fig. 5.
Illustratively, with continued reference to Fig. 5, in the case of a banner, in order to attract the user's attention and raise the click-through rate of the banner, some auxiliary images associated with the product (i.e., used to determine the auxiliary object features 103) may be arranged in the banner in addition to the product image. For example, a banner about basketball shoe A may show a basketball star wearing shoe A, to increase the user's attention to shoe A. The image of the basketball star wearing shoe A in the banner is the auxiliary product picture in Table 1 and is used to determine the auxiliary object features 103. Illustratively, the auxiliary object features 103 can be determined by preprocessing the auxiliary product picture data in Table 1. The specific preprocessing for determining the auxiliary object features 103 is the same as the preprocessing for determining the display object features 102 in the above embodiment, and is not repeated here.
Illustratively, still referring to Fig. 5, the copy features 104 can be determined by preprocessing the copy data in the banner. The copy data includes the copy style in the banner (e.g., minimalist, European, Chinese) and the copy layout (i.e., the typesetting of the copy in the banner, such as the arrangement of the main copy, secondary copy, and decorative copy). The copy data can likewise be preprocessed through one-hot encoding, to determine the copy style feature and copy layout feature of the copy features 104 in Fig. 5.
In an exemplary embodiment, Fig. 4 schematically shows a flowchart of a method for determining the second feature 520 according to an embodiment of the present disclosure. With reference to Fig. 4, the method provided by this embodiment includes:

Step S410: inputting the display image into a pre-trained neural network model; and step S420: determining the fully-connected-layer feature of the image according to the fully connected layer of the pre-trained neural network model, and determining the image style feature according to the inner products of the output vectors of a hidden layer of the pre-trained neural network model.
In an exemplary embodiment, the second feature of the display image is obtained using a pre-trained model. With reference to Table 1, for a banner image, the fully connected features (FCN features) are extracted from the fully connected layer of a pre-trained VGG-16, and the style features are extracted from the Gram matrix of a feature map of the pre-trained VGG-16.
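The style-feature part can be illustrated with the Gram matrix computation alone; NumPy and the 4×8×8 shape stand in for a real VGG-16 feature map. Each entry of the matrix is the inner product between two channels of a hidden-layer feature map, as recited in step S420.

```python
import numpy as np

def gram_matrix(feature_map):
    """Style feature: channel-by-channel inner products of a conv feature map.

    feature_map: array of shape (channels, height, width), e.g. one
    hidden-layer output of a pre-trained VGG-16.
    """
    c, h, w = feature_map.shape
    f = feature_map.reshape(c, h * w)
    return f @ f.T / (h * w)  # normalize by spatial size

rng = np.random.default_rng(0)
fmap = rng.random((4, 8, 8))       # stand-in for a VGG-16 feature map
style = gram_matrix(fmap).ravel()  # flattened into a style-feature vector
```

Because the spatial dimensions are summed out, the Gram matrix captures which channels co-activate (texture/style) while discarding where they activate (content).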
With continued reference to Fig. 3, after the sample data is determined from the first feature 510 and the second feature 520, in step S320 the user features and the attribute features are input into the ranking model; and in step S330 the user features and the attribute features are fused by the feature fusion layer of the ranking model, to obtain a fused feature.
In an exemplary embodiment, Fig. 6 schematically shows a flowchart of a feature processing method in the ranking model according to an embodiment of the present disclosure; it can serve as a specific implementation of step S320 and step S330. With reference to Fig. 6, the method provided by this embodiment includes steps S610 to S630. Specifically:
In step S610, embedding learning is performed on the first feature through the embedding layer of the ranking model, to obtain an embedded feature.

In an exemplary embodiment, the first feature 510 determined by encoding needs dimensionality reduction through embedding learning, so as to determine the embedded feature of the user features 101 and the embedded feature of each attribute feature of the display image.
Specifically, an index table can first be established for the first feature 510; with reference to Fig. 5, the first feature 510 is then input into the embedding layer 501 of the ranking model 500, and the embedding layer 501 performs feature dimensionality reduction through the index table and the weight matrix of the embedding layer. The dimensionality reduction can also be implemented with algorithms such as principal component analysis (PCA) or singular value decomposition (SVD).
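An embedding-layer lookup of this kind is equivalent to multiplying the one-hot vector by the weight matrix, which is what makes it a dimensionality reduction from the vocabulary size down to the embedding width. A sketch with illustrative sizes and a random (untrained) weight matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, embed_dim = 1000, 16               # illustrative sizes
W = rng.normal(size=(vocab_size, embed_dim))   # embedding weight matrix

def embed(index):
    """Row lookup: same result as one_hot(index) @ W, without the large matmul."""
    return W[index]

sparse = np.zeros(vocab_size)
sparse[42] = 1.0        # one-hot code of index 42
dense = embed(42)       # 1000-dim sparse code reduced to a 16-dim embedded feature
```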
In step S620, the embedded feature is fused by the first fusion layer of the ranking model, to obtain a first fused feature.

In an exemplary embodiment, with reference to Fig. 5, the embedded feature corresponding to the first feature 510 is input into the first fusion layer 502 (i.e., a perceptron layer) of the ranking model. The first fusion layer 502 fuses the user features 101, the display object features 102, and the auxiliary object features 103, to obtain the first fused feature.
With continued reference to Fig. 6, in step S630, the first fused feature and the second feature are fused by the second fusion layer of the ranking model, to obtain a second fused feature.

In an exemplary embodiment, with reference to Fig. 5, the output feature of the first fusion layer 502 (i.e., the first fused feature) and the second feature 520 are input into the second fusion layer 503, which concatenates the first fused feature with the VGG-16 fully-connected-layer features (FCN features) and the style features (the vector mapped from the Gram matrix of a VGG-16 feature map).
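The two fusion layers can be sketched as a perceptron layer over the embedded first feature, followed by a concatenation with the image features. All sizes and weights below are illustrative, not the disclosed architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

def perceptron_layer(x, w, b):
    """First fusion layer 502 sketch: one ReLU perceptron layer."""
    return np.maximum(0.0, x @ w + b)

embedded = rng.normal(size=8)                  # embedded first feature
w1, b1 = rng.normal(size=(8, 4)), np.zeros(4)  # untrained illustrative weights
first_fused = perceptron_layer(embedded, w1, b1)

image_features = rng.normal(size=6)            # second feature: FCN + style features
# Second fusion layer 503 sketch: concatenate the two branches.
second_fused = np.concatenate([first_fused, image_features])
```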
In an exemplary embodiment, with continued reference to Fig. 3, after the second fused feature is determined, in step S340 predicted click information is determined from the fused feature (i.e., the second fused feature), and a loss function is determined based on the predicted click information and the target click information, so as to optimize the model parameters of the ranking model according to the loss function.

In an exemplary embodiment, with reference to Fig. 5, the second fused feature is input into the classification layer 504 of the ranking model 500. Illustratively, the classification layer 504 predicts the click information for the second fused feature through a classification function (e.g., sigmoid), obtaining the predicted click information.
In an exemplary embodiment, the cross-entropy function of the predicted click information and the corresponding target click information is determined as the loss function of the ranking model, so that the model parameters of the ranking model are optimized through this loss function, thereby implementing model training.
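Under these definitions, the classification layer and its loss reduce to a sigmoid over a fused-feature score and a binary cross-entropy against the target click label; a minimal sketch:

```python
import numpy as np

def sigmoid(z):
    """Classification layer 504 sketch: squash a fused-feature score into P(click)."""
    return 1.0 / (1.0 + np.exp(-z))

def cross_entropy(p_click, target):
    """Loss between the predicted click probability and the target label (0 or 1)."""
    eps = 1e-12  # guard against log(0)
    return -(target * np.log(p_click + eps)
             + (1.0 - target) * np.log(1.0 - p_click + eps))

# A confident correct prediction is penalized far less than a confident wrong one.
good = cross_entropy(sigmoid(4.0), 1.0)
bad = cross_entropy(sigmoid(-4.0), 1.0)
```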
In an exemplary embodiment, if the test metrics obtained by testing the ranking model meet preset requirements, the trained ranking model is obtained. Illustratively, Fig. 7 schematically shows the relationship between ranking model training and ranking model application according to an embodiment of the present disclosure.
With reference to Fig. 7, the training stage 710 of the ranking model has been elaborated according to the technical solutions provided by the embodiments shown in Fig. 3 to Fig. 6, including obtaining training samples 711, training the model based on the training samples 712, and finally obtaining the trained ranking model 713.

In the following embodiments, the model application stage 720 is carried out based on the trained ranking model 713, including: obtaining user data relevant to the target user and display image data 721, and performing feature extraction 722 on these data; further, inputting the extracted features into the trained ranking model 713; and finally, determining the personalized display image according to the model output.

Based on the trained ranking model 713, the specific implementation of each step of the embodiment shown in Fig. 2 is described below:
In step S210, the user data of the target user, the display object data about at least one display image, and the copy data about the at least one display image are encoded, to obtain the first feature.
In an exemplary embodiment, the trained ranking model predicts the preference of the target user (i.e., a group of attribute features about the display image), and the personalized display image is then determined according to that preference. Since the attribute features of the personalized image embody the preference of the target user, the click-through rate of the display image is raised while the image-browsing experience of the target user is improved.
Illustratively, the user data includes: user profile data and the device identifier corresponding to the user. The user data is encoded to determine the user features. Since the specific implementation in this embodiment is the same as the implementation of obtaining the user features in the above embodiment, it is not repeated here.

Illustratively, at least one display image is also obtained in step S210. For each display image, the display object data and the copy data are obtained; if the display image contains an auxiliary object, the auxiliary object data is obtained as well. The above data is then processed to obtain the group of attribute features corresponding to each display image. Since the specific implementation in this embodiment is the same as the implementation of obtaining the attribute features of a display image in the above embodiment, it is not repeated here.
In step S220, feature extraction is performed on the at least one display image, to obtain the second feature.

In an exemplary embodiment, for each display image in step S210, its fully connected features and style features are obtained through the pre-trained network. Since the specific implementation is the same as that provided by the embodiment shown in Fig. 4, it is not repeated here.
In step S230, the first feature and the second feature are fused through the trained ranking model, to obtain the fused feature.

In an exemplary embodiment, the specific implementation of step S230 is the same as that provided by the embodiment shown in Fig. 5, and is therefore not repeated here.
In step S240, the click information of the target user for the fused feature is predicted through the ranking model, to determine the personalized display image for the target user.
In an exemplary embodiment, Fig. 8 schematically shows a flowchart of a classification method in the ranking model according to an embodiment of the present disclosure; it can serve as a specific implementation of step S240. With reference to Fig. 8, the method provided by this embodiment includes steps S810 to S830. Specifically:

In step S810, the click information of the target user for the fused feature is predicted through the classification layer of the ranking model.
In an exemplary embodiment, the mathematical expression of the ranking model is L(X) = P(click), where X denotes the input feature and P(click) denotes the probability that the target user clicks. Specifically, the target value of P(click) is 1 if the target user clicks and 0 if the target user does not click.

In an exemplary embodiment, for the input feature of the target user, the ranking model predicts the click information for that input feature.
In step S820, a target fused feature is determined according to the prediction result, where the target fused feature includes a group of personalized features about the target user; and in step S830, the personalized display image is determined according to the personalized features.

In an exemplary embodiment, the fused feature corresponding to the maximum value of the predicted click probability P(click) is taken as the target fused feature, and the features about the display image included in the target fused feature are taken as the group of personalized features about the target user. Further, a display image generated according to this group of personalized features serves as the personalized display image.
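Steps S810 to S830 amount to scoring each candidate and keeping the argmax; the candidate names and click probabilities below are made up for illustration.

```python
def pick_personalized(p_click_by_candidate):
    """Keep the candidate whose fused features score the highest P(click)."""
    return max(p_click_by_candidate, key=p_click_by_candidate.get)

# Hypothetical predicted click probabilities per candidate display image.
scores = {"banner_a": 0.21, "banner_b": 0.87, "banner_c": 0.55}
best = pick_personalized(scores)  # → "banner_b"
```

The attribute features of the winning candidate then serve as the group of personalized features from which the personalized display image is generated.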
This technical solution can solve the problem in the related art that display images lack personalization, which is unfavorable to raising the click-through rate of the objects shown in the display images. Taking an advertising banner as an example, this technical solution can personalize the banner, thereby improving the click-through rate and increasing the effectiveness of the advertisement.
Those skilled in the art will appreciate that all or some of the steps of the above embodiments are implemented as computer programs executed by a processor (including a CPU and a GPU). When the computer program is executed by the processor, the functions defined by the above method provided by the present disclosure are executed. The program can be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disc.

Further, it should be noted that the above drawings are only schematic illustrations of the processing included in the method according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It is readily understood that the processing shown in the drawings does not indicate or limit the temporal order of these processes. It is also readily understood that these processes can be executed, for example, synchronously or asynchronously in multiple modules.
Further, in this example embodiment, an apparatus for generating a personalized display image is also provided. With reference to Fig. 9, the apparatus 900 for generating a display image includes: a first feature determining module 901, a second feature determining module 902, a feature fusion module 903, and a personalized display image determining module 904. Specifically:
The first feature determining module 901 is configured to encode the user data of a target user, display object data about at least one display image, and copy data about the at least one display image, to obtain a first feature;

The second feature determining module 902 is configured to perform feature extraction on the at least one display image, to obtain a second feature;

The feature fusion module 903 is configured to fuse the first feature and the second feature through a trained ranking model, to obtain a fused feature;

The personalized display image determining module 904 is configured to predict, through the ranking model, the click information of the target user for the fused feature, to determine a personalized display image for the target user.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiments, the user data includes: user profile data and the device identifier corresponding to the user; the display object data includes: category data, identifier, and decoration technique data; the copy data includes: copy style and copy layout; and the trained ranking model is obtained based on at least one of a neural network model, a decision-tree model, and an extreme gradient boosting model.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiments, the second feature determining module 902 includes: an input unit 9021 and a feature extraction unit 9022.

The input unit 9021 is configured to: input the display image into a pre-trained neural network model; and the feature extraction unit 9022 is configured to: determine the fully-connected-layer feature of the image according to the fully connected layer of the pre-trained neural network model, and determine the image style feature according to the inner products of the output vectors of a hidden layer of the pre-trained neural network model.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiments, the apparatus 900 for generating a personalized display image further includes a model training module 905.

The model training module 905 includes: a sample obtaining unit 9051, a feature input unit 9052, a feature fusion unit 9053, and a model parameter optimization unit 9054.
The sample obtaining unit 9051 is configured to: obtain multiple groups of training samples, where each group of training samples includes: user features, attribute features of at least one display image, and target click information about those attribute features;

The feature input unit 9052 is configured to: input the user features and the attribute features into the ranking model;

The feature fusion unit 9053 is configured to: fuse the user features and the attribute features through the feature fusion layer of the ranking model, to obtain a fused feature; and

The model parameter optimization unit 9054 is configured to: determine predicted click information according to the fused feature, and determine a loss function based on the predicted click information and the target click information, so as to optimize the model parameters of the ranking model according to the loss function.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiments, the model parameter optimization unit 9054 is specifically configured to: determine the cross-entropy function of the predicted click information and the target click information as the loss function.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiments, the feature fusion module 903 is specifically configured to:

perform embedding learning on the first feature through the embedding layer of the ranking model, to obtain an embedded feature; fuse the embedded feature through the first fusion layer of the ranking model, to obtain a first fused feature; and fuse the first fused feature and the second feature through the second fusion layer of the ranking model, to obtain a second fused feature.
In an exemplary embodiment of the present disclosure, based on the foregoing embodiments, the personalized display image determining module 904 is specifically configured to:

predict, through the classification layer of the ranking model, the click information of the target user for the fused feature; determine a target fused feature according to the prediction result, where the target fused feature includes a group of personalized features about the target user; and determine the personalized display image according to the personalized features.
The details of each module or unit in the above apparatus for generating a personalized display image have been described in detail in the corresponding method for generating a personalized display image, and are therefore not repeated here.
Fig. 10 shows a schematic structural diagram of a computer system suitable for the electronic device used to implement an embodiment of the present invention.

It should be noted that the computer system 1000 of the electronic device shown in Fig. 10 is only an example, and shall not impose any restriction on the functions and usage scope of the embodiments of the present invention.
As shown in Fig. 10, the computer system 1000 includes a processor 1001, which may include a graphics processing unit (GPU) and a central processing unit (CPU), and can execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1002 or a program loaded from a storage portion 1008 into a random access memory (RAM) 1003. Various programs and data required for system operation are also stored in the RAM 1003. The processor (GPU/CPU) 1001, the ROM 1002, and the RAM 1003 are connected to one another through a bus 1004. An input/output (I/O) interface 1005 is also connected to the bus 1004.
The following components are connected to the I/O interface 1005: an input portion 1006 including a keyboard, a mouse, and the like; an output portion 1007 including a cathode-ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage portion 1008 including a hard disk and the like; and a communication portion 1009 including a network interface card such as a LAN (local area network) card or a modem. The communication portion 1009 performs communication processing via a network such as the Internet. A drive 1010 is also connected to the I/O interface 1005 as needed. A removable medium 1011, such as a magnetic disk, an optical disc, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 1010 as needed, so that a computer program read from it can be installed into the storage portion 1008 as needed.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 1009, and/or installed from the removable medium 1011. When the computer program is executed by the processor (GPU/CPU) 1001, the various functions defined in the system of the present application are executed. In some embodiments, the computer system 1000 may also include an AI (artificial intelligence) processor for handling computing operations related to machine learning.
It should be noted that the computer-readable medium shown in the embodiments of the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example but not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (Erasable Programmable Read Only Memory, EPROM), a flash memory, an optical fiber, a compact disc read-only memory (Compact Disc Read-Only Memory, CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and the computer-readable medium can send, propagate or transmit a program for use by or in combination with an instruction execution system, apparatus or device. The program code contained in the computer-readable medium may be transmitted by any suitable medium, including but not limited to wireless or wired media, or any suitable combination of the above.
The flowcharts and block diagrams in the accompanying drawings illustrate possible architectures, functions and operations of the systems, methods and computer program products according to the various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a part of code, and the module, program segment, or part of code contains one or more executable instructions for implementing the specified logical function. It should also be noted that in some alternative implementations, the functions marked in the blocks may occur in an order different from the order marked in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should further be noted that each block in a block diagram or flowchart, and a combination of blocks in a block diagram or flowchart, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or may be implemented by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software, or may be implemented by hardware, and the described units may also be provided in a processor. The names of these units do not, in some cases, constitute a limitation on the units themselves.
In another aspect, the present disclosure further provides a computer-readable medium. The computer-readable medium may be included in the electronic device described in the above embodiments, or may exist alone without being assembled into the electronic device. The computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to implement the methods described in the above embodiments.
For example, the electronic device may implement the steps shown in FIG. 2: step S210, performing encoding processing on user data of a target user, display object data of at least one display image, and copywriting data of the at least one display image, to obtain a first feature; step S220, performing feature extraction on the at least one display image to obtain a second feature; step S230, performing fusion processing on the first feature and the second feature by using a trained ranking model, to obtain a fusion feature; and step S240, performing click-through-rate prediction on the fusion feature by using the ranking model, to determine a personalized display image for the target user.
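Steps S210 to S240 can be sketched end to end. The following is a minimal, illustrative NumPy version only: the hash-based encoder, the average-pooled image feature, and the single logistic head are all assumptions standing in for the patent's trained ranking model, not the disclosed implementation.

```python
import numpy as np

def encode_first_feature(user_data, display_data, copy_data, dim=16):
    # Step S210: hash the categorical fields of the three data sources
    # into one fixed-length indicator vector (a stand-in encoder).
    vec = np.zeros(dim)
    for field in (*user_data, *display_data, *copy_data):
        vec[hash(field) % dim] = 1.0
    return vec

def extract_second_feature(image):
    # Step S220: placeholder for CNN feature extraction; the "image" is
    # already a 2-D array here and we simply average-pool its rows.
    return np.asarray(image, dtype=float).mean(axis=0)

def rank_images(user_data, candidates, weights):
    # Steps S230/S240: fuse both features by concatenation and score each
    # candidate with a logistic click-through-rate head; the highest
    # predicted CTR picks the personalized display image.
    scores = []
    for display_data, copy_data, image in candidates:
        first = encode_first_feature(user_data, display_data, copy_data)
        second = extract_second_feature(image)
        fused = np.concatenate([first, second])            # fusion feature
        scores.append(1.0 / (1.0 + np.exp(-weights @ fused)))
    return int(np.argmax(scores))
```

With zero weights every candidate scores 0.5 and the first is returned; a trained `weights` vector would order the candidates by predicted click-through rate.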
For another example, the electronic device may implement the steps shown in FIG. 3 to FIG. 8.
It should be noted that although several modules or units of the device for performing actions are mentioned in the above detailed description, such division is not mandatory. In fact, according to the embodiments of the present disclosure, the features and functions of two or more modules or units described above may be embodied in one module or unit. Conversely, the features and functions of one module or unit described above may be further divided and embodied in multiple modules or units.
Through the description of the above embodiments, those skilled in the art can readily understand that the example embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solutions according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk or the like) or on a network, and which includes several instructions to cause a computing device (which may be a personal computer, a server, a touch terminal, a network device or the like) to perform the methods according to the embodiments of the present disclosure.
Other embodiments of the present disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of the invention disclosed herein. The present application is intended to cover any variations, uses, or adaptations of the present disclosure that follow the general principles of the present disclosure and include common knowledge or conventional technical means in the art that are not disclosed in the present disclosure. The specification and embodiments are to be considered as exemplary only, and the true scope and spirit of the present disclosure are pointed out by the following claims.
It should be understood that the present disclosure is not limited to the precise structures that have been described above and shown in the drawings, and various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A method for generating a personalized display image, the method comprising:
performing encoding processing on user data of a target user, display object data of at least one display image, and copywriting data of the at least one display image, to obtain a first feature;
performing feature extraction on the at least one display image to obtain a second feature;
performing fusion processing on the first feature and the second feature by using a trained ranking model, to obtain a fusion feature; and
predicting, by using the ranking model, click information of the target user on the fusion feature, to determine a personalized display image for the target user.
2. The method for generating a personalized display image according to claim 1, wherein:
the user data comprises user profile data and a device identifier corresponding to the user;
the display object data comprises category data, an identifier, and modification-technique data;
the copywriting data comprises a copywriting style and a copywriting format; and
the trained ranking model is obtained based on at least one of a neural network model, a decision tree model, and an extreme gradient boosting model.
3. The method for generating a personalized display image according to claim 1, wherein the performing feature extraction on the at least one display image to obtain a second feature comprises:
inputting the display image into a pretrained neural network model; and
determining a fully connected layer feature of the image according to a fully connected layer of the pretrained neural network model, and determining an image style feature according to inner products of output vectors of a hidden layer of the pretrained neural network model.
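The two image features of claim 3 can be illustrated in a few lines of NumPy. The inner-product construction for the style feature corresponds to a Gram matrix over the hidden layer's output vectors; the affine `fc_feature` stands in for the pretrained network's fully connected layer. Both function names and all shapes here are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def fc_feature(activations, w, b):
    # "Fully connected layer feature": an affine map over the flattened
    # activations, with (w, b) playing the pretrained FC layer's weights.
    return w @ activations.ravel() + b

def style_feature(hidden):
    # Style feature from inner products of the hidden layer's output
    # vectors: for a (channels x positions) activation map this is the
    # Gram matrix G[i, j] = <h_i, h_j>, flattened into a vector.
    gram = hidden @ hidden.T
    return gram.ravel()
```

The Gram-matrix form is the same construction used for style representations in neural style transfer, which is a natural reading of "inner product of the output vectors of the hidden layer".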
4. The method for generating a personalized display image according to any one of claims 1 to 3, wherein the method further comprises:
obtaining multiple groups of training samples, wherein each group of training samples comprises a user feature, at least one attribute feature of a display image, and target click information about the attribute feature;
inputting the user feature and the attribute feature into a ranking model;
fusing the user feature and the attribute feature by using a feature fusion layer of the ranking model, to obtain a fusion feature; and
determining predicted click information according to the fusion feature, and determining a loss function based on the predicted click information and the target click information, to optimize model parameters of the ranking model according to the loss function.
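One optimization step of the training procedure in claim 4 might be sketched as follows, with concatenation standing in for the feature fusion layer and a logistic head for the click prediction; the gradient is that of the log loss referenced in claim 5. This is a hand-rolled illustration under those assumptions, not the patent's training code.

```python
import numpy as np

def train_step(user_feat, attr_feat, target_click, w, lr=0.1):
    # Fuse the user feature and the attribute feature (concatenation as
    # the feature fusion layer), predict a click probability with a
    # logistic head, then take one gradient step on the log loss.
    fused = np.concatenate([user_feat, attr_feat])
    pred = 1.0 / (1.0 + np.exp(-w @ fused))       # predicted click info
    grad = (pred - target_click) * fused          # d(log loss)/dw
    return w - lr * grad, pred
```

Iterating this step over the multiple groups of training samples optimizes the model parameters `w` as the claim describes.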
5. The method for generating a personalized display image according to claim 4, wherein the determining a loss function based on the predicted click information and the target click information comprises:
determining a cross-entropy function of the predicted click information and the target click information as the loss function.
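For binary click labels and predicted click probabilities, the cross-entropy of claim 5 can be written directly. This is the standard formulation; the `eps` clipping is a numerical-safety detail added here, not something the claim specifies.

```python
import numpy as np

def cross_entropy(target_click, predicted_click, eps=1e-12):
    # Binary cross entropy between target click labels and predicted
    # click probabilities; eps-clipping guards against log(0).
    p = np.clip(predicted_click, eps, 1.0 - eps)
    return float(-np.mean(target_click * np.log(p)
                          + (1.0 - target_click) * np.log(1.0 - p)))
```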
6. The method for generating a personalized display image according to any one of claims 1 to 3, wherein the performing fusion processing on the first feature and the second feature by using the trained ranking model to obtain a fusion feature comprises:
performing embedding learning on the first feature by using an embedding layer of the ranking model, to obtain an embedded feature;
performing fusion processing on the embedded feature by using a first fusion layer of the ranking model, to obtain a first fusion feature; and
performing fusion processing on the first fusion feature and the second feature by using a second fusion layer of the ranking model, to obtain a second fusion feature.
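The embedding layer and the two fusion layers of claim 6 could be sketched as below. Summation and concatenation are only one plausible choice for each fusion operator; the claim does not fix them, so both are assumptions, as are all names here.

```python
import numpy as np

def embed(field_ids, table):
    # Embedding layer: look up one dense vector per sparse field of the
    # first feature.
    return [table[i] for i in field_ids]

def first_fusion(embedded):
    # First fusion layer: combine the per-field embeddings; element-wise
    # summation is used here as one simple fusion operator.
    return np.sum(embedded, axis=0)

def second_fusion(first_fused, second_feature):
    # Second fusion layer: merge the fused embedding with the image
    # feature; concatenation yields the final (second) fusion feature.
    return np.concatenate([first_fused, second_feature])
```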
7. The method for generating a personalized display image according to claim 6, wherein the predicting, by using the ranking model, click information of the target user on the fusion feature to determine a personalized display image for the target user comprises:
predicting the click information of the target user on the fusion feature by using a classification layer of the ranking model;
determining a target fusion feature according to a prediction result, wherein the target fusion feature comprises a group of personalized features about the target user; and
determining the personalized display image according to the personalized features.
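The selection step of claim 7 reduces to scoring each candidate's fusion feature with the classification layer and keeping the best one. A minimal sketch, assuming a caller-supplied `classify` scorer (a hypothetical stand-in for the classification layer, not from the patent):

```python
import numpy as np

def select_personalized_image(fused_features, classify):
    # Score each candidate's fusion feature with the classification
    # layer (here any callable returning a click probability), keep the
    # best-scoring one as the target fusion feature, and return its
    # index as the personalized display image choice.
    probs = [classify(f) for f in fused_features]
    best = int(np.argmax(probs))
    return best, fused_features[best]
```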
8. An apparatus for generating a personalized display image, the apparatus comprising:
a first feature determining module, configured to perform encoding processing on user data of a target user, display object data of at least one display image, and copywriting data of the at least one display image, to obtain a first feature;
a second feature determining module, configured to perform feature extraction on the at least one display image to obtain a second feature;
a feature fusion module, configured to perform fusion processing on the first feature and the second feature by using a trained ranking model, to obtain a fusion feature; and
a personalized display image determining module, configured to predict, by using the ranking model, click information of the target user on the fusion feature, to determine a personalized display image for the target user.
9. A computer storage medium, having a computer program stored thereon,
wherein the computer program, when executed by a processor, implements the method for generating a personalized display image according to any one of claims 1 to 7.
10. An electronic device, comprising:
a processor; and
a memory, configured to store instructions executable by the processor,
wherein the processor is configured to perform, by executing the executable instructions, the method for generating a personalized display image according to any one of claims 1 to 7.
CN201910765901.2A 2019-08-19 2019-08-19 Method and device for generating personalized display image and electronic equipment Active CN110489582B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910765901.2A CN110489582B (en) 2019-08-19 2019-08-19 Method and device for generating personalized display image and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910765901.2A CN110489582B (en) 2019-08-19 2019-08-19 Method and device for generating personalized display image and electronic equipment

Publications (2)

Publication Number Publication Date
CN110489582A true CN110489582A (en) 2019-11-22
CN110489582B CN110489582B (en) 2023-11-07

Family

ID=68552093

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910765901.2A Active CN110489582B (en) 2019-08-19 2019-08-19 Method and device for generating personalized display image and electronic equipment

Country Status (1)

Country Link
CN (1) CN110489582B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105046515A (en) * 2015-06-26 2015-11-11 深圳市腾讯计算机系统有限公司 Advertisement ordering method and device
US20170330054A1 (en) * 2016-05-10 2017-11-16 Baidu Online Network Technology (Beijing) Co., Ltd. Method And Apparatus Of Establishing Image Search Relevance Prediction Model, And Image Search Method And Apparatus
CN109460513A (en) * 2018-10-31 2019-03-12 北京字节跳动网络技术有限公司 Method and apparatus for generating clicking rate prediction model
CN109495552A (en) * 2018-10-31 2019-03-19 北京字节跳动网络技术有限公司 Method and apparatus for updating clicking rate prediction model

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Jinzhong, "Research Progress and Prospect of Learning to Rank", Acta Automatica Sinica, vol. 44, no. 8, pages 1345 - 1363 *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112862557A (en) * 2019-11-28 2021-05-28 北京金山云网络技术有限公司 Information display method, display device, server and storage medium
CN112862557B (en) * 2019-11-28 2024-06-04 北京金山云网络技术有限公司 Information display method, display device, server and storage medium
CN110909182A (en) * 2019-11-29 2020-03-24 北京达佳互联信息技术有限公司 Multimedia resource searching method and device, computer equipment and storage medium
CN113450433A (en) * 2020-03-26 2021-09-28 阿里巴巴集团控股有限公司 Picture generation method and device, computer equipment and medium
CN111506378B (en) * 2020-04-17 2021-09-28 腾讯科技(深圳)有限公司 Method, device and equipment for previewing text display effect and storage medium
CN111506378A (en) * 2020-04-17 2020-08-07 腾讯科技(深圳)有限公司 Method, device and equipment for previewing text display effect and storage medium
CN111581926A (en) * 2020-05-15 2020-08-25 北京字节跳动网络技术有限公司 Method, device and equipment for generating file and computer readable storage medium
CN111581926B (en) * 2020-05-15 2023-09-01 抖音视界有限公司 Document generation method, device, equipment and computer readable storage medium
CN112767038A (en) * 2021-01-25 2021-05-07 特赞(上海)信息科技有限公司 Poster CTR prediction method and device based on aesthetic characteristics
CN113111243A (en) * 2021-03-29 2021-07-13 北京达佳互联信息技术有限公司 Display object sharing method and device and storage medium
CN113342868A (en) * 2021-08-05 2021-09-03 腾讯科技(深圳)有限公司 Information recommendation method, device, equipment and computer readable storage medium
CN114003806A (en) * 2021-09-27 2022-02-01 五八有限公司 Content display method and device, electronic equipment and readable medium
WO2024061073A1 (en) * 2022-09-19 2024-03-28 北京沃东天骏信息技术有限公司 Multimedia information generation method and apparatus, and computer-readable storage medium
CN116433800A (en) * 2023-06-14 2023-07-14 中国科学技术大学 Image generation method based on social scene user preference and text joint guidance
CN116433800B (en) * 2023-06-14 2023-10-20 中国科学技术大学 Image generation method based on social scene user preference and text joint guidance
CN117611953A (en) * 2024-01-18 2024-02-27 深圳思谋信息科技有限公司 Graphic code generation method, graphic code generation device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN110489582B (en) 2023-11-07

Similar Documents

Publication Publication Date Title
CN110489582A (en) Personalization shows the generation method and device, electronic equipment of image
Ware Information visualization: perception for design
Park et al. A metaverse: Taxonomy, components, applications, and open challenges
CN108197532B (en) The method, apparatus and computer installation of recognition of face
CN111754596B (en) Editing model generation method, device, equipment and medium for editing face image
CN109409994A (en) The methods, devices and systems of analog subscriber garments worn ornaments
CN111046275B (en) User label determining method and device based on artificial intelligence and storage medium
CN111680217A (en) Content recommendation method, device, equipment and storage medium
Liu et al. The path of film and television animation creation using virtual reality technology under the artificial intelligence
CN115205949A (en) Image generation method and related device
CN110377587A (en) Method, apparatus, equipment and medium are determined based on the migrating data of machine learning
CN115131698B (en) Video attribute determining method, device, equipment and storage medium
CN111353299B (en) Dialog scene determining method based on artificial intelligence and related device
CN111291170A (en) Session recommendation method based on intelligent customer service and related device
CN115131849A (en) Image generation method and related device
Yang et al. Deep learning-based viewpoint recommendation in volume visualization
Deng et al. Application of vr in the experimental teaching of animation art
CN111522979A (en) Picture sorting recommendation method and device, electronic equipment and storage medium
CN108268629A (en) Image Description Methods and device, equipment, medium, program based on keyword
CN110472239A (en) Training method, device and the electronic equipment of entity link model
Wu et al. Automatic generation of traditional patterns and aesthetic quality evaluation technology
CN117271818A (en) Visual question-answering method, system, electronic equipment and storage medium
CN113537267A (en) Method and device for generating countermeasure sample, storage medium and electronic equipment
CN110135769A (en) Kinds of goods attribute fill method and device, storage medium and electric terminal
CN116955707A (en) Content tag determination method, device, equipment, medium and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant