WO2012071696A1 - 基于用户兴趣学习的个性化广告推送方法与系统 - Google Patents


Info

Publication number
WO2012071696A1
Authority
WO
WIPO (PCT)
Prior art keywords
interest
user
scene
learning
model
Prior art date
Application number
PCT/CN2010/079245
Other languages
English (en)
French (fr)
Inventor
李甲
高云超
余昊男
张军
田永鸿
严军
Original Assignee
华为技术有限公司
北京大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司, 北京大学 filed Critical 华为技术有限公司
Priority to PCT/CN2010/079245 priority Critical patent/WO2012071696A1/zh
Priority to CN2010800065025A priority patent/CN102334118B/zh
Priority to EP10860233.5A priority patent/EP2568429A4/en
Publication of WO2012071696A1 publication Critical patent/WO2012071696A1/zh
Priority to US13/709,795 priority patent/US8750602B2/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/7837Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/2431Multiple classes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects


Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Business, Economics & Management (AREA)
  • Library & Information Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Finance (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Image Analysis (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Description

Personalized Advertisement Push Method and System Based on User Interest Learning

Technical Field

The present invention relates to the field of image processing, and in particular to a personalized advertisement push method and system based on user interest learning.

Background Art
In recent years, the number of videos on the Internet has grown rapidly, and this mass of videos has greatly promoted the development of services such as online video advertising. A variety of methods that associate videos with advertisements according to different criteria are already in use on video websites and in video player software. In general, these methods emphasize the push of predefined advertisements, and include:

1) Temporally inserted advertisements. As shown in Figure 1(a), a predefined advertisement, in the form of a picture, a video, etc., is played while the video buffers at the beginning, pauses midway, or ends.

2) Peripherally associated advertisements. As shown in Figure 1(b), while the video plays, predefined advertisements are displayed around the video player (e.g., on the web page or the player border).

3) Partially overlaid advertisements. As shown in Figure 1(c), a small advertisement (a picture or a simple Flash animation) is overlaid on part of the video, usually without affecting its main content.

All three advertisement push methods are widely used at present, but their effect is not ideal. With the first method, users often browse other web pages while the advertisement plays, reducing its effect; the second method is less intrusive, but the advertisement is often ignored as page background; the third method degrades the normal viewing experience to some extent. The main problem, however, is that the pushed advertisements are generally only weakly associated with the content and cannot satisfy each user's personalized interests, so the advertisements achieve poor results.

Summary of the Invention
Embodiments of the present invention provide a personalized advertisement push method and system based on user interest learning, to solve the problem that existing pushed advertisements are only weakly associated with the content and cannot satisfy each user's personalized interests.

An embodiment of the present invention provides a personalized advertisement push method based on user interest learning, comprising:

obtaining a plurality of user interest models through multi-task ranking learning;

extracting objects of interest from a video according to the user interest models;

extracting a plurality of visual features of the objects of interest, and retrieving relevant advertisement information from an advertisement library according to the visual features.

An embodiment of the present invention further provides a personalized advertisement push system based on user interest learning, comprising:

an interest model learning module, configured to obtain a plurality of user interest models through multi-task ranking learning; an interest object extraction module, configured to extract objects of interest from a video according to the user interest models; and an advertisement retrieval module, configured to extract a plurality of visual features of the objects of interest and retrieve relevant advertisement information from an advertisement library according to the visual features.

As can be seen from the above technical solution, embodiments of the present invention obtain user interest models with a multi-task ranking learning algorithm, automatically extract regions of interest in the video for different users on that basis, and then use the regions of interest to associate advertisement information. Advertisements provided in this way are not only closely related to the video content but also satisfy, to a certain extent, the users' personalized requirements, realizing personalized advertisement push.

Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are merely some embodiments of the present invention, and a person of ordinary skill in the art may derive other implementations from them without creative effort.

Figure 1 is a schematic diagram of existing advertisement push methods;

Figure 2 is a schematic flowchart of a personalized advertisement push method based on user interest learning according to an embodiment of the present invention;

Figure 3 is a schematic flowchart of user interest model learning according to an embodiment of the present invention;

Figure 4 is a schematic flowchart of video interest object extraction according to an embodiment of the present invention;

Figure 5 is a schematic diagram of the interest distribution of a key frame obtained during video interest object extraction according to an embodiment of the present invention;

Figure 6 is a schematic diagram of a personalized advertisement push system based on user interest learning according to an embodiment of the present invention;

Figure 7 is a schematic diagram of data flow in the system shown in Figure 6 according to an embodiment of the present invention;

Figure 8 is a schematic diagram of the advertisement push effect generated by the method and system according to an embodiment of the present invention.

Detailed Description of the Embodiments
Specific embodiments of the present invention are described in detail below with reference to the accompanying drawings.

In the embodiments provided by the present invention, the system first trains user interest models with a multi-task ranking learning algorithm, using a pre-collected set of scenes with varied content and the users' interest feedback on those scenes, and simultaneously obtains a scene classification and a user classification. This includes steps such as extracting low-level visual features of the scenes, randomly initializing the scene and user classifications, and computing the interest model parameters. Then, while playing a video, the system detects key frames, classifies the scene corresponding to each key frame according to the scene categories obtained during model learning, and computes an interest map from each user's interest model. Finally, a region-growing method generates a region of high interest from the interest map as the object of interest, relevant advertisements are retrieved from the advertisement information library according to its various features, and a video stream with personalized advertisements is output. The features of the object of interest reflect its visual characteristics from different angles and at different levels, including but not limited to color, structure, contour, and texture features; preferably, embodiments of the present invention extract the HSV color histogram, Gabor histogram, SIFT histogram, and visual-pattern features of the object of interest. The retrieval method is a fast matching algorithm that adopts different matching strategies for different features.

Figure 2 is a schematic flowchart of a personalized advertisement push method based on user interest learning according to an embodiment of the present invention. As shown in Figure 2, this embodiment may include the following steps.
Step 201, interest model learning: multiple user interest models are obtained through a multi-task ranking learning algorithm.

As shown in Figure 3, this step further includes:

Step 2011, acquiring various scenes and each user's interest feedback on each scene.

Specifically, the scenes may cover a variety of topics, such as advertisements, news, cartoons, and movies. Users can mark objects of interest in these scenes through simple interaction. Since different users are interested in different things even within the same scene, the scene set, the user set, and the relations between them can be characterized as follows:
S = {S_1, ..., S_k, ..., S_K} denotes a set of K scenes, where S_k is the k-th scene; U = {U_1, ..., U_m, ..., U_M} denotes a set of M users, where U_m is the m-th user. The relation between the two is represented by Θ = {θ_mk ∈ {0, 1}}, where θ_mk = 1 if and only if user U_m interacted with scene S_k and marked an object of interest. Suppose scene S_k is divided into a set of macroblocks B_k = {b_k1, b_k2, ..., b_kn}; the relation between the object of interest O_mk marked by user U_m in scene S_k and the macroblocks of S_k then induces another binary set Y = {y_mki ∈ {0, 1}}, where y_mki = 1 if and only if the i-th macroblock b_ki of scene S_k belongs to the object of interest O_mk marked by the user.
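The two binary relations above can be held as plain arrays. A minimal sketch with toy sizes (all identifiers here are illustrative, not from the patent):

```python
import numpy as np

K, M, n = 4, 3, 6          # scenes, users, macroblocks per scene (toy sizes)
rng = np.random.default_rng(0)

# theta[m, k] = 1 iff user m interacted with scene k and marked an object
theta = rng.integers(0, 2, size=(M, K))

# y[m, k, i] = 1 iff macroblock i of scene k lies inside the object marked
# by user m; labels are only meaningful where theta[m, k] == 1
y = rng.integers(0, 2, size=(M, K, n)) * theta[:, :, None]

# Sanity check: no interest labels exist for (user, scene) pairs without feedback
assert np.all(y[theta == 0] == 0)
```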
Step 2012, extracting the low-level visual features of each macroblock in each scene, both locally and globally.

Specifically, each scene is divided into a set of macroblocks, and the low-level visual features of each macroblock are computed; the low-level visual features of the whole scene can then be obtained by combining the macroblock features. In one embodiment, the scene may be divided into 16×16 macroblocks, and local contrast features over multiple scales and multiple visual channels are extracted as the low-level visual features of each macroblock. Meanwhile, the global visual features of a macroblock are obtained by computing the differences between several visual statistics of the macroblock and of the whole scene containing it.

For a macroblock b_ki, let x_ki denote its feature vector; then for a scene S_k, X_k = {x_k1, x_k2, ..., x_kn} denotes the set of feature vectors of all its macroblocks. From this set, the feature vector v_k of the whole scene and the feature vector u_mk of the object of interest can be obtained by some combining transformation. In one embodiment, v_k is defined as the mean and standard deviation of the feature vectors of the scene's macroblocks, and u_mk as the mean and standard deviation of the feature vectors of the macroblocks inside the object of interest. Training a user interest model thus becomes the task of finding a model (a function φ: x → R) that assigns different real values to the different macroblocks b_ki of scene S_k according to their feature vectors x_ki. Sorting these real values gives a ranking π_k(φ) = {φ(x_k1), φ(x_k2), ..., φ(x_kn)}; the final goal is to reduce the gap between the ranking π_k(φ) output by the model and the ranking given by the user's feedback.
Step 2013, randomly performing a preliminary classification of scenes and users.

Specifically, the preliminary classification may be performed by randomly partitioning the scenes and users into clusters. Another intuitive method is to classify scenes and users separately, according to the similarity of scene content and the similarity of user interest feedback respectively. In this embodiment, scene content similarity is computed from the scene features v_k obtained in step 2012, and user interest similarity is computed from the features u_mk of the objects of interest selected by each user. The result of the preliminary classification is represented by α = {α_ki ∈ {0, 1}} and β = {β_mj ∈ {0, 1}}, where α_ki = 1 if and only if S_k belongs to the i-th scene class, and β_mj = 1 if and only if U_m belongs to the j-th user class.
Step 2014, initializing the user interest models according to the two classification results.

Specifically, initializing the user interest models first requires constructing the objective function to be optimized. The objective function consists of two parts; in this embodiment it is formalized as:

min_W L(W, α, β) + λ·Ω(W, α, β),

s.t. Σ_i α_ki = 1, α_ki ∈ {0, 1}, for any k,
     Σ_j β_mj = 1, β_mj ∈ {0, 1}, for any m.

Here W is the set of interest model parameters, I is the set of scene classes, J is the set of user classes, L(W, α, β) is the empirical loss, and Ω(W, α, β) is a penalty on W determined from prior knowledge. If l(π_k(φ_ij), Y_mk) denotes the difference between the interest map predicted for scene S_k by interest model φ_ij and the actual interest map of user U_m, the empirical loss can be defined as:

L(W, α, β) = Σ_{i∈I} Σ_{j∈J} Σ_{m=1..M} Σ_{k=1..K} θ_mk · α_ki · β_mj · l(π_k(φ_ij), Y_mk)

In one embodiment, l(π_k(φ), Y_mk) can be defined as a pairwise ranking loss that counts the macroblock pairs ranked in the wrong order, i.e. pairs in which a macroblock inside the object of interest (y = 1) receives a lower score than a macroblock outside it (y = 0):

l(π_k(φ), Y_mk) = Σ_{(i0, i1): y_mk,i0 = 1, y_mk,i1 = 0} [φ(x_k,i0) < φ(x_k,i1)]

where φ(x) = w·x is a linear user interest model with parameter vector w, and [A] = 1 if the event A holds and [A] = 0 otherwise. The penalty loss Ω(W, α, β) is the sum of four penalty terms set from prior knowledge. That is:
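The pairwise loss above simply counts misordered (inside, outside) macroblock pairs under the linear scorer. A small sketch (toy data, invented names):

```python
import numpy as np

def pairwise_rank_loss(w, X_k, y_mk):
    """l(pi_k(phi), Y_mk) for the linear model phi(x) = w.x: the number of
    pairs where a macroblock inside the object of interest (y = 1) scores
    lower than a macroblock outside it (y = 0)."""
    scores = X_k @ w
    pos = scores[y_mk == 1]          # macroblocks inside the object
    neg = scores[y_mk == 0]          # macroblocks outside it
    return int((pos[:, None] < neg[None, :]).sum())

X_k = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5], [0.2, 0.1]])
y_mk = np.array([1, 0, 1, 0])
w = np.array([1.0, -1.0])
loss = pairwise_rank_loss(w, X_k, y_mk)   # one misranked pair here
```

The empirical loss L(W, α, β) then sums this quantity over all (user, scene) pairs with feedback, weighted by the current class indicators α and β.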
Ω(W, α, β) = ε_s·Ω_s + ε_u·Ω_u + ε_d·Ω_d + ε_c·Ω_c

In the above, the four weighting coefficients ε_s, ε_u, ε_d, ε_c are set according to performance on a validation data set. Ω_s is the scene clustering penalty, which mainly measures the difference of the feature vectors v_k between scenes: when two scenes have the same content but lie in different scene classes, this penalty becomes large. In one embodiment, we define the scene clustering penalty as:

Ω_s = Σ_{k0<k1} Σ_i (α_{k0,i} − α_{k1,i})² · [cos(v_{k0}, v_{k1})]_+

where cos(v_{k0}, v_{k1}) denotes the cosine similarity between the scene feature vectors v_{k0} and v_{k1}, and [x]_+ denotes max(0, x).
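Under the one-hot cluster-indicator convention above, Ω_s can be computed directly. A sketch (the toy data below is illustrative):

```python
import numpy as np

def scene_cluster_penalty(alpha, V):
    """Omega_s: for each scene pair, the squared difference of their class
    indicators, weighted by [cos(v_k0, v_k1)]_+ . Large when two scenes with
    similar content sit in different scene classes."""
    K = len(V)
    Vn = V / np.linalg.norm(V, axis=1, keepdims=True)   # unit feature vectors
    total = 0.0
    for k0 in range(K):
        for k1 in range(k0 + 1, K):
            diff = np.sum((alpha[k0] - alpha[k1]) ** 2)
            cos = float(Vn[k0] @ Vn[k1])
            total += diff * max(0.0, cos)
    return total

V = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # scenes 0 and 1 identical
alpha_good = np.array([[1, 0], [1, 0], [0, 1]])      # identical scenes together
alpha_bad = np.array([[1, 0], [0, 1], [0, 1]])       # identical scenes split
```

With `alpha_good` the penalty is 0; with `alpha_bad` it is 2, since splitting two identical scenes across classes is exactly what this term punishes.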
Ω_u is the user clustering penalty, which mainly measures the difference of the features u_mk of the objects of interest selected by the users: when users with the same preferences are assigned to different classes, this penalty becomes large. In one embodiment, we define the user clustering penalty as:

Ω_u = Σ_{m0<m1} Σ_j (β_{m0,j} − β_{m1,j})² · δ_{m0,m1}

where δ_{m0,m1} denotes the similarity between users m0 and m1, defined from the objects of interest they selected; in its definition, T_s is a predefined threshold and Z_{m0,m1} is a constant used to normalize δ_{m0,m1} into the range [0, 1].

Ω_d is the model difference penalty, which mainly measures the prediction losses of the different models in different situations and encourages the user models of different classes to give different predictions. This is because even for the same class of user models, the predictions differ under different scene classes.
Ω_c is the model complexity penalty, obtained by computing the sum of the norms of the model parameters; it becomes large when a complex model is adopted. In one embodiment, the model complexity penalty is defined as the sum of the norms of all model parameter vectors w_ij. During model updating, this penalty term can be used to control the number of user and scene classes and avoid producing an overly complex model.
Step 2015, updating the scene classification and the user classification in turn on the basis of the obtained user interest models.

Step 2016, training again on the new scene and user classification results to obtain new user interest models.

Step 2017, judging whether the predefined number of iterations has been reached or the objective function has fallen below a certain value. If yes, go to step 2018; if no, return to step 2015.

Step 2018, taking the user interest models obtained in the last iteration, together with the scene and user classes, as the final user interest models and the final scene and user classifications.

It should be noted that the initial computation of the interest models in step 2014 aims to minimize the empirical loss. The updates of the scene and user classifications in step 2015 are performed on the basis of the obtained user interest models: for example, the scene cluster update can aim to reduce the models' prediction error and increase the content similarity between scenes in a cluster, while the user cluster update can rely on the known interest models and increase the preference similarity between users in a cluster. New user interest models are then computed from the two newly obtained classifications, and the iterative update steps are repeated until the defined conditions are met (the defined number of iterations is reached, or the objective function value becomes sufficiently small). After step 2018, the obtained scene and user classifications and user interest models serve as the basis for the subsequent multi-task extraction of objects of interest.
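The loop of steps 2014–2018 is an alternating minimization. Its control flow can be sketched as below; the stub update functions in the toy instantiation only stand in for the actual model fitting and cluster reassignment, which the patent leaves to the specific embodiment:

```python
def learn_interest_models(objective, fit_models, update_scene_classes,
                          update_user_classes, alpha, beta,
                          max_iters=50, tol=1e-3):
    """Alternate between fitting per-(scene-class, user-class) interest
    models and re-partitioning scenes and users, until the iteration budget
    is exhausted or the objective falls below `tol` (steps 2014-2018)."""
    W = fit_models(alpha, beta)                      # step 2014: initialize
    for _ in range(max_iters):
        alpha = update_scene_classes(W, alpha)       # step 2015: scene classes
        beta = update_user_classes(W, beta)          # step 2015: user classes
        W = fit_models(alpha, beta)                  # step 2016: retrain
        if objective(W, alpha, beta) < tol:          # step 2017: stop test
            break
    return W, alpha, beta                            # step 2018: final result

# Toy instantiation: each "refit" halves a fake loss, so the loop converges.
state = {"loss": 1.0}
def fit_models(a, b):
    state["loss"] *= 0.5
    return state["loss"]

result = learn_interest_models(lambda W, a, b: W, fit_models,
                               lambda W, a: a, lambda W, b: b,
                               alpha=None, beta=None)
```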
Step 202, interest object extraction: objects of interest are extracted from the video according to the user interest models. As shown in Figure 4, this step further includes:

Step 2021, detecting a representative key frame in the video stream as a key scene.

Specifically, the similarities between all frames within a video shot are computed, and the frame with the highest similarity to the other frames is taken as the representative key frame.
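Key-frame selection as described, the frame most similar to all others in the shot, can be sketched with any frame-similarity measure; cosine similarity of flattened frames below is an assumption standing in for whatever measure an implementation uses:

```python
import numpy as np

def key_frame_index(frames):
    """Return the index of the frame with the highest total similarity to
    the other frames of the shot (cosine similarity of flattened frames is
    an illustrative choice)."""
    F = np.array([f.ravel() for f in frames], dtype=float)
    F /= np.linalg.norm(F, axis=1, keepdims=True)
    sim = F @ F.T                      # pairwise cosine similarities
    np.fill_diagonal(sim, 0.0)        # ignore self-similarity
    return int(np.argmax(sim.sum(axis=1)))

shot = [np.array([[1.0, 0.0], [0.0, 0.0]]),
        np.array([[1.0, 0.1], [0.0, 0.0]]),
        np.array([[0.0, 0.0], [0.0, 1.0]])]
k = key_frame_index(shot)   # one of the two near-identical frames wins
```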
Step 2022, extracting the low-level visual features of each macroblock of the current scene and computing the overall low-level visual features of the scene.

Specifically, using the same low-level visual features as in the interest model learning process, the low-level features of each macroblock of the current scene are extracted first, and the overall low-level features of the scene are then computed. In this embodiment, the mean and standard deviation of the macroblock features are taken as the overall scene features.

Step 2023, classifying the scene according to its overall low-level visual features.

Specifically, the overall low-level visual features obtained in step 2022 serve as the basis for classifying the scene, and the closest of the known scene classes is selected. Preferably, a support vector machine can be trained to perform this classification. Once the current user class and scene class are known, the known user interest model can be used to rank the degree of interest of every macroblock in the scene.

Step 2024, ranking the degree of interest of each macroblock of the scene according to the user interest model.

Step 2025, mapping the ranking result to a degree of interest for each candidate block to obtain the interest distribution of the scene.

Specifically, the ranking result of step 2024 is mapped into a numerical range convenient for representing an interest map, for example decimals between [0, 1]. In one embodiment, the rank indices c_n ∈ {0, ..., N−1} are mapped with a fourth-power function whose range is [0, 1]. This yields the interest map of the scene, as shown in Figure 5.

Step 2026, selecting the candidate block with the highest global degree of interest.

Step 2027, generating a region of high interest from it with a region-growing algorithm.

It should be noted that the algorithm used to generate the high-interest region in this step is not limited to region growing; other algorithms may also be used.
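The rank-to-interest mapping of step 2025 can be sketched as follows. The exact fourth-power form (c_n / (N−1))**4 is a reconstruction, pinned down only by the stated rank indices, the fourth-power description, and the range [0, 1]:

```python
import numpy as np

def interest_map(scores):
    """Map model scores to interest degrees in [0, 1]: rank the macroblocks
    ascending, then apply (c_n / (N - 1))**4 so that only the top-ranked
    blocks keep a substantial interest value."""
    order = np.argsort(scores)             # ascending: rank 0 = least interesting
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(scores))  # c_n for each macroblock
    return (ranks / (len(scores) - 1.0)) ** 4

scores = np.array([0.3, 2.0, -1.0, 0.9, 1.5])
m = interest_map(scores)   # top-scoring block maps to 1.0, lowest to 0.0
```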
Step 2028, obtaining the object of interest from the region of interest.

By performing the above steps, the objects that interest the user are extracted from the video.
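Steps 2026–2028 can be sketched as a simple 4-connected region growing from the highest-interest block. The threshold stopping rule and the connectivity are illustrative choices; the patent fixes neither and explicitly allows other algorithms:

```python
import numpy as np

def grow_interest_region(imap, thresh=0.5):
    """Start from the macroblock with the highest interest and grow a
    4-connected region over neighbours whose interest exceeds `thresh`
    (an illustrative stopping rule); returns a boolean object mask."""
    h, w = imap.shape
    seed = np.unravel_index(np.argmax(imap), imap.shape)
    region = np.zeros((h, w), dtype=bool)
    stack = [seed]
    while stack:
        r, c = stack.pop()
        if region[r, c]:
            continue
        region[r, c] = True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and not region[rr, cc] \
                    and imap[rr, cc] > thresh:
                stack.append((rr, cc))
    return region

imap = np.array([[0.1, 0.2, 0.1],
                 [0.2, 1.0, 0.8],
                 [0.1, 0.7, 0.1]])
mask = grow_interest_region(imap)   # grows from the 1.0 block over 0.8, 0.7
```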
Step 203, associated advertisement retrieval: a plurality of visual features of the object of interest are extracted, and relevant advertisement information is retrieved from the advertisement library according to these features.
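Retrieval in step 203 can be sketched with one of the named features, the HSV color histogram. Histogram intersection as the matching score is an assumption: the patent only states that a fast matcher with per-feature strategies is used, and the ad names below are invented:

```python
import numpy as np

def hsv_histogram(hsv_pixels, bins=8):
    """Per-channel HSV histogram, concatenated and L1-normalized
    (hsv_pixels: (N, 3) array with channel values in [0, 1])."""
    hist = [np.histogram(hsv_pixels[:, c], bins=bins, range=(0.0, 1.0))[0]
            for c in range(3)]
    h = np.concatenate(hist).astype(float)
    return h / h.sum()

def retrieve_ads(query_hist, ad_library):
    """Rank ads by histogram intersection with the query (higher = closer)."""
    scored = [(name, float(np.minimum(query_hist, h).sum()))
              for name, h in ad_library.items()]
    return sorted(scored, key=lambda t: -t[1])

rng = np.random.default_rng(1)
obj = rng.random((500, 3))                             # object of interest pixels
ads = {"ad_red": rng.random((500, 3)) * [0.1, 1.0, 1.0],   # hue squeezed low
       "ad_any": rng.random((500, 3))}                     # hue spread like obj
ranking = retrieve_ads(hsv_histogram(obj),
                       {k: hsv_histogram(v) for k, v in ads.items()})
```

The ad whose color distribution matches the object's ranks first; in a full system the per-feature scores (HSV, Gabor, SIFT, visual pattern) would be combined.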
The personalized advertisement push method based on user interest learning provided by this embodiment of the present invention obtains user interest models with a multi-task ranking learning algorithm, automatically extracts regions of interest in the video for different users on that basis, and then uses the regions of interest to associate advertisement information. Advertisements provided in this way are not only closely related to the video content but also satisfy the users' preferences to a certain extent, realizing personalized advertisement push.
As shown in Figure 6, an embodiment of the present invention further provides a personalized advertisement push system based on user interest learning, comprising an interest model learning module 61, an interest object extraction module 62, and an advertisement retrieval module 63. The interest model learning module 61 is configured to obtain a plurality of user interest models through multi-task ranking learning; the interest object extraction module 62 is configured to extract objects of interest from a video according to the user interest models; and the advertisement retrieval module 63 is configured to extract a plurality of visual features of the objects of interest and retrieve relevant advertisement information from the advertisement library according to the visual features.
进一歩地, 所述兴趣模型学习模块 61还可以包括以下子模块:
特征提取子模块 611, 用于获取训练数据中的各种场景, 提取所述各场景 中各宏块的底层视觉特征;
初始化子模块 612, 用于根据所述底层视觉特征, 随机将用户和场景分别 组合为多个类别, 并为每类用户在每类场景上初始化一个兴趣模型; 优化子模块 613,用于使用初始化后的兴趣模型在训练集上建立损失函数, 作为最优化目标, 通过最优化算法, 最小化所述损失函数, 进而更新各个兴趣 模型参数值, 优化用户和场景的聚类划分;
结果获取子模块 614, 用于获取最终的用户和场景聚类以及多个用户兴趣 模型。
Further, the object-of-interest extraction module 62 may also comprise the following sub-modules:
a key frame detection sub-module 621, configured to receive an input video stream and detect key frames with representative content in the video stream;
a feature computation sub-module 622, configured to compute, for each key frame, the overall visual features of the scene corresponding to the key frame according to the low-level visual features of its macroblocks;
a scene classification sub-module 623, configured to assign the scene corresponding to the key frame, according to the overall visual features, to one of the scene classes obtained in the process of constructing the user interest models;
an interest computation sub-module 624, configured to compute the interest distribution map of the scene containing the key frame according to the user interest models; and
a region growing sub-module 625, configured to obtain the region with the highest interest in the interest distribution map through a region-growing algorithm, as the object of interest.
Fig. 7 is a schematic diagram of the data flow between the modules of the personalized advertisement push system based on user interest learning provided by the embodiment of the present invention, further illustrating the connections between the modules. As shown in Fig. 7:
In the user interest learning process, the predefined scene set and the user interest feedback data stream first enter the feature extraction sub-module 611. The feature extraction sub-module 611 feeds the extracted low-level visual features, together with the user interest feedback, into the initialization sub-module 612, which randomly classifies the users and scenes and initializes the initial user interest models according to the classification results, yielding preliminary random scene and user clusterings and interest models. These results are then sent to the optimization sub-module 613 for optimization through an iterative algorithm, which updates the parameters of the interest models and the user and scene clusterings until the predefined condition is met, after which the result acquisition sub-module 614 obtains the final user and scene classification results, and the corresponding user interest models, from the last iteration. In the personalized advertisement push process, the key frame detection sub-module 621 receives the input video stream, detects key frames with representative content, and outputs them to the feature computation sub-module 622, which computes the overall low-level visual features of the scene corresponding to each key frame. The computed feature data stream, together with the information provided by the result acquisition sub-module 614, flows through the scene classification sub-module 623 and the interest computation sub-module 624 to generate the interest distribution map of the scene containing the key frame. The region growing sub-module 625 then outputs the object of interest, according to the interest distribution map, to the advertisement retrieval module 63, which extracts multiple visual features, searches the advertisement information library, and finally outputs a video stream carrying personalized advertisements.
The personalized advertisement push system based on user interest learning provided by this embodiment of the present invention uses a multi-task ranking learning algorithm to obtain user interest models, automatically extracts regions of interest in the video for different users on this basis, and then uses the regions of interest to associate advertisement information. Advertisements provided in this way are not only closely related to the video content but also satisfy user preferences to a certain extent, achieving personalized advertisement push.
Fig. 8 shows advertisement push results generated by the method and system provided by the embodiments of the present invention. Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be implemented by hardware related to program instructions; the aforementioned program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments; and the aforementioned storage medium includes various media capable of storing program code, such as a ROM, a RAM, a magnetic disk or an optical disc.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent substitutions for some of the technical features therein, and that such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims

1. A personalized advertisement push method based on user interest learning, characterized in that the method comprises:
obtaining multiple user interest models through multi-task ranking learning;
extracting objects of interest from a video according to the user interest models; and
extracting multiple visual features of the objects of interest, and retrieving relevant advertisement information from an advertisement library according to the visual features.
2. The personalized advertisement push method based on user interest learning according to claim 1, characterized in that the obtaining multiple user interest models through multi-task ranking learning specifically comprises:
acquiring the various scenes in the training data, and extracting the low-level visual features of each macroblock in each scene; and, according to the low-level visual features, performing user clustering and scene clustering through a multi-task ranking learning algorithm, and constructing an interest model for each user class on each scene class.
3. The personalized advertisement push method based on user interest learning according to claim 2, characterized in that the extracting the low-level visual features of each macroblock in each scene specifically comprises:
extracting the low-level visual features of each macroblock in each scene over multiple scales and multiple visual channels.
4. The personalized advertisement push method based on user interest learning according to claim 2 or 3, characterized in that the low-level visual features comprise local features and global features.
5. The personalized advertisement push method based on user interest learning according to claim 4, characterized in that the local features are obtained by computing the differences in multiple visual characteristics between a macroblock and its surrounding macroblocks, and the global features are obtained by computing the differences in multiple visual characteristics between a macroblock and the entire scene in which it is located.
6. The personalized advertisement push method based on user interest learning according to claim 2, characterized in that the performing user clustering and scene clustering through a multi-task ranking learning algorithm, and constructing an interest model for each user class on each scene class, specifically comprises:
randomly grouping the users and scenes into multiple classes, and initializing an interest model for each user class on each scene class;
building a loss function on the training set using the initialized interest models, as the optimization objective; minimizing the loss function by means of an optimization algorithm, thereby updating the parameter values of each interest model and optimizing the clustering of users and scenes; and obtaining the final user and scene clusterings and the multiple user interest models.
7. The personalized advertisement push method based on user interest learning according to claim 6, characterized in that the loss function comprises an empirical loss and a penalty loss.
8. The personalized advertisement push method based on user interest learning according to claim 7, characterized in that the empirical loss is specifically the difference between the function values, under the interest models, of the low-level visual features of each macroblock in each scene and the interest values fed back by the user for that scene.
9. The personalized advertisement push method based on user interest learning according to claim 7, characterized in that the penalty loss comprises a scene clustering penalty, a user clustering penalty, a model difference penalty and a model complexity penalty.
10. The personalized advertisement push method based on user interest learning according to claim 1, characterized in that the extracting objects of interest from a video according to the user interest models specifically comprises: receiving an input video stream, and detecting key frames with representative content in the video stream; computing, for each key frame, the overall visual features of the scene corresponding to the key frame according to the low-level visual features of its macroblocks;
assigning the scene corresponding to the key frame, according to the overall visual features, to one of the scene classes obtained in the process of constructing the user interest models;
computing the interest distribution map of the scene containing the key frame according to the obtained user interest models; and
extracting the object with the highest interest from the interest distribution map.
11. The personalized advertisement push method based on user interest learning according to claim 10, characterized in that the assigning the scene corresponding to the key frame to one of the scene classes obtained in the process of constructing the user interest models specifically comprises:
computing the overall features of the scene from the low-level visual features of the macroblocks in the scene, and classifying the scene according to the overall features.
12. The personalized advertisement push method based on user interest learning according to claim 10, characterized in that the computing the interest distribution map of the scene containing the key frame according to the obtained user interest models specifically comprises:
inferring, using the obtained user interest models, the ranking of the candidate blocks in the scene containing the key frame, and mapping the ranking to the degree of interest of each candidate block, thereby obtaining the interest distribution map of the scene.
13. The personalized advertisement push method based on user interest learning according to claim 10, characterized in that the extracting the object with the highest interest from the interest distribution map specifically comprises: determining the macroblock with the highest interest in the interest distribution map; and
obtaining a region of high interest from that macroblock using a region-growing technique, and taking it as the object of interest.
14. A personalized advertisement push system based on user interest learning, characterized in that the system comprises:
an interest model learning module, configured to obtain multiple user interest models through multi-task ranking learning; an object-of-interest extraction module, configured to extract objects of interest from a video according to the user interest models; and an advertisement retrieval module, configured to extract multiple visual features of the objects of interest and retrieve relevant advertisement information from an advertisement library according to the visual features.
15. The personalized advertisement push system based on user interest learning according to claim 14, characterized in that the interest model learning module comprises:
a feature extraction sub-module, configured to acquire the various scenes in the training data and extract the low-level visual features of each macroblock in each scene;
an initialization sub-module, configured to randomly group the users and scenes into multiple classes according to the low-level visual features, and to initialize an interest model for each user class on each scene class;
an optimization sub-module, configured to build a loss function on the training set using the initialized interest models as the optimization objective, minimize the loss function by means of an optimization algorithm, and thereby update the parameter values of each interest model and optimize the clustering of users and scenes; and
a result acquisition sub-module, configured to obtain the final user and scene clusterings and the multiple user interest models.
16. The personalized advertisement push system based on user interest learning according to claim 14, characterized in that the object-of-interest extraction module comprises:
a key frame detection sub-module, configured to receive an input video stream and detect key frames with representative content in the video stream;
a feature computation sub-module, configured to compute, for each key frame, the overall visual features of the scene corresponding to the key frame according to the low-level visual features of its macroblocks;
a scene classification sub-module, configured to assign the scene corresponding to the key frame, according to the overall visual features, to one of the scene classes obtained in the process of constructing the user interest models; an interest computation sub-module, configured to compute the interest distribution map of the scene containing the key frame according to the user interest models; and
a region growing sub-module, configured to obtain the region with the highest interest in the interest distribution map through a region-growing algorithm, as the object of interest.
PCT/CN2010/079245 2010-11-29 2010-11-29 基于用户兴趣学习的个性化广告推送方法与系统 WO2012071696A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
PCT/CN2010/079245 WO2012071696A1 (zh) 2010-11-29 2010-11-29 基于用户兴趣学习的个性化广告推送方法与系统
CN2010800065025A CN102334118B (zh) 2010-11-29 2010-11-29 基于用户兴趣学习的个性化广告推送方法与系统
EP10860233.5A EP2568429A4 (en) 2010-11-29 2010-11-29 METHOD AND SYSTEM FOR PUSHING INDIVIDUAL ADVERTISING BASED ON THE LEARNING OF USER INTERESTS
US13/709,795 US8750602B2 (en) 2010-11-29 2012-12-10 Method and system for personalized advertisement push based on user interest learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2010/079245 WO2012071696A1 (zh) 2010-11-29 2010-11-29 基于用户兴趣学习的个性化广告推送方法与系统

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/709,795 Continuation US8750602B2 (en) 2010-11-29 2012-12-10 Method and system for personalized advertisement push based on user interest learning

Publications (1)

Publication Number Publication Date
WO2012071696A1 true WO2012071696A1 (zh) 2012-06-07

Family

ID=45484999

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2010/079245 WO2012071696A1 (zh) 2010-11-29 2010-11-29 基于用户兴趣学习的个性化广告推送方法与系统

Country Status (4)

Country Link
US (1) US8750602B2 (zh)
EP (1) EP2568429A4 (zh)
CN (1) CN102334118B (zh)
WO (1) WO2012071696A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109597940A (zh) * 2018-12-06 2019-04-09 上海哔哩哔哩科技有限公司 基于商业兴趣的目标人群确定及信息推送方法和系统
CN110163649A (zh) * 2019-04-03 2019-08-23 平安科技(深圳)有限公司 广告推送方法、装置、电子设备及存储介质
CN111881340A (zh) * 2020-06-11 2020-11-03 国家电网有限公司 一种基于数字化审计平台的智能推送方法、装置及设备
CN115687790A (zh) * 2022-12-01 2023-02-03 松原市逐贵网络科技有限公司 基于大数据的广告推送方法、系统及云平台

Families Citing this family (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102232220B (zh) * 2010-10-29 2014-04-30 华为技术有限公司 一种视频兴趣物体提取与关联的方法及系统
US9185470B2 (en) * 2012-05-03 2015-11-10 Nuance Communications, Inc. Remote processing of content
CN103714067B (zh) * 2012-09-29 2018-01-26 腾讯科技(深圳)有限公司 一种信息推送方法和装置
CN102946423B (zh) * 2012-10-31 2015-05-27 中国运载火箭技术研究院 一种基于分布式系统架构的数据映射推送系统及方法
CN103402124A (zh) * 2013-07-23 2013-11-20 百度在线网络技术(北京)有限公司 在用户观看视频时推送信息的方法、系统和云端服务器
CN103473339A (zh) * 2013-09-22 2013-12-25 世纪龙信息网络有限责任公司 更新信息过程中的信息获取方法和系统
CN104602046B (zh) * 2013-11-01 2019-04-23 中国移动通信集团公司 一种基于hls协议的信息发布方法、设备及系统
CN104618803B (zh) 2014-02-26 2018-05-08 腾讯科技(深圳)有限公司 信息推送方法、装置、终端及服务器
CN104915354B (zh) * 2014-03-12 2020-01-10 深圳市腾讯计算机系统有限公司 多媒体文件推送方法及装置
CN103970878A (zh) * 2014-05-15 2014-08-06 中国石油大学(北京) Svm分类器的构造方法及装置
CN104090919B (zh) * 2014-06-16 2017-04-19 华为技术有限公司 推荐广告的方法及广告推荐服务器
US10412436B2 (en) 2014-09-12 2019-09-10 At&T Mobility Ii Llc Determining viewership for personalized delivery of television content
CN104376058B (zh) * 2014-11-07 2018-04-27 华为技术有限公司 用户兴趣模型更新方法及相关装置
CN106155678A (zh) * 2015-04-28 2016-11-23 天脉聚源(北京)科技有限公司 一种用户行为预约提醒方法及系统
US10110933B2 (en) 2015-09-01 2018-10-23 International Business Machines Corporation Video file processing
CN105956888A (zh) * 2016-05-31 2016-09-21 北京创意魔方广告有限公司 广告个性化展示方法
CN107517393B (zh) * 2016-06-17 2020-04-17 阿里巴巴集团控股有限公司 一种信息推送方法、装置及系统
CN107545301B (zh) * 2016-06-23 2020-10-20 阿里巴巴集团控股有限公司 页面展示方法及装置
CN106101831B (zh) * 2016-07-15 2019-06-18 合一网络技术(北京)有限公司 视频向量化方法及装置
CN106529996A (zh) * 2016-10-24 2017-03-22 北京百度网讯科技有限公司 基于深度学习的广告展示方法和装置
CN107038213B (zh) * 2017-02-28 2021-06-15 华为技术有限公司 一种视频推荐的方法及装置
CN107483554A (zh) * 2017-07-25 2017-12-15 中天宽带技术有限公司 基于onu的网络流量进行机器学习定向广告的推送系统和方法
CN107894998B (zh) * 2017-10-24 2019-04-26 迅雷计算机(深圳)有限公司 视频推荐方法及装置
CN107977865A (zh) * 2017-12-07 2018-05-01 畅捷通信息技术股份有限公司 广告推送方法、装置、计算机设备和可读存储介质
CN109145979B (zh) * 2018-08-15 2022-06-21 上海嵩恒网络科技股份有限公司 敏感图像鉴定方法及终端系统
CN109918568B (zh) * 2019-03-13 2021-06-01 百度在线网络技术(北京)有限公司 个性化学习方法、装置、电子设备及存储介质
CN110310148A (zh) * 2019-06-05 2019-10-08 上海易点时空网络有限公司 基于大数据和机器学习的广告精准投放方法
CN110311839B (zh) * 2019-07-30 2021-07-06 秒针信息技术有限公司 推送信息追踪方法、装置、服务器、终端及存储介质
CN110428012A (zh) * 2019-08-06 2019-11-08 深圳大学 脑网络模型建立方法、脑图像分类方法、装置及电子设备
CN112711945B (zh) * 2019-10-25 2022-08-19 上海哔哩哔哩科技有限公司 广告召回方法和系统
CN111523007B (zh) * 2020-04-27 2023-12-26 北京百度网讯科技有限公司 用户感兴趣信息确定方法、装置、设备以及存储介质
CN112312203B (zh) * 2020-08-25 2023-04-07 北京沃东天骏信息技术有限公司 视频播放方法、装置和存储介质
CN112581195B (zh) * 2021-02-25 2021-05-28 武汉卓尔数字传媒科技有限公司 一种广告推送方法、装置和电子设备
CN113127763A (zh) * 2021-04-29 2021-07-16 深圳市艾酷通信软件有限公司 一种信息显示方法和装置
US20230077795A1 (en) * 2021-09-15 2023-03-16 International Business Machines Corporation Real time feature analysis and ingesting correlated advertisements in a video advertisement
CN114547459B (zh) * 2022-02-23 2023-10-31 深圳环金科技有限公司 一种跨境电商数据处理方法及系统
US11878707B2 (en) 2022-03-11 2024-01-23 International Business Machines Corporation Augmented reality overlay based on self-driving mode
JP7486871B1 (ja) 2024-03-25 2024-05-20 株式会社Star Ai シーン抽出システム、シーン抽出方法及びシーン抽出プログラム

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090076909A1 (en) * 2007-05-11 2009-03-19 Dimitry Ioffe Video channel ad system and method
CN101489139A (zh) * 2009-01-21 2009-07-22 北京大学 基于视觉显著度的视频广告关联方法与系统
CN101621636A (zh) * 2008-06-30 2010-01-06 北京大学 基于视觉注意力模型的广告标志插入和变换方法及系统

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7089194B1 (en) * 1999-06-17 2006-08-08 International Business Machines Corporation Method and apparatus for providing reduced cost online service and adaptive targeting of advertisements
US20030023598A1 (en) * 2001-07-26 2003-01-30 International Business Machines Corporation Dynamic composite advertisements for distribution via computer networks
US8417568B2 (en) * 2006-02-15 2013-04-09 Microsoft Corporation Generation of contextual image-containing advertisements
US20080147500A1 (en) * 2006-12-15 2008-06-19 Malcolm Slaney Serving advertisements using entertainment ratings in a collaborative-filtering system
FR2926154A1 (fr) 2008-01-08 2009-07-10 Alcatel Lucent Sas Procede de fournitures d'annonces publicitaires personnalisees.
US8281334B2 (en) * 2008-03-31 2012-10-02 Microsoft Corporation Facilitating advertisement placement over video content
US9396258B2 (en) * 2009-01-22 2016-07-19 Google Inc. Recommending video programs
CN101833552A (zh) * 2009-03-10 2010-09-15 郝瑞林 一种流媒体标记和推荐的方法
US20100312609A1 (en) * 2009-06-09 2010-12-09 Microsoft Corporation Personalizing Selection of Advertisements Utilizing Digital Image Analysis
CN101834837A (zh) * 2009-12-18 2010-09-15 北京邮电大学 基于宽带网络的旅游景区景点在线景观视频主动信息服务系统

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090076909A1 (en) * 2007-05-11 2009-03-19 Dimitry Ioffe Video channel ad system and method
CN101621636A (zh) * 2008-06-30 2010-01-06 北京大学 基于视觉注意力模型的广告标志插入和变换方法及系统
CN101489139A (zh) * 2009-01-21 2009-07-22 北京大学 基于视觉显著度的视频广告关联方法与系统

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2568429A4 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109597940A (zh) * 2018-12-06 2019-04-09 上海哔哩哔哩科技有限公司 基于商业兴趣的目标人群确定及信息推送方法和系统
CN110163649A (zh) * 2019-04-03 2019-08-23 平安科技(深圳)有限公司 广告推送方法、装置、电子设备及存储介质
CN110163649B (zh) * 2019-04-03 2023-10-17 平安科技(深圳)有限公司 广告推送方法、装置、电子设备及存储介质
CN111881340A (zh) * 2020-06-11 2020-11-03 国家电网有限公司 一种基于数字化审计平台的智能推送方法、装置及设备
CN111881340B (zh) * 2020-06-11 2024-05-10 国家电网有限公司 一种基于数字化审计平台的智能推送方法、装置及设备
CN115687790A (zh) * 2022-12-01 2023-02-03 松原市逐贵网络科技有限公司 基于大数据的广告推送方法、系统及云平台
CN115687790B (zh) * 2022-12-01 2023-07-14 成都坐联智城科技有限公司 基于大数据的广告推送方法、系统及云平台

Also Published As

Publication number Publication date
CN102334118A (zh) 2012-01-25
US8750602B2 (en) 2014-06-10
EP2568429A1 (en) 2013-03-13
US20130094756A1 (en) 2013-04-18
EP2568429A4 (en) 2013-11-27
CN102334118B (zh) 2013-08-28

Similar Documents

Publication Publication Date Title
WO2012071696A1 (zh) 基于用户兴趣学习的个性化广告推送方法与系统
US11556743B2 (en) Learning highlights using event detection
US10522186B2 (en) Apparatus, systems, and methods for integrating digital media content
US10528821B2 (en) Video segmentation techniques
Kao et al. Hierarchical aesthetic quality assessment using deep convolutional neural networks
CN106547908B (zh) 一种信息推送方法和系统
US8804999B2 (en) Video recommendation system and method thereof
WO2018166288A1 (zh) 信息呈现方法和装置
KR20230087622A (ko) 스트리밍 비디오 내의 객체를 검출하고, 필터링하고 식별하기 위한 방법 및 장치
CN106446015A (zh) 一种基于用户行为偏好的视频内容访问预测与推荐方法
US20130101209A1 (en) Method and system for extraction and association of object of interest in video
Bianco et al. Predicting image aesthetics with deep learning
Mironică et al. A modified vector of locally aggregated descriptors approach for fast video classification
CN113190709B (zh) 一种基于短视频关键帧的背景音乐推荐方法和装置
CN113766330A (zh) 基于视频生成推荐信息的方法和装置
Papadopoulos et al. Automatic summarization and annotation of videos with lack of metadata information
Saini et al. Video summarization using deep learning techniques: a detailed analysis and investigation
Ramezani et al. A novel video recommendation system based on efficient retrieval of human actions
Sahu et al. Multiscale summarization and action ranking in egocentric videos
Sowmyayani et al. Content based video retrieval system using two stream convolutional neural network
Ibrahim et al. VideoToVecs: a new video representation based on deep learning techniques for video classification and clustering
CN116684528A (zh) 一种视频彩铃不同视角的推荐方法
Lu et al. Temporal segmentation and assignment of successive actions in a long-term video
Hoang Multiple classifier-based spatiotemporal features for living activity prediction
Li Dance art scene classification based on convolutional neural networks

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 201080006502.5

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10860233

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2010860233

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE