WO2012071696A1 - Personalized advertisement push method and system based on user interest learning - Google Patents

Personalized advertisement push method and system based on user interest learning

Info

Publication number
WO2012071696A1
WO2012071696A1 PCT/CN2010/079245 CN2010079245W WO2012071696A1 WO 2012071696 A1 WO2012071696 A1 WO 2012071696A1 CN 2010079245 W CN2010079245 W CN 2010079245W WO 2012071696 A1 WO2012071696 A1 WO 2012071696A1
Authority
WO
WIPO (PCT)
Prior art keywords
interest
user
scene
learning
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2010/079245
Other languages
English (en)
French (fr)
Chinese (zh)
Inventor
李甲
高云超
余昊男
张军
田永鸿
严军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University
Huawei Technologies Co Ltd
Original Assignee
Peking University
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University, Huawei Technologies Co Ltd filed Critical Peking University
Priority to CN2010800065025A priority Critical patent/CN102334118B/zh
Priority to PCT/CN2010/079245 priority patent/WO2012071696A1/zh
Priority to EP10860233.5A priority patent/EP2568429A4/en
Publication of WO2012071696A1 publication Critical patent/WO2012071696A1/zh
Priority to US13/709,795 priority patent/US8750602B2/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/7837Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/2431Multiple classes
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects

Definitions

  • The present invention relates to the field of image processing, and in particular to a personalized advertisement push method and system based on user interest learning. Background Art
  • Peripheral related advertisements: as shown in Figure 1(b), when a video is played, a predefined advertisement is displayed on the periphery of the video player (for example, on the web page or the player border).
  • At present, the above three methods of advertisement push have been widely used.
  • However, the effect of these advertisements is not ideal.
  • In the first method, the user often browses other webpages while the advertisement is playing, which reduces the advertising effect; the second method is less intrusive, but the advertisement often serves as the webpage background and is easily ignored; the third method affects the user's normal viewing experience to a certain extent.
  • The main problem is that the above push advertisements are generally associated only with the content and cannot satisfy the personalized interest needs of each user, so the effect achieved by the advertisements is poor. Summary of the Invention
  • The embodiments of the present invention provide a personalized advertisement push method and system based on user interest learning, to solve the problem that existing push advertisements are associated only with the content and cannot satisfy each user's personalized interest needs.
  • An embodiment of the present invention provides a personalized advertisement push method based on user interest learning, which comprises:
  • learning a plurality of user interest models through multi-task ranking learning;
  • extracting an object of interest in the video according to the user interest models; and
  • extracting a plurality of visual features of the object of interest, and retrieving relevant advertisement information in an advertisement library based on the visual features.
  • An embodiment of the present invention further provides a personalized advertisement push system based on user interest learning, which comprises:
  • an interest model learning module, configured to learn a plurality of user interest models through multi-task ranking learning;
  • an interest object extraction module, configured to extract an object of interest in the video according to the user interest models; and
  • an advertisement retrieval module, configured to extract a plurality of visual features of the object of interest and retrieve relevant advertisement information in the advertisement library based on the visual features.
  • The embodiments of the present invention obtain the user interest models through a multi-task ranking learning algorithm, automatically extract, on this basis, the regions of interest in the video for different users, and then use the regions of interest to associate advertisement information.
  • The advertisements provided in this way are not only closely related to the video content but also satisfy, to a certain extent, the individual needs of the users, thereby realizing personalized advertisement push. Brief Description of the Drawings
  • FIG. 1 is a schematic diagram of an existing advertisement pushing method
  • FIG. 2 is a schematic flowchart of a personalized advertisement pushing method based on user interest learning according to an embodiment of the present invention
  • FIG. 3 is a schematic flowchart of a user interest model learning process according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of a process for extracting a video interest object according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of the interest degree distribution of a key frame acquired in the video interest object extraction process according to an embodiment of the present invention;
  • FIG. 6 is a schematic diagram of a personalized advertisement push system based on user interest learning according to an embodiment of the present invention.
  • FIG. 7 is a schematic diagram of data flow based on the system shown in FIG. 6 according to an embodiment of the present invention
  • FIG. 8 is a schematic diagram of an advertisement push effect generated by the method and system according to an embodiment of the present invention. Detailed Description of the Embodiments
  • In an embodiment of the present invention, the system first uses a multi-task ranking learning algorithm to train the user interest models from a pre-collected set of scenes and the users' feedback on those scenes, obtaining the scene classification and the user classification at the same time. This stage includes extracting the underlying visual features of the scenes, initially classifying the scenes and users at random, and calculating the interest model parameters. Then, while a video is being played, the system detects key frames, classifies the scene corresponding to each key frame into one of the scene categories obtained during model learning, and computes an interest degree map using the interest model of each user.
  • A region of high interest is generated from the interest degree map as the object of interest, relevant advertisements are retrieved from the advertisement information library according to its various features, and finally a video stream carrying personalized advertisements is output.
  • The features of the object of interest reflect its visual characteristics at different levels, including but not limited to color, structure, contour, and texture features.
  • In the embodiment of the present invention, the HSV color histogram, Gabor histogram, SIFT histogram, and visual pattern features of the object of interest are extracted.
  • Meanwhile, the retrieval method is a fast matching algorithm, in which different matching strategies are adopted for different features.
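  • As an illustration only, the following Python sketch shows what such feature-based retrieval could look like: it computes an HSV color histogram for the extracted object of interest and ranks advertisements in a library by histogram similarity. The use of OpenCV, the cosine-similarity matching, and the ad_library structure are assumptions for this sketch, not details taken from the original disclosure.

```python
import cv2
import numpy as np

def hsv_histogram(image_bgr, bins=(8, 8, 8)):
    """Compute a normalized HSV color histogram as a flat feature vector."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, list(bins),
                        [0, 180, 0, 256, 0, 256])
    cv2.normalize(hist, hist)
    return hist.flatten()

def retrieve_ads(object_region_bgr, ad_library, top_k=5):
    """Rank advertisements by similarity of their HSV histogram to the
    histogram of the object of interest.

    ad_library: list of (ad_id, ad_image_bgr) pairs (a hypothetical layout).
    """
    query = hsv_histogram(object_region_bgr)
    scored = []
    for ad_id, ad_image in ad_library:
        candidate = hsv_histogram(ad_image)
        # Cosine similarity of the two histograms as a fast matching score.
        score = float(np.dot(query, candidate) /
                      (np.linalg.norm(query) * np.linalg.norm(candidate) + 1e-12))
        scored.append((score, ad_id))
    scored.sort(reverse=True)
    return [ad_id for _, ad_id in scored[:top_k]]
```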
  • FIG. 2 is a schematic flowchart of a personalized advertisement pushing method based on user interest learning according to an embodiment of the present invention. As shown in FIG. 2, this embodiment may include the following steps:
  • Step 201, interest model learning: multiple user interest models are obtained through a multi-task ranking learning algorithm.
  • the step further includes:
  • Step 2011: obtain various scenes and the users' feedback on each scene.
  • The scenes may cover topics of various kinds, such as advertisements, news, cartoons, and movies. Users can mark the objects that interest them in these scenes through a simple interaction. Since different users have different points of interest even in the same scene, this domain can be characterized by the scene set, the user set, and the connections (interest feedback) between them.
  • Step 2012: extract the underlying visual features of each macroblock in each scene from both local and global aspects. Specifically, each scene is divided into a set of macroblocks, the underlying visual features of each macroblock are calculated, and the underlying visual features of the entire scene are obtained by combining the macroblock features. In one embodiment, the scene is divided into macroblocks of size 16×16, and local contrast features over multiple scales and multiple visual channels are extracted as the underlying visual features of each macroblock.
  • For the k-th scene, X_k = {x_k1, x_k2, ..., x_kM} denotes the collection of feature vectors of all of its macroblocks.
  • The feature vector v_k of the entire scene and the feature vector of the object of interest can be obtained from X_k through a combination transform.
  • In this embodiment, v_k is defined as the mean and standard deviation of the feature vectors of the macroblocks of the scene, and the feature vector of the object of interest is defined as the mean and standard deviation of the feature vectors of the macroblocks within the object of interest.
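  • The following Python sketch illustrates this feature construction under simplifying assumptions: it uses plain intensity statistics per 16×16 macroblock in place of the multi-scale, multi-channel contrast features of the embodiment, and the function names are hypothetical.

```python
import numpy as np

def macroblock_features(frame_gray, block=16):
    """Split a grayscale frame into 16x16 macroblocks and compute a simple
    per-block feature vector (mean and standard deviation of intensity),
    standing in for the richer contrast features described in the text."""
    h, w = frame_gray.shape
    feats = []
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            mb = frame_gray[y:y + block, x:x + block].astype(np.float64)
            feats.append([mb.mean(), mb.std()])
    return np.asarray(feats)          # shape: (num_macroblocks, feature_dim)

def scene_feature(block_feats):
    """Scene-level feature v_k: mean and standard deviation of the macroblock
    feature vectors, concatenated. The object-of-interest feature is built the
    same way over the macroblocks inside the object."""
    return np.concatenate([block_feats.mean(axis=0), block_feats.std(axis=0)])
```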
  • Step 2013: perform a preliminary classification of the scenes and the users. The preliminary classification can be obtained by randomly partitioning the scenes and the users into clusters.
  • Another intuitive method is to classify the scenes and the users separately, according to the similarity of the scene content and the similarity of the users' interest feedback.
  • The scene content similarity is calculated from the scene feature vectors v_k obtained in step 2012, and the interest similarity of each user is calculated from the features of the objects of interest selected by that user.
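  • A minimal sketch of such a content-based preliminary classification is given below, assuming a k-means clustering from scikit-learn; the cluster counts and the shape of the user feedback features are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def initial_clusters(scene_features, user_interest_features,
                     n_scene_classes=4, n_user_classes=3, seed=0):
    """Cluster scenes by their scene feature vectors v_k and users by the
    features of the objects of interest they selected. K-means is one
    illustrative choice; a purely random partition is also allowed by the
    text as an initialization."""
    scene_labels = KMeans(n_clusters=n_scene_classes, random_state=seed,
                          n_init=10).fit_predict(np.asarray(scene_features))
    user_labels = KMeans(n_clusters=n_user_classes, random_state=seed,
                         n_init=10).fit_predict(np.asarray(user_interest_features))
    return scene_labels, user_labels
```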
  • Step 2014 Initialize the user interest model based on the results of the two classifications.
  • initializing the user interest model requires first constructing an objective function to be optimized.
  • The objective function consists of two parts; in this embodiment, it is formalized as L(W, I, J) + Ω(W, I, J), where:
  • W is the interest model parameter set;
  • I is the scene classification set;
  • J is the user classification set;
  • L(W, I, J) is the empirical loss;
  • Ω(W, I, J) is a penalty term built from prior knowledge, divided into four kinds of penalties:
  • Ω(W, I, J) = ε_s Ω_s + ε_u Ω_u + ε_d Ω_d + ε_c Ω_c
  • Ω_s is the scene clustering penalty, which is computed mainly from the differences between the scene feature vectors v_k: when two scenes have similar content but are placed in different scene classes, the penalty value becomes large.
  • In this embodiment, the scene clustering penalty is built from hinge terms of the cosine similarity cos(v_k1, v_k2) between the feature vectors of scenes assigned to different scene classes, where [x]_+ denotes max(0, x).
  • Ω_u is the user clustering penalty, which is computed mainly from the differences between the features of the objects of interest selected by the users: when users with the same preferences are placed in different user classes, the penalty value becomes large.
  • In this embodiment, the user clustering penalty is defined analogously over pairs of users assigned to different user classes, where T_s is a predefined threshold and M_i is a constant used to normalize the term into the range [0, 1].
  • Ω_d is the model difference penalty, which is computed mainly from the prediction losses of the different models under different conditions; it encourages the user models of different classes to give different predictions. This is because even for user models of the same class, the predictions differ under different scene classes.
  • In this embodiment, the model difference penalty is defined accordingly.
  • Ω_c is the model complexity penalty, which is obtained by summing the norms of the model parameters.
  • During the model update process, this penalty can be used to control the number of user and scene classes, which prevents the model from becoming overly complex.
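  • To make the structure of the objective concrete, the following Python sketch combines an empirical loss with the four weighted penalty terms and shows one possible (assumed) form of the scene clustering penalty, namely a hinge on the cosine similarity of cross-class scene pairs with a threshold t_s; the weights and the threshold are hypothetical values, not taken from the disclosure.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def scene_clustering_penalty(scene_features, scene_labels, t_s=0.8):
    """Illustrative Omega_s: penalize pairs of scenes whose content is similar
    (cosine similarity above t_s) but which fall into different scene classes,
    using the hinge [x]_+ = max(0, x)."""
    penalty, n = 0.0, len(scene_features)
    for i in range(n):
        for j in range(i + 1, n):
            if scene_labels[i] != scene_labels[j]:
                penalty += max(0.0, cosine(scene_features[i], scene_features[j]) - t_s)
    return penalty

def objective(empirical_loss, omega_s, omega_u, omega_d, omega_c,
              eps_s=1.0, eps_u=1.0, eps_d=1.0, eps_c=0.1):
    """Objective of the embodiment: empirical loss L(W, I, J) plus the
    weighted sum of the four penalty terms (weights are hypothetical)."""
    return (empirical_loss + eps_s * omega_s + eps_u * omega_u
            + eps_d * omega_d + eps_c * omega_c)
```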
  • Step 2015: update the scene classification and the user classification in turn, based on the obtained user interest models.
  • Step 2016: train again to obtain new user interest models based on the new scene and user classification results.
  • Step 2017: determine whether the predefined number of iterations has been reached or the objective function has fallen below a certain value. If yes, proceed to step 2018; otherwise, return to step 2015.
  • Step 2018: the user interest models obtained in the last iteration, together with the scene and user classes, are taken as the final user interest models and the final scene and user classification.
  • The basis for calculating the interest models in step 2014 is to minimize the empirical loss.
  • The update of the scene and user classifications in step 2015 is performed based on the obtained user interest models:
  • the scene cluster update aims to reduce the model prediction error and to improve the content similarity between scenes within a cluster,
  • while the user cluster update is based on the learned interest models and improves the similarity of preferences among users within a cluster.
  • New user interest models are then calculated from the two newly obtained classifications, and the iterative update steps are repeated until the defined conditions are met (the predefined number of iterations is reached, or the objective function value falls below a certain level).
  • After step 2018, the obtained scene and user classifications and the user interest models are used as the basis for extracting objects of interest in subsequent tasks.
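  • The alternating optimization described in steps 2014-2018 can be sketched as follows; the four callables are placeholders for the concrete routines described above, so this is a structural sketch rather than the actual implementation.

```python
def learn_interest_models(scenes, feedback, scene_labels, user_labels,
                          train_models, update_scene_clusters,
                          update_user_clusters, evaluate_objective,
                          max_iters=20, tol=1e-3):
    """Alternating optimization over interest models and scene/user classes.
    scene_labels and user_labels carry the preliminary classification from
    step 2013; the callables stand in for the routines in the text."""
    # Step 2014: initialize the interest models from the preliminary classification.
    models = train_models(scenes, feedback, scene_labels, user_labels)
    prev_obj = float("inf")
    for _ in range(max_iters):
        # Step 2015: update scene and user classifications given the current models.
        scene_labels = update_scene_clusters(scenes, models, scene_labels)
        user_labels = update_user_clusters(feedback, models, user_labels)
        # Step 2016: retrain the interest models on the new classifications.
        models = train_models(scenes, feedback, scene_labels, user_labels)
        # Step 2017: stop when the objective stops improving or max_iters is hit.
        obj = evaluate_objective(models, scene_labels, user_labels)
        if prev_obj - obj < tol:
            break
        prev_obj = obj
    # Step 2018: the last iterate gives the final models and classifications.
    return models, scene_labels, user_labels
```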
  • Step 202 Interest object extraction: According to the user interest model, the object of interest is extracted in the video. Wherein, as shown in FIG. 4, the step further includes:
  • Step 2021: detect a representative key frame in the video stream as the key scene. Specifically, the similarity between all frames in a video shot is computed, and the frame with the highest similarity to the other frames is taken as the representative key frame.
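  • A minimal sketch of this key-frame selection is shown below, assuming equally sized grayscale frames and a simple normalized-correlation similarity; the patent text does not fix a particular similarity measure.

```python
import numpy as np

def detect_key_frame(frames):
    """Pick the frame of a shot that is most similar to all other frames.

    frames: list of equally sized grayscale frames (2-D numpy arrays)."""
    vecs = np.stack([f.astype(np.float64).ravel() for f in frames])
    vecs -= vecs.mean(axis=1, keepdims=True)
    vecs /= np.linalg.norm(vecs, axis=1, keepdims=True) + 1e-12
    sim = vecs @ vecs.T                  # pairwise normalized correlation
    total_sim = sim.sum(axis=1)          # similarity of each frame to all others
    return int(np.argmax(total_sim))     # index of the representative key frame
```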
  • Step 2022: extract the underlying visual features of each macroblock of the current scene and calculate the overall underlying visual feature of the scene. Specifically, using the same underlying visual features as in the interest model learning process, the underlying visual features of each macroblock of the current scene are extracted first, and the overall underlying visual feature of the scene is then calculated. In this embodiment, the mean and standard deviation of the macroblock features are taken as the overall feature of the scene.
  • Step 2023: classify the scene according to the overall underlying visual feature. Specifically, the overall underlying visual feature obtained in step 2022 is used as the basis for classifying the scene, and the closest one of the known scene classes is selected. Preferably, a support vector machine can be trained to perform this classification.
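  • As one way to realize this preferred embodiment, a support vector machine from scikit-learn could be trained on the scene features and scene classes obtained during interest model learning; the kernel and parameters below are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def train_scene_classifier(scene_features, scene_labels):
    """Train an SVM on scene-level features and the scene classes obtained
    during interest model learning."""
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(np.asarray(scene_features), np.asarray(scene_labels))
    return clf

def classify_scene(clf, scene_feature):
    """Assign the current key-frame scene to the closest known scene class."""
    return int(clf.predict(np.asarray(scene_feature).reshape(1, -1))[0])
```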
  • After the scene class is determined, the macroblocks of the scene can be ranked by degree of interest using the known user interest model.
  • Step 2024: rank the interest levels of the macroblocks of the scene according to the user interest model.
  • Step 2025 Map the ranking result to the degree of interest of each candidate block to obtain a scene interest degree distribution.
  • Specifically, the ranking results from step 2024 are mapped into a range of values that facilitates representation of the interest map, such as decimal values in [0, 1].
  • In this embodiment, a fourth-power function with range [0, 1] is used to perform this mapping.
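  • The exact fourth-power function is not reproduced here; the sketch below uses one plausible monotone choice that maps the best-ranked macroblock to 1 and the worst to 0, purely as an illustration.

```python
import numpy as np

def ranks_to_interest(ranks):
    """Map macroblock interest ranks (0 = most interesting) to interest
    degrees in [0, 1] with a fourth-power decay. The particular quartic is an
    assumption; the embodiment only requires a [0, 1]-valued mapping."""
    ranks = np.asarray(ranks, dtype=np.float64)
    n = len(ranks)
    if n <= 1:
        return np.ones_like(ranks)
    return (1.0 - ranks / (n - 1)) ** 4
```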
  • Step 2026: select the candidate block with the highest global interest degree.
  • Step 2027: use a region growing algorithm to generate a region with a high degree of interest.
  • It should be noted that the algorithm used to generate the high-interest region in this step is not limited to region growing; other algorithms may also be used.
  • Step 2028: obtain the object of interest from the region of interest.
  • In this way, the objects of interest to the user are extracted from the video.
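  • Steps 2026 and 2027 can be illustrated with the following sketch, which grows a 4-connected region on the macroblock interest-degree map starting from the most interesting block; the threshold and connectivity are illustrative choices rather than parameters from the disclosure.

```python
import numpy as np
from collections import deque

def grow_interest_region(interest_map, threshold=0.5):
    """Grow a region of high interest on the per-macroblock interest map.

    interest_map: 2-D array of interest degrees in [0, 1]. Growth starts at
    the most interesting macroblock and adds 4-connected neighbours whose
    interest degree exceeds the threshold. Returns a boolean mask marking
    the region that serves as the object of interest."""
    h, w = interest_map.shape
    seed = np.unravel_index(np.argmax(interest_map), interest_map.shape)
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx] \
                    and interest_map[ny, nx] >= threshold:
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask
```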
  • The personalized advertisement push method based on user interest learning provided by the embodiment of the present invention obtains the user interest models through a multi-task ranking learning algorithm, automatically extracts the regions of interest in the video for different users, and then uses the regions of interest to associate advertisement information.
  • The advertisements provided in this way are not only closely related to the video content but also satisfy the users' preferences to a certain extent, thereby realizing personalized advertisement push.
  • The embodiment of the present invention further provides a personalized advertisement push system based on user interest learning, including an interest model learning module 61, an interest object extraction module 62, and an advertisement retrieval module 63.
  • The interest model learning module 61 is configured to learn a plurality of user interest models through multi-task ranking learning; the interest object extraction module 62 is configured to extract the object of interest in the video according to the user interest models; and the advertisement retrieval module 63 is configured to extract a plurality of visual features of the object of interest and retrieve relevant advertisement information in the advertisement library according to the visual features.
  • the interest model learning module 61 may further include the following sub-modules:
  • a feature extraction sub-module 611, configured to acquire the various scenes in the training data and extract the underlying visual features of each macroblock in each scene;
  • an initialization sub-module 612, configured to randomly divide the users and scenes into multiple classes according to the underlying visual features, and to initialize an interest model for each class of users on each class of scenes;
  • an optimization sub-module 613, configured to establish a loss function on the training set with the initialized interest models as the optimization target, minimize the loss function through an optimization algorithm, and then update the parameter values of each interest model and optimize the clustering division of the users and scenes; and
  • a result acquisition sub-module 614, configured to obtain the final user and scene clusters and the plurality of user interest models.
  • the object of interest extraction module 62 may further include the following sub-modules:
  • a key frame detection sub-module 621 configured to receive an input video stream, and detect a representative key frame in the video stream
  • a feature calculation sub-module 622 configured to calculate, according to an underlying visual feature of the macroblock, an overall visual feature of the scene corresponding to the key frame for each key frame;
  • a scene classification sub-module 623, configured to classify, according to the overall visual feature, the scene corresponding to the key frame into one of the scene classes divided during the process of constructing the user interest models;
  • the interest degree calculation sub-module 624 is configured to calculate an interest degree distribution map of the scene where the key frame is located according to the user interest model;
  • the region growing sub-module 625 is configured to obtain the region of highest interest on the interest degree distribution map, using a region growing algorithm, as the object of interest.
  • FIG. 7 is a schematic diagram of the data flow between the modules of the personalized advertisement push system based on user interest learning according to an embodiment of the present invention; it further illustrates the connection relationships among the modules of the system provided by the embodiment of the present invention. As shown in FIG. 7:
  • The predefined scene set and the user interest feedback data stream first enter the feature extraction sub-module 611, which passes the extracted underlying visual features and the user interest feedback to the initialization sub-module 612.
  • The initialization sub-module 612 randomly classifies the users and scenes and initializes the initial user interest models according to the classification result, obtaining preliminary random scene and user classifications and interest models, and then sends these results to the optimization sub-module 613, which refines them through an iterative algorithm.
  • The result acquisition sub-module 614 obtains the final user classification result, the scene classification result, and the corresponding user interest models from the last iteration. In the personalized advertisement push process, the key frame detection sub-module 621 receives the input video stream, detects key frames whose content is representative, and outputs the scene corresponding to each key frame to the feature calculation sub-module 622.
  • The computed feature data stream, together with the information provided by the result acquisition sub-module 614, flows through the scene classification sub-module 623 and the interest degree calculation sub-module 624 in turn to generate the interest degree distribution map of the scene in which the key frame is located; the region growing sub-module 625 then obtains the object of interest from the interest degree distribution map and outputs it to the advertisement retrieval module 63, which extracts a plurality of visual features, retrieves the advertisement information library, and finally outputs the video stream with personalized advertisements.
  • The personalized advertisement push system based on user interest learning provided by the embodiment of the present invention obtains the user interest models through a multi-task ranking learning algorithm, automatically extracts the regions of interest in the video for different users, and then uses the regions of interest to associate advertisement information.
  • The advertisements provided in this way are not only closely related to the video content but also satisfy the users' preferences to a certain extent, thereby realizing personalized advertisement push.
  • the advertisement push result generated by the method and system provided by the embodiment of the present invention is as shown in FIG. 8.
  • Persons skilled in the art can understand that all or part of the steps of the above method embodiments may be implemented by hardware following program instructions. The program may be stored in a computer readable storage medium and, when executed, performs the steps of the foregoing method embodiments. The storage medium includes any medium that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Business, Economics & Management (AREA)
  • Library & Information Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Finance (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Image Analysis (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
PCT/CN2010/079245 2010-11-29 2010-11-29 基于用户兴趣学习的个性化广告推送方法与系统 Ceased WO2012071696A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN2010800065025A CN102334118B (zh) 2010-11-29 2010-11-29 基于用户兴趣学习的个性化广告推送方法与系统
PCT/CN2010/079245 WO2012071696A1 (zh) 2010-11-29 2010-11-29 基于用户兴趣学习的个性化广告推送方法与系统
EP10860233.5A EP2568429A4 (en) 2010-11-29 2010-11-29 METHOD AND SYSTEM FOR INDIVIDUAL ADVERTISING ACCORDING TO DETERMINED USER INTEREST
US13/709,795 US8750602B2 (en) 2010-11-29 2012-12-10 Method and system for personalized advertisement push based on user interest learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2010/079245 WO2012071696A1 (zh) 2010-11-29 2010-11-29 基于用户兴趣学习的个性化广告推送方法与系统

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/709,795 Continuation US8750602B2 (en) 2010-11-29 2012-12-10 Method and system for personalized advertisement push based on user interest learning

Publications (1)

Publication Number Publication Date
WO2012071696A1 true WO2012071696A1 (zh) 2012-06-07

Family

ID=45484999

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2010/079245 Ceased WO2012071696A1 (zh) 2010-11-29 2010-11-29 基于用户兴趣学习的个性化广告推送方法与系统

Country Status (4)

Country Link
US (1) US8750602B2 (en:Method)
EP (1) EP2568429A4 (en:Method)
CN (1) CN102334118B (en:Method)
WO (1) WO2012071696A1 (en:Method)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109597940A (zh) * 2018-12-06 2019-04-09 上海哔哩哔哩科技有限公司 基于商业兴趣的目标人群确定及信息推送方法和系统
CN110163649A (zh) * 2019-04-03 2019-08-23 平安科技(深圳)有限公司 广告推送方法、装置、电子设备及存储介质
CN111881340A (zh) * 2020-06-11 2020-11-03 国家电网有限公司 一种基于数字化审计平台的智能推送方法、装置及设备
CN115687790A (zh) * 2022-12-01 2023-02-03 松原市逐贵网络科技有限公司 基于大数据的广告推送方法、系统及云平台
CN118134566A (zh) * 2024-04-12 2024-06-04 深圳市创致联创科技有限公司 基于信息流刷新数据的广告定向推送方法、系统及介质

Families Citing this family (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2587826A4 (en) * 2010-10-29 2013-08-07 Huawei Tech Co Ltd EXTRACTIOM, ASSOCIATED PROCESS AND SYSTEM FOR CERTAIN OBJECTS OF A VIDEO
US9185470B2 (en) * 2012-05-03 2015-11-10 Nuance Communications, Inc. Remote processing of content
CN103714067B (zh) * 2012-09-29 2018-01-26 腾讯科技(深圳)有限公司 一种信息推送方法和装置
CN102946423B (zh) * 2012-10-31 2015-05-27 中国运载火箭技术研究院 一种基于分布式系统架构的数据映射推送系统及方法
CN103402124A (zh) * 2013-07-23 2013-11-20 百度在线网络技术(北京)有限公司 在用户观看视频时推送信息的方法、系统和云端服务器
CN103473339A (zh) * 2013-09-22 2013-12-25 世纪龙信息网络有限责任公司 更新信息过程中的信息获取方法和系统
CN104602046B (zh) * 2013-11-01 2019-04-23 中国移动通信集团公司 一种基于hls协议的信息发布方法、设备及系统
CN104618803B (zh) 2014-02-26 2018-05-08 腾讯科技(深圳)有限公司 信息推送方法、装置、终端及服务器
CN104915354B (zh) * 2014-03-12 2020-01-10 深圳市腾讯计算机系统有限公司 多媒体文件推送方法及装置
CN103970878A (zh) * 2014-05-15 2014-08-06 中国石油大学(北京) Svm分类器的构造方法及装置
CN104090919B (zh) * 2014-06-16 2017-04-19 华为技术有限公司 推荐广告的方法及广告推荐服务器
US10412436B2 (en) 2014-09-12 2019-09-10 At&T Mobility Ii Llc Determining viewership for personalized delivery of television content
CN104376058B (zh) * 2014-11-07 2018-04-27 华为技术有限公司 用户兴趣模型更新方法及相关装置
CN106155678A (zh) * 2015-04-28 2016-11-23 天脉聚源(北京)科技有限公司 一种用户行为预约提醒方法及系统
US10110933B2 (en) 2015-09-01 2018-10-23 International Business Machines Corporation Video file processing
CN105956888A (zh) * 2016-05-31 2016-09-21 北京创意魔方广告有限公司 广告个性化展示方法
CN107517393B (zh) * 2016-06-17 2020-04-17 阿里巴巴集团控股有限公司 一种信息推送方法、装置及系统
CN107545301B (zh) * 2016-06-23 2020-10-20 阿里巴巴集团控股有限公司 页面展示方法及装置
CN106101831B (zh) * 2016-07-15 2019-06-18 合一网络技术(北京)有限公司 视频向量化方法及装置
CN106529996A (zh) * 2016-10-24 2017-03-22 北京百度网讯科技有限公司 基于深度学习的广告展示方法和装置
CN107038213B (zh) * 2017-02-28 2021-06-15 华为技术有限公司 一种视频推荐的方法及装置
CN107483554A (zh) * 2017-07-25 2017-12-15 中天宽带技术有限公司 基于onu的网络流量进行机器学习定向广告的推送系统和方法
CN107894998B (zh) * 2017-10-24 2019-04-26 迅雷计算机(深圳)有限公司 视频推荐方法及装置
CN107977865A (zh) * 2017-12-07 2018-05-01 畅捷通信息技术股份有限公司 广告推送方法、装置、计算机设备和可读存储介质
CN109145979B (zh) * 2018-08-15 2022-06-21 上海嵩恒网络科技股份有限公司 敏感图像鉴定方法及终端系统
CN109918568B (zh) * 2019-03-13 2021-06-01 百度在线网络技术(北京)有限公司 个性化学习方法、装置、电子设备及存储介质
CN110310148A (zh) * 2019-06-05 2019-10-08 上海易点时空网络有限公司 基于大数据和机器学习的广告精准投放方法
CN110311839B (zh) * 2019-07-30 2021-07-06 秒针信息技术有限公司 推送信息追踪方法、装置、服务器、终端及存储介质
CN110428012A (zh) * 2019-08-06 2019-11-08 深圳大学 脑网络模型建立方法、脑图像分类方法、装置及电子设备
CN112711945B (zh) * 2019-10-25 2022-08-19 上海哔哩哔哩科技有限公司 广告召回方法和系统
CN111523007B (zh) * 2020-04-27 2023-12-26 北京百度网讯科技有限公司 用户感兴趣信息确定方法、装置、设备以及存储介质
CN112312203B (zh) * 2020-08-25 2023-04-07 北京沃东天骏信息技术有限公司 视频播放方法、装置和存储介质
CN114926192A (zh) * 2021-02-01 2022-08-19 腾讯科技(深圳)有限公司 一种信息处理方法、装置及计算机可读存储介质
CN112581195B (zh) * 2021-02-25 2021-05-28 武汉卓尔数字传媒科技有限公司 一种广告推送方法、装置和电子设备
CN113127763A (zh) * 2021-04-29 2021-07-16 深圳市艾酷通信软件有限公司 一种信息显示方法和装置
US12244908B2 (en) * 2021-09-15 2025-03-04 International Business Machines Corporation Real time feature analysis and ingesting correlated advertisements in a video advertisement
CN114547459B (zh) * 2022-02-23 2023-10-31 深圳环金科技有限公司 一种跨境电商数据处理方法及系统
US11878707B2 (en) 2022-03-11 2024-01-23 International Business Machines Corporation Augmented reality overlay based on self-driving mode
JP7486871B1 (ja) 2024-03-25 2024-05-20 株式会社Star Ai シーン抽出システム、シーン抽出方法及びシーン抽出プログラム
CN119624548B (zh) * 2025-02-13 2025-09-02 云袭网络技术河北有限公司 一种基于人工智能的广告推送方法、设备及介质
CN120125299B (zh) * 2025-03-13 2025-11-21 广州小飞信息科技有限公司 一种基于多模态数据的用户兴趣预测方法及装置

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090076909A1 (en) * 2007-05-11 2009-03-19 Dimitry Ioffe Video channel ad system and method
CN101489139A (zh) * 2009-01-21 2009-07-22 北京大学 基于视觉显著度的视频广告关联方法与系统
CN101621636A (zh) * 2008-06-30 2010-01-06 北京大学 基于视觉注意力模型的广告标志插入和变换方法及系统

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7089194B1 (en) * 1999-06-17 2006-08-08 International Business Machines Corporation Method and apparatus for providing reduced cost online service and adaptive targeting of advertisements
US20030023598A1 (en) * 2001-07-26 2003-01-30 International Business Machines Corporation Dynamic composite advertisements for distribution via computer networks
US8417568B2 (en) * 2006-02-15 2013-04-09 Microsoft Corporation Generation of contextual image-containing advertisements
US20080147500A1 (en) * 2006-12-15 2008-06-19 Malcolm Slaney Serving advertisements using entertainment ratings in a collaborative-filtering system
FR2926154A1 (fr) 2008-01-08 2009-07-10 Alcatel Lucent Sas Procede de fournitures d'annonces publicitaires personnalisees.
US8281334B2 (en) * 2008-03-31 2012-10-02 Microsoft Corporation Facilitating advertisement placement over video content
US9396258B2 (en) * 2009-01-22 2016-07-19 Google Inc. Recommending video programs
CN101833552A (zh) * 2009-03-10 2010-09-15 郝瑞林 一种流媒体标记和推荐的方法
US20100312609A1 (en) * 2009-06-09 2010-12-09 Microsoft Corporation Personalizing Selection of Advertisements Utilizing Digital Image Analysis
CN101834837A (zh) * 2009-12-18 2010-09-15 北京邮电大学 基于宽带网络的旅游景区景点在线景观视频主动信息服务系统

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090076909A1 (en) * 2007-05-11 2009-03-19 Dimitry Ioffe Video channel ad system and method
CN101621636A (zh) * 2008-06-30 2010-01-06 北京大学 基于视觉注意力模型的广告标志插入和变换方法及系统
CN101489139A (zh) * 2009-01-21 2009-07-22 北京大学 基于视觉显著度的视频广告关联方法与系统

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2568429A4 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109597940A (zh) * 2018-12-06 2019-04-09 上海哔哩哔哩科技有限公司 基于商业兴趣的目标人群确定及信息推送方法和系统
CN110163649A (zh) * 2019-04-03 2019-08-23 平安科技(深圳)有限公司 广告推送方法、装置、电子设备及存储介质
CN110163649B (zh) * 2019-04-03 2023-10-17 平安科技(深圳)有限公司 广告推送方法、装置、电子设备及存储介质
CN111881340A (zh) * 2020-06-11 2020-11-03 国家电网有限公司 一种基于数字化审计平台的智能推送方法、装置及设备
CN111881340B (zh) * 2020-06-11 2024-05-10 国家电网有限公司 一种基于数字化审计平台的智能推送方法、装置及设备
CN115687790A (zh) * 2022-12-01 2023-02-03 松原市逐贵网络科技有限公司 基于大数据的广告推送方法、系统及云平台
CN115687790B (zh) * 2022-12-01 2023-07-14 成都坐联智城科技有限公司 基于大数据的广告推送方法、系统及云平台
CN118134566A (zh) * 2024-04-12 2024-06-04 深圳市创致联创科技有限公司 基于信息流刷新数据的广告定向推送方法、系统及介质

Also Published As

Publication number Publication date
EP2568429A4 (en) 2013-11-27
US20130094756A1 (en) 2013-04-18
CN102334118B (zh) 2013-08-28
CN102334118A (zh) 2012-01-25
EP2568429A1 (en) 2013-03-13
US8750602B2 (en) 2014-06-10

Similar Documents

Publication Publication Date Title
WO2012071696A1 (zh) 基于用户兴趣学习的个性化广告推送方法与系统
US11556743B2 (en) Learning highlights using event detection
Saini et al. Video summarization using deep learning techniques: a detailed analysis and investigation
US10522186B2 (en) Apparatus, systems, and methods for integrating digital media content
US10528821B2 (en) Video segmentation techniques
KR102741221B1 (ko) 스트리밍 비디오 내의 객체를 검출하고, 필터링하고 식별하기 위한 방법 및 장치
CN103299324B (zh) 使用潜在子标记来学习用于视频注释的标记
Bianco et al. Predicting image aesthetics with deep learning
WO2018166288A1 (zh) 信息呈现方法和装置
CN106446015A (zh) 一种基于用户行为偏好的视频内容访问预测与推荐方法
US20130101209A1 (en) Method and system for extraction and association of object of interest in video
CN103365936A (zh) 视频推荐系统及其方法
CN107247919A (zh) 一种视频情感内容的获取方法及系统
Sahu et al. Summarizing egocentric videos using deep features and optimal clustering
Ramezani et al. A novel video recommendation system based on efficient retrieval of human actions
Dutta et al. A shot detection technique using linear regression of shot transition pattern
Mallick et al. Video retrieval using salient foreground region of motion vector based extracted keyframes and spatial pyramid matching
CN115935049A (zh) 基于人工智能的推荐处理方法、装置及电子设备
CN114943549A (zh) 一种广告投放方法及装置
JP2017021606A (ja) 動画像検索方法、動画像検索装置及びそのプログラム
CN115604510A (zh) 一种视频推荐方法、装置、计算机设备和存储介质
Narwal et al. Domain Knowledge Based Multi-CNN Approach for Dynamic and Personalized Video Summarization
Mocanu et al. A multimodal high level video segmentation for content targeted online advertising
CN117156078B (zh) 一种视频数据处理方法、装置、电子设备及存储介质
Namala et al. Efficient feature based video retrieval and indexing using pattern change with invariance algorithm

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 201080006502.5

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10860233

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2010860233

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE