WO2021042826A1 - Method and device for predicting video playback completeness - Google Patents
Method and device for predicting video playback completeness
- Publication number
- WO2021042826A1 (PCT/CN2020/097861; CN2020097861W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- video
- video playback
- data
- playback
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/74—Browsing; Visualisation therefor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/73—Querying
- G06F16/732—Query formulation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/783—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/7847—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using low-level visual features of the video content
Definitions
- the invention relates to the technical field of big data and deep learning, and in particular to a method and device for predicting video playback completeness.
- a video recommendation system builds on a large base of users and videos, relying on big data analysis and artificial intelligence. By studying users' interest preferences, it recommends high-quality videos that users are interested in to target users, solving the problem of information overload, delivering personalized content to each user, and improving user stay time and satisfaction.
- Video recommendation systems usually include two stages: recall and sorting.
- the recall stage is to select a part of the candidate set from a large number of videos.
- the sorting stage performs a more accurate, unified scoring of the candidate set from the recall stage, screening out of it a small number of high-quality videos that the user is most interested in.
- a click-based model may favor clickbait, which does not increase the user's stay time and harms viewing time and satisfaction. Viewing time is an important optimization goal of an information stream. It is therefore urgent to introduce playback-completeness optimization into the short-video ranking model to improve the true relevance of recommendations and thereby increase user viewing time and satisfaction.
- the embodiments of the present invention provide a method and device for predicting the completeness of video playback.
- by predicting the user's video playback completeness in terms of viewing time, an important information-stream metric, interest data closer to the user's real interests is obtained, the accuracy of recognizing the user's interests is improved, the true relevance of recommendations is improved, and the user's viewing time and satisfaction are greatly increased.
- a method for predicting the completeness of video playback includes:
- the preset video playback completeness prediction model is obtained by training on user video playback training data, and the user video playback feature vector includes at least a user feature vector and a video feature vector.
- the method further includes:
- collecting the user video playback information data includes: obtaining the user video playback information data including user information, user playback history information, video information, and user client information; and/or,
- Filtering the user video playback information data to obtain the screening results includes: screening the user video playback information data using a multi-channel recall method including user collaboration, user search, topic models, popular recommendations, user portraits, and video tags, and obtaining the screening results; and/or,
- Performing feature extraction on the screening result to generate the data to be tested for the user video playback feature vector includes: using word vectors obtained by training a preset massive corpus with the word2vec model and IDF weight training, segmenting the video titles and video classification tags in the screening result to generate video word vectors, and then computing word vectors from the user playback history information combined with time decay to generate user word vectors.
- the preset video playback completeness prediction model is a DNN with three hidden layers.
- the preset video playback completeness prediction model is obtained by inputting the user video playback training data for training, wherein the user video playback training data is the independent variable, the playback completeness values of videos in the user's viewing history are the dependent variable, and the user video playback training data is a feature vector combining historical user vectors and historical video vectors constructed from user playback history information.
- the method further includes:
- a device for predicting the completeness of video playback includes a model calculation module for:
- inputting the data to be tested of the user video playback feature vector, performing calculation through a preset video playback completeness prediction model, and outputting the video playback completeness value of the data to be tested, wherein the preset model is obtained by training on user video playback training data, and the user video playback feature vector includes at least a user feature vector and a video feature vector.
- the device further includes a data collection module, a data screening module, and a vector generation module.
- the data collection module collects the user video playback information data; the data screening module screens the user video playback information data, Obtain the screening result; the vector generation module performs feature extraction on the screening result, and generates the data to be tested of the user video playback feature vector.
- the data collection module obtains the user video playback information data including user information, user playback history information, video information, and user client information; and/or,
- the data screening module uses a multi-channel recall method including user collaboration, user search, topic models, popular recommendations, user portraits, and video tags to filter the user video playback information data to obtain screening results; and/or,
- the vector generation module performs feature extraction on the screening results to generate the data to be tested for the user video playback feature vector, including: using word vectors obtained by training a preset massive corpus with the word2vec model and IDF weight training, segmenting the video titles and video classification tags in the screening results to generate video word vectors, and then computing word vectors from the user playback history information combined with time decay to generate user word vectors.
- the device further includes a data recommendation module configured to sort the video playback completeness values of the data to be tested from high to low, obtain the top-N video ranking results, and recommend the video ranking results to the corresponding user according to priority level, where N is an integer greater than 1.
- FIG. 1 is a flowchart of a method for predicting the completeness of video playback according to an embodiment of the present invention
- FIG. 2 is a flowchart of a method for predicting the integrity of video playback according to another embodiment of the present invention
- FIG. 3 is a demonstration diagram of a preferred embodiment of feature engineering construction in step 203;
- FIG. 4 is a demonstration diagram of a preferred implementation manner of a preset video playback completeness prediction model provided by an embodiment of the present invention
- FIG. 5 is a schematic structural diagram of a device for predicting the integrity of video playback provided by an embodiment of the present invention.
- Fig. 6 is a schematic structural diagram of an apparatus for predicting the integrity of video playback provided by another embodiment of the present invention.
- the video playback completeness prediction method and device provided by the embodiments of the present invention change the traditional CTR-estimation approach, introduce a video playback completeness indicator, and use a trained preset video playback completeness prediction model to predict playback completeness for different users. Through the prediction results, interest data closer to the user's real interests is obtained for viewing time, an important information-stream metric, which improves the accuracy of identifying user interests and thus the true relevance of recommendations, greatly increasing the user's viewing time and satisfaction. The method and device can therefore be widely used in a variety of network video application scenarios involving user interest mining, user demand matching, or user recommendation.
- FIG. 1 is a flowchart of a method for predicting the completeness of video playback according to an embodiment of the present invention. As shown in Fig. 1, the method for predicting the completeness of video playback includes the following steps:
- the user video playback feature vector here includes at least a user feature vector and a video feature vector. User features include user portraits, user playback history records, or other user-related information. Video features include video category, video duration, video release time, video playback completeness records, or other information related to the published video. In addition to the user feature vector and the video feature vector, the user video playback feature vector may also include user client classification information and other information related to video playback.
- the preset video playback completeness prediction model is obtained by training on user video playback training data. The specific model used can be obtained by designing and training a corresponding deep learning model as needed, or by training any suitable deep learning model available in the prior art; the embodiment of the present invention places no particular limitation on this.
- Fig. 2 is a flowchart of a method for predicting the completeness of video playback according to another embodiment of the present invention. As shown in Figure 2, the method for predicting the completeness of video playback includes the following steps:
- user video playback information data including user information, user playback history information, video information, and user client information are acquired.
- User video playback information mainly includes user information, user playback history information, video information, and user client information.
- User information mainly refers to user portrait information, including basic user attribute information (gender, age, etc.); user playback history information includes the percentage of videos the user plays in each hour of the day, the percentage of each kind of video watched, etc.; and client information includes user device type, operator type, etc.
- the user video playback information may also include auxiliary contextual information about the videos the user plays, such as the time the user watches each video and user location information.
- step 201 may be implemented in other ways in addition to the manner described in the foregoing steps, and the embodiment of the present invention does not limit the specific manner.
- screening the user video playback information data to obtain screening results includes: screening the user video playback information data using multi-channel recall methods including user collaboration, user search, topic models, popular recommendations, user portraits, and video tags, and obtaining the screening results.
- This process is the recall stage, a coarse screening of the user's video playback information data.
- it mainly screens the video information in the user's video playback information data.
- since the scale of the video corpus is huge and may reach the order of millions, directly feeding all of it into the model for data preprocessing is too costly and too slow, so the recall stage roughly filters out the video information that is of higher quality or more likely to match user preferences.
- recall usually uses multiple channels, such as user collaboration, user search, topic models, popular recommendations, user portraits, and video tags, so as to select a partial candidate set from the large number of videos.
- step 202 may be implemented in other ways in addition to the manner described in the foregoing steps, and the embodiment of the present invention does not limit the specific manner.
- feature extraction of the screening results to generate the data to be tested for the user video playback feature vector includes: using word vectors obtained by training a preset massive corpus with the word2vec model and IDF weight training, segmenting the video titles and video classification labels in the screening result to generate video word vectors, and then computing word vectors from the user's playback history information combined with time decay to generate user word vectors.
- the user word vector and video word vector here correspond to the aforementioned user feature vector and video feature vector.
- This process is the feature engineering stage, as shown in Figure 3.
- through word segmentation and the word2vec model, a 200-dimensional word vector is trained for each word to represent the word's potential meaning in vectorized form.
- this expresses the relationships between words; the video title is processed by word segmentation and combined with the IDF values obtained in training to calculate the word vector representation of the video.
- the user's word vector representation is then calculated.
- according to the video tag categories, the user's top-3 tag categories whose share of plays exceeds 10% are counted.
- videos under the remaining low-share tags are not the user's potential points of interest; such plays are often hot videos or user misoperations and can be discarded through feature extraction.
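The time-decay calculation of the user word vector described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patent's implementation: the exponential half-life decay, the 7-day half-life, and the `user_vector` helper name are all introduced here for demonstration.

```python
import numpy as np

def user_vector(video_vectors, days_ago, half_life=7.0):
    """Time-decayed average of the word vectors of a user's played videos.

    video_vectors: one 1-D vector per played video (e.g. 200-dim in the patent)
    days_ago: how many days ago each video was played
    half_life: hypothetical decay parameter; recent plays weigh more
    """
    weights = np.array([0.5 ** (d / half_life) for d in days_ago])
    weights /= weights.sum()  # normalize decay weights to sum to 1
    vec = np.sum([w * v for w, v in zip(weights, video_vectors)], axis=0)
    return vec / np.linalg.norm(vec)  # L2-normalized, matching the description
```

Under this sketch, a video played today contributes four times the weight of one played two half-lives (14 days) ago, so the resulting user vector leans toward recent interests.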
- in step 203, the process of performing feature extraction on the screening results to generate user video playback feature vectors can also be implemented in ways other than the manner described above; the embodiment of the present invention places no particular limitation on this.
- the preset video playback completeness prediction model is obtained by inputting user video playback training data for training, where the user video playback training data is the independent variable and the playback completeness values of videos in the user's viewing history are the dependent variable. The user video playback training data is a feature vector combining historical user vectors and historical video vectors constructed from the user's playback history information; training on it yields the desired preset video playback completeness prediction model.
- the preset video playback completeness prediction model is a DNN with three hidden layers. The input layer includes: the user's word vector representation (a 200-dimensional vector obtained by taking the word vectors of the user's historically played videos, weighting each video's word vector by IDF, and integrating them with time decay); the user's basic portrait (gender, age, etc.); the proportion of videos played in each period (by hour); the proportion of each video category; the video's word vector (200 dimensions); video quality (average playback completeness, video popularity, etc.); video release time; video category; device type; operator type; region; current time period; and so on.
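As a rough sketch of such a three-hidden-layer DNN, the untrained forward pass below uses NumPy. The layer widths (256/128/64), the ReLU activations, the sigmoid output, and the 412-dimensional concatenated input are hypothetical choices; the patent does not specify them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: concatenated user/video/context features -> 3 hidden layers -> scalar
sizes = [412, 256, 128, 64, 1]
params = [(rng.normal(0, 0.1, (m, n)), np.zeros(n)) for m, n in zip(sizes, sizes[1:])]

def relu(x):
    return np.maximum(x, 0.0)

def predict_completeness(features):
    """Forward pass: three ReLU hidden layers, sigmoid output in (0, 1)."""
    h = features
    for W, b in params[:-1]:
        h = relu(h @ W + b)
    W, b = params[-1]
    logit = h @ W + b
    return 1.0 / (1.0 + np.exp(-logit))  # playback completeness as a fraction
```

A real implementation would learn `params` from the user video playback training data, with the historical playback completeness values as regression targets.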
- in step 204, the data content and form of the data to be tested for the user video playback feature vector that are input, as well as the process itself, can also be implemented in other ways; the embodiment of the present invention does not limit the specific manner.
- regarding step 206, it is further noted that:
- the step of sorting by video playback completeness value can also be designed into the calculation process of the preset video playback completeness prediction model, as shown in FIG. 4; the embodiment of the present invention places no particular limitation on this.
- FIG. 5 is a schematic structural diagram of a video playback integrity prediction device provided by an embodiment of the present invention.
- the video playback integrity prediction device includes a model calculation module 1.
- the model calculation module 1 is used to: input the data to be tested of the user video playback feature vector, perform calculation through the preset video playback completeness prediction model, and output the video playback completeness value of the data to be tested.
- the preset video playback integrity prediction model is obtained by training the user video playback training data.
- the user video playback feature vector includes at least a user feature vector and a video feature vector.
- Fig. 6 is a schematic structural diagram of an apparatus for predicting the integrity of video playback provided by another embodiment of the present invention.
- the video playback completeness prediction device 2 includes a data collection module 21, a data screening module 22, a vector generation module 23, a model calculation module 24 and a data recommendation module 25.
- the data collection module 21 collects user video playback information data. Specifically, the data collection module 21 obtains user video playback information data including user information, user playback history information, video information, and user client information.
- the data screening module 22 screens the user's video playback information data and obtains the screening result. Specifically, the data screening module 22 uses a multi-channel recall method including user collaboration, user search, topic models, popular recommendations, user portraits, and video tags to filter user video playback information data and obtain screening results.
- the vector generation module 23 performs feature extraction on the screening results and generates the user video playback feature vector. Specifically, the vector generation module 23 performs feature extraction on the screening results to generate the data to be tested for the user video playback feature vector, including: using word vectors obtained by training a preset massive corpus with the word2vec model and IDF weight training, segmenting the video titles and video classification tags in the screening results to generate video word vectors, and then computing word vectors from the user's playback history information combined with time decay to generate user word vectors.
- the user word vector and video word vector here correspond to the aforementioned user feature vector and video feature vector.
- the model calculation module 24 inputs the data to be tested for the user's video playback feature vector, performs calculation through the preset video playback completeness prediction model, and outputs the video playback completeness value of the data to be tested. The preset video playback completeness prediction model is obtained by training on user video playback training data, and the user video playback feature vector includes at least a user feature vector and a video feature vector.
- the data recommendation module 25 performs a sorting operation from high to low on the video playback integrity value of the data to be tested, obtains the topN video sorting result, and recommends the video sorting result to the corresponding user according to the priority level, where N is an integer greater than 1.
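The sorting performed by the data recommendation module 25 amounts to the following sketch; the function name and the sample scores are illustrative, not from the patent.

```python
def top_n_recommendations(predictions, n):
    """Sort candidate videos by predicted playback completeness, descending,
    and keep the top N (N an integer greater than 1) for recommendation."""
    ranked = sorted(predictions.items(), key=lambda kv: kv[1], reverse=True)
    return [video_id for video_id, _ in ranked[:n]]

# For example, with hypothetical predicted completeness values per candidate:
candidates = {"v1": 0.35, "v2": 0.80, "v3": 0.62}
top2 = top_n_recommendations(candidates, 2)  # ["v2", "v3"]
```

The ranking order then doubles as the priority level when the results are pushed to the corresponding user.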
- the word segmentation tool of this embodiment has its own thesaurus, supplemented with entertainment stars, movie and TV series names, sports stars, team information, etc. as a supplementary lexicon. A massive corpus composed of NetEase News, Baidu Encyclopedia, Wikipedia, etc. is obtained by the crawler system; word segmentation and word vector training are performed on this corpus, finally yielding the word vector representation of each word (the word vector dimension is 200, determined by experimental effect; the vectors are then normalized).
- TF-IDF training is performed to obtain the IDF values, which are normalized; the weight of words in the supplementary lexicon is then raised to 1, similar to an attention mechanism, putting more attention on these words.
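One way to realize this weighting is sketched below. The min-max normalization and the `1 + df` smoothing are assumptions introduced for illustration; the patent only states that the IDF values are normalized and that supplementary-lexicon words are raised to weight 1.

```python
import math

def idf_weights(doc_freq, n_docs, supplementary):
    """Normalized IDF weight per word; supplementary-lexicon words
    (stars, team names, ...) are forced to the maximum weight 1.0,
    an attention-like boost on key entities."""
    idf = {w: math.log(n_docs / (1 + df)) for w, df in doc_freq.items()}
    lo, hi = min(idf.values()), max(idf.values())
    norm = {w: (v - lo) / (hi - lo) for w, v in idf.items()}  # min-max to [0, 1]
    for w in supplementary:
        if w in norm:
            norm[w] = 1.0
    return norm
```

Common words thus end up near 0, rare words near 1, and lexicon entities exactly at 1 regardless of their corpus frequency.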
- the video information table is shown in Table 1 below, which carries video id, video title information, classification tag, video tag information, release time, and so on.
- the video information is segmented, the word vector of each word is looked up, and a weighted calculation against the IDF value table yields the word vector representation of the current video (normalized).
- User portrait acquisition stage, i.e., the calculation process of the user word vector:
- the target user group is active users, i.e., users with a certain amount of playback in the most recent period (such as playing more than 10 videos in the last 30 days) who are also relatively active recently (such as having playback records in the last 7 days).
- the calculation of the user's word vector is refined by tag category. For example, suppose the number of videos played by the user in the last cycle is 100, including 60 sports, 20 finance, 15 funny, 4 society, and 1 health; in the user portrait process, portraits are built only for the tag categories that rank in the top 3 and whose share exceeds 10%.
- This method can obtain the user's main points of interest, and eliminate a small amount of misoperations and hot videos that do not represent the user's points of interest.
- sports accounted for 60%
- finance accounted for 20%
- funny accounted for 15%
- society accounted for 4%
- health accounted for 1%; therefore, for the current user it is necessary to build the portrait in the three dimensions of sports, finance, and funny, and to calculate the word vector representation of each corresponding dimension.
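The top-3 / 10%-share selection rule in this worked example can be sketched as follows; the helper name is an assumption, while the thresholds and sample counts come from the description above.

```python
from collections import Counter

def portrait_categories(play_tags, top_k=3, min_share=0.10):
    """Categories kept for the user portrait: the top-3 tag categories by
    play count, excluding any whose share of total plays is 10% or less."""
    counts = Counter(play_tags)
    total = sum(counts.values())
    return [tag for tag, c in counts.most_common(top_k) if c / total > min_share]

# The example above: 60 sports, 20 finance, 15 funny, 4 society, 1 health
plays = (["sports"] * 60 + ["finance"] * 20 + ["funny"] * 15
         + ["society"] * 4 + ["health"])
```

Here `portrait_categories(plays)` keeps sports, finance, and funny, dropping society and health as likely hot-video plays or misoperations rather than genuine interests.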
- based on the user's playback records in the most recent period (such as the last 30 days), the above-mentioned features are constructed, and the deep learning model is trained in combination with the user's playback completeness for each video.
- the model predicts the likely playback completeness of unplayed videos for the target user, and the final recommendation result set is generated by sorting the videos in descending order of predicted playback completeness.
- when the video playback completeness prediction device provided in the above embodiments triggers the video playback completeness prediction service, the division into the above functional modules is used only for illustration. In practical applications, the above functions can be allocated to different functional modules as needed; that is, the internal structure of the device is divided into different functional modules to complete all or part of the functions described above.
- the video playback completeness prediction device provided in the above-mentioned embodiment belongs to the same concept as the video playback completeness prediction method embodiment. For the specific implementation process, please refer to the method embodiment, which will not be repeated here.
- the video playback completeness prediction method and device provided by the embodiments of the present invention have the following beneficial effects compared with the prior art:
- the TF-IDF algorithm is used in the field of video recommendation, which effectively highlights the key information of the video through the IDF value;
- the program can be stored in a computer-readable storage medium.
- the storage medium mentioned can be a read-only memory, a magnetic disk or an optical disk, etc.
- These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing equipment to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing equipment produce a device that realizes the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.
- These computer program instructions can also be stored in a computer-readable memory that can guide a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including the instruction device.
- the device implements the functions specified in one process or multiple processes in the flowchart and/or one block or multiple blocks in the block diagram.
- These computer program instructions can also be loaded onto a computer or other programmable data processing equipment, so that a series of operation steps are executed on the computer or other programmable equipment to produce computer-implemented processing; the instructions executed on the computer or other programmable equipment thus provide steps for implementing the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Databases & Information Systems (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Library & Information Science (AREA)
- Human Computer Interaction (AREA)
- Mathematical Physics (AREA)
- Computational Linguistics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
Description
Claims (10)
- A method for predicting video playback completeness, characterized in that the method comprises: inputting data to be tested of a user video playback feature vector; performing calculation through a preset video playback completeness prediction model; and outputting a video playback completeness value of the data to be tested, wherein the preset video playback completeness prediction model is obtained by training on user video playback training data, and the user video playback feature vector comprises at least a user feature vector and a video feature vector.
- The method according to claim 1, characterized in that the method further comprises: collecting user video playback information data; screening the user video playback information data to obtain a screening result; and performing feature extraction on the screening result to generate the data to be tested of the user video playback feature vector.
- The method according to claim 2, characterized in that collecting the user video playback information data comprises: acquiring the user video playback information data including user information, user playback history information, video information, and user client information; and/or, screening the user video playback information data to obtain a screening result comprises: screening the user video playback information data using a multi-channel recall method including user collaboration, user search, topic models, popular recommendations, user portraits, and video tags, to obtain the screening result; and/or, performing feature extraction on the screening result to generate the data to be tested of the user video playback feature vector comprises: using word vectors obtained by training a preset massive corpus with the word2vec model and IDF weight training, segmenting the video titles and video classification tags in the screening result to generate video word vectors, and then performing word vector calculation according to the user playback history information combined with time decay to generate user word vectors.
- The method according to claim 1, characterized in that the preset video playback completeness prediction model comprises a DNN with three hidden layers.
- The method according to claim 4, characterized in that the preset video playback completeness prediction model is obtained by inputting the user video playback training data for training, wherein the user video playback training data is the independent variable, the playback completeness values of videos in the user's viewing history are the dependent variable, and the user video playback training data is a feature vector combining historical user vectors and historical video vectors constructed according to user playback history information.
- The method according to claim 1, characterized in that the method further comprises: sorting the video playback completeness values of the data to be tested from high to low, obtaining a top-N video ranking result, and recommending the video ranking result to the corresponding user according to priority level, where N is an integer greater than 1.
- A device for predicting video playback completeness, characterized in that the device comprises a model calculation module configured to: input data to be tested of a user video playback feature vector, perform calculation through a preset video playback completeness prediction model, and output a video playback completeness value of the data to be tested, wherein the preset video playback completeness prediction model is obtained by training on user video playback training data, and the user video playback feature vector comprises at least a user feature vector and a video feature vector.
- The device according to claim 7, characterized in that the device further comprises a data collection module, a data screening module, and a vector generation module; the data collection module collects user video playback information data; the data screening module screens the user video playback information data to obtain a screening result; and the vector generation module performs feature extraction on the screening result to generate the data to be tested of the user video playback feature vector.
- The device according to claim 8, characterized in that the data collection module acquires the user video playback information data including user information, user playback history information, video information, and user client information; and/or, the data screening module screens the user video playback information data using a multi-channel recall method including user collaboration, user search, topic models, popular recommendations, user portraits, and video tags, to obtain the screening result; and/or, the vector generation module performs feature extraction on the screening result to generate the data to be tested of the user video playback feature vector, including: using word vectors obtained by training a preset massive corpus with the word2vec model and IDF weight training, segmenting the video titles and video classification tags in the screening result to generate video word vectors, and then performing word vector calculation according to the user playback history information combined with time decay to generate user word vectors.
- The device according to claim 7, characterized in that the device further comprises a data recommendation module configured to sort the video playback completeness values of the data to be tested from high to low, obtain a top-N video ranking result, and recommend the video ranking result to the corresponding user according to priority level, where N is an integer greater than 1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CA3153598A CA3153598A1 (en) | 2019-09-05 | 2020-06-24 | Method of and device for predicting video playback integrity |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910845413.2 | 2019-09-05 | ||
CN201910845413.2A CN110704674B (zh) | 2019-09-05 | 2019-09-05 | 一种视频播放完整度预测方法及装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021042826A1 true WO2021042826A1 (zh) | 2021-03-11 |
Family
ID=69195102
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/097861 WO2021042826A1 (zh) | 2019-09-05 | 2020-06-24 | 一种视频播放完整度预测方法及装置 |
Country Status (3)
Country | Link |
---|---|
CN (1) | CN110704674B (zh) |
CA (1) | CA3153598A1 (zh) |
WO (1) | WO2021042826A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113220936A (zh) * | 2021-06-04 | 2021-08-06 | 黑龙江广播电视台 | 基于随机矩阵编码和简化卷积网络的视频智能推荐方法、装置及存储介质 |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110704674B (zh) * | 2019-09-05 | 2022-11-25 | 苏宁云计算有限公司 | 一种视频播放完整度预测方法及装置 |
CN111918136B (zh) * | 2020-07-04 | 2022-07-01 | 中信银行股份有限公司 | 一种兴趣的分析方法及装置、存储介质、电子设备 |
CN111538912B (zh) * | 2020-07-07 | 2020-12-25 | 腾讯科技(深圳)有限公司 | 内容推荐方法、装置、设备及可读存储介质 |
CN111565316B (zh) * | 2020-07-15 | 2020-10-23 | 腾讯科技(深圳)有限公司 | 视频处理方法、装置、计算机设备及存储介质 |
CN112887795B (zh) * | 2021-01-26 | 2023-04-21 | 脸萌有限公司 | 视频播放方法、装置、设备和介质 |
CN115086705A (zh) * | 2021-03-12 | 2022-09-20 | 北京字跳网络技术有限公司 | 一种资源预加载方法、装置、设备和存储介质 |
CN113132803B (zh) * | 2021-04-23 | 2022-09-16 | Oppo广东移动通信有限公司 | 视频观看时长预测方法、装置、存储介质以及终端 |
CN113312512B (zh) * | 2021-06-10 | 2023-10-31 | 北京百度网讯科技有限公司 | 训练方法、推荐方法、装置、电子设备以及存储介质 |
CN113873330B (zh) * | 2021-08-31 | 2023-03-10 | 武汉卓尔数字传媒科技有限公司 | 视频推荐方法、装置、计算机设备和存储介质 |
CN114339417B (zh) * | 2021-12-30 | 2024-05-10 | 未来电视有限公司 | 一种视频推荐的方法、终端设备和可读存储介质 |
CN114339402A (zh) * | 2021-12-31 | 2022-04-12 | 北京字节跳动网络技术有限公司 | 视频播放完成率预测方法、装置、介质及电子设备 |
CN115082301B (zh) * | 2022-08-22 | 2022-12-02 | 中关村科学城城市大脑股份有限公司 | 定制视频生成方法、装置、设备和计算机可读介质 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105100165A (zh) * | 2014-05-20 | 2015-11-25 | Shenzhen Tencent Computer Systems Co., Ltd. | Network service recommendation method and device |
CN106028071A (zh) * | 2016-05-17 | 2016-10-12 | TCL Corporation | Video recommendation method and system |
CN106446052A (zh) * | 2016-08-31 | 2017-02-22 | Beijing Moli Interactive Technology Co., Ltd. | Video-on-demand program recommendation method based on user sets |
CN108260008A (zh) * | 2018-02-11 | 2018-07-06 | Beijing Future Media Technology Co., Ltd. | Video recommendation method, device, and electronic apparatus |
CN108460085A (zh) * | 2018-01-19 | 2018-08-28 | Beijing QIYI Century Science & Technology Co., Ltd. | Method and device for constructing a video search ranking training set based on user logs |
CN110704674A (zh) * | 2019-09-05 | 2020-01-17 | Suning Cloud Computing Co., Ltd. | Video playback completeness prediction method and device |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10516906B2 (en) * | 2015-09-18 | 2019-12-24 | Spotify Ab | Systems, methods, and computer products for recommending media suitable for a designated style of use |
US10827221B2 (en) * | 2016-06-24 | 2020-11-03 | Sourse Pty Ltd | Selectively playing videos |
CN106227883B (zh) * | 2016-08-05 | 2019-09-13 | Beijing Sumavision Technologies Co., Ltd. | Popularity analysis method and device for multimedia content |
CN107832437B (zh) * | 2017-11-16 | 2021-03-02 | Beijing Xiaomi Mobile Software Co., Ltd. | Audio/video pushing method, apparatus, device, and storage medium |
CN107948761B (zh) * | 2017-12-12 | 2021-01-01 | Shanghai Bilibili Technology Co., Ltd. | Bullet-screen comment playback control method, server, and bullet-screen playback control system |
CN110059221B (zh) * | 2019-03-11 | 2023-10-20 | MIGU Video Technology Co., Ltd. | Video recommendation method, electronic device, and computer-readable storage medium |
CN110012356B (zh) * | 2019-04-16 | 2020-07-10 | Tencent Technology (Shenzhen) Co., Ltd. | Video recommendation method, apparatus and device, and computer storage medium |
- 2019-09-05: CN application CN201910845413.2A filed; granted as CN110704674B (status: Active)
- 2020-06-24: CA application CA3153598A filed (status: Pending)
- 2020-06-24: WO application PCT/CN2020/097861 (WO2021042826A1) filed (status: Application Filing)
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113220936A (zh) * | 2021-06-04 | 2021-08-06 | Heilongjiang Radio and Television Station | Intelligent video recommendation method, device, and storage medium based on random matrix coding and a simplified convolutional network |
CN113220936B (zh) * | 2021-06-04 | 2023-08-15 | Heilongjiang Radio and Television Station | Intelligent video recommendation method, device, and storage medium based on random matrix coding and a simplified convolutional network |
Also Published As
Publication number | Publication date |
---|---|
CN110704674B (zh) | 2022-11-25 |
CA3153598A1 (en) | 2021-03-11 |
CN110704674A (zh) | 2020-01-17 |
Similar Documents
Publication | Title |
---|---|
WO2021042826A1 (zh) | Video playback completeness prediction method and device |
CN111241311B (zh) | Media information recommendation method and apparatus, electronic device, and storage medium |
US11386137B2 | Dynamic feedback in a recommendation system |
CN110781321B (zh) | Multimedia content recommendation method and device |
CN103559206B (zh) | Information recommendation method and system |
CN107832437B (zh) | Audio/video pushing method, apparatus, device, and storage medium |
WO2017096877A1 (zh) | Recommendation method and device |
CN106294830A (zh) | Multimedia resource recommendation method and device |
CN109189990B (zh) | Search term generation method, device, and electronic apparatus |
EP3510496A1 | Compiling documents into a timeline per event |
CN109511015B (zh) | Multimedia resource recommendation method and apparatus, storage medium, and device |
CN111708901A (zh) | Multimedia resource recommendation method and apparatus, electronic device, and storage medium |
CN112052387B (zh) | Content recommendation method and apparatus, and computer-readable storage medium |
CN111241394B (zh) | Data processing method and apparatus, computer-readable storage medium, and electronic device |
CN107341272A (zh) | Pushing method, device, and electronic apparatus |
CN111597446B (zh) | Artificial-intelligence-based content pushing method, apparatus, server, and storage medium |
CN110019943A (zh) | Video recommendation method, apparatus, electronic device, and storage medium |
CN112464100B (zh) | Information recommendation model training method, information recommendation method, apparatus, and device |
CN112507163A (zh) | Duration prediction model training method, recommendation method, apparatus, device, and medium |
CN112131456A (zh) | Information pushing method, apparatus, device, and storage medium |
Vilakone et al. | Movie recommendation system based on users' personal information and movies rated using the method of k-clique and normalized discounted cumulative gain |
CN114186130A (zh) | Sports information recommendation method based on big data |
CN110781377B (zh) | Article recommendation method and device |
CN112364184A (zh) | Multimedia data ranking method, apparatus, server, and storage medium |
CN107707940A (zh) | Video ranking method, apparatus, server, and system |
Legal Events
Code | Title | Description |
---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20861635; Country of ref document: EP; Kind code of ref document: A1 |
ENP | Entry into the national phase | Ref document number: 3153598; Country of ref document: CA |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | Ep: pct application non-entry in european phase | Ref document number: 20861635; Country of ref document: EP; Kind code of ref document: A1 |