CN108153863B - Video information representation method and device


Info

Publication number
CN108153863B
Authority
CN
China
Prior art keywords
video
playing
corrected
user
playing times
Prior art date
Legal status
Active
Application number
CN201711417051.4A
Other languages
Chinese (zh)
Other versions
CN108153863A (en)
Inventor
徐龙
张鹏飞
Current Assignee
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd
Priority to CN201711417051.4A priority Critical patent/CN108153863B/en
Publication of CN108153863A publication Critical patent/CN108153863A/en
Application granted
Publication of CN108153863B publication Critical patent/CN108153863B/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73Querying
    • G06F16/735Filtering based on additional data, e.g. user or group profiles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/7867Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings

Abstract

The invention discloses a method and a device for representing video information. The method comprises the following steps: acquiring a video playing record list of a user, wherein the video playing record list records a plurality of pieces of video information watched by the user; setting a video identifier corresponding to each video in the video playing record list; acquiring the playing times of each video in the video playing record list, and establishing a mapping table between the video identifier of each video and the corresponding playing times; and performing vector training on the mapping table to generate a vector for each video. By representing video information in vector form, the invention improves the user's experience.

Description

Video information representation method and device
Technical Field
The present invention relates to the field of video processing technologies, and in particular, to a method and an apparatus for representing video information.
Background
In recent years, with the popularization and development of Internet technology, more and more video clients have appeared in people's daily entertainment. To match users' viewing habits, work such as video recommendation, video similarity retrieval, and video query is often performed, and video feature extraction is the basis of this work.
Unlike feature extraction for a traditional database, video feature extraction involves not only numerical and textual information but also unformatted information such as images, audio, and text. Conventional video feature extraction methods mostly focus on extracting information from aspects such as a video's description information, title, and video frames, and this information is then used as the basis for video recommendation or video similarity retrieval. As a result, the recommended video is often not the video the user wants, or the retrieved videos have low similarity.
Disclosure of Invention
In view of the above problems, the present invention provides a method and an apparatus for representing video information, which achieve the purpose of improving user experience by representing video information in a vector form.
In order to achieve the purpose, the invention provides the following technical scheme:
a method of representing video information, the method comprising:
acquiring a video playing recording list of a user, wherein the video playing recording list records a plurality of pieces of video information watched by the user;
setting a video identifier corresponding to each video in the video playing recording list;
acquiring the playing times of each video in the video playing record list, and establishing a mapping table of the video identification of each video and the corresponding playing times;
and carrying out vector training on the mapping table to generate a vector of each video.
Preferably, the acquiring a video play record list of the user includes:
acquiring a video access log of a user;
extracting video playing records in the video access log according to preset statistical time;
and cleaning the extracted video playing record according to a preset video cleaning rule to generate a video playing record list of the user, wherein the video playing record list records a plurality of video information watched by the user.
Preferably, the preset video cleansing rule comprises at least one of the following rules:
the rule of filtering out videos whose playing duration does not reach a threshold, the rule of filtering out videos that the user skipped through while watching, and the rule of merging serially played television episodes into one album video.
Preferably, the obtaining the playing times of each video in the video playing record list and establishing a mapping table between the video identifier of each video and the corresponding playing times includes:
acquiring the playing times of each video in the video playing record list;
extracting a video to be corrected, correcting the playing times of the video to be corrected to obtain the corrected playing times of the video to be corrected, and taking the corrected playing times as the playing times of the video to be corrected;
and establishing a mapping table of the video identification of each video and the corresponding playing times.
Preferably, the extracting a video to be corrected, performing correction processing on the playing times of the video to be corrected to obtain corrected playing times of the video to be corrected, and taking the corrected playing times as the playing times of the video to be corrected includes:
extracting videos to be corrected from the video playing record list according to the heat information of each video;
setting a correction coefficient of the video to be corrected, calculating the corrected playing times of the video to be corrected according to a preset formula M = N^k, and taking the corrected playing times as the playing times of the video to be corrected, wherein N is the initial playing times and k is the correction coefficient.
An apparatus for representing video information, comprising:
the acquisition module is used for acquiring a video playing record list of a user, wherein the video playing record list records a plurality of pieces of video information watched by the user;
the setting module is used for setting a video identifier corresponding to each video in the video playing record list;
the establishing module is used for acquiring the playing times of each video in the video playing record list and establishing a mapping table of the video identification of each video and the corresponding playing times;
and the generating module is used for carrying out vector training on the mapping table to generate the vector of each video.
Preferably, the obtaining module includes:
the acquisition unit is used for acquiring a video access log of a user;
the extraction unit is used for extracting video playing records in the video access logs according to preset statistical time;
and the processing unit is used for cleaning the extracted video playing record according to a preset video cleaning rule to generate a video playing record list of the user, wherein the video playing record list records a plurality of pieces of video information watched by the user.
Preferably, the preset video cleansing rule comprises at least one of the following rules:
the rule of filtering out videos whose playing duration does not reach a threshold, the rule of filtering out videos that the user skipped through while watching, and the rule of merging serially played television episodes into one album video.
Preferably, the establishing module comprises:
the number obtaining unit is used for obtaining the playing number of each video in the video playing record list;
the correction processing unit is used for extracting a video to be corrected, correcting the playing times of the video to be corrected to obtain the corrected playing times of the video to be corrected, and taking the corrected playing times as the playing times of the video to be corrected;
and the mapping establishing unit is used for establishing a mapping table of the video identifier of each video and the corresponding playing times.
Preferably, the correction processing unit includes:
the video extracting unit is used for extracting the video to be corrected from the video playing recording list according to the heat information of each video;
a calculating subunit, configured to set a correction coefficient of the video to be corrected, calculate the corrected playing times of the video to be corrected according to a preset formula M = N^k, and take the corrected playing times as the playing times of the video to be corrected, wherein N is the initial playing times and k is the correction coefficient.
Compared with the prior art, in the invention the video information watched by the user is recorded into a video playing record list, a corresponding video identifier is set for each video, a mapping table between the video identifiers and the video playing times is established, and finally vector training is performed on the mapping table to generate a vector for each video. This provides a method of representing video information as a vector. Because video information is expressed in vector form, similarity calculation between videos can be performed on the vectors, and the similarity relation between videos can be accurately identified through the vector similarity calculation. The vector representation can therefore effectively serve as the basis for work such as video retrieval, video similarity detection, and video recommendation, thereby improving the user's experience.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art from the provided drawings without creative effort.
Fig. 1 is a flowchart illustrating a method for representing video information according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a method for establishing a video mapping table according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a device for representing video information according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a schematic flowchart of a method for representing video information according to an embodiment of the present invention is shown, where the method may include the following steps:
s11, acquiring a video playing record list of a user, wherein the video playing record list records a plurality of pieces of video information watched by the user;
the video play recording list is generated according to a video access log of a user, and in particular, in another embodiment of the present invention, a method for generating a video play recording list is further provided, where the method includes:
acquiring a video access log of a user;
extracting video playing records in the video access log according to preset statistical time;
and cleaning the extracted video playing record according to a preset video cleaning rule to generate a video playing record list of the user, wherein the video playing record list records a plurality of video information watched by the user.
It can be understood that the back end of each video client, or the video server, has a corresponding logging system that records the user's video access information, so the user's video-related information can be obtained from the user's access log.
Because the logging system records all the information about the videos watched by every user, the amount of information is huge. To reduce the workload, and to make subsequent video recommendations for the user more timely and accurate, a statistical time needs to be set before the user's video playing record list is generated, since a user's watching preference changes over time. For example, during the period when a certain star is popular, the user may frequently watch videos related to that star, and as time passes the user may instead frequently watch videos related to another star. The video playing records are therefore counted within a time period, for example on a per-day or per-week basis, and the preset statistical time depends on the final purpose, which is not limited by the invention.
After the user's video playing records are extracted, the user's video playing record list cannot be generated directly. To ensure the accuracy and objectivity of the video information, the extracted video playing records need to be preprocessed, and during preprocessing they are cleaned according to a preset video cleansing rule, wherein the preset video cleansing rule comprises at least one of the following rules:
the rule of filtering out videos whose playing duration does not reach a threshold, the rule of filtering out videos that the user skipped through while watching, and the rule of merging serially played television episodes into one album video.
For example, a video whose playing duration does not reach the threshold is a video that was played only briefly. Specifically, when a user selects a video to play on a video client or a web page, a video the user did not intend to watch may be selected because of a misoperation. In that case, the user usually notices the mistake within the first few seconds of playback and closes the video. The play is still recorded as a video play of the user in the video logging system, but the record has no value for analyzing the user's video watching habits, so it is cleaned out.
Similarly, videos that the user skipped through while watching are filtered out, because a video the user skips through is of little interest to the user, so such records are also cleaned out.
Meanwhile, serially played television episodes are merged into one album video to avoid distorting the subsequent play-count statistics. For example, if a user watches the first three episodes of a drama, the log records three plays even though the user is essentially watching the same drama, so such videos need to be merged into one album video so that the statistics are not affected.
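As an illustration only (not the implementation of this patent), the following Python sketch applies the three cleansing rules to raw play records; the record fields, the 30-second threshold, and the episode-to-album mapping are assumptions made for the example.

from collections import namedtuple

# One raw entry from the video access log (fields are assumed for illustration).
PlayRecord = namedtuple("PlayRecord", ["video_id", "play_seconds", "skipped", "album_id"])

MIN_PLAY_SECONDS = 30  # assumed threshold for "playing duration does not reach a threshold"

def clean_play_records(records):
    """Apply the three cleansing rules and return the cleaned video playing record list."""
    cleaned = []
    for r in records:
        if r.play_seconds < MIN_PLAY_SECONDS:  # rule 1: drop plays that are too short (likely misoperations)
            continue
        if r.skipped:                          # rule 2: drop videos the user skipped through
            continue
        # rule 3: map serially played episodes to their album so the drama is counted as one video
        cleaned.append(r.album_id if r.album_id is not None else r.video_id)
    return cleaned

records = [
    PlayRecord("ep1", 2400, False, "drama_A"),
    PlayRecord("ep2", 2300, False, "drama_A"),
    PlayRecord("trailer", 5, False, None),     # removed: playing duration below the threshold
]
print(clean_play_records(records))             # ['drama_A', 'drama_A']

Whether merged episode plays are further de-duplicated per viewing session is not specified here and is left to the implementation.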
S12, setting a video identifier corresponding to each video in the video playing record list;
the corresponding video identification is set for each video, so that the video can be marked by using simple identification words, and subsequent searching and analysis are facilitated. Specifically, for example, a label is set for a watched travel video as a travel, and a secondary label, such as tour-zhang nationality, drama-langas, etc., may be set for distinguishing more accurate information of each video.
S13, acquiring the playing times of each video in the video playing record list, and establishing a mapping table of the video identification of each video and the corresponding playing times;
on the basis of the technical solution of the present invention, referring to fig. 2, another embodiment of the present invention further provides a method for establishing a video mapping table, which may include:
s131, acquiring the playing times of each video in the video playing record list;
s132, extracting a video to be corrected, correcting the playing times of the video to be corrected to obtain the corrected playing times of the video to be corrected, and taking the corrected playing times as the playing times of the video to be corrected;
s133, establishing a mapping table of the video identification of each video and the corresponding playing times.
The playing times of each video in the video playing record list are counted, and then the mapping relation between the video identifier and the playing times is established. Before the mapping relation is established, however, the playing times of some videos need to be corrected, which specifically comprises the following steps:
extracting videos to be corrected from the video playing record list according to the heat information of each video;
setting a correction coefficient of the video to be corrected, calculating the corrected playing times of the video to be corrected according to a preset formula M = N^k, and taking the corrected playing times as the playing times of the video to be corrected, wherein N is the initial playing times and k is the correction coefficient.
The videos to be corrected are extracted according to the heat information of each video, where the heat information is mainly reflected in the video's recommendation information and ranking-list information. For example, some users habitually watch the videos near the top of the ranking list, and some users set up a page on the video client for receiving recommendation information, that is, they often watch the video information pushed by the video client. As a result, the playing times of hot videos are far greater than those of other videos, and to guarantee the objectivity of the generated result, the playing times of such videos need to be corrected. Since the playing frequency of a hot video is inevitably higher than that of other videos, a correction coefficient k with a value between 0 and 1 is set for the playing times so that they conform to the statistical rule; in this way the playing times of hot videos can be corrected, making them more objective and statistically meaningful.
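The sketch below shows one way the correction and the mapping table could be computed, reading the preset formula as M = N^k with k between 0 and 1; the hot-video set and the value k = 0.5 are assumptions made for the example.

from collections import Counter

def build_mapping_table(play_list, hot_videos, k=0.5):
    """play_list: list of video identifiers, one entry per play.
    hot_videos: identifiers extracted as videos to be corrected from the heat information.
    Returns {video identifier: playing times}, with corrected counts for hot videos."""
    counts = Counter(play_list)  # initial playing times N of each video
    table = {}
    for vid, n in counts.items():
        # corrected playing times M = N^k for hot videos, unchanged otherwise
        table[vid] = n ** k if vid in hot_videos else n
    return table

table = build_mapping_table(
    ["drama_A", "drama_A", "drama_A", "drama_A", "travel-Zhangjiajie"],
    hot_videos={"drama_A"},
    k=0.5,
)
# {'drama_A': 2.0, 'travel-Zhangjiajie': 1}

With k in (0, 1), the exponent damps large counts far more strongly than small ones, which matches the stated goal of keeping hot-video playing times statistically comparable to the rest.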
And S14, performing vector training on the mapping table to generate a vector of each video.
In this step, a vector training model is mainly used to generate the vector of each video. For example, the CBOW model is used for training, and a reasonable window is selected. The window is a concept in the CBOW model: if the window size is 5, for example, the 5 words before and the 5 words after a given word are considered. The mapping table is trained in this manner, and each video can be mapped into a high-dimensional space to obtain its vector.
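As a hedged illustration of this step (no library or training-corpus layout is specified above), the sketch below trains CBOW word2vec vectors with gensim, treating each user's cleaned play sequence as a "sentence" of video identifiers; the use of gensim, the toy sequences, and the vector size are assumptions.

from gensim.models import Word2Vec

# Each user's cleaned video playing record list, ordered by playing time (toy data).
user_play_sequences = [
    ["drama_A", "travel-Zhangjiajie", "drama_A", "travel-Jiuzhaigou"],
    ["travel-Zhangjiajie", "travel-Jiuzhaigou", "animation_B"],
]

model = Word2Vec(
    sentences=user_play_sequences,
    vector_size=100,  # dimensionality of the high-dimensional space
    window=5,         # consider the 5 videos before and after the current one
    min_count=1,
    sg=0,             # sg=0 selects the CBOW model
)

video_vector = model.wv["travel-Zhangjiajie"]  # the learned vector of one video

Repeating a video identifier within a sequence according to its corrected playing times is one possible way to feed the mapping table into the training; the exact mapping from the table to the training corpus is left open here.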
When this video information representation method is applied to work such as video retrieval, video similarity detection, and video recommendation, the values of the corresponding dimensions of the video vectors are used in related calculations, such as the cosine value or the Euclidean distance between two vectors, so that the similarity of the two corresponding videos can be judged. For example, the vector similarity between two videos with the video tags Jiuzhaigou and Zhangjiajie is much greater than the vector similarity between two videos with the video tags Jiuzhaigou and animation. Representing video information as vectors therefore provides a basis for judgment and analysis in video recommendation, retrieval, and the like.
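A minimal NumPy sketch of the similarity calculation described above; the three-dimensional toy vectors are invented purely for illustration.

import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def euclidean_distance(a, b):
    return float(np.linalg.norm(a - b))

v_jiuzhaigou = np.array([0.9, 0.1, 0.3])
v_zhangjiajie = np.array([0.8, 0.2, 0.35])
v_animation = np.array([0.1, 0.9, 0.6])

# Two scenic-spot videos score closer to each other than a scenic-spot video
# and an animation video, which is the basis for recommendation and retrieval.
print(cosine_similarity(v_jiuzhaigou, v_zhangjiajie))   # ~0.99, high similarity
print(cosine_similarity(v_jiuzhaigou, v_animation))     # ~0.35, low similarity
print(euclidean_distance(v_jiuzhaigou, v_zhangjiajie))  # small distance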
According to the technical solution of the video information representation method disclosed in the embodiment of the present invention, the information on the plurality of videos watched by the user is recorded into a video playing record list, a corresponding video identifier is set for each video, a mapping table between the video identifiers and the video playing times is established, and finally vector training is performed on the mapping table to generate a vector for each video. This provides a method of representing video information as a vector. Because video information is expressed in vector form, similarity calculation between videos can be performed on the vectors, and the similarity relation between videos can be accurately identified through the vector similarity calculation. The vector representation can therefore effectively serve as the basis for work such as video retrieval, video similarity detection, and video recommendation, thereby improving the user's experience.
Corresponding to the technical solution of the method for representing video information provided in the embodiment of the present invention, an embodiment of the present invention further provides a device for representing video information, and referring to fig. 3, the device may include:
the system comprises an acquisition module 1, a processing module and a display module, wherein the acquisition module is used for acquiring a video playing recording list of a user, and the video playing recording list records a plurality of pieces of video information watched by the user;
the setting module 2 is used for setting a video identifier corresponding to each video in the video playing recording list;
the establishing module 3 is used for acquiring the playing times of each video in the video playing record list and establishing a mapping table of the video identification of each video and the corresponding playing times;
and the generating module 4 is used for performing vector training on the mapping table to generate a vector of each video.
Specifically, the obtaining module 1 includes:
the acquisition unit is used for acquiring a video access log of a user;
the extraction unit is used for extracting video playing records in the video access logs according to preset statistical time;
and the processing unit is used for cleaning the extracted video playing record according to a preset video cleaning rule to generate a video playing record list of the user, wherein the video playing record list records a plurality of pieces of video information watched by the user.
Correspondingly, the preset video cleansing rule comprises at least one of the following rules:
the rule of filtering out videos whose playing duration does not reach a threshold, the rule of filtering out videos that the user skipped through while watching, and the rule of merging serially played television episodes into one album video.
Specifically, the establishing module 3 includes:
the number obtaining unit is used for obtaining the playing number of each video in the video playing record list;
the correction processing unit is used for extracting a video to be corrected, correcting the playing times of the video to be corrected to obtain the corrected playing times of the video to be corrected, and taking the corrected playing times as the playing times of the video to be corrected;
and the mapping establishing unit is used for establishing a mapping table of the video identifier of each video and the corresponding playing times.
Accordingly, the correction processing unit includes:
the video extracting unit is used for extracting the video to be corrected from the video playing recording list according to the heat information of each video;
a calculating subunit, configured to set a correction coefficient of the video to be corrected, calculate the corrected playing times of the video to be corrected according to a preset formula M = N^k, and take the corrected playing times as the playing times of the video to be corrected, wherein N is the initial playing times and k is the correction coefficient.
In the video information representation apparatus provided in the embodiment of the present invention, the acquisition module acquires the user's video playing record list, the setting module sets the video identifier of each video, the establishing module establishes the mapping table between the video identifiers and the corresponding video playing times, and finally the generating module generates the vector of each video. This provides a way of representing video information as a vector. Because video information is expressed in vector form, similarity calculation between videos can be performed on the vectors, and the similarity relation between videos can be accurately identified through the vector similarity calculation. The vector thus serves as a video representation that can effectively be used as the basis for work such as video retrieval, video similarity detection, and video recommendation, thereby improving the user's experience.
The terms "first" and "second," and the like in the description and claims of the present invention and the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "comprising" and "having," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not set forth for a listed step or element but may include steps or elements not listed.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. A method for representing video information, the method comprising:
acquiring a video playing recording list of a user, wherein the video playing recording list records a plurality of pieces of video information watched by the user;
setting a video identifier corresponding to each video in the video playing recording list;
acquiring the playing times of each video in the video playing record list, and establishing a mapping table of the video identification of each video and the corresponding playing times;
performing vector training on the mapping table to map each video to a high-dimensional space to obtain a vector of each video;
the obtaining of the playing times of each video in the video playing record list and the establishing of the mapping table of the video identifier of each video and the corresponding playing times include:
acquiring the playing times of each video in the video playing record list;
extracting a video to be corrected, correcting the playing times of the video to be corrected to obtain the corrected playing times of the video to be corrected, and taking the corrected playing times as the playing times of the video to be corrected;
and establishing a mapping table of the video identification of each video and the corresponding playing times.
2. The method of claim 1, wherein obtaining the video play record list of the user comprises:
acquiring a video access log of a user;
extracting video playing records in the video access log according to preset statistical time;
and cleaning the extracted video playing record according to a preset video cleaning rule to generate a video playing record list of the user, wherein the video playing record list records a plurality of video information watched by the user.
3. The method of claim 2, wherein the preset video cleansing rules comprise at least one of the following rules:
the rule of filtering out videos whose playing duration does not reach a threshold, the rule of filtering out videos that the user skipped through while watching, and the rule of merging serially played television episodes into one album video.
4. The method according to claim 1, wherein the extracting a video to be corrected, performing correction processing on the playing times of the video to be corrected to obtain corrected playing times of the video to be corrected, and taking the corrected playing times as the playing times of the video to be corrected comprises:
extracting videos to be corrected from the video playing record list according to the heat information of each video;
setting a correction coefficient of the video to be corrected, calculating the corrected playing times of the video to be corrected according to a preset formula M = N^k, and taking the corrected playing times as the playing times of the video to be corrected, wherein N is the initial playing times and k is the correction coefficient.
5. An apparatus for representing video information, comprising:
the acquisition module is used for acquiring a video playing record list of a user, wherein the video playing record list records a plurality of pieces of video information watched by the user;
the setting module is used for setting a video identifier corresponding to each video in the video playing record list;
the establishing module is used for acquiring the playing times of each video in the video playing record list and establishing a mapping table of the video identification of each video and the corresponding playing times;
the generating module is used for carrying out vector training on the mapping table so as to map each video to a high-dimensional space to obtain a vector of each video;
the establishing module comprises:
the number obtaining unit is used for obtaining the playing number of each video in the video playing record list;
the correction processing unit is used for extracting a video to be corrected, correcting the playing times of the video to be corrected to obtain the corrected playing times of the video to be corrected, and taking the corrected playing times as the playing times of the video to be corrected;
and the mapping establishing unit is used for establishing a mapping table of the video identifier of each video and the corresponding playing times.
6. The apparatus of claim 5, wherein the obtaining module comprises:
the acquisition unit is used for acquiring a video access log of a user;
the extraction unit is used for extracting video playing records in the video access logs according to preset statistical time;
and the processing unit is used for cleaning the extracted video playing record according to a preset video cleaning rule to generate a video playing record list of the user, wherein the video playing record list records a plurality of pieces of video information watched by the user.
7. The apparatus of claim 6, wherein the preset video cleansing rule comprises at least one of the following rules:
the rule of filtering out videos whose playing duration does not reach a threshold, the rule of filtering out videos that the user skipped through while watching, and the rule of merging serially played television episodes into one album video.
8. The apparatus according to claim 5, wherein the correction processing unit includes:
the video extracting unit is used for extracting the video to be corrected from the video playing recording list according to the heat information of each video;
a calculating subunit, configured to set a correction coefficient of the video to be corrected, calculate the corrected playing times of the video to be corrected according to a preset formula M = N^k, and take the corrected playing times as the playing times of the video to be corrected, wherein N is the initial playing times and k is the correction coefficient.
CN201711417051.4A 2017-12-25 2017-12-25 Video information representation method and device Active CN108153863B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711417051.4A CN108153863B (en) 2017-12-25 2017-12-25 Video information representation method and device


Publications (2)

Publication Number Publication Date
CN108153863A CN108153863A (en) 2018-06-12
CN108153863B true CN108153863B (en) 2021-12-17

Family

ID=62465520

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711417051.4A Active CN108153863B (en) 2017-12-25 2017-12-25 Video information representation method and device

Country Status (1)

Country Link
CN (1) CN108153863B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109492687A (en) * 2018-10-31 2019-03-19 北京字节跳动网络技术有限公司 Method and apparatus for handling information


Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7698728B2 (en) * 2003-11-12 2010-04-13 Home Box Office, Inc. Automated playlist chaser
CN104376003B (en) * 2013-08-13 2019-07-05 深圳市腾讯计算机系统有限公司 A kind of video retrieval method and device
CN104602126B (en) * 2013-10-31 2017-12-26 联想(北京)有限公司 A kind of information processing method and electronic equipment
WO2015066919A1 (en) * 2013-11-11 2015-05-14 深圳锐取信息技术股份有限公司 Video training method and system, and video making method and system
CN104156472B (en) * 2014-08-25 2018-05-08 北京四达时代软件技术股份有限公司 A kind of video recommendation method and system
CN104243590B (en) * 2014-09-19 2017-10-13 广州华多网络科技有限公司 Resource object recommends method and apparatus
CN104394499B (en) * 2014-11-21 2016-06-22 华南理工大学 Based on the Virtual Sound playback equalizing device and method that audiovisual is mutual
CN106407418A (en) * 2016-09-23 2017-02-15 Tcl集团股份有限公司 A face identification-based personalized video recommendation method and recommendation system
CN107249145B (en) * 2017-05-05 2019-08-02 中广热点云科技有限公司 A kind of method of pushing video
CN107341272A (en) * 2017-08-25 2017-11-10 北京奇艺世纪科技有限公司 A kind of method for pushing, device and electronic equipment

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101887459A (en) * 2010-06-28 2010-11-17 中国科学院计算技术研究所 Network video topic detection method and system thereof
CN102006506A (en) * 2010-11-24 2011-04-06 深圳市同洲电子股份有限公司 Video server as well as hierarchical storage management method and device of same
CN103020094A (en) * 2011-12-19 2013-04-03 北京捷成世纪科技股份有限公司 Method for counting video playing times
CN104486649A (en) * 2014-12-18 2015-04-01 北京百度网讯科技有限公司 Video content rating method and device
CN105808537A (en) * 2014-12-29 2016-07-27 Tcl集团股份有限公司 A Storm-based real-time recommendation method and a system therefor
CN105069041A (en) * 2015-07-23 2015-11-18 合一信息技术(北京)有限公司 Video user gender classification based advertisement putting method
CN105282565A (en) * 2015-09-29 2016-01-27 北京奇艺世纪科技有限公司 Video recommendation method and device
CN105847984A (en) * 2016-03-25 2016-08-10 乐视控股(北京)有限公司 Video recommending method and apparatus
CN106303588A (en) * 2016-08-22 2017-01-04 乐视控股(北京)有限公司 Video recommendation method, client and server
CN107391687A (en) * 2017-07-24 2017-11-24 华中师范大学 A kind of mixing commending system towards local chronicle website

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on analysis methods for Internet hot videos (互联网热点视频分析方法研究); 王仝杰 (Wang Tongjie); 《广播电视信息》 (Radio & Television Information); 2014-08-15 (No. 08); pp. 43-45, 50 *

Also Published As

Publication number Publication date
CN108153863A (en) 2018-06-12

Similar Documents

Publication Publication Date Title
US20230306057A1 (en) Relevance-Based Image Selection
US10575037B2 (en) Video recommending method, server, and storage media
US20220116347A1 (en) Location resolution of social media posts
CN101281540B (en) Apparatus, method and computer program for processing information
CN108520046B (en) Method and device for searching chat records
US20130148898A1 (en) Clustering objects detected in video
US8706655B1 (en) Machine learned classifiers for rating the content quality in videos using panels of human viewers
CN102929966B (en) A kind of for providing the method and system of personalized search list
CN112738556B (en) Video processing method and device
CN111797820B (en) Video data processing method and device, electronic equipment and storage medium
WO2017173801A1 (en) Personalized multimedia recommendation method and apparatus
CN108197336B (en) Video searching method and device
CN112199582A (en) Content recommendation method, device, equipment and medium
CN106126698B (en) Retrieval pushing method and system based on Lucence
JP2014153977A (en) Content analysis device, content analysis method, content analysis program, and content reproduction system
CN108153863B (en) Video information representation method and device
CN113407772B (en) Video recommendation model generation method, video recommendation method and device
US20230044146A1 (en) Video processing method, video searching method, terminal device, and computer-readable storage medium
CN115221267A (en) Consumer portrait generation method and device
CN110139134B (en) Intelligent personalized bullet screen pushing method and system
CN112417956A (en) Information recommendation method and device, electronic equipment and computer-readable storage medium
CN105868271A (en) Name statistics method and apparatus
CN113472834A (en) Object pushing method and device
CN117221669B (en) Bullet screen generation method and device
JP6858003B2 (en) Classification search system

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant