CN104391960A - Video annotation method and system - Google Patents


Info

Publication number
CN104391960A
CN104391960A (application CN201410714405.1A)
Authority
CN
China
Prior art keywords
video
labeling
markup information
threshold value
mark
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410714405.1A
Other languages
Chinese (zh)
Other versions
CN104391960B (en)
Inventor
何裕南
杨琛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing QIYI Century Science and Technology Co., Ltd.
Original Assignee
Beijing QIYI Century Science and Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co., Ltd.
Priority to CN201410714405.1A priority Critical patent/CN104391960B/en
Publication of CN104391960A publication Critical patent/CN104391960A/en
Application granted granted Critical
Publication of CN104391960B publication Critical patent/CN104391960B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/7867Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06KRECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K9/00624Recognising scenes, i.e. recognition of a whole field of perception; recognising scene-specific objects
    • G06K9/00711Recognising video content, e.g. extracting audiovisual features from movies, extracting representative key-frames, discriminating news vs. sport content
    • G06K9/00718Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/232Content retrieval operation locally within server, e.g. reading video streams from disk arrays
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8455Structuring of content, e.g. decomposing content into time segments involving pointers to the content, e.g. pointers to the I-frames of the video stream

Abstract

An embodiment of the invention provides a video annotation method and system. The method includes the following steps: a server provides an annotation interface; while a video playback terminal plays a video, it generates annotation information for a video image and submits it to the server through the annotation interface; the server receives the annotation information and extracts the corresponding video segment according to it; the server then judges whether the video segment contains a video annotation whose overlap with the annotation information reaches an overlap threshold, merges the annotation information into that video annotation if it does, and generates a new video annotation from the annotation information if it does not.

Description

Video annotation method and system
Technical field
The present invention relates to the field of Internet technology, and in particular to a video annotation method and system.
Background technology
Video annotation is a new function currently provided to users during online video playback. Video annotation marks out an element appearing in a video image, such as a person, an object, or a scene, and shows the user information related to that element, or provides a display link related to it.
For example, Figure 1A shows a girl's face presented in a video image. The wire frame in the figure marks the annotated region of the image; in Figure 1A, the glasses the girl is wearing are the annotated element. When the user's pointer rests on the annotated region, related information about the annotated element (the glasses the girl is wearing) pops up over the image, as shown in Figure 1B.
While watching a video, a user can both view existing annotations and create annotations for other viewers to see. In the prior art, however, when a large number of users annotate video images, the element shown in the same or a similar region of an image is often annotated repeatedly, which easily makes the display of related information confusing.
Summary of the invention
In view of this, an object of the present invention is to provide a video annotation method and system that merge annotations of the same region, so that annotation information is displayed in an orderly manner.
To achieve the above object, the present invention provides the following technical solutions:
A video annotation method, the method comprising:
a server provides an annotation interface;
while a video playback terminal is playing a video, it generates annotation information for a video image and submits it to the server through the annotation interface; the server receives the annotation information and extracts the corresponding video section according to it;
the server judges whether the video section contains a video annotation whose degree of coincidence with the annotation information reaches a coincidence threshold; if so, the annotation information is merged into that video annotation; if not, a new video annotation is generated from the annotation information.
Generating annotation information for a video image specifically comprises:
the video playback terminal annotates a fixed region in a particular video image and takes the information corresponding to the annotated image region as the annotation information;
the annotation information then comprises a video number, an annotation time, and an annotation region;
the video number is the ID of the annotated video; the annotation time is the playback time of the annotated video when the annotated image is displayed; the annotation region is the coordinate range of the annotated video image covered by the video annotation.
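The annotation information described above can be sketched as a small record type. The field names below are illustrative, not from the patent; only the three fields themselves (video number, annotation time, annotation region) are specified by the disclosure:

```python
from dataclasses import dataclass

@dataclass
class AnnotationInfo:
    """One submitted annotation: the annotated video's ID, the playback
    time at which the annotated image is shown, and the coordinate
    range the annotation covers (field names are hypothetical)."""
    video_number: str   # ID of the annotated video, e.g. "00001"
    mark_time: int      # playback time in seconds
    region: tuple       # (x_min, x_max, y_min, y_max)
```

With the numbers used later in the detailed embodiment, a submission would be `AnnotationInfo("00001", 26, (225, 324, 105, 188))`.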
Extracting the corresponding video section according to the annotation information specifically comprises:
dividing the video into sections according to image content, and building a video index for the sections by playback time;
looking up the corresponding video index by the video number, querying the video index with the annotation time, and obtaining the video section corresponding to that annotation time.
The method further comprises:
when no corresponding video index is found by the video number, building a video index for the video corresponding to the video number, and obtaining the newly built video index.
The annotation region and the video annotation comprise coordinate data, and judging whether the video section contains a video annotation whose degree of coincidence with the annotation information reaches the coincidence threshold specifically comprises:
presetting the coincidence threshold, and obtaining the video annotations already present in the video section;
computing the difference between the coordinate data of the annotation region and the coordinate data of a video annotation in the video section;
if the difference is not greater than the coincidence threshold, deeming the degree of coincidence between the annotation region and the video annotation to reach the coincidence threshold.
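One concrete reading of this comparison is the following Python sketch. The function name, the tuple representation of a rectangle, and the use of absolute per-coordinate differences are assumptions; the patent only states that coordinate differences are compared against a preset threshold:

```python
def coincides(region_a, region_b, threshold):
    """Return True when every coordinate of region_a differs from the
    corresponding coordinate of region_b by no more than threshold.
    Regions are (x_min, x_max, y_min, y_max) tuples."""
    return all(abs(a - b) <= threshold
               for a, b in zip(region_a, region_b))
```

With the worked numbers of the later embodiment, regions x ∈ (225, 324), y ∈ (105, 188) and x ∈ (220, 320), y ∈ (100, 180) give differences (5, 4, 5, 8), all of which are within a threshold of 10, so the regions are deemed to coincide.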
The method further comprises:
if the video section contains no video annotation whose degree of coincidence with the annotation information reaches the coincidence threshold, generating a video annotation from the annotation information.
A video annotation system, the system comprising:
an extraction module, configured to receive the annotation information submitted by the video playback terminal through the annotation interface, and to extract the corresponding video section according to the annotation information;
a judgment module, configured to judge whether the video section contains a video annotation whose degree of coincidence with the annotation information reaches the coincidence threshold and, if so, to merge the annotation information into that video annotation.
The annotation information comprises:
a video number, an annotation time, and an annotation region.
The extraction module comprises:
a receiving unit, configured to receive the annotation information submitted by the video playback terminal;
an indexing unit, configured to divide the video into sections according to image content, and to build a video index for the sections by playback time;
a query unit, configured to look up the corresponding video index by the video number, query the video index with the annotation time, and obtain the video section corresponding to that annotation time.
The judgment module comprises:
a setting unit, configured to preset the coincidence threshold and obtain the video annotations already present in the video section;
a computing unit, configured to compute the difference between the coordinate data of the annotation region and the coordinate data of a video annotation in the video section, and to deem the degree of coincidence between the annotation region and the video annotation to reach the coincidence threshold when the difference is not greater than the coincidence threshold.
The above technical solutions show the beneficial effect of the present invention: by merging video annotations in identical or nearby regions, duplicate video annotations are avoided, and the related information of video annotations is displayed clearly and in an orderly manner.
Accompanying drawing explanation
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are merely some embodiments of the present invention; a person of ordinary skill in the art may derive other drawings from them without creative effort.
Figures 1A to 1D are schematic diagrams of video annotation;
Figure 2 is a flowchart of the method according to an embodiment of the present invention;
Figure 3 is a flowchart of the method according to another embodiment of the present invention;
Figure 4 is a schematic diagram of the system architecture according to an embodiment of the present invention.
Embodiment
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on these embodiments without creative effort fall within the protection scope of the present invention.
Figures 1A and 1B make the concept of video annotation clear. Since annotation technology lets every user annotate, it easily happens that many different users annotate identical or nearby regions, possibly associating identical or different related information with the same region. Figure 1C shows three overlapping wire-frame regions annotated by different users: when the user's pointer rests in the overlapping region, the related information that appears may belong to any of the annotations. When a large number of users annotate, leaving dozens or even hundreds of annotations in one region, the resulting confusion in the display of related information is easy to imagine. This is a problem that urgently needs solving in the prior art.
The present invention solves this technical problem by merging video annotations in identical or nearby regions. Specifically, the present invention provides a video annotation method; Figure 2 shows a specific embodiment of the method:
Step 201: the server provides an annotation interface.
The server sets up the annotation interface and provides it to playback terminals, which gives each terminal the function of annotating videos; a playback terminal can then submit annotation information through the interface to create an annotation.
Step 202: while the video playback terminal is playing a video, it generates annotation information for a video image and submits it to the server through the annotation interface; the server receives the annotation information and extracts the corresponding video section according to it.
When the user of the playback terminal annotates a fixed region in a particular video image through the annotation interface, the information corresponding to the annotated image region is obtained and used as the annotation information.
The annotation information comprises a video number, an annotation time, and an annotation region. The video number is the ID of the annotated video; the annotation time is the playback time of the video when the annotated image is displayed; the annotation region is the coordinate range of the video image covered by the annotation, which corresponds to the wire frame in Figure 1B or Figure 1C.
The user sets the annotation information through the playback terminal and uploads it to the video server. The annotation information gives the server a definite form for the annotation, so that an actual video annotation can later be generated from it; related information corresponding to the annotation can also be added to the annotation information as needed.
In this embodiment, after the video server receives the annotation information, it finds the annotated video according to the information and extracts the corresponding video section from that video. A video section consists of all video frames within a time range around the annotation time; all frames in a video section show identical or similar images.
For example, consecutive images within the same camera shot are assigned to the same video section. Suppose a video shows, from 0″ to 20″, an image similar to the girl's face in Figure 1A; the shot then cuts, and from 20″ to 38″ the image is similar to the dog in Figure 1D. Then 0″~20″ of this video is one video section and 20″~38″ is another.
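The shot-based grouping just described can be sketched as follows. The sketch assumes a per-frame shot label is already available; in a real system that label would come from comparing image content, a step the patent leaves unspecified:

```python
from itertools import groupby

def split_into_sections(frames):
    """Group consecutive frames sharing a shot label into sections.
    frames: list of (time_seconds, shot_label) pairs in playback order.
    Returns a list of (start_time, end_time, shot_label) sections."""
    sections = []
    for label, run in groupby(frames, key=lambda f: f[1]):
        times = [t for t, _ in run]
        sections.append((times[0], times[-1], label))
    return sections
```

For the example above, frames labelled with one shot from 0″ to 20″ and another from then until 38″ yield exactly two sections.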
It will be appreciated that if only the frame (or the few frames) at the annotation time were annotated, the annotation would exist too briefly to be of use. Within a video section, the same image region usually shows the same element throughout, so in general a video annotation should not exist only at the annotation time but should persist for the whole duration of the video section.
Step 203: the server judges whether the video section contains a video annotation whose degree of coincidence with the annotation information reaches the coincidence threshold; if so, the annotation information is merged into that video annotation; if not, a new video annotation is generated from the annotation information.
To avoid the display confusion caused by repeated annotation, this embodiment checks, after extracting the video section, whether the section already contains a video annotation whose region coincides closely with the region in the annotation information. If such an annotation exists, no new video annotation is created for the annotation information; instead, the information is merged into the existing annotation and the original standardized annotation region is retained. Duplicate annotations are thereby avoided, and with no duplicate annotations in identical or nearby regions, the problem of confused related-information display naturally does not arise.
If, on the contrary, the section contains no annotation with a sufficient degree of coincidence, a new annotation is generated from the submitted annotation information.
The above technical solution shows the beneficial effect of this embodiment: by merging video annotations in identical or nearby regions, duplicate video annotations are avoided, and the related information of video annotations is displayed clearly and in an orderly manner.
Figure 3 shows another specific embodiment of the method of the present invention, which describes and discloses the method in more detail on the basis of the previous embodiment. The method of this embodiment comprises the following steps:
Step 301: divide the video into sections according to image content, and build a video index for the sections by playback time.
In this embodiment, the video server can divide the video into sections in advance and build the video index from that division.
For example, the video numbered 00001, with a running time of 1′, is divided into sections as follows: 0″~15″ is the first video section, 15″~35″ the second, 35″~53″ the third, and 53″~60″ the fourth. A video index is built for this division as shown in Table 1:
Section name    Time range
Section 1       0″~15″
Section 2       15″~35″
Section 3       35″~53″
Section 4       53″~60″
Table 1
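Table 1 and the lookup performed in step 303 can be sketched as below. The names are illustrative, and how the patent resolves an annotation time falling exactly on a section boundary (such as 15″) is not specified, so this sketch simply returns the first matching section:

```python
# Index mirroring Table 1: (section name, start time, end time) in seconds.
TABLE_1 = [
    ("Section 1", 0, 15),
    ("Section 2", 15, 35),
    ("Section 3", 35, 53),
    ("Section 4", 53, 60),
]

def lookup_section(index, mark_time):
    """Return the name of the first section whose time range
    contains mark_time, or None if no section matches."""
    for name, start, end in index:
        if start <= mark_time <= end:
            return name
    return None
```

For the annotation time of 26″ used in this embodiment, `lookup_section(TABLE_1, 26)` returns "Section 2", matching the text.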
Step 302: provide the annotation interface, and receive through it the annotation information submitted by the video playback terminal; the annotation information comprises a video number, an annotation time, and an annotation region.
In this embodiment the video number is 00001, the annotation time is 26″, and the annotation region is the coordinate range x ∈ (225, 324), y ∈ (105, 188) in the image.
Step 303: look up the corresponding video index by the video number, query the video index with the annotation time, and obtain the video section corresponding to that annotation time.
With video number 00001 and annotation time 26″, Section 2 of Table 1 is extracted.
It should also be noted that when no video index is found for the video number, no index has yet been built for the corresponding video. In that case the video server builds a video index for the video, obtains the newly built index, and then extracts the corresponding section from it.
Step 304: preset the coincidence threshold, and obtain the video annotations already present in the video section.
After Section 2 of Table 1 is extracted, the video annotations existing in that section are obtained. An existing video annotation has usually already been merged and had its region standardized. In this embodiment, suppose Section 2 contains one video annotation whose coordinate data are x ∈ (220, 320), y ∈ (100, 180). A coincidence threshold must also be set in order to judge the degree of coincidence between the annotation region in the annotation information and the existing annotation; in this embodiment the threshold is essentially a coordinate-data difference, set to 10.
Step 305: compute the difference between the coordinate data of the annotation region and the coordinate data of the video annotation in the video section.
Step 306: if the difference is not greater than the coincidence threshold, the degree of coincidence between the annotation region and the video annotation is deemed to reach the coincidence threshold.
Calculation gives the coordinate differences between the annotation region and the existing video annotation as Δx = (5, 4) and Δy = (5, 8). None of these differences exceeds the threshold of 10, so the degree of coincidence is deemed to reach the coincidence threshold.
Step 307: if a video annotation whose degree of coincidence with the annotation information reaches the coincidence threshold exists, merge the annotation information into that video annotation.
In this embodiment the degree of coincidence reaches the threshold, so no new video annotation is created for the annotation information; it is merged into the existing annotation instead. The annotation region keeps the standard range x ∈ (220, 320), y ∈ (100, 180), and the standard time of existence is the duration of Section 2, 15″~35″. If the annotation information also contains related information, that related information is associated with the video annotation.
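Steps 304 through 307 taken together amount to a merge-or-create flow, sketched below. The annotation store, the field names, and the merged-submission counter are assumptions; the patent only requires that the existing standardized region be retained and the new submission folded into it:

```python
def submit(annotations, new_region, threshold=10):
    """Merge new_region into a coinciding existing annotation, or
    create a new annotation. Each annotation keeps its canonical
    region; merging never moves it."""
    for ann in annotations:
        if all(abs(a - b) <= threshold
               for a, b in zip(ann["region"], new_region)):
            ann["merged"] += 1      # fold the submission in
            return ann
    ann = {"region": new_region, "merged": 0}
    annotations.append(ann)
    return ann
```

Submitting the region x ∈ (225, 324), y ∈ (105, 188) against an existing annotation at x ∈ (220, 320), y ∈ (100, 180) merges (all differences are at most 8, within the threshold of 10), and the stored region stays (220, 320, 100, 180); a region far from any existing annotation is appended as a new one.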
Step 308: if the video section contains no video annotation whose degree of coincidence with the annotation information reaches the coincidence threshold, generate a video annotation from the annotation information.
Preferably, if no existing annotation reaches the coincidence threshold, no video annotation yet exists within the annotation region, and a new video annotation is created from the annotation information.
The above technical solution shows the beneficial effect of this embodiment: the overall technical scheme of the method is more complete and its disclosure more thorough.
Figure 4 shows a specific embodiment of the system of the present invention. The system implements the method of the previous embodiments; the two technical solutions are essentially the same, and the corresponding descriptions in the previous embodiments apply equally to this embodiment. The system comprises:
an extraction module, configured to receive the annotation information submitted by the video playback terminal through the annotation interface, and to extract the corresponding video section according to the annotation information; the annotation information comprises a video number, an annotation time, and an annotation region.
The extraction module comprises:
a receiving unit, configured to receive the annotation information submitted by the video playback terminal;
an indexing unit, configured to divide the video into sections according to image content, and to build a video index for the sections by playback time;
a query unit, configured to look up the corresponding video index by the video number, query the video index with the annotation time, and obtain the video section corresponding to that annotation time.
The system further comprises a judgment module, configured to judge whether the video section contains a video annotation whose degree of coincidence with the annotation information reaches the coincidence threshold and, if so, to merge the annotation information into that video annotation.
The judgment module comprises:
a setting unit, configured to preset the coincidence threshold and obtain the video annotations already present in the video section;
a computing unit, configured to compute the difference between the coordinate data of the annotation region and the coordinate data of a video annotation in the video section, and to deem the degree of coincidence between the annotation region and the video annotation to reach the coincidence threshold when the difference is not greater than the coincidence threshold.
The above technical solution shows the beneficial effect of the system of this embodiment: by merging video annotations in identical or nearby regions, duplicate video annotations are avoided, and the related information of video annotations is displayed clearly and in an orderly manner.
The above are only preferred embodiments of the present invention. It should be pointed out that a person skilled in the art may make several improvements and modifications without departing from the principles of the present invention, and such improvements and modifications shall also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A video annotation method, characterized in that the method comprises:
a server provides an annotation interface;
while a video playback terminal is playing a video, it generates annotation information for a video image and submits it to the server through the annotation interface; the server receives the annotation information and extracts the corresponding video section according to it;
the server judges whether the video section contains a video annotation whose degree of coincidence with the annotation information reaches a coincidence threshold; if so, the annotation information is merged into that video annotation; if not, a new video annotation is generated from the annotation information.
2. The method according to claim 1, characterized in that generating annotation information for a video image specifically comprises:
the video playback terminal annotates a fixed region in a particular video image and takes the information corresponding to the annotated image region as the annotation information;
the annotation information then comprises a video number, an annotation time, and an annotation region;
the video number is the ID of the annotated video; the annotation time is the playback time of the annotated video when the annotated image is displayed; the annotation region is the coordinate range of the annotated video image covered by the video annotation.
3. The method according to claim 2, characterized in that extracting the corresponding video section according to the annotation information specifically comprises:
dividing the video into sections according to image content, and building a video index for the sections by playback time;
looking up the corresponding video index by the video number, querying the video index with the annotation time, and obtaining the video section corresponding to that annotation time.
4. The method according to claim 3, characterized in that the method further comprises:
when no corresponding video index is found by the video number, building a video index for the video corresponding to the video number, and obtaining the newly built video index.
5. The method according to claim 2, characterized in that the annotation region and the video annotation comprise coordinate data, and judging whether the video section contains a video annotation whose degree of coincidence with the annotation information reaches the coincidence threshold specifically comprises:
presetting the coincidence threshold, and obtaining the video annotations already present in the video section;
computing the difference between the coordinate data of the annotation region and the coordinate data of a video annotation in the video section;
if the difference is not greater than the coincidence threshold, deeming the degree of coincidence between the annotation region and the video annotation to reach the coincidence threshold.
6. The method according to any one of claims 1 to 5, characterized in that the method further comprises:
if the video section contains no video annotation whose degree of coincidence with the annotation information reaches the coincidence threshold, generating a video annotation from the annotation information.
7. A video annotation system, characterized in that the system comprises:
an extraction module, configured to receive the annotation information submitted by the video playback terminal through the annotation interface, and to extract the corresponding video section according to the annotation information;
a judgment module, configured to judge whether the video section contains a video annotation whose degree of coincidence with the annotation information reaches the coincidence threshold and, if so, to merge the annotation information into that video annotation.
8. The system according to claim 7, characterized in that the annotation information comprises:
a video number, an annotation time, and an annotation region.
9. The system according to claim 8, characterized in that the extraction module comprises:
a receiving unit, configured to receive the annotation information submitted by the video playback terminal;
an indexing unit, configured to divide the video into sections according to image content, and to build a video index for the sections by playback time;
a query unit, configured to look up the corresponding video index by the video number, query the video index with the annotation time, and obtain the video section corresponding to that annotation time.
10. The system according to claim 8, characterized in that the judgment module comprises:
a setting unit, configured to preset the coincidence threshold and obtain the video annotations already present in the video section;
a computing unit, configured to compute the difference between the coordinate data of the annotation region and the coordinate data of a video annotation in the video section, and to deem the degree of coincidence between the annotation region and the video annotation to reach the coincidence threshold when the difference is not greater than the coincidence threshold.
CN201410714405.1A 2014-11-28 2014-11-28 Video annotation method and system Active CN104391960B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410714405.1A CN104391960B (en) 2014-11-28 2014-11-28 Video annotation method and system

Publications (2)

Publication Number Publication Date
CN104391960A true CN104391960A (en) 2015-03-04
CN104391960B CN104391960B (en) 2019-01-25

Family

ID=52609864

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410714405.1A Active CN104391960B (en) 2014-11-28 2014-11-28 Video annotation method and system

Country Status (1)

Country Link
CN (1) CN104391960B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106101573A (en) * 2016-06-24 2016-11-09 Global Tone Communication Technology Co., Ltd. Video annotation anchoring and matching method
CN106101842A (en) * 2016-06-27 2016-11-09 Hangzhou Danghong Technology Co., Ltd. Advertisement editing system based on intelligent technology
CN106303726A (en) * 2016-08-30 2017-01-04 Beijing QIYI Century Science and Technology Co., Ltd. Video tag adding method and device
CN108521592A (en) * 2018-04-23 2018-09-11 Vtron Group Co., Ltd. Annotation information processing method, apparatus, system, computer device and storage medium
CN110347866A (en) * 2019-07-05 2019-10-18 Lenovo (Beijing) Co., Ltd. Information processing method, apparatus, storage medium and electronic device
CN110377567A (en) * 2019-07-25 2019-10-25 Suzhou AISpeech Information Technology Co., Ltd. Annotation method and system for multimedia files
CN110971964A (en) * 2019-12-12 2020-04-07 Tencent Technology (Shenzhen) Co., Ltd. Intelligent commentary generation and playing method, apparatus, device and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101398843A (en) * 1999-10-11 2009-04-01 Electronics and Telecommunications Research Institute Video summary description scheme and method
CN101589383A (en) * 2006-12-22 2009-11-25 Google Inc. Annotation framework for video
WO2011064674A2 (en) * 2009-11-30 2011-06-03 France Telecom Content management system and method of operation thereof
CN103024480A (en) * 2012-12-28 2013-04-03 Hangzhou Taiyi Zhishang Technology Co., Ltd. Method for embedding advertisements in video
CN103442308A (en) * 2013-08-22 2013-12-11 Baidu Online Network Technology (Beijing) Co., Ltd. Audio and video file labeling method and device and information recommendation method and device
CN103488661A (en) * 2013-03-29 2014-01-01 Wu Han Audio/video file annotation system
US20140079325A1 (en) * 2012-09-14 2014-03-20 Buffalo Inc. Image information processing system, image information processor and recording media
CN103970906A (en) * 2014-05-27 2014-08-06 Baidu Online Network Technology (Beijing) Co., Ltd. Method and device for establishing video tags and method and device for displaying video contents

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhou Jianfang et al.: "Design and Implementation of an Annotation-Based Sports Video Management System", Hubei Sports Science and Technology *
Wang Han et al.: "Video Annotation Using Heterogeneous Internet Image Groups", Chinese Journal of Computers *

Also Published As

Publication number Publication date
CN104391960B (en) 2019-01-25

Similar Documents

Publication Publication Date Title
CN104391960A (en) Video annotation method and system
CN104994425B (en) Video identification method and apparatus
CN103634605B (en) Processing method and device for video images
US9357267B2 (en) Synchronizing video content with extrinsic data
MX359407B (en) Album display method and device.
CN106303726B (en) Video tag adding method and device
CN109089154B (en) Video extraction method, device, equipment and medium
US20130138673A1 (en) Information processing device, information processing method, and program
CN103929669A (en) Interactive video generator, player, generating method and playing method
CN103974061A (en) Play test method and system
KR102173417B1 (en) Fingerprint layout for content intellectual culture
CN105898556A (en) Plug-in subtitle automatic synchronization method and device
CN106484122A (en) Virtual reality device and browsing-trace tracking method thereof
US20180358050A1 (en) Media-Production System With Social Media Content Interface Feature
CN103475911A (en) Television information providing method and system based on video characteristics
CN105138551A (en) Method and apparatus for obtaining user interest tag
CN105049910A (en) Video processing method and device
CN109327715B (en) Video risk identification method, device and equipment
CN104317860A (en) Evaluation device of stereoscopic advertisement player and evaluation method of evaluation device
CN105100920A (en) Video preview method and device
WO2021068558A1 (en) Simultaneous subtitle translation method, smart television, and storage medium
US9542976B2 (en) Synchronizing videos with frame-based metadata using video content
KR101583745B1 (en) Apparatus for real-time video clip editing and apparatus for supporting turn-based game using the same
CN104363477A (en) Method and system for bidding information based on video annotation
KR101749420B1 (en) Apparatus and method for extracting representation image of video contents using closed caption

Legal Events

Date Code Title Description
PB01 Publication
C06 Publication
SE01 Entry into force of request for substantive examination
C10 Entry into substantive examination
GR01 Patent grant