CN104391960B - Video labeling method and system - Google Patents
Video labeling method and system
- Publication number
- CN104391960B (application CN201410714405.1A)
- Authority
- CN
- China
- Prior art keywords
- video
- labeling
- markup information
- section
- mark
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/7867—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/232—Content retrieval operation locally within server, e.g. reading video streams from disk arrays
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/845—Structuring of content, e.g. decomposing content into time segments
- H04N21/8455—Structuring of content, e.g. decomposing content into time segments involving pointers to the content, e.g. pointers to the I-frames of the video stream
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Databases & Information Systems (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Signal Processing (AREA)
- Library & Information Science (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
An embodiment of the present invention provides a video labeling method and system. The method comprises: setting a labeling interface at the server end; during playback, a video playing terminal generates markup information for a video image and submits it to the server through the labeling interface; the server receives the markup information and extracts the corresponding video section according to it; the server then judges whether the video section already contains a video label whose overlap with the markup information reaches a coincidence threshold; if such a label exists, the markup information is merged into it; if not, a new video label is generated from the markup information.
Description
Technical field
The present invention relates to the field of Internet technology, and in particular to a video labeling method and system.
Background technique
Video labeling is a new function offered to users during online video playback. A video label marks out an element (a person, object, or scene) that appears in a video image and shows the user information related to that element, or provides a link to such information.

For example, Figure 1A shows a "girl's face" presented in a video image. The region circled by the wire frame indicates the labeled region of the image, and the element being labeled is "the glasses the girl wears". When the user's pointer rests on the labeled region, related information about the labeled element (the girl's glasses) pops up in the image, as shown in Figure 1B.

While watching a video, users can both view existing labels and create labels of their own for other viewers to see. In the prior art, however, when a large number of users label video images, elements shown in the same or similar regions are often labeled repeatedly, which easily makes the display of related information chaotic.
Summary of the invention
In view of this, the purpose of the present invention is to provide a video labeling method and system that merge labels by region, so that markup information is displayed in an orderly way.

To achieve the above object, the present invention adopts the following technical solutions:
A video labeling method, the method comprising:
setting a labeling interface at the server end;
during playback, a video playing terminal generates markup information for a video image and submits it to the server through the labeling interface; the server receives the markup information and extracts the corresponding video section according to it;
the server judges whether the video section contains a video label whose overlap with the markup information reaches a coincidence threshold; if such a label exists, the markup information is merged into it; if not, a video label is generated from the markup information.
Generating markup information for a video image specifically comprises:
the video playing terminal labels a fixed region in a particular video image, and the information corresponding to the identified image region serves as the markup information;
the markup information then includes a video number, a labeling moment, and a labeled region;
the video number is the ID of the labeled video; the labeling moment is the playback time of the video when the labeled image is displayed; the labeled region is the coordinate range of the video image covered by the label.
Extracting the corresponding video section according to the markup information specifically comprises:
dividing the video into several sections according to image content, and building a video index for the sections according to playback time;
looking up the corresponding video index by the video number, querying the index with the labeling moment, and obtaining the video section corresponding to the labeling moment.
The method further comprises:
when no corresponding video index is found for the video number, building a video index for the video with that number and then using the newly built index.
The labeled region and the video label both include coordinate data. Judging whether the video section contains a video label whose overlap with the markup information reaches the coincidence threshold specifically comprises:
presetting a coincidence threshold and obtaining the video labels already present in the video section;
calculating the difference between the coordinate data of the labeled region and the coordinate data of each video label in the section;
if the difference does not exceed the coincidence threshold, the overlap between the labeled region and the video label is considered to reach the threshold.
The method further comprises:
if the video section contains no video label whose overlap with the markup information reaches the coincidence threshold, generating a video label in the section according to the markup information.
A video labeling system, the system comprising:
an extraction module, configured to receive markup information set by a video playing terminal through the labeling interface and extract the corresponding video section according to the markup information;
a judgment module, configured to judge whether the video section contains a video label whose overlap with the markup information reaches a coincidence threshold, and if so, merge the markup information into that video label.
The markup information includes:
a video number, a labeling moment, and a labeled region.
The extraction module includes:
a receiving unit, configured to receive the markup information set by the video playing terminal;
an indexing unit, configured to divide the video into several sections according to image content and build a video index for the sections according to playback time;
a query unit, configured to look up the corresponding video index by the video number, query the index with the labeling moment, and obtain the video section corresponding to the labeling moment.
The judgment module includes:
a setting unit, configured to preset a coincidence threshold and obtain the video labels already present in the video section;
a computing unit, configured to calculate the difference between the coordinate data of the labeled region and the coordinate data of a video label in the section, and, when the difference does not exceed the coincidence threshold, to consider the overlap between the labeled region and the video label to have reached the threshold.
As can be seen from the above technical solutions, the beneficial effect of the present invention is that merging video labels in the same or similar regions avoids creating duplicate labels, so that the related information of video labels is displayed clearly and in order.
Detailed description of the invention
To explain the technical solutions of the embodiments of the present invention, or of the prior art, more clearly, the drawings needed for the description of the embodiments are briefly introduced below. The drawings described here show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Figures 1A to 1D are schematic diagrams of video labeling;
Figure 2 is a flow chart of the method according to an embodiment of the present invention;
Figure 3 is a flow chart of the method according to another embodiment of the present invention;
Figure 4 is a structural diagram of the system according to an embodiment of the present invention.
Specific embodiment
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art from these embodiments without creative work fall within the protection scope of the present invention.
Referring to Figures 1A and 1B, the concept of video labeling can be clearly understood. Video labeling technology lets every user create labels, which easily leads to many different users labeling the same or similar region, possibly associating the same or different related information with it. In Figure 1C, three overlapping wire frames are regions labeled by different users; when the user's pointer rests in the overlapping region, the related information that pops up may belong to any of the labels. When a large number of users create labels, so that a single region carries dozens or even hundreds of them, the display of related information becomes predictably chaotic. This is an urgent problem to be solved in the prior art.
The present invention solves the above technical problem by merging video labels in the same or similar region. Specifically, the present invention provides a video labeling method; a specific embodiment of the method is shown in Figure 2:

Step 201: the server end sets a labeling interface.

The server sets a labeling interface and provides it to the playing terminal, which means the playing terminal gains the ability to label videos; through the labeling interface, the playing terminal can submit markup information to produce labels.
Step 202: during playback, the video playing terminal generates markup information for a video image and submits it to the server through the labeling interface; the server receives the markup information and extracts the corresponding video section according to it.

When the user of the playing terminal labels a fixed region in a particular video image through the labeling interface, the information corresponding to the identified image region serves as the markup information.
The markup information includes a video number, a labeling moment, and a labeled region. The video number is the ID of the labeled video; the labeling moment is the playback time of the video when the labeled image is displayed; the labeled region is the coordinate range of the video image covered by the label, similar to the wire frames in Figure 1B or Figure 1C.

The user sets the markup information through the playing terminal and uploads it to the video server; this specifies the concrete form of the label, so that the server can later generate the video label from the markup information. In addition, related information corresponding to the label can be added to the markup information as needed.
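The markup information described above can be sketched as a simple record. The patent does not prescribe a data format, so the field names below (`video_id`, `mark_time`, the coordinate ranges, `related_info`) are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class MarkupInfo:
    """One annotation submitted by a playing terminal (field names are illustrative)."""
    video_id: str                       # ID of the labeled video, e.g. "00001"
    mark_time: float                    # playback time (seconds) when the label was made
    x_range: Tuple[float, float]        # horizontal coordinate range of the labeled region
    y_range: Tuple[float, float]        # vertical coordinate range of the labeled region
    related_info: Optional[str] = None  # optional related information carried by the label

# The example values from the embodiment in Figure 3 (Step 302):
info = MarkupInfo("00001", 26.0, (225, 324), (105, 188), "glasses info")
```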
In this embodiment, after receiving the markup information, the video server finds the labeled video according to the markup information and extracts the corresponding video section from it. The video section consists of all video frames within some time range around the labeling moment; all frames within a section have identical or similar images.

For example, consecutive images under the same scene shot are divided into the same video section. Suppose the images a video displays from 0 s to 20 s resemble the "girl's face" of Figure 1A, the shot then switches, and the images from 20 s to 38 s resemble the "doggie" of Figure 1D. Then 0 s to 20 s is one video section and 20 s to 38 s is another.

It can be appreciated that if only the frame, or few frames, at the labeling moment were labeled, the label would exist too briefly to be usable. Within a video section, the same image region generally shows the same element, so a video label should not exist only at the labeling moment but throughout the video section.
Step 203: the server judges whether the video section contains a video label whose overlap with the markup information reaches the coincidence threshold; if such a label exists, the markup information is merged into it; if not, a video label is generated from the markup information.

To avoid the chaotic display of information caused by repeated labeling, after extracting the video section this embodiment judges whether the section already contains a video label with a high degree of regional overlap with the markup information. If one exists, no new video label is created for the markup information; instead, the markup information is merged into the existing label, and the original, standardized labeled region is retained. Duplicate labels are thus avoided; and since no duplicate labels exist in the same or similar region, the problem of chaotic related-information display naturally does not arise.

If, on the contrary, no label with sufficient overlap exists within the section, a new label is generated from the submitted markup information.
As can be seen from the above technical solutions, the beneficial effect of this embodiment is that merging video labels in the same or similar region avoids creating duplicate labels, so that the related information of video labels is displayed clearly and in order.
Referring to Figure 3, another specific embodiment of the method of the present invention is shown. This embodiment builds on the previous one and describes the method in more detail. The method of this embodiment comprises the following steps:

Step 301: divide the video into several sections according to image content, and build a video index for the sections according to playback time.

In this embodiment, the video server completes the division of video sections in advance and builds a video index from the division. For example, the video numbered 00001, whose duration is 1 minute, is divided into sections: 0 s to 15 s is the first section, 15 s to 35 s the second, 35 s to 53 s the third, and 53 s to 60 s the fourth. A video index such as Table 1 is built from this division:
Section name | Time range |
Section 1 | 0 s to 15 s |
Section 2 | 15 s to 35 s |
Section 3 | 35 s to 53 s |
Section 4 | 53 s to 60 s |
Table 1
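The index of Table 1 and the lookup of Step 303 below can be sketched as a per-video list of (start, end) time ranges keyed by video number. The names and the dictionary-based storage are assumptions made for illustration, not part of the patent:

```python
# Video index: per-video list of section time ranges, in seconds, as in Table 1.
# The variable names and dict-based storage are illustrative assumptions.
video_index = {
    "00001": [(0, 15), (15, 35), (35, 53), (53, 60)],
}

def find_section(video_id, mark_time):
    """Return the (start, end) section containing mark_time, or None."""
    sections = video_index.get(video_id)
    if sections is None:
        # No index yet; per the embodiment, the server would build one here.
        return None
    for start, end in sections:
        if start <= mark_time < end:
            return (start, end)
    return None

# Labeling moment 26 s in video 00001 falls in Section 2, 15 s to 35 s.
print(find_section("00001", 26))  # -> (15, 35)
```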
Step 302: set the labeling interface, and receive through it the markup information set by the video playing terminal; the markup information includes the video number, the labeling moment, and the labeled region.

In this embodiment, the video number is 00001, the labeling moment is 26 s, and the labeled region is the coordinate range x ∈ (225, 324), y ∈ (105, 188) in the image.

Step 303: look up the corresponding video index by the video number, query the index with the labeling moment, and obtain the video section corresponding to the labeling moment.

With video number 00001 and labeling moment 26 s, Section 2 of Table 1 is extracted.
It should also be noted that when no corresponding video index is found for the video number, no index has yet been built for that video. In this case, the video server builds a video index for the video with that number, obtains the newly built index, and then extracts the corresponding section from it.
Step 304: preset a coincidence threshold, and obtain the video labels already present in the video section.

After Section 2 of Table 1 is extracted, the labels already existing in the section are obtained. Such existing labels have usually already been merged and had their ranges standardized. In this embodiment, Section 2 contains one video label whose coordinate data is x ∈ (220, 320), y ∈ (100, 180). A coincidence threshold also needs to be set to judge the overlap between the labeled region in the markup information and the existing label; in this embodiment the threshold is essentially a coordinate-data difference, specifically 10.
Step 305: calculate the difference between the coordinate data of the labeled region and the coordinate data of the video label in the video section.

Step 306: if the difference does not exceed the coincidence threshold, the overlap between the labeled region and the video label is considered to reach the threshold.

By calculation, the coordinate differences between the labeled region and the video label are Δx = (5, 4) and Δy = (5, 8). None of these differences exceeds the coincidence threshold of 10, so the overlap is considered to have reached the threshold.
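The difference test of Steps 305 and 306 might be sketched as comparing each boundary coordinate of the two regions. Treating the per-boundary absolute differences as the "coordinate data difference" is my reading of the embodiment's numbers, not something the patent states explicitly:

```python
def overlaps(new_region, existing_region, threshold=10):
    """Return True when every boundary coordinate of the new region is within
    `threshold` of the corresponding boundary of the existing label."""
    (nx1, nx2), (ny1, ny2) = new_region
    (ex1, ex2), (ey1, ey2) = existing_region
    diffs = (abs(nx1 - ex1), abs(nx2 - ex2), abs(ny1 - ey1), abs(ny2 - ey2))
    return max(diffs) <= threshold

# Embodiment values: new region x ∈ (225, 324), y ∈ (105, 188);
# existing label x ∈ (220, 320), y ∈ (100, 180); differences Δx = (5, 4), Δy = (5, 8).
print(overlaps(((225, 324), (105, 188)), ((220, 320), (100, 180))))  # -> True
```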
Step 307: if a video label whose overlap with the markup information reaches the coincidence threshold exists, merge the markup information into that video label.

In this embodiment the overlap has reached the threshold, so no new video label is created for the markup information; instead, it is merged into the existing label. The label keeps its standard range x ∈ (220, 320), y ∈ (100, 180) and the duration of Section 2, 15 s to 35 s. If the markup information further includes related information, that related information is associated with the video label.

Step 308: if the video section contains no video label whose overlap with the markup information reaches the coincidence threshold, generate a video label from the markup information.

If no label reaches the coincidence threshold, no video label yet exists in the labeled region, so a new video label is created from the markup information.
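Steps 304 to 308 together amount to a merge-or-create decision on the labels of one video section. The sketch below assumes a simple in-memory label store and the boundary-difference overlap test; the data shapes and names are illustrative, not the patent's:

```python
def submit_label(section_labels, new_region, related_info=None, threshold=10):
    """Merge the new region into an existing label of the section when every
    boundary differs by at most `threshold`; otherwise create a new label.
    Returns the label that now carries the annotation."""
    def close(a, b):
        (ax1, ax2), (ay1, ay2) = a
        (bx1, bx2), (by1, by2) = b
        return max(abs(ax1 - bx1), abs(ax2 - bx2),
                   abs(ay1 - by1), abs(ay2 - by2)) <= threshold

    for label in section_labels:
        if close(new_region, label["region"]):
            # Merge: keep the existing, standardized region (Step 307).
            if related_info is not None:
                label["related"].append(related_info)
            return label
    # No sufficiently overlapping label: create a new one (Step 308).
    label = {"region": new_region, "related": [related_info] if related_info else []}
    section_labels.append(label)
    return label

# Embodiment values: Section 2 already holds one standardized label.
section = [{"region": ((220, 320), (100, 180)), "related": []}]
merged = submit_label(section, ((225, 324), (105, 188)), "glasses info")
print(merged["region"])  # -> ((220, 320), (100, 180)), the standardized range is kept
print(len(section))      # -> 1, no duplicate label was created
```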
As can be seen from the above technical solutions, the beneficial effect of this embodiment is that the technical solution of the method is more complete and its disclosure more thorough.
Referring to Figure 4, a specific embodiment of the system of the present invention is shown. The system implements the method of the previous embodiments; the two technical solutions are substantially the same, so the description in the previous embodiments applies equally here. The system comprises:

an extraction module, configured to receive markup information set by a video playing terminal through the labeling interface and extract the corresponding video section according to the markup information; the markup information includes a video number, a labeling moment, and a labeled region.
The extraction module includes:
a receiving unit, configured to receive the markup information set by the video playing terminal;
an indexing unit, configured to divide the video into several sections according to image content and build a video index for the sections according to playback time;
a query unit, configured to look up the corresponding video index by the video number, query the index with the labeling moment, and obtain the video section corresponding to the labeling moment.
A judgment module, configured to judge whether the video section contains a video label whose overlap with the markup information reaches the coincidence threshold, and if so, merge the markup information into that video label.
The judgment module includes:
a setting unit, configured to preset the coincidence threshold and obtain the video labels already present in the video section;
a computing unit, configured to calculate the difference between the coordinate data of the labeled region and the coordinate data of a video label in the section, and, when the difference does not exceed the coincidence threshold, to consider the overlap between the labeled region and the video label to have reached the threshold.
As can be seen from the above technical solutions, the beneficial effect of the system of this embodiment is that merging video labels in the same or similar region avoids creating duplicate labels, so that the related information of video labels is displayed clearly and in order.
The above are only preferred embodiments of the present invention. It should be noted that those of ordinary skill in the art may make various improvements and modifications without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.
Claims (9)
1. A video labeling method, characterized in that the method comprises:
setting a labeling interface at the server end;
during playback, a video playing terminal generates markup information for a video image and submits it to the server through the labeling interface; the server receives the markup information and extracts the corresponding video section according to it; the markup information includes a video number, a labeling moment, and a labeled region, and the video frames in the video section have identical or similar images;
the server judges whether the video section contains a video label whose overlap with the labeled region in the markup information reaches a coincidence threshold; if such a label exists, the markup information is merged into it; if not, a video label is generated from the markup information.
2. The method according to claim 1, characterized in that generating markup information for a video image specifically comprises:
the video playing terminal labels a fixed region in a particular video image, and the information corresponding to the identified image region serves as the markup information;
the video number is the ID of the labeled video; the labeling moment is the playback time of the video when the labeled image is displayed; the labeled region is the coordinate range of the video image covered by the label.
3. The method according to claim 2, characterized in that extracting the corresponding video section according to the markup information specifically comprises:
dividing the video into several sections according to image content, and building a video index for the sections according to playback time;
looking up the corresponding video index by the video number, querying the index with the labeling moment, and obtaining the video section corresponding to the labeling moment.
4. The method according to claim 3, characterized in that the method further comprises:
when no corresponding video index is found for the video number, building a video index for the video with that number and obtaining the newly built index.
5. The method according to claim 2, characterized in that the labeled region and the video label both include coordinate data, and judging whether the video section contains a video label whose overlap with the labeled region in the markup information reaches the coincidence threshold specifically comprises:
presetting a coincidence threshold and obtaining the video labels already present in the video section;
calculating the difference between the coordinate data of the labeled region and the coordinate data of a video label in the section;
if the difference does not exceed the coincidence threshold, considering the overlap between the labeled region and the video label to have reached the threshold.
6. The method according to any one of claims 1 to 5, characterized in that the method further comprises:
if the video section contains no video label whose overlap with the labeled region in the markup information reaches the coincidence threshold, generating a video label from the markup information.
7. A video labeling system, characterized in that the system comprises:
an extraction module, configured to receive markup information set by a video playing terminal through a labeling interface and extract the corresponding video section according to the markup information; the markup information includes a video number, a labeling moment, and a labeled region, and the video frames in the video section have identical or similar images;
a judgment module, configured to judge whether the video section contains a video label whose overlap with the labeled region in the markup information reaches a coincidence threshold, and if so, merge the markup information into that video label.
8. The system according to claim 7, characterized in that the extraction module comprises:
a receiving unit, configured to receive the markup information set by the video playing terminal;
an indexing unit, configured to divide the video into several sections according to image content and build a video index for the sections according to playback time;
a query unit, configured to look up the corresponding video index by the video number, query the index with the labeling moment, and obtain the video section corresponding to the labeling moment.
9. The system according to claim 7, characterized in that the judgment module comprises:
a setting unit, configured to preset the coincidence threshold and obtain the video labels already present in the video section;
a computing unit, configured to calculate the difference between the coordinate data of the labeled region and the coordinate data of a video label in the section, and, when the difference does not exceed the coincidence threshold, to consider the overlap between the labeled region and the video label to have reached the threshold.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410714405.1A CN104391960B (en) | 2014-11-28 | 2014-11-28 | A kind of video labeling method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410714405.1A CN104391960B (en) | 2014-11-28 | 2014-11-28 | A kind of video labeling method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104391960A CN104391960A (en) | 2015-03-04 |
CN104391960B true CN104391960B (en) | 2019-01-25 |
Family
ID=52609864
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410714405.1A Active CN104391960B (en) | 2014-11-28 | 2014-11-28 | A kind of video labeling method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104391960B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106101573A (en) * | 2016-06-24 | 2016-11-09 | 中译语通科技(北京)有限公司 | The grappling of a kind of video labeling and matching process |
CN106101842A (en) * | 2016-06-27 | 2016-11-09 | 杭州当虹科技有限公司 | A kind of advertisement editing system based on intellectual technology |
EP3497550B1 (en) * | 2016-08-12 | 2023-03-15 | Packsize, LLC | Systems and methods for automatically generating metadata for media documents |
CN106303726B (en) * | 2016-08-30 | 2021-04-16 | 北京奇艺世纪科技有限公司 | Video tag adding method and device |
CN108521592A (en) * | 2018-04-23 | 2018-09-11 | 威创集团股份有限公司 | Markup information processing method, device, system, computer equipment and storage medium |
CN110347866B (en) * | 2019-07-05 | 2023-06-23 | 联想(北京)有限公司 | Information processing method, information processing device, storage medium and electronic equipment |
CN110377567A (en) * | 2019-07-25 | 2019-10-25 | 苏州思必驰信息科技有限公司 | The mask method and system of multimedia file |
CN110971964B (en) * | 2019-12-12 | 2022-11-04 | 腾讯科技(深圳)有限公司 | Intelligent comment generation and playing method, device, equipment and storage medium |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103488661A (en) * | 2013-03-29 | 2014-01-01 | 吴晗 | Audio/video file annotation system |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100371813B1 (en) * | 1999-10-11 | 2003-02-11 | 한국전자통신연구원 | A Recorded Medium for storing a Video Summary Description Scheme, An Apparatus and a Method for Generating Video Summary Descriptive Data, and An Apparatus and a Method for Browsing Video Summary Descriptive Data Using the Video Summary Description Scheme |
US7559017B2 (en) * | 2006-12-22 | 2009-07-07 | Google Inc. | Annotation framework for video |
WO2011064674A2 (en) * | 2009-11-30 | 2011-06-03 | France Telecom | Content management system and method of operation thereof |
JP6011185B2 (en) * | 2012-09-14 | 2016-10-19 | 株式会社バッファロー | Image information processing system, image information processing apparatus, and program |
CN103024480B (en) * | 2012-12-28 | 2016-06-01 | 杭州泰一指尚科技有限公司 | A kind of method embedding advertisement in video |
CN103442308A (en) * | 2013-08-22 | 2013-12-11 | 百度在线网络技术(北京)有限公司 | Audio and video file labeling method and device and information recommendation method and device |
CN103970906B (en) * | 2014-05-27 | 2017-07-04 | 百度在线网络技术(北京)有限公司 | The method for building up and device of video tab, the display methods of video content and device |
- 2014-11-28: CN application CN201410714405.1A filed; granted as patent CN104391960B (status: Active)
Non-Patent Citations (1)
Title |
---|
Design and Implementation of an Annotation-Based Sports Video Management System; Zhou Jianfang et al.; Hubei Sports Science and Technology (《湖北体育科技》); 2014-09-15; Vol. 33, No. 9; pp. 757-759; see Section 2.3 on p. 758 and Section 3.2.1 on p. 759 |
Also Published As
Publication number | Publication date |
---|---|
CN104391960A (en) | 2015-03-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104391960B (en) | A kind of video labeling method and system | |
CN106792100B (en) | Video bullet screen display method and device | |
CN105488478B (en) | Face recognition system and method | |
CN103324729B (en) | A kind of method and apparatus for recommending multimedia resource | |
CN102547141B (en) | Method and device for screening video data based on sports event video | |
KR102258407B1 (en) | Fingerprint layouts for content fingerprinting | |
CN103763624B (en) | Television channel program interaction method and device | |
CN102595206B (en) | Data synchronization method and device based on sport event video | |
US20130138673A1 (en) | Information processing device, information processing method, and program | |
CN104104952A (en) | Audio/video processing method and system adapted to storage and play of mobile device | |
CN106851395B (en) | Video playing method and player | |
US20170134806A1 (en) | Selecting content based on media detected in environment | |
CN104471562B (en) | The method and apparatus for forming the label for arranging multimedia element | |
CN103200441A (en) | Obtaining method, conforming method and device of television channel information | |
CN106162222B (en) | A kind of method and device of video lens cutting | |
CN104699806B (en) | A kind of video searching method and device | |
KR101749420B1 (en) | Apparatus and method for extracting representation image of video contents using closed caption | |
CN104703037A (en) | Method and device for outputting channel | |
CN105245948A (en) | Video processing method and video processing device | |
WO2016185258A1 (en) | File processing method, file processing apparatus and electronic equipment | |
ZHANG et al. | Video-frame insertion and deletion detection based on consistency of quotients of MSSIM | |
KR20150023492A (en) | Synchronized movie summary | |
CN108322782B (en) | Method, device and system for pushing multimedia information | |
WO2014190494A1 (en) | Method and device for facial recognition | |
Lee et al. | Design of smart broadcasting scenario for media commerce services |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |