CN101064846A - Time-shifted television video matching method combining program content metadata and content analysis - Google Patents

Time-shifted television video matching method combining program content metadata and content analysis

Info

Publication number
CN101064846A
CN101064846A · CN 200710041117 · CN200710041117A · CN100493195C
Authority
CN
China
Prior art keywords
video
metadata
user
information
key frame
Prior art date
Legal status
Granted
Application number
CN 200710041117
Other languages
Chinese (zh)
Other versions
CN100493195C (en)
Inventor
陈晓琳
杨小康
郑世宝
张瑞
Current Assignee
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN 200710041117 priority Critical patent/CN100493195C/en
Publication of CN101064846A publication Critical patent/CN101064846A/en
Application granted granted Critical
Publication of CN100493195C publication Critical patent/CN100493195C/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

A time-shifted television video matching method combining program content metadata and content analysis, belonging to the field of electronic information technology. The steps are as follows: (1) metadata acquisition: the video program metadata is extracted by packet identifier, parsed according to the digital broadcasting service information standard, and a metadata index is built for the query module to call; (2) video matching in the compressed bitstream: the video sequence is first segmented into shots and key frames are selected within each shot; the motion features and statistical features of the key frames are then extracted to build a video structure library and a feature library; finally, the libraries are searched against the user's query features and the ranked results are returned to the user. The invention makes full use of existing techniques, adds the high-level semantic features carried by the metadata, takes the user's feedback into account, and improves the precision of the results.

Description

Time-shifted television video matching method combining program content metadata and content analysis
Technical field
The present invention relates to a method in the field of telecommunication technology, specifically a time-shifted television program video matching method combining program content metadata and content analysis.
Background art
Television remains the video communication service most popular with users, yet it is still essentially a one-way service delivered by broadcast, and obtaining any desired video program anytime and anywhere has long been out of reach. Over the past decades, the industry has worked steadily on video services with interactivity. Time-shifted TV frees the user from the traditional program schedule: this revolutionary service lets a user pause and rewind a program while watching a live broadcast, and fast-forward back to the moment currently being broadcast. Time-shifting is implemented by storing a copy of the live broadcast on a streaming media server; when the user selects a period of a program, the system seeks to the corresponding time point in the media file and plays it. A channel with a bitrate of 2 Mbps, stored for one week, produces a program file of 15.12 GB, and a file that large is hard to control, operate on, and seek within. Traditional matching methods are generally based on low-level features (such as color histograms, texture, or shape), which usually do not match ordinary users' cognitive habits, so the corresponding human-machine interfaces have significant limitations. To support query methods that are both concise and closer to human understanding, and to improve precision, research in recent years has gradually turned to similarity matching based on high-level semantic features.
A literature search of the prior art found that Jiebo Luo et al., in "Pictures are not taken in a vacuum", IEEE Signal Processing Magazine, vol. 23, no. 2, pp. 101–114, March 2006, propose a method that combines image metadata with image features for retrieval and classification, thereby improving matching accuracy. However, that article only addresses improving the precision of image retrieval and does not apply the idea to the field of video matching. Further searching found no literature identical or similar to the subject of the present invention.
Summary of the invention
The object of the present invention is to overcome the deficiencies of the prior art by providing a time-shifted television video matching method combining program content metadata and content analysis. The method makes full use not only of low-level features of video objects, such as color and motion vectors, but also of the descriptive program content metadata produced during program making. Metadata makes it convenient for the user to find programs of interest among a large number of programs. The invention substantially improves both query speed and the accuracy of video content matching.
The present invention is achieved through the following technical solution; the concrete steps are as follows:
(1) Metadata acquisition: extract the video program metadata by packet identifier, parse it according to the digital broadcasting service information standard, build a metadata index, and make it available for the query module to call. The matching process first performs metadata matching, narrowing the matching range by searching the metadata index and thereby improving matching speed.
SI (Service Information) refers to special information inserted into a transport stream that conforms to MPEG-2. Within the SI, the EIT (Event Information Table) provides, in chronological order, the information about the programs contained in each service: program identification number, title, start time and duration, running status, whether the program is scrambled and with which conditional-access system, a program synopsis, program stream types, program category, parental rating, an interactive contact telephone number, and so on.
The EIT comprises two different types of tables: the EIT p/f table and the EIT-S table. EIT p/f gives the information about the present and following events in a specified service, while EIT-S contains the program schedule for a week or longer. Each EIT table is split into multiple sections for transmission, and every event information section forming an EIT is carried in TS packets whose PID (packet identifier) is 0x0012. The decoder extracts this information by PID and stores it as metadata for the material; this information provides the basis for the subsequent search.
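For illustration only (this sketch is not part of the claimed method), the PID-based extraction described above can be outlined as follows. The packet layout follows the MPEG-2 transport stream format (188-byte packets, sync byte 0x47, 13-bit PID in bytes 1–2); the function names are hypothetical:

```python
def ts_pid(packet: bytes) -> int:
    # The 13-bit PID spans the low 5 bits of byte 1 and all of byte 2
    return ((packet[1] & 0x1F) << 8) | packet[2]

def filter_eit_packets(stream: bytes, eit_pid: int = 0x0012):
    """Yield the 188-byte TS packets that carry EIT sections (PID 0x0012)."""
    for off in range(0, len(stream) - 187, 188):
        pkt = stream[off:off + 188]
        if pkt[0] == 0x47 and ts_pid(pkt) == eit_pid:  # 0x47 is the sync byte
            yield pkt
```

A real decoder would additionally reassemble the EIT sections spread across these packets before parsing the event fields.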
In the program information structure PROG_INFO_STRUCT, a two-dimensional array event_info_database of type EVENT_INFO_BASIC is defined to hold the information of each program in a service, including its title, start time, and duration.
In the TS stream, the date and time are given in hexadecimal form as MJD (Modified Julian Date) plus UTC, and can be converted to local date and time with reference to GY/Z 174-2001 (the digital television broadcasting service information standard). The program title is obtained by parsing the short_event_descriptor. All the parsed information is finally used to build the program content metadata index, which serves as the basis for matching the metadata of the query video clip and locating the programs the user is interested in.
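As a sketch of the conversion mentioned above: the MJD-to-calendar formula below is the standard integer formula from the DVB Service Information specification (which the Chinese standard follows), and the three UTC bytes are BCD-coded. This is illustrative; the function names are assumptions:

```python
def mjd_to_date(mjd: int):
    """Convert a 16-bit Modified Julian Date to (year, month, day),
    using the integer formula from the DVB SI specification."""
    yp = int((mjd - 15078.2) / 365.25)
    mp = int((mjd - 14956.1 - int(yp * 365.25)) / 30.6001)
    day = mjd - 14956 - int(yp * 365.25) - int(mp * 30.6001)
    k = 1 if mp in (14, 15) else 0
    return 1900 + yp + k, mp - 1 - k * 12, day

def bcd_time(h: int, m: int, s: int):
    """Decode the three BCD-coded UTC bytes (hh, mm, ss) following the MJD."""
    dec = lambda b: (b >> 4) * 10 + (b & 0x0F)
    return dec(h), dec(m), dec(s)
```

For example, MJD 45218 is the specification's worked example and decodes to 6 September 1982.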
(2) Video matching in the compressed bitstream
According to the query submitted by the user, similar video clips are searched for in the database. The basic idea is to first extract features from the sample video and then compare them with those of each video clip to obtain the final result.
First, the video sequence is segmented into shots, and key frames are selected within each shot. Then the motion features within each shot and the static features of the key frames are extracted to build the video structure library and the feature library. Finally, the query submitted by the user is matched against the library using low-level image features, and the results are ranked by similarity and returned to the user. Retrieval is approximate matching, a cyclic process of progressive refinement: when the user is not satisfied with the query result, several rounds of interaction and feedback refine the query result until a satisfactory result is obtained.
1. Shot boundary detection: a shot is the basic unit of video data; it represents an action continuous in time and space within a scene and corresponds to the video produced by one continuous camera operation. Video processing first needs to segment the video into shots, which serve as the basic indexing units; this process is called shot boundary detection.
Since video data is stored in compressed form, boundary detection is performed directly in the compressed domain. A DC image is constructed from the DC component information of each frame, and boundaries are then detected from the inter-frame difference. The DC image is only a small fraction of the video data, yet it retains the basic global information of the original image and makes the processing more efficient by exploiting the compressed representation.
2. Key-frame extraction: a frame is first selected as the initial cluster centroid; each subsequent frame is then either assigned to an existing cluster or made a new centroid, depending on its distance from the existing centroids. After clustering, the frame closest to each centroid is taken as a key frame, forming the key frame sequence.
Key frames are still images extracted from the video sequence that represent the main content of a shot. Using key frames greatly reduces the data volume of the video index while also providing an organizational framework for retrieving and browsing the video.
3. Key-frame-based feature extraction: the color histogram, color correlogram, and color moments represent the color features of the image; wavelet transforms are used for texture analysis; shape features must be invariant to translation, rotation, and scaling, so the edge orientation histogram is chosen as the shape feature.
Feature extraction is performed for each shot to obtain a feature space that reflects the shot content as fully as possible; this feature space is the basis for video matching.
Steps 1–3 above are management operations on the video database that require no user intervention and can be completed offline; they generate a structured, scalable description of the video content.
4. Retrieval matching: key frames similar to the query feature vector are searched for in the database according to a similarity measure; all those that may be the videos the user needs are listed, ranked by similarity.
Matching searches the database for key frames similar to the query according to a similarity measure. The usual query methods are query by direct feature specification or query by example; during a query the user can also specify a particular feature set. If key frames satisfying the conditions are retrieved, the user can play and watch the video clips they represent.
5. Relevance feedback: the user gives feedback to the system, and the system automatically adjusts the query according to the user's feedback.
Because of the complexity of video content and the gap between how machines and people understand things, matching results are usually unsatisfactory at first. The cyclic process of relevance feedback keeps learning from the user's feedback, adjusting thresholds and repeating the matching process, progressively refining the result until the user's requirement is met.
Steps 4 and 5 are repeated according to the user's feedback until the user is satisfied; finally the video shot segment the user needs is played via its key frame.
The effect of the present invention is that the time-shifted television video matching method described here achieves faster matching speed and higher accuracy.
The invention achieves this notable technical effect because, tailored to the characteristics of time-shifted television video itself, it combines the descriptive program content metadata produced during program making with low-level features of video objects such as color and motion vectors. To date there has been no domestic report of applying metadata to video matching. The metadata index lets the user quickly locate programs of interest among a massive number of programs, which reduces the computation of the subsequent steps and improves matching accuracy.
Description of drawings
Fig. 1 is a structural block diagram of the present invention
Fig. 2 is the video matching flow chart of the present invention
Embodiment
The following describes an embodiment of the invention in detail with reference to the drawings. The embodiment is implemented on the premise of the technical solution of the present invention and gives a detailed implementation and process, but the protection scope of the invention is not limited to the following embodiment.
In the concrete implementation, the user first submits a request for the media content of interest; by querying the metadata, processed with the same method used to generate the pre-built metadata index, the position of the media is determined; then the video matching flow finds the video clips that satisfy the request.
As shown in Fig. 1, the present embodiment comprises the following steps.
(1) Metadata acquisition
The decoder extracts the metadata by PID (packet identifier). In the program information structure PROG_INFO_STRUCT, a two-dimensional array event_info_database of type EVENT_INFO_BASIC is defined to hold each program's title, start time, duration, and other information. The extracted time information must be converted according to GY/Z 174-2001 (the digital broadcasting service information standard), and the program title is obtained by parsing the short_event_descriptor. The parsed program content metadata is used to build the program content metadata index; the parsed metadata of the query video clip then locates the user's program of interest by searching the index.
(2) Video matching in the compressed bitstream
As shown in Fig. 2, video matching in the compressed bitstream involves the following aspects:
1. Shot boundary detection
Since video data is stored in compressed form, boundary detection is performed directly and efficiently in the compressed domain. The DC component of each 8 × 8 sub-block is first extracted from every frame, yielding a DC image only 1/64 the size of the original; the distance between the two DC images of successive frames is then compared, and when it exceeds a threshold a shot boundary is declared. The threshold must be preset or learned from training data.
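The boundary test above admits a very small sketch (illustrative only; in practice the DC images would be decoded from the DCT coefficients of the compressed stream, and the distance measure and threshold are not fixed by the text):

```python
def dc_distance(dc_a, dc_b):
    """Sum of absolute differences between two DC images (flat lists)."""
    return sum(abs(a - b) for a, b in zip(dc_a, dc_b))

def detect_shot_boundaries(dc_frames, threshold):
    """Return indices i where a boundary falls between frame i-1 and frame i."""
    return [i for i in range(1, len(dc_frames))
            if dc_distance(dc_frames[i - 1], dc_frames[i]) > threshold]
```

With a sequence of near-identical DC images followed by a sudden change, only the changeover index is reported.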
2. Key-frame extraction
A frame, usually the first frame of the shot, is first selected as the initial cluster centroid. Each subsequent frame is then either assigned to an existing cluster or made a new centroid, depending on its distance from the existing centroids. After clustering, the frame closest to each centroid is taken as a key frame, forming the key frame sequence.
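A minimal sketch of this sequential clustering, assuming each frame has already been reduced to a feature vector (the actual feature space, distance measure, and radius are not fixed by the text and are assumptions here):

```python
def extract_key_frames(frames, radius):
    """Sequential clustering of frame feature vectors: a frame farther than
    `radius` from every existing centroid starts a new cluster; otherwise it
    joins the nearest cluster. One key frame per cluster is returned."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    mean = lambda cl: [sum(c) / len(c) for c in zip(*cl)]
    clusters = [[frames[0]]]  # the first frame seeds the first cluster
    for f in frames[1:]:
        d = [dist(f, mean(cl)) for cl in clusters]
        if min(d) <= radius:
            clusters[d.index(min(d))].append(f)
        else:
            clusters.append([f])
    # the key frame of a cluster is its member closest to the centroid
    return [min(cl, key=lambda f: dist(f, mean(cl))) for cl in clusters]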
3. Feature extraction
Color features are simple to compute, stable, and insensitive to rotation, translation, and scale changes, giving them high robustness. The color histogram, the color autocorrelogram, and color moments are used as the color features.
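As an illustration of two of the color features named above, here is a single-channel sketch (the patent does not prescribe bin counts or channel layout; the helper names are hypothetical):

```python
import math

def color_histogram(pixels, bins=8):
    """Normalized histogram of one channel's pixel values (0-255)."""
    hist = [0] * bins
    for p in pixels:
        hist[min(p * bins // 256, bins - 1)] += 1
    n = len(pixels)
    return [h / n for h in hist]

def color_moments(pixels):
    """First three color moments: mean, standard deviation, cube root of skew."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    skew = sum((p - mean) ** 3 for p in pixels) / n
    cube = lambda x: math.copysign(abs(x) ** (1 / 3), x)
    return [mean, var ** 0.5, cube(skew)]
```

In a full system these per-channel vectors would be concatenated with the correlogram, texture, shape, and motion features before normalization.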
Texture is an image characteristic related to the surface material of objects, reflecting both local irregularity and global regularity. The texture features represented by the wavelet transform are the mean and standard deviation of the energy distribution in each frequency band at each decomposition level. In addition, a gray-level co-occurrence matrix is built over pixel direction and distance, from which energy, entropy, contrast, and homogeneity are extracted as texture features.
Shape feature extraction requires an image segmentation algorithm to separate the different objects from the image before matching can be measured. Shape features must be invariant to translation, rotation, and scaling, and their extraction is generally limited to objects that are easy to identify. The edge orientation histogram is used to represent shape.
Motion features are key characteristics of a video shot: they reflect the temporal variation of the video and are an important part of matching against a video example. In an MPEG stream, the motion vectors of the B and P frames can be used to extract motion features: a feature vector is obtained from the macroblock motion, and the motion of the shot is judged from this vector.
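A sketch of summarizing macroblock motion vectors into a frame-level feature, as described above. The exact feature vector is not specified in the text, so mean magnitude and mean direction are assumed here for illustration:

```python
import math

def motion_features(motion_vectors):
    """Summarize a frame's macroblock motion vectors ((dx, dy) pairs)
    as (mean magnitude, overall direction in radians)."""
    mags = [math.hypot(dx, dy) for dx, dy in motion_vectors]
    # overall direction from the summed displacement of all macroblocks
    sx = sum(dx for dx, _ in motion_vectors)
    sy = sum(dy for _, dy in motion_vectors)
    return sum(mags) / len(mags), math.atan2(sy, sx)
```

Aggregating these per-frame pairs over a shot gives a compact motion descriptor for matching.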
4. Matching retrieval
Feature vectors are extracted from the query video and fused; the database is then searched for key frames similar to the query, with similarity measured by Euclidean distance. Content-based video matching is similarity matching, unlike exact keyword matching. A similarity threshold can be set, and everything that may be the video the user needs is listed in order of similarity, which makes it convenient to collect feedback for the next round of refinement. After localization by metadata matching, the videos irrelevant to the query are greatly reduced; to leave room for the subsequent feedback, the threshold can be set to return the top 50% of key frames.
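The ranking rule above, with the 50%-return threshold, can be sketched as follows (illustrative only; the feature vectors are assumed to be already fused and normalized):

```python
def rank_key_frames(query_vec, db_vecs, keep_ratio=0.5):
    """Rank database key frames by Euclidean distance to the query and
    return the indices of the closest `keep_ratio` fraction."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    order = sorted(range(len(db_vecs)), key=lambda i: dist(query_vec, db_vecs[i]))
    keep = max(1, int(len(order) * keep_ratio))
    return order[:keep]
```

Returning half the candidates rather than only the single best match gives the feedback step something to work with.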
5. Relevance feedback
The user gives the system feedback on the correlation between this round's results and the desired results; the system then treats the results the user marked as a new query and runs another round of matching, moving the query results toward the user's expectation. The feedback loop can be regarded as a training process: both the results the user is satisfied with and those the user is not can be used to revise the way the feature vectors are combined or the classifier's decision criterion, progressively improving the precision of the output.
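The text does not name a specific update rule for revising the query from the user's feedback; a common choice for this kind of loop is a Rocchio-style query update, sketched here purely as an assumption:

```python
def rocchio_update(query, relevant, irrelevant, alpha=1.0, beta=0.75, gamma=0.25):
    """Rocchio-style refinement: pull the query vector toward the mean of
    the relevant examples and away from the mean of the irrelevant ones."""
    mean = lambda vs: [sum(c) / len(c) for c in zip(*vs)] if vs else [0.0] * len(query)
    r, s = mean(relevant), mean(irrelevant)
    return [alpha * q + beta * ri - gamma * si for q, ri, si in zip(query, r, s)]
```

Each feedback round replaces the query vector with the updated one before re-ranking.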
The present invention makes full use of existing techniques: only partial decoding of the compressed bitstream is performed, which reduces computational complexity; the program content metadata is introduced, combining high-level semantic features with low-level visual features; and the system is trained through user feedback. Compared with the same method without metadata, the accuracy of the video matching results improves by 5% to 10%.

Claims (6)

1. A time-shifted television video matching method combining program content metadata and content analysis, characterized by the following steps:
(1) metadata acquisition: extracting the video program metadata by packet identifier, parsing it according to the digital broadcasting service information standard, and building a metadata index; the matching process first performs metadata matching, narrowing the matching range by searching the metadata index;
(2) video matching in the compressed bitstream: first segmenting the video sequence into shots and selecting key frames within each shot; then extracting the motion features within each shot and the static features of the key frames to build a video structure library and a feature library; finally matching the query submitted by the user against the features and returning the results to the user ranked by similarity.
2. The time-shifted television video matching method combining program content metadata and content analysis according to claim 1, characterized in that said metadata acquisition means: the decoder extracts the metadata by packet identifier; in the program information structure PROG_INFO_STRUCT, a two-dimensional array event_info_database of type EVENT_INFO_BASIC is defined to hold each program's title, start time, and duration; the extracted time information is converted according to the digital broadcasting service information standard, and the program title is obtained by parsing the short_event_descriptor; the parsed program content metadata is used to build the program content metadata index, and the parsed metadata of the query video clip locates the user's program of interest by searching the index.
3. The time-shifted television video matching method combining program content metadata and content analysis according to claim 1, characterized in that the video matching in the compressed bitstream comprises the following steps:
1. shot boundary detection: first extracting the DC component of each 8 × 8 sub-block from every frame to obtain a DC image only 1/64 the size of the original; then comparing the distance between two successive frames, and declaring a shot boundary when the distance exceeds a threshold that is preset or learned from training data;
2. key-frame extraction: first selecting the first frame of the shot as the initial cluster centroid; then either assigning each frame to an existing cluster or making it a new centroid, depending on its distance from the existing centroids; after clustering, taking the frame closest to each centroid as a key frame, forming the key frame sequence that represents the shot content;
3. key-frame-based feature extraction: representing the image color features with the color histogram, color correlogram, and color moments; analyzing texture with the wavelet transform; choosing the edge orientation histogram as the shape feature, which is invariant to translation, rotation, and scaling; and finally converting the features into vectors and normalizing them;
4. matching retrieval: searching the database for key frames similar to the query feature vector by computing distances, and listing the top 50% of key frames that may be the videos the user needs, ranked by similarity;
5. relevance feedback: the user gives feedback to the system, and the system performs a more accurate matching process according to the user's feedback.
4. The time-shifted television video matching method combining program content metadata and content analysis according to claim 3, characterized in that in said step 5 the user gives the system feedback based on the correlation between this round's results and the desired results; the system then treats the user's feedback as a new query and runs another round of matching, moving the results toward the user's expectation; the feedback loop is regarded as a training process that progressively improves the precision of the output.
5. The time-shifted television video matching method combining program content metadata and content analysis according to claim 3, characterized in that said steps 1, 2, and 3 are management operations on the video database that require no user intervention and can be completed offline, generating a structured, scalable description of the video content.
6. The time-shifted television video matching method combining program content metadata and content analysis according to claim 3, characterized in that said steps 4 and 5 are repeated according to the user's feedback until the user is satisfied, and finally the video shot segment the user needs is played via its key frame.
CN 200710041117 2007-05-24 2007-05-24 Time-shifted television video matching method combining program content metadata and content analysis Active CN100493195C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200710041117 CN100493195C (en) 2007-05-24 2007-05-24 Time-shifted television video matching method combining program content metadata and content analysis

Publications (2)

Publication Number Publication Date
CN101064846A (en) 2007-10-31
CN100493195C CN100493195C (en) 2009-05-27

Family

ID=38965508

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200710041117 Active CN100493195C (en) 2007-05-24 2007-05-24 Time-shifted television video matching method combining program content metadata and content analysis

Country Status (1)

Country Link
CN (1) CN100493195C (en)

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102067173A (en) * 2008-05-28 2011-05-18 苹果公司 Defining a border for an image
CN102067173B (en) * 2008-05-28 2015-06-10 苹果公司 Defining a border for an image
WO2011057560A1 (en) * 2009-11-13 2011-05-19 Huawei Technologies Co.,Ltd. Media distribution with service continuity
US8176195B2 (en) 2009-11-13 2012-05-08 Futurewei Technologies, Inc. Media distribution with service continuity
CN101930608A (en) * 2010-08-26 2010-12-29 北京交通大学 Method and system for blindly detecting tampered image
CN102222103A (en) * 2011-06-22 2011-10-19 央视国际网络有限公司 Method and device for processing matching relationship of video content
CN102222103B (en) * 2011-06-22 2013-03-27 央视国际网络有限公司 Method and device for processing matching relationship of video content
WO2013063745A1 (en) * 2011-10-31 2013-05-10 Nokia Corporation On-demand video cut service
CN102930028A (en) * 2011-11-07 2013-02-13 微软公司 Similarity and relevance of content
CN104620235A (en) * 2012-09-07 2015-05-13 华为技术有限公司 System and method for segment demarcation and identification in adaptive streaming
CN104620235B (en) * 2012-09-07 2018-01-16 华为技术有限公司 For the section boundary in adaptive crossfire and the system and method for identification
CN103020138A (en) * 2012-11-22 2013-04-03 江苏乐买到网络科技有限公司 Method and device for video retrieval
CN103065660A (en) * 2012-12-11 2013-04-24 天津天地伟业数码科技有限公司 Video file locating method of embedded video recorder
CN104903892A (en) * 2012-12-12 2015-09-09 悟图索知株式会社 Searching system and searching method for object-based images
CN104903892B (en) * 2012-12-12 2018-02-02 悟图索知株式会社 Object-based image retrieval system and search method
CN103974142A (en) * 2013-01-31 2014-08-06 深圳市快播科技有限公司 Video playing method and system
CN103974142B (en) * 2013-01-31 2017-08-15 深圳市快播科技有限公司 A kind of video broadcasting method and system
CN104636330A (en) * 2013-11-06 2015-05-20 北京航天长峰科技工业集团有限公司 Related video rapid searching method based on structural data
CN104023181A (en) * 2014-06-23 2014-09-03 联想(北京)有限公司 Information processing method and device
CN104023181B (en) * 2014-06-23 2018-08-31 联想(北京)有限公司 Information processing method and device
CN106548118A (en) * 2015-09-23 2017-03-29 北京丰源星际传媒科技有限公司 The recognition and retrieval method and system of cinema projection content
CN107147949A (en) * 2017-05-05 2017-09-08 中广热点云科技有限公司 The playing progress rate control method and system of a kind of direct broadcast time-shift
CN107147949B (en) * 2017-05-05 2020-05-05 中广热点云科技有限公司 Live broadcast time shifting playing progress control method and system
CN107301245A (en) * 2017-07-14 2017-10-27 国网山东省电力公司淄博供电公司 A kind of power information video searching system
CN107301245B (en) * 2017-07-14 2020-03-06 国网山东省电力公司淄博供电公司 Power information video search system
CN109040774A (en) * 2018-07-24 2018-12-18 优地技术有限公司 A kind of programme information extracting method, terminal device and server
CN110248245A (en) * 2019-06-21 2019-09-17 维沃移动通信有限公司 A kind of video locating method, device, mobile terminal and storage medium
CN111738173A (en) * 2020-06-24 2020-10-02 北京奇艺世纪科技有限公司 Video clip detection method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN100493195C (en) 2009-05-27


Legal Events

Code: Description
C06 / PB01: Publication
C10 / SE01: Entry into substantive examination
C14 / GR01: Grant of patent or utility model