CN104991953A - Coarse and fine granularity video searching method based on reverse index - Google Patents

Coarse and fine granularity video searching method based on reverse index

Info

Publication number
CN104991953A
CN104991953A
Authority
CN
China
Prior art keywords
video
vocabulary
window
retrieved
inverted index
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510427283.2A
Other languages
Chinese (zh)
Inventor
徐杰
吴鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Niujiena Technology Co Ltd
Original Assignee
Chengdu Niujiena Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Niujiena Technology Co Ltd filed Critical Chengdu Niujiena Technology Co Ltd
Priority to CN201510427283.2A priority Critical patent/CN104991953A/en
Publication of CN104991953A publication Critical patent/CN104991953A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F 16/71 Indexing; Data structures therefor; Storage structures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F 16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/783 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F 16/7837 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a coarse and fine granularity video retrieval method based on an inverted index, which retrieves and compares videos through a layered, granularity-by-granularity search. First, an inverted index table is built for all videos in the video database with an inverted index algorithm; at the retrieval stage the inverted index is used to obtain a corresponding candidate video set, which narrows the scope of the subsequent exact search and completes the coarse-granularity search, effectively reducing the time overhead of video retrieval. Second, fine-granularity video retrieval is carried out based on an improved BLAST algorithm, so that the video matching the query video is found quickly in the video database, and the similarity between the query video and the found video is computed to obtain the retrieval result. The method has the characteristics of fast and accurate video lookup and real-time performance.

Description

A coarse and fine granularity video retrieval method based on an inverted index
Technical field
The invention belongs to the field of video retrieval and, more specifically, relates to a coarse and fine granularity video retrieval method based on an inverted index.
Background art
With the rapid development of multimedia and network technology, the quantity of digital video on the Internet has grown explosively. How to find a target video quickly and accurately within this vast body of video resources is a pressing technical problem for Internet video security supervision and retrieval. Many researchers have therefore devoted great effort to video identification and retrieval techniques, and among the numerous solutions, video fingerprinting has become a research focus in recent years.
Video fingerprints can be divided into embedded and non-embedded fingerprints according to their range of application. Embedded fingerprinting embeds fingerprint information related to the video content into the digital video product as a copyright mark and is mainly used in the field of copyright protection. Non-embedded video fingerprinting extracts characteristic information from the video content and, based on that information, computes a digital sequence that corresponds uniquely to the content and can uniquely identify the video; video retrieval is then realized by comparing video fingerprints. Because non-embedded fingerprints are closely tied to the video content and the retrieval procedure is simple and efficient, they are mainly used in video security monitoring and content retrieval.
In the prior art, the approximate string matching algorithm for video fingerprint sequences based on low variance improves the original string matching algorithm to obtain a lower variance, which improves matching accuracy to a certain extent without increasing algorithmic complexity. The video copy detection algorithm based on hypothesis testing uses the hypothesis-testing idea from probability theory to distinguish similar from non-similar videos: a copied video and its original have a high probability of being assigned to the same class, so it can be determined whether a video has been illegally copied, thereby protecting copyright; the overhead of this algorithm is low, but its robustness is poor, and attacks such as brightness change, cropping or scaling greatly affect its accuracy. The video fingerprint alignment algorithm based on DNA sequence alignment treats the video fingerprint as a video gene analogous to DNA in biology and analyzes it with the biological sequence-analysis algorithm FASTA; because FASTA involves backtracking during pairwise alignment, its overhead is large and its time efficiency cannot meet real-time requirements. Another method models and matches video content with a visual bag-of-words model: video features are quantized to build a visual dictionary, each video shot is expressed as a set of visual keywords, an inverted index of shot keyword groups is built on this basis, and retrieval is performed by matching against this index; this approach compresses the video representation effectively and improves retrieval efficiency through the inverted index, but its retrieval accuracy still needs improvement.
As can be seen from the research above, most work on video fingerprint retrieval and matching concentrates on robustness and accuracy, while real-time performance still needs improvement; a few research directions concentrate on raising retrieval speed, but sacrifice retrieval accuracy in doing so. Video retrieval for Internet video security monitoring requires not only a certain degree of real-time performance but also results with high accuracy and robustness. Therefore, in order to balance the real-time performance and accuracy of retrieval effectively, the present invention proposes a video retrieval scheme based on non-embedded video fingerprints, drawing on the ideas of inverted indexing and biological sequence analysis, to realize fast and accurate video retrieval.
Summary of the invention
The object of the invention is to overcome the deficiencies of the prior art and provide a coarse and fine granularity video retrieval method based on an inverted index that realizes fast and accurate video retrieval while effectively balancing the real-time performance and accuracy of retrieval.
To achieve the above object, the coarse and fine granularity video retrieval method based on an inverted index according to the present invention is characterized by comprising the following steps:
(1) Build an inverted index file for the whole video library
(1.1) Suppose N videos are stored in the video library; use shot segmentation to extract key frames from the first video, then extract the feature points in these key frames with the SURF algorithm;
(1.2) Use a pre-trained visual dictionary to classify all feature points: compute the Euclidean distance between each feature point and every word in the visual dictionary, assign each feature point to the word with the smallest Euclidean distance, then count the number of feature points in each word class and take that count as the frequency with which the word occurs in the video;
(1.3) Count the frequency with which every word in the visual dictionary occurs in the video, extract the k most frequent words as the key visual words of the video, store them in the vocabulary table of the inverted index file, and at the same time store the video information corresponding to these k words in the inverted list of the inverted index file;
(1.4) Following the method of steps (1.1) to (1.3), continue to process the next video in the video library until the N-th video, thereby building a complete inverted index file for the whole video library;
(2) Video retrieval
(2.1) Coarse-granularity video retrieval based on the inverted index file
(2.1.1) Extract the k key visual words of the query video q following the method of steps (1.1) to (1.3);
(2.1.2) Use these k key visual words to search the inverted index file, find all videos that match these k words, and label the videos found, in order, as video h_i, i = 1, 2, ..., n, where n is the total number of videos found;
(2.2) Fine-granularity video retrieval based on the improved BLAST algorithm
(2.2.1) Set the window sliding step N;
(2.2.2) Compute window values for the query video q and video h_i
Set up a sliding window whose starting position is the first bit of video h_i, i.e. the first bit of the sliding window lies on the first bit of video h_i. Slide this window from the starting position towards the end over the query video q and over video h_i respectively, the length of each slide being the window sliding step N; at each slide compute the value of the binary sequence covered by the window as the window value, and arrange the window values in order to form a window value sequence;
(2.2.3) Compare the window values of the query video q and video h_i
Starting from the first position, move backwards through the window value sequences of video h_i and the query video q until a position with identical window values is reached; align the two sequences with this position, at which the window values of video h_i and the query video q are identical, as the reference; compare all window values of video h_i and the query video q before and after this position and compute their similarity; return the video fingerprint h_i and the similarity as the retrieval result.
The object of the invention is achieved as follows:
The coarse and fine granularity video retrieval method based on an inverted index according to the present invention retrieves and compares videos through a layered, granularity-by-granularity search. First, an inverted index table is built for all videos in the video database with an inverted index algorithm; at the retrieval stage the inverted index is used to obtain a corresponding candidate video set, which narrows the scope of the exact search and completes the coarse-granularity lookup, effectively reducing the time overhead of video retrieval. Second, fine-granularity video retrieval is carried out based on the improved BLAST algorithm, so that the video matching the query video is found quickly in the video database; the similarity between the query video and the found video is computed to obtain the retrieval result. The method is characterized by fast, accurate and real-time video lookup.
Brief description of the drawings
Fig. 1 is a schematic block diagram of the coarse and fine granularity video retrieval based on an inverted index according to the present invention;
Fig. 2 is a flow chart of the coarse and fine granularity video retrieval method based on an inverted index according to the present invention;
Fig. 3 is a schematic diagram of key frame extraction using shot segmentation;
Fig. 4 is a schematic diagram of the feature points extracted from a key frame;
Fig. 5 is a diagram of the frequency with which the words of the visual dictionary occur in the video.
Detailed description of the embodiments
The specific embodiments of the present invention are described below with reference to the accompanying drawings so that those skilled in the art can better understand the present invention. It should be noted in particular that, in the following description, detailed descriptions of known functions and designs are omitted where they might obscure the main content of the present invention.
Embodiment
Fig. 1 is a schematic block diagram of the coarse and fine granularity video retrieval based on an inverted index according to the present invention.
In the present embodiment, as shown in Fig. 1, the coarse and fine granularity video retrieval method based on an inverted index according to the present invention mainly comprises an index building stage and a video retrieval stage; as shown in Fig. 2, the concrete steps are as follows:
T1. Index building stage
(1) Coarse-granularity video retrieval based on the inverted index
(T1.1) Build an inverted index file for the whole video library
(T1.1.1) Suppose N videos are stored in the video library; use shot segmentation to extract key frames from the first video, as shown in Fig. 3;
(T1.1.2) Use the SURF algorithm to extract the feature points in the key frames; in the present embodiment, as shown in Fig. 4, a total of 5373 feature points are extracted from this video;
(T1.1.3) Use the pre-trained visual dictionary to classify all feature points: compute the Euclidean distance between each feature point and every word in the visual dictionary, assign each feature point to the word with the smallest Euclidean distance, then count the number of feature points in each word class and take that count as the frequency with which the word occurs in the video; in the present embodiment, as shown in Fig. 5, after classifying the 5373 feature points, the frequency with which each occurring word appears in the video is obtained;
(T1.1.4) Count the frequency with which every word in the visual dictionary occurs in the video, extract the k = 4 most frequent words as the key visual words of the video, store them in the vocabulary table of the inverted index file, and at the same time store the video information corresponding to these k = 4 words in the inverted list of the inverted index file;
(T1.1.5) Following the method of steps (T1.1.1) to (T1.1.4), continue to process the next video in the video library until the N-th video, thereby building a complete inverted index file for the whole video library;
The inverted index file mainly comprises two tables, namely the vocabulary table and the inverted list; the vocabulary table stores the key visual words from the visual dictionary, the inverted list stores the video information corresponding to each key visual word, and at the initial moment the inverted list is empty. A code sketch of steps (T1.1.3) to (T1.1.5) is given below.
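The following Python sketch illustrates steps (T1.1.3) to (T1.1.5) under stated assumptions: the SURF descriptors of the key frames are assumed to have been extracted beforehand, the visual dictionary is assumed to be a matrix of pre-trained word centroids, and the helper names (quantize_features, key_visual_words, build_inverted_index) are hypothetical illustrations rather than part of the patent.

```python
import numpy as np
from collections import Counter, defaultdict

def quantize_features(descriptors, dictionary):
    """Step T1.1.3: assign each SURF descriptor to the nearest visual word.

    descriptors: (n_points, d) array of key-frame feature descriptors.
    dictionary:  (n_words, d) array of pre-trained visual word centroids.
    Returns a Counter mapping word id -> occurrence frequency in the video.
    """
    # Euclidean distance from every descriptor to every word centroid.
    dists = np.linalg.norm(descriptors[:, None, :] - dictionary[None, :, :], axis=2)
    nearest_word = dists.argmin(axis=1)      # word with the smallest distance
    return Counter(nearest_word.tolist())    # word frequencies for this video

def key_visual_words(word_freq, k=4):
    """Step T1.1.4: take the k most frequent words as the key visual words."""
    return [word for word, _ in word_freq.most_common(k)]

def build_inverted_index(videos, dictionary, k=4):
    """Step T1.1.5: build the inverted index file for the whole video library.

    videos: dict mapping video id -> (n_points, d) descriptor array.
    Returns the vocabulary table and the inverted list (word -> video ids).
    """
    vocabulary_table = set()
    inverted_list = defaultdict(list)        # empty at the initial moment
    for video_id, descriptors in videos.items():
        freq = quantize_features(descriptors, dictionary)
        for word in key_visual_words(freq, k):
            vocabulary_table.add(word)
            inverted_list[word].append(video_id)
    return vocabulary_table, inverted_list
```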
T2. Video retrieval stage
The video retrieval stage is in turn divided into coarse-granularity retrieval and fine-granularity retrieval; the concrete steps are as follows:
(T2.1) Coarse-granularity video retrieval based on the inverted index file
(T2.1.1) Extract the k = 4 key visual words of the query video q following the method of steps (T1.1.1) to (T1.1.4);
(T2.1.2) Use these k = 4 key visual words to search the inverted index file, find all videos that match these k words, and label the videos found, in order, as video h_i, i = 1, 2, ..., n, where n is the total number of videos found. A code sketch of this coarse lookup is given below.
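The coarse-granularity lookup in step (T2.1.2) amounts to collecting, from the inverted list, every video that shares at least one key visual word with the query. The sketch below reuses the hypothetical helpers from the previous listing; ordering the candidates by the number of shared key words is an illustrative assumption, since the patent only requires the candidate set h_1 ... h_n.

```python
def coarse_retrieve(query_descriptors, dictionary, inverted_list, k=4):
    """Step T2.1.2: look up candidate videos in the inverted index file."""
    freq = quantize_features(query_descriptors, dictionary)
    query_words = key_visual_words(freq, k)

    hits = Counter()                         # candidate video -> shared key words
    for word in query_words:
        for video_id in inverted_list.get(word, []):
            hits[video_id] += 1

    # Candidate set h_1 ... h_n; sorting by shared word count is an illustrative choice.
    return [video_id for video_id, _ in hits.most_common()]
```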
In the present embodiment, suppose the video library stores N = 100 videos of different lengths, whose content covers multiple types such as news, sports, flash animation, trailers, entertainment and music videos.
The vocabulary size of the inverted index file is set to 250, 500, 1000 and 1500 in turn, the keyword quantity is set to 4, and the inverted index is built for the 100 videos according to the steps described above;
Vocabulary size | Correct retrievals | False retrievals | Accuracy rate | Average time used | Average candidate videos
250 | 92 | 8 | 92% | 39.2 s | 7
500 | 88 | 12 | 88% | 115.7 s | 5
1000 | 81 | 19 | 81% | 212.5 s | 3
1500 | 68 | 32 | 68% | 384.6 s | 2
Table 1
As the experimental results in Table 1 show, as the vocabulary size keeps growing, classifying the video against the vocabulary and counting word frequencies becomes more error-prone, so the accuracy rate declines. In addition, as the vocabulary size grows, the word-frequency statistics take longer and the time needed to query one video keeps increasing. Conversely, as the vocabulary shrinks, the number of candidate videos returned by the retrieval increases, the number of subsequent fine-granularity comparisons grows, and efficiency falls. Weighing the three factors of accuracy rate, average time and candidate video quantity together, the retrieval effect is best when the vocabulary size is 250.
Next, based on the above experimental results, the vocabulary size is fixed at 250 and four groups of keyword quantities, 1, 4, 8 and 12, are tested; the retrieval results are shown in Table 2;
Keyword quantity | Correct retrievals | False retrievals | Accuracy rate | Average candidate videos
1 | 72 | 28 | 72% | 1
4 | 92 | 8 | 92% | 7
8 | 94 | 6 | 94% | 25
10 | 98 | 4 | 98% | 40
Table 2
From the experimental data it can be seen that, as the keyword quantity keeps increasing, the number of candidate videos grows and the accuracy rate keeps improving. Weighing accuracy rate and candidate video quantity together, the keyword quantity is set to 4.
Experimental analysis shows that choosing a vocabulary size of 250 and a keyword quantity of 4 produces a reasonable retrieval result and prepares the ground for the fine-granularity exact search.
(T2.2) Fine-granularity video fingerprint retrieval based on the improved BLAST algorithm
(T2.2.1) Set the window sliding step N; in the present embodiment, the window sliding step is set to N = 16 bits;
(T2.2.2) Compute window values for the query video q and video h_i
Set up a sliding window whose starting position is the first bit of video h_i, i.e. the first bit of the sliding window lies on the first bit of video h_i. Slide this window from the starting position towards the end over the query video q and over video h_i respectively, the length of each slide being the window sliding step N; at each slide compute the value of the binary sequence covered by the window as the window value, and arrange the window values in order to form a window value sequence;
(T2.2.3) Compare the window values of the query video q and video h_i
Starting from the first position, move backwards through the window value sequences of video h_i and the query video q until a position with identical window values is reached; align the two sequences with this position, at which the window values of video h_i and the query video q are identical, as the reference; compare all window values of video h_i and the query video q before and after this position and compute their similarity; return the video fingerprint h_i and the similarity as the retrieval result. A code sketch of this seed-and-extend comparison is given below.
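The Python sketch below illustrates steps (T2.2.1) to (T2.2.3) on binary fingerprint strings. It assumes a 16-bit window and, following the worked example of Table 3 below, slides the window by a small bit increment; the exact stride and the similarity measure (the fraction of matching window values after alignment) are assumptions for illustration, since the patent does not fix the similarity formula.

```python
def window_values(fingerprint, window_bits=16, stride=2):
    """Step T2.2.2: slide a window over a binary fingerprint string and record
    the decimal value of every covered binary sequence (the window values)."""
    values = []
    for start in range(0, len(fingerprint) - window_bits + 1, stride):
        segment = fingerprint[start:start + window_bits]
        values.append(int(segment, 2))       # window value as a decimal integer
    return values

def match_fingerprints(query, candidate, window_bits=16, stride=2):
    """Step T2.2.3: find the first pair of equal window values (the seed),
    align the two sequences there, and count how many window values agree."""
    q_vals = window_values(query, window_bits, stride)
    c_vals = window_values(candidate, window_bits, stride)

    for ci, c_val in enumerate(c_vals):      # move through the candidate sequence...
        for qi, q_val in enumerate(q_vals):  # ...until some query window value matches
            if c_val == q_val:
                offset = ci - qi             # align the sequences at the seed
                matches, compared = 0, 0
                for qj, q_v in enumerate(q_vals):
                    cj = qj + offset
                    if 0 <= cj < len(c_vals):
                        compared += 1
                        matches += (c_vals[cj] == q_v)
                # Illustrative similarity: fraction of aligned window values that agree.
                return matches / compared if compared else 0.0
    return 0.0                               # no seed found, no similarity
```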
In the present embodiment, suppose the query video q entered is 0000110000110000000011001010100110111001. A sliding window of 16 bits is set; at each position on the query video q the sliding window covers one binary sequence with a corresponding decimal integer, as shown in Table 3;
Window position | Decimal window value | Binary sequence covered by the window
8 | 3120 | 0000110000110000
9 | 12480 | 0011000011000000
10 | 49920 | 1100001100000000
11 | 3075 | 0000110000000011
12 | 12300 | 0011000000001100
13 | 49202 | 1100000000110010
14 | 202 | 0000000011001010
15 | 810 | 0000001100101010
16 | 3241 | 0000110010101001
17 | 12966 | 0011001010100110
18 | 51867 | 1100101010011011
19 | 10862 | 0010101001101110
20 | 43449 | 1010100110111001
Table 3
For a similar video selected from the candidate video database, window values are computed with the same 16-bit window, which slides over the similar video with the same step. During matching, if the window value at some position of the similar video equals some window value of the query video q, the window values before and after that position are compared further until the comparison of the two sequences is complete; finally the degree of similarity is computed and returned as the result.
Suppose a similar video stored in the candidate video database is 00000111111101000011000011000000001100101010011011100100000111, and a window of the same size is defined on it. The window value of the similar video at window position 8, where the computation starts, is 2036; no window value of the query equals it, so the window is moved backwards by one position and the computation continues, until the window value at position 16 is found to be 12480, which equals the window value of the query at window position 9. The window values after this position are then compared one by one with the window values of the query video q after its position 9, and so on, until the comparison of the two video fingerprints is complete and their similarity is obtained.
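As a quick check of Table 3, the decimal window value of a window is simply the integer value of the binary sequence it covers; in Python this is int(sequence, 2):

```python
# Converting the binary windows of Table 3 to their decimal window values.
assert int("0000110000110000", 2) == 3120     # window position 8
assert int("0011000011000000", 2) == 12480    # window position 9
assert int("1010100110111001", 2) == 43449    # window position 20
```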
Although the illustrative embodiments of the present invention have been described above so that those skilled in the art can understand the present invention, it should be clear that the invention is not restricted to the scope of these embodiments. To those skilled in the art, as long as the various changes fall within the spirit and scope of the present invention as defined and determined by the appended claims, these changes are obvious, and all inventions and creations that make use of the inventive concept are under protection.

Claims (2)

1. A coarse and fine granularity video retrieval method based on an inverted index, characterized by comprising the following steps:
(1) Build an inverted index file for the whole video library
(1.1) Suppose N videos are stored in the video library; use shot segmentation to extract key frames from the first video, then extract the feature points in these key frames with the SURF algorithm;
(1.2) Use a pre-trained visual dictionary to classify all feature points: compute the Euclidean distance between each feature point and every word in the visual dictionary, assign each feature point to the word with the smallest Euclidean distance, then count the number of feature points in each word class and take that count as the frequency with which the word occurs in the video;
(1.3) Count the frequency with which every word in the visual dictionary occurs in the video, extract the k most frequent words as the key visual words of the video, store them in the vocabulary table of the inverted index file, and at the same time store the video information corresponding to these k words in the inverted list of the inverted index file;
(1.4) Following the method of steps (1.1) to (1.3), continue to process the next video in the video library until the N-th video, thereby building a complete inverted index file for the whole video library;
(2) Video retrieval
(2.1) Coarse-granularity video retrieval based on the inverted index file
(2.1.1) Extract the k key visual words of the query video q following the method of steps (1.1) to (1.3);
(2.1.2) Use these k key visual words to search the inverted index file, find all videos that match these k words, and label the videos found, in order, as video h_i, i = 1, 2, ..., n, where n is the total number of videos found;
(2.2) Fine-granularity video retrieval based on the improved BLAST algorithm
(2.2.1) Set the window sliding step N;
(2.2.2) Compute window values for the query video q and video h_i
Set up a sliding window whose starting position is the first bit of video h_i, i.e. the first bit of the sliding window lies on the first bit of video h_i. Slide this window from the starting position towards the end over the query video q and over video h_i respectively, the length of each slide being the window sliding step N; at each slide compute the value of the binary sequence covered by the window as the window value, and arrange the window values in order to form a window value sequence;
(2.2.3) Compare the window values of the query video q and video h_i
Starting from the first position, move backwards through the window value sequences of video h_i and the query video q until a position with identical window values is reached; align the two sequences with this position, at which the window values of video h_i and the query video q are identical, as the reference; compare all window values of video h_i and the query video q before and after this position and compute their similarity; return the video fingerprint h_i and the similarity as the retrieval result.
2. The coarse and fine granularity video retrieval method based on an inverted index according to claim 1, characterized in that the inverted index file comprises two tables, namely a vocabulary table and an inverted list; the vocabulary table stores the key visual words from the visual dictionary; the inverted list stores the video information corresponding to these key visual words; at the initial moment, the inverted list is empty.
CN201510427283.2A 2015-07-20 2015-07-20 Coarse and fine granularity video searching method based on reverse index Pending CN104991953A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510427283.2A CN104991953A (en) 2015-07-20 2015-07-20 Coarse and fine granularity video searching method based on reverse index

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510427283.2A CN104991953A (en) 2015-07-20 2015-07-20 Coarse and fine granularity video searching method based on reverse index

Publications (1)

Publication Number Publication Date
CN104991953A true CN104991953A (en) 2015-10-21

Family

ID=54303768

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510427283.2A Pending CN104991953A (en) 2015-07-20 2015-07-20 Coarse and fine granularity video searching method based on reverse index

Country Status (1)

Country Link
CN (1) CN104991953A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109388729A (en) * 2017-08-14 2019-02-26 阿里巴巴集团控股有限公司 Search method, device and the audio query system of audio sub fingerprint
CN109492127A (en) * 2018-11-12 2019-03-19 网易传媒科技(北京)有限公司 Data processing method, device, medium and calculating equipment
CN110321454A (en) * 2019-08-06 2019-10-11 北京字节跳动网络技术有限公司 Processing method, device, electronic equipment and the computer readable storage medium of video
CN111078824A (en) * 2019-12-18 2020-04-28 南京录信软件技术有限公司 Method for reducing storage space occupied by Lucene dictionary-free n-gram word segmentation
CN112182287A (en) * 2020-09-10 2021-01-05 南京信息工程大学 Video copy detection method based on space-time visual phrase and hierarchical matching
CN114443895A (en) * 2022-01-26 2022-05-06 网易传媒科技(北京)有限公司 Retrieval method and apparatus, computer-readable storage medium, and computing device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102208026A (en) * 2011-05-27 2011-10-05 电子科技大学 Method for extracting digital video fingerprints
CN102307301A (en) * 2011-05-30 2012-01-04 电子科技大学 Audio-video fingerprint generation method based on key frames
CN104063706A (en) * 2014-06-27 2014-09-24 电子科技大学 Video fingerprint extraction method based on SURF algorithm
CN104142984A (en) * 2014-07-18 2014-11-12 电子科技大学 Video fingerprint retrieval method based on coarse and fine granularity

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102208026A (en) * 2011-05-27 2011-10-05 电子科技大学 Method for extracting digital video fingerprints
CN102307301A (en) * 2011-05-30 2012-01-04 电子科技大学 Audio-video fingerprint generation method based on key frames
CN104063706A (en) * 2014-06-27 2014-09-24 电子科技大学 Video fingerprint extraction method based on SURF algorithm
CN104142984A (en) * 2014-07-18 2014-11-12 电子科技大学 Video fingerprint retrieval method based on coarse and fine granularity

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
梅玫 (Mei Mei): "Research on Content-Based Homologous Video Retrieval Technology", China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109388729A (en) * 2017-08-14 2019-02-26 阿里巴巴集团控股有限公司 Search method, device and the audio query system of audio sub fingerprint
CN109492127A (en) * 2018-11-12 2019-03-19 网易传媒科技(北京)有限公司 Data processing method, device, medium and calculating equipment
CN110321454A (en) * 2019-08-06 2019-10-11 北京字节跳动网络技术有限公司 Processing method, device, electronic equipment and the computer readable storage medium of video
CN111078824A (en) * 2019-12-18 2020-04-28 南京录信软件技术有限公司 Method for reducing storage space occupied by Lucene dictionary-free n-gram word segmentation
CN112182287A (en) * 2020-09-10 2021-01-05 南京信息工程大学 Video copy detection method based on space-time visual phrase and hierarchical matching
CN112182287B (en) * 2020-09-10 2021-06-08 南京信息工程大学 Video copy detection method based on space-time visual phrase and hierarchical matching
CN114443895A (en) * 2022-01-26 2022-05-06 网易传媒科技(北京)有限公司 Retrieval method and apparatus, computer-readable storage medium, and computing device

Similar Documents

Publication Publication Date Title
CN104991953A (en) Coarse and fine granularity video searching method based on reverse index
CN104142984B Video fingerprint retrieval method based on coarse and fine granularity
CN103297851B Method and device for fast statistics and automatic auditing of object content in long videos
CN106649490B (en) Image retrieval method and device based on depth features
CN102890700B (en) Method for retrieving similar video clips based on sports competition videos
CN103617217A (en) Hierarchical index based image retrieval method and system
Wang et al. Compact CNN based video representation for efficient video copy detection
CN104520875A (en) A method and an apparatus for the extraction of descriptors from video content, preferably for search and retrieval purpose
CN104516903A (en) Keyword extension method and system and classification corpus labeling method and system
CN111460961A (en) CDVS-based similarity graph clustering static video summarization method
Liu et al. An image-based near-duplicate video retrieval and localization using improved edit distance
Nagel et al. Event Fisher Vectors: Robust Encoding Visual Diversity of Visual Streams.
CN103778206A (en) Method for providing network service resources
Wu et al. Ultrahigh-speed tv commercial detection, extraction, and matching
Sitaula et al. Content and context features for scene image representation
CN103020321A (en) Neighbor searching method and neighbor searching system
Jiang et al. Video copy detection using a soft cascade of multimodal features
CN105760875A (en) Binary image feature similarity discrimination method based on random forest algorithm
Jiang et al. PKU-IDM@ TRECVID2011 CBCD: Content-Based Copy Detection with Cascade of Multimodal Features and Temporal Pyramid Matching.
Yang et al. Multiscale video sequence matching for near-duplicate detection and retrieval
CN110298228A Multi-target image search method
Hu et al. STRNN: End-to-end deep learning framework for video partial copy detection
Yuan et al. Mining actor correlations with hierarchical concurrence parsing
Lu et al. Adaptive reference frame selection for near-duplicate video shot detection
Chiu et al. Efficient histogram-based indexing for video copy detection

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20151021

RJ01 Rejection of invention patent application after publication