CN103646401B - Method for extracting video fingerprints based on temporal and spatial gradients - Google Patents

Method for extracting video fingerprints based on temporal and spatial gradients Download PDF

Info

Publication number
CN103646401B
CN103646401B CN201310698603.9A CN201310698603A CN103646401B
Authority
CN
China
Prior art keywords
video
gradient
fingerprint
frame
grayscale
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201310698603.9A
Other languages
Chinese (zh)
Other versions
CN103646401A (en)
Inventor
Zhenyu Yu (于震宇)
Shumin Zhang (张树民)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHANGHAI ZIZHU DIGITAL CREATIVE HARBOR Co Ltd
Original Assignee
SHANGHAI ZIZHU DIGITAL CREATIVE HARBOR Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHANGHAI ZIZHU DIGITAL CREATIVE HARBOR Co Ltd filed Critical SHANGHAI ZIZHU DIGITAL CREATIVE HARBOR Co Ltd
Priority to CN201310698603.9A priority Critical patent/CN103646401B/en
Publication of CN103646401A publication Critical patent/CN103646401A/en
Application granted granted Critical
Publication of CN103646401B publication Critical patent/CN103646401B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a method for extracting video fingerprints based on temporal and spatial gradients. The method includes: segmenting a video into multiple scenes using scene-cut detection and extracting key frames from each scene; converting each key frame and its preceding and following frames into grayscale images; resizing the grayscale images and partitioning the resized images into blocks; and computing, for each grayscale image block, the mean spatial gradient magnitude, the centroid of gradient orientations, and the mean temporal gradient magnitude as the video fingerprint. By considering the temporal and spatial characteristics of the video at the same time, computing both temporal and spatial gradients, and deriving the fingerprint features from gradient values, the method improves the stability of video fingerprint extraction. By extracting and comparing video fingerprints, the source video can still be matched successfully when the copy has been processed by various means, so the method has a wide range of applications.

Description

Method for extracting video fingerprints based on temporal and spatial gradients
Technical field
The present invention relates to the field of video processing, and in particular to video fingerprint extraction; more specifically, it relates to a method for extracting video fingerprints based on temporal and spatial gradients.
Background technology
In recent years, the production, storage, and distribution of digital video data have grown considerably, and the exchange of digital video has become more and more common. Alongside the development of digital video, its copyright protection has become an increasingly important problem. Protecting the copyright of digital video requires an effective way to protect, manage, and index video content, in order to deal with video copying and piracy.
Video fingerprinting is an effective method for digital video copyright protection; the goal of video fingerprint identification is to provide a fast and reliable way of identifying content. A video fingerprint is a feature vector that uniquely distinguishes one video clip from other video clips, and identification is performed by measuring the distance between the fingerprint to be retrieved and each fingerprint in a database in order to determine a given video. Video fingerprint identification is widely used in file-sharing services, broadcast monitoring, automatic indexing of large video databases, and related fields.
A video fingerprint needs to satisfy the following properties:
(1) Robustness (stability): the fingerprint extracted from a distorted video segment should be similar to the fingerprint of the original video segment.
(2) Independence: two different videos should have different fingerprints.
(3) Database retrieval efficiency: for a large-scale application database, the fingerprint should support efficient database retrieval.
Copyright protection using video fingerprint technology works as follows: fingerprints are extracted from videos and stored. When similar videos that infringe copyright must be found among a set of videos to be checked, fingerprints are extracted from the videos to be checked and compared with the stored fingerprints in order to judge video similarity.
In video copying and piracy, the copied video is usually produced from the original video by some processing, such as scaling, stretching, cropping, or insertion, and several such operations may be applied at the same time. After such processing, the pirated video may no longer be similar to the source video at the pixel level, but the extracted video fingerprints must still reveal the piracy relationship between them; that is, the video fingerprint must be robust.
Among existing video fingerprint extraction techniques, the article "Sunil Lee, Yoo C.D.; Video Fingerprint Based on Centroids of Gradient Orientations; Acoustics, Speech and Signal Processing, ICASSP; Volume: 2; 2006" describes a video fingerprint extraction method based on the spatial centroids of gradient orientations. That method is simple to compute, and the extracted feature is highly robust. The present invention is an improvement on the video fingerprint of that article: it extracts not only the spatial centroid of gradient orientations but also the mean gradient magnitude based on space and time.
Summary of the invention
The object of the present invention is to overcome the shortcomings of the prior art described above and to provide a method for extracting video fingerprints based on temporal and spatial gradients that not only extracts the spatial centroid of gradient orientations but also extracts the mean gradient magnitude based on space and time, improves the stability of video fingerprint extraction, and has a wide range of applications.
To achieve this object, the method of the present invention for extracting video fingerprints based on temporal and spatial gradients is composed as follows.
The method for extracting video fingerprints based on temporal and spatial gradients is mainly characterized in that it includes the following steps:
(1) segmenting the video into multiple scenes using scene-cut detection and extracting key frames from each scene;
(2) converting each key frame and its preceding and following frames into grayscale images;
(3) resizing the grayscale images and partitioning the resized grayscale images into blocks;
(4) computing, for each grayscale image block, the mean spatial gradient magnitude, the centroid of gradient orientations, and the mean temporal gradient magnitude as the video fingerprint.
Preferably, segmenting the video into multiple scenes using scene-cut detection and extracting key frames from each scene includes the following steps:
(11) resampling the video at a fixed frame rate preset by the system;
(12) detecting the scene changes of the video using gradual scene-change detection and abrupt scene-change detection, and segmenting the video into multiple scenes accordingly;
(13) extracting key frames from each scene.
More preferably, extracting key frames from each scene includes the following steps:
(131) judging whether the length of the scene is less than 2L, where L is a frame count preset by the system; if so, continuing with step (132), otherwise continuing with step (133);
(132) selecting the middle frame of the scene as the key frame, then continuing with step (2);
(133) selecting one key frame every L frames within the scene.
Preferably, converting each key frame and its preceding and following frames into grayscale images includes the following steps:
(21) extracting each key frame and the frames adjacent to each key frame;
(22) converting each key frame and its preceding and following frames to grayscale to obtain grayscale images.
Preferably, resizing the grayscale images and partitioning the resized grayscale images into blocks includes the following steps:
(31) resizing the grayscale images to a fixed size preset by the system;
(32) partitioning each resized grayscale image into blocks, so that each resized grayscale image is divided into an N × M matrix of blocks.
Preferably, computing the mean spatial gradient magnitude, the centroid of gradient orientations, and the mean temporal gradient magnitude of each grayscale image block as the video fingerprint includes the following steps:
(41) computing the mean spatial gradient magnitude, the centroid of gradient orientations, and the mean temporal gradient magnitude for each image block;
(42) taking the computed mean spatial gradient magnitude, centroid of gradient orientations, and mean temporal gradient magnitude as the video feature, i.e. the video fingerprint.
More preferably, computing the mean spatial gradient magnitude, the centroid of gradient orientations, and the mean temporal gradient magnitude includes the following steps:
(411) computing the spatial gradient Δp at coordinate (x, y) of the k-th frame of the video according to the following formulas:
Δp = (Gx, Gy)^T = (∂p/∂x, ∂p/∂y)^T
Gx = p[x+1, y, k] − p[x−1, y, k]
Gy = p[x, y+1, k] − p[x, y−1, k]
where p[x, y, k] is the luminance value of the point at coordinate (x, y) of the k-th frame of the video;
(412) computing the centroid of gradient orientations c1[n, m, k] for the block at row n and column m of the k-th frame of the video according to the following formulas:
c1[n, m, k] = ( Σ_{(x,y)∈B_{n,m,k}} m[x, y, k]·θ[x, y, k] ) / ( Σ_{(x,y)∈B_{n,m,k}} m[x, y, k] )
m[x, y, k] = sqrt(Gx² + Gy²)
θ[x, y, k] = arctan(Gy / Gx)
where B_{n,m,k} is the image block at row n and column m of the k-th frame;
(413) computing the mean spatial gradient magnitude c2[n, m, k] of the image block at row n and column m of the k-th frame of the video according to the following formula, where X and Y are the length and width of a block after partitioning:
c2[n, m, k] = (1 / (X·Y)) · Σ_{(x,y)∈B_{n,m,k}} m[x, y, k];
(414) computing the mean temporal gradient magnitude c3[n, m, k] of the image block at row n and column m of the k-th frame of the video according to the following formulas:
c3[n, m, k] = (1 / (X·Y)) · Σ_{(x,y)∈B_{n,m,k}} mt[x, y, k]
mt[x, y, k] = Gt
Gt = p[x, y, k+1] − p[x, y, k−1]
where mt[x, y, k] is the temporal gradient at coordinate (x, y) of the k-th frame of the video.
Further, taking the computed mean spatial gradient magnitude, centroid of gradient orientations, and mean temporal gradient magnitude as the video feature, i.e. the video fingerprint, includes the following steps:
(421) computing the video fingerprint f[n, m, k] of row n and column m of the k-th frame of the video according to the following formula:
f[n, m, k] = [w1·c1[n, m, k], w2·c2[n, m, k], w3·c3[n, m, k]]
where w1, w2, and w3 are the weights of the centroid of gradient orientations, the mean spatial gradient magnitude, and the mean temporal gradient magnitude, respectively;
(422) computing the N × M-dimensional video fingerprint vector f_k of the k-th frame of the video according to the following formula:
f_k = [f[1, 1, k], f[1, 2, k], …, f[N, M, k]]
where N is the total number of block rows into which the grayscale image of the k-th frame is divided, and M is the total number of block columns into which it is divided.
Still further, after step (4), the method further includes the following step:
(5) judging the similarity between two videos by comparing their video fingerprints.
Still further, judging the similarity between two videos by comparing their video fingerprints is specifically:
computing the difference value D(f1, f2) between the two videos according to the following formula:
D(f1, f2) = (1 / (N·M·K)) · Σ_{n=1..N} Σ_{m=1..M} Σ_{k=1..K} ( (1/3) · Σ_{d=1..3} (f1[n, m, k, d] − f2[n, m, k, d])² )
where f1 and f2 are the video fingerprints extracted from two different video segments, d takes different values to index the different elements of f[n, m, k] so that corresponding elements are compared separately, and K is the total number of sampled key frames.
The method for extracting video fingerprints based on temporal and spatial gradients of this invention has the following beneficial effects:
The method considers the temporal and spatial characteristics of the video at the same time, computes both temporal and spatial gradients, uses the gradient values to extract the video fingerprint features, improves the stability of video fingerprint extraction, and adopts a relatively flexible fingerprint matching scheme. By extracting and comparing video fingerprints, the source video can still be matched successfully when the video has been processed by scaling, stretching, cropping, insertion, and similar operations (in which case the difference value between the videos is small). Video fingerprint technology can therefore help protect video copyright: when piracy occurs, the source video can be located quickly, so the method has a wide range of applications.
Brief description of the drawings
Fig. 1 is a flow chart of the method for extracting video fingerprints based on temporal and spatial gradients of the present invention.
Fig. 2 is a detailed flow chart of the method for extracting video fingerprints based on temporal and spatial gradients of the present invention.
Fig. 3 shows the key frame extraction scheme of the present invention when the length of a video scene exceeds 2L frames.
Detailed description of the invention
In order to describe the technical content of the present invention more clearly, it is further described below in conjunction with specific embodiments.
The invention discloses a method for extracting a video fingerprint which, as shown in Fig. 1, includes the following steps:
(1) segmenting the video using scene-cut detection and extracting key frames from the resulting segments;
(2) converting each key frame and its preceding and following frames to grayscale to obtain grayscale images;
(3) resizing the grayscale images and partitioning them into blocks;
(4) for each small grayscale block, computing the mean magnitude and the orientation centroid of the spatial gradient together with the mean temporal gradient magnitude, so as to extract a feature of each key frame as the video fingerprint; finally, comparing video fingerprints to judge the similarity of two videos.
The object of the present invention is to design a fingerprint extraction scheme with higher robustness, so as to better protect video copyright.
To achieve this object, as shown in Fig. 2, the fingerprint extraction process of the present invention includes the following steps:
(a) The input video is resampled at a fixed frame rate (f frames per second) to cope with changes of frame rate.
(b) Gradual scene-change detection and abrupt scene-change detection are used to detect the scene changes of the video, and the video is then segmented into individual scenes.
(c) Key frames are extracted from the segmented scenes. If the length of a scene is less than 2L frames, the middle frame of the scene is selected as the key frame. If the length of the scene is 2L frames or more, one key frame is selected every L frames, as shown in Fig. 3.
(d) Each key frame and its adjacent frames (the adjacent frames after resampling) are extracted in turn, and these frames are converted into grayscale images.
(e) The grayscale images are resized to a fixed size, so that the width and height of these frames are normalized to two fixed values, W and H, respectively.
(f) The resized grayscale images are partitioned into blocks: each resized frame is divided into N rows and M columns, forming an N × M matrix of blocks. The values of M and N can be chosen relatively flexibly, and different partition numbers can be used.
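Steps (d)-(f) map directly onto OpenCV and NumPy. The sketch below assumes BGR colour input frames and purely illustrative default values for W, H, N and M:

    import cv2
    import numpy as np

    def to_gray_resized(frame_bgr, W=320, H=240):
        # Steps (d)-(e): convert one frame to grayscale and normalise it to W x H.
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        return cv2.resize(gray, (W, H)).astype(np.float32)

    def split_into_blocks(gray, N=4, M=4):
        # Step (f): partition a resized grayscale frame into an N x M matrix of blocks.
        H, W = gray.shape
        Y, X = H // N, W // M          # block height Y and width X, as in the formulas below
        return [[gray[n * Y:(n + 1) * Y, m * X:(m + 1) * X] for m in range(M)]
                for n in range(N)]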
(g) For each block in the N × M matrix, the mean magnitude and the orientation centroid of the spatial gradient are computed, together with the mean magnitude of the temporal gradient.
At coordinate (x, y) of the k-th frame of the video, the luminance of the point is denoted by the function p(x, y, k). The spatial gradient at coordinate (x, y) is defined as:
Δp = (Gx, Gy)^T = (∂p/∂x, ∂p/∂y)^T
The direction in which the function p changes fastest at coordinate (x, y) is the gradient direction. In practical computation, Gx and Gy at coordinate (x, y) are obtained from the following formulas:
Gx = p[x+1, y, k] − p[x−1, y, k]
Gy = p[x, y+1, k] − p[x, y−1, k]
The magnitude m[x, y, k] and phase θ[x, y, k] of the gradient vector are given by the following formulas:
m[x, y, k] = sqrt(Gx² + Gy²)
θ[x, y, k] = arctan(Gy / Gx)
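As an illustrative NumPy sketch of these per-pixel quantities: the frame is assumed to be a float32 array of luminance values, the one-pixel border (where the central difference is undefined) is simply left at zero, and θ is set to 0 where both Gx and Gy vanish.

    import numpy as np

    def spatial_gradients(gray):
        # gray: 2-D float32 array of luminance values of one (resized) frame;
        # axis 0 is treated as x and axis 1 as y.
        Gx = np.zeros_like(gray)
        Gy = np.zeros_like(gray)
        Gx[1:-1, :] = gray[2:, :] - gray[:-2, :]     # Gx = p[x+1, y] - p[x-1, y]
        Gy[:, 1:-1] = gray[:, 2:] - gray[:, :-2]     # Gy = p[x, y+1] - p[x, y-1]
        mag = np.sqrt(Gx ** 2 + Gy ** 2)             # m[x, y, k]
        with np.errstate(divide="ignore", invalid="ignore"):
            theta = np.arctan(Gy / Gx)               # θ[x, y, k] in (-π/2, π/2)
        theta = np.nan_to_num(theta)                 # θ := 0 where Gx = Gy = 0
        return mag, theta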
In our video fingerprint method, the centroid of gradient orientations is computed for each block in the matrix:
c1[n, m, k] = ( Σ_{(x,y)∈B_{n,m,k}} m[x, y, k]·θ[x, y, k] ) / ( Σ_{(x,y)∈B_{n,m,k}} m[x, y, k] )
B_{n,m,k} is the image block at row n and column m of the k-th frame, and c1[n, m, k] is the centroid of gradient orientations obtained from block B_{n,m,k}. Because the magnitude weights are normalized by their sum, the centroid value lies between −π/2 and π/2.
The article "Sunil Lee, Yoo C.D.; Video Fingerprint Based on Centroids of Gradient Orientations; Acoustics, Speech and Signal Processing, ICASSP; Volume: 2; 2006" extracts a video fingerprint based precisely on the spatial centroids of gradient orientations. However, the magnitude of the spatial gradient also reflects video features and can be extracted as part of the video fingerprint, and extracting it can further improve the robustness of the video fingerprint.
Our video fingerprint method therefore extracts not only the centroid of gradient orientations but also the mean gradient magnitudes based on space and on time. For a partitioned video block, the mean spatial gradient magnitude c2[n, m, k] is given by the following formula.
c2[n, m, k] = (1 / (X·Y)) · Σ_{(x,y)∈B_{n,m,k}} m[x, y, k]
where X and Y are the length and width of a block after partitioning.
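Continuing the sketch, the per-block quantities c1 and c2 follow directly from the per-pixel magnitude and orientation computed above (spatial_gradients is the function from the previous sketch; the small eps guarding an all-zero block is an implementation assumption, not part of the formulas):

    import numpy as np

    def block_c1_c2(mag, theta, N=4, M=4, eps=1e-12):
        # mag, theta: per-pixel gradient magnitude and orientation of one key frame.
        # Returns two N x M arrays: c1 (orientation centroid) and c2 (mean magnitude).
        Hh, Ww = mag.shape
        Y, X = Hh // N, Ww // M                      # block height and width
        c1 = np.zeros((N, M), dtype=np.float32)
        c2 = np.zeros((N, M), dtype=np.float32)
        for n in range(N):
            for m in range(M):
                m_blk = mag[n * Y:(n + 1) * Y, m * X:(m + 1) * X]
                t_blk = theta[n * Y:(n + 1) * Y, m * X:(m + 1) * X]
                # c1: magnitude-weighted centroid of gradient orientations
                c1[n, m] = np.sum(m_blk * t_blk) / (np.sum(m_blk) + eps)
                # c2: mean spatial gradient magnitude over the X*Y pixels of the block
                c2[n, m] = np.sum(m_blk) / (X * Y)
        return c1, c2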
Extracting the mean temporal gradient magnitude requires the temporal gradient of each pixel:
Gt = ∂p/∂t
Because scene segmentation has already been performed, a key frame is very similar to the frames around it within the same scene, so the temporal gradient at coordinate (x, y) is computed and represented by the following formula.
Gt = p[x, y, k+1] − p[x, y, k−1]
We denote the temporal gradient at position (x, y) of the k-th frame by mt[x, y, k]:
mt[x, y, k] = Gt
For a partitioned video block, the mean temporal gradient magnitude is given by the following formula.
c3[n, m, k] = (1 / (X·Y)) · Σ_{(x,y)∈B_{n,m,k}} mt[x, y, k]
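A corresponding sketch for c3 follows. The preceding and following grayscale frames are assumed to have the same size as the key frame, and the absolute value of Gt is averaged; treating c3 as a magnitude is an assumption of the sketch that mirrors the spatial case, since the formulas above leave the sign unspecified.

    import numpy as np

    def block_c3(prev_gray, next_gray, N=4, M=4):
        # Temporal gradient Gt = p[x, y, k+1] - p[x, y, k-1] per pixel,
        # averaged per block as a magnitude.
        mt = np.abs(next_gray.astype(np.float32) - prev_gray.astype(np.float32))
        Hh, Ww = mt.shape
        Y, X = Hh // N, Ww // M
        c3 = np.zeros((N, M), dtype=np.float32)
        for n in range(N):
            for m in range(M):
                c3[n, m] = np.sum(mt[n * Y:(n + 1) * Y, m * X:(m + 1) * X]) / (X * Y)
        return c3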
We use the weighted centroid of gradient orientations, the weighted mean spatial gradient magnitude, and the weighted mean temporal gradient magnitude as the three elements of the video fingerprint vector. The video fingerprint of the block at row n and column m of the k-th frame is denoted f[n, m, k]:
f[n, m, k] = [w1·c1[n, m, k], w2·c2[n, m, k], w3·c3[n, m, k]]
f[n, m, k] contains three elements, and w1, w2, and w3 are the weights of the respective elements. Thanks to the weights, elements that represent different physical quantities can be expressed on a single unified scale when fingerprints are compared, so that the degree of difference between videos can be measured.
The N × M-dimensional fingerprint vector f_k of the k-th frame is obtained by the following formula:
f_k = [f[1, 1, k], f[1, 2, k], …, f[N, M, k]]
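Putting the three per-block quantities together for a single key frame, here is a sketch of f[n, m, k] and the flattened vector f_k; block_c1_c2, block_c3, spatial_gradients and to_gray_resized refer to the earlier sketches, and the weight values are placeholders rather than values prescribed by the description.

    import numpy as np

    def keyframe_fingerprint(c1, c2, c3, w1=1.0, w2=1.0, w3=1.0):
        # c1, c2, c3: N x M arrays for one key frame.
        # f[n, m, k] = [w1*c1, w2*c2, w3*c3]; f_k stacks the N*M block fingerprints
        # in row-major order, matching f_k = [f[1,1,k], f[1,2,k], ..., f[N,M,k]].
        f = np.stack([w1 * c1, w2 * c2, w3 * c3], axis=-1)   # shape (N, M, 3)
        return f.reshape(-1, 3)                              # shape (N*M, 3)

    # Typical use for one key frame, with gray_prev, gray_cur, gray_next produced
    # by to_gray_resized:
    #   mag, theta = spatial_gradients(gray_cur)
    #   c1, c2 = block_c1_c2(mag, theta)
    #   c3 = block_c3(gray_prev, gray_next)
    #   f_k = keyframe_fingerprint(c1, c2, c3)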
(h) The computed values are taken as the features extracted from the video, namely the video fingerprint. The video fingerprint can then be used for video comparison.
Further, as an embodiment of the invention, step (b) may use the method of "Zhenyu Yu, Zhiping Lin; Scene change detection using motion vectors and dc components of prediction residual in H.264 compressed videos; Industrial Electronics and Applications (ICIEA); 2012" as the scene-detection method.
Further, as an embodiment of the invention, step (f) may divide the grayscale image into different numbers of blocks. The smaller the partition number, the higher the robustness of the fingerprint and the lower its independence; the larger the partition number, the lower the robustness and the higher the independence.
Further, as an embodiment of the invention, step (h) may use a relatively flexible video fingerprint comparison scheme. When videos A and B are compared, the fingerprint of a frame in video A may be compared with the fingerprint of the key frame of the corresponding index in video B, or with the fingerprints of the key frames of video B within a certain range, taking the closest fingerprint.
The above describes the video fingerprint extraction process. The video fingerprint describes the features of a video; if the similarity of two videos is to be compared, the degree of similarity must also be expressed by comparing their video fingerprints.
The following formula can be used to compute the difference value between two videos and so express their degree of difference:
D(f1, f2) = (1 / (N·M·K)) · Σ_{n=1..N} Σ_{m=1..M} Σ_{k=1..K} ( (1/3) · Σ_{d=1..3} (f1[n, m, k, d] − f2[n, m, k, d])² )
Here f1 and f2 denote the fingerprint sequences extracted from two different video segments, and D is the computed difference value between the two video segments. d takes different values to index the different elements of f[n, m, k], so that corresponding elements are compared separately.
This formula compares corresponding key frames and is simple to compute. However, because the video under comparison may have had video segments inserted or deleted, the key frames extracted from the processed video may not line up with those of the source video; in practice, therefore, a more flexible fingerprint comparison can be used to judge whether two videos are similar:
D(f1, f2) = (1 / (N·M·K)) · Σ_{n=1..N} Σ_{m=1..M} min_{1≤k≤K} ( (1/3) · Σ_{d=1..3} (f1[n, m, k, d] − f2[n, m, k, d])² )
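For illustration, both comparisons can be sketched as follows. fp1 and fp2 are assumed to be arrays of shape (K, N*M, 3) holding the per-key-frame fingerprints f_k of the two videos with the same K, N and M; the second function is one possible reading of the flexible comparison described in the text (each key frame of the first video is matched against its closest key frame of the second), not a literal transcription of the formula above.

    import numpy as np

    def difference_aligned(fp1, fp2):
        # D(f1, f2) with corresponding key frames compared one-to-one:
        # average over k, n, m of (1/3) * sum over d of squared element differences.
        per_block = np.mean((fp1 - fp2) ** 2, axis=-1)        # shape (K, N*M)
        return float(np.mean(per_block))

    def difference_flexible(fp1, fp2):
        # Flexible comparison: for every key frame of video 1, take the smallest
        # whole-frame difference against any key frame of video 2, then average.
        pairwise = np.mean((fp1[:, None, :, :] - fp2[None, :, :, :]) ** 2,
                           axis=(-1, -2))                     # shape (K1, K2)
        return float(pairwise.min(axis=1).mean())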
The method for extracting video fingerprints based on temporal and spatial gradients of this invention has the following beneficial effects:
The method considers the temporal and spatial characteristics of the video at the same time, computes both temporal and spatial gradients, uses the gradient values to extract the video fingerprint features, improves the stability of video fingerprint extraction, and adopts a relatively flexible fingerprint matching scheme. By extracting and comparing video fingerprints, the source video can still be matched successfully when the video has been processed by scaling, stretching, cropping, insertion, and similar operations (in which case the difference value between the videos is small). Video fingerprint technology can therefore help protect video copyright: when piracy occurs, the source video can be located quickly, so the method has a wide range of applications.
In this description, the invention has been described with reference to specific embodiments thereof. It is clear, however, that various modifications and changes may still be made without departing from the spirit and scope of the invention. The specification and drawings are therefore to be regarded as illustrative rather than restrictive.

Claims (6)

1. A method for extracting video fingerprints based on temporal and spatial gradients, characterized in that the method includes the following steps:
(1) segmenting the video into multiple scenes using scene-cut detection and extracting key frames from each scene;
(2) converting each key frame and its preceding and following frames into grayscale images;
(3) resizing the grayscale images and partitioning the resized grayscale images into blocks;
(4) computing, for each grayscale image block, the mean spatial gradient magnitude, the centroid of gradient orientations, and the mean temporal gradient magnitude as the video fingerprint;
wherein computing the mean spatial gradient magnitude, the centroid of gradient orientations, and the mean temporal gradient magnitude of each grayscale image block as the video fingerprint includes the following steps:
(41) computing the mean spatial gradient magnitude, the centroid of gradient orientations, and the mean temporal gradient magnitude for each image block;
(42) taking the computed mean spatial gradient magnitude, centroid of gradient orientations, and mean temporal gradient magnitude as the video feature, i.e. the video fingerprint;
wherein computing the mean spatial gradient magnitude, the centroid of gradient orientations, and the mean temporal gradient magnitude includes the following steps:
(411) computing the spatial gradient Δp at coordinate (x, y) of the k-th frame of the video according to the following formulas:
Δp = (Gx, Gy)^T = (∂p/∂x, ∂p/∂y)^T
Gx = p[x+1, y, k] − p[x−1, y, k]
Gy = p[x, y+1, k] − p[x, y−1, k]
where p[x, y, k] is the luminance value of the point at coordinate (x, y) of the k-th frame of the video;
(412) computing the centroid of gradient orientations c1[n, m, k] for the block at row n and column m of the k-th frame of the video according to the following formulas:
c1[n, m, k] = ( Σ_{(x,y)∈B_{n,m,k}} m[x, y, k]·θ[x, y, k] ) / ( Σ_{(x,y)∈B_{n,m,k}} m[x, y, k] )
m[x, y, k] = sqrt(Gx² + Gy²)
θ[x, y, k] = arctan(Gy / Gx)
where B_{n,m,k} is the image block at row n and column m of the k-th frame;
(413) computing the mean spatial gradient magnitude c2[n, m, k] of the image block at row n and column m of the k-th frame of the video according to the following formula:
c2[n, m, k] = (1 / (X·Y)) · Σ_{(x,y)∈B_{n,m,k}} m[x, y, k];
(414) computing the mean temporal gradient magnitude c3[n, m, k] of the image block at row n and column m of the k-th frame of the video according to the following formulas, where X and Y are the length and width of a block after partitioning:
c3[n, m, k] = (1 / (X·Y)) · Σ_{(x,y)∈B_{n,m,k}} mt[x, y, k]
mt[x, y, k] = Gt
Gt = p[x, y, k+1] − p[x, y, k−1]
where mt[x, y, k] is the temporal gradient at coordinate (x, y) of the k-th frame of the video;
wherein taking the computed mean spatial gradient magnitude, centroid of gradient orientations, and mean temporal gradient magnitude as the video feature, i.e. the video fingerprint, includes the following steps:
(421) computing the video fingerprint f[n, m, k] of row n and column m of the k-th frame of the video according to the following formula:
f[n, m, k] = [w1·c1[n, m, k], w2·c2[n, m, k], w3·c3[n, m, k]]
where w1, w2, and w3 are the weights of the centroid of gradient orientations, the mean spatial gradient magnitude, and the mean temporal gradient magnitude, respectively;
(422) computing the N × M-dimensional video fingerprint vector f_k of the k-th frame of the video according to the following formula:
f_k = [f[1, 1, k], f[1, 2, k], …, f[N, M, k]]
where N is the total number of block rows into which the grayscale image of the k-th frame is divided, and M is the total number of block columns into which it is divided.
2. The method for extracting video fingerprints based on temporal and spatial gradients according to claim 1, characterized in that segmenting the video into multiple scenes using scene-cut detection and extracting key frames from each scene includes the following steps:
(11) resampling the video at a fixed frame rate preset by the system;
(12) detecting the scene changes of the video using gradual scene-change detection and abrupt scene-change detection, and segmenting the video into multiple scenes accordingly;
(13) extracting key frames from each scene.
3. The method for extracting video fingerprints based on temporal and spatial gradients according to claim 2, characterized in that extracting key frames from each scene includes the following steps:
(131) judging whether the length of the scene is less than 2L, where L is a frame count preset by the system; if so, continuing with step (132), otherwise continuing with step (133);
(132) selecting the middle frame of the scene as the key frame, then continuing with step (2);
(133) selecting one key frame every L frames within the scene.
4. The method for extracting video fingerprints based on temporal and spatial gradients according to claim 1, characterized in that converting each key frame and its preceding and following frames into grayscale images includes the following steps:
(21) extracting each key frame and the frames adjacent to each key frame;
(22) converting each key frame and its preceding and following frames to grayscale to obtain grayscale images.
5. The method for extracting video fingerprints based on temporal and spatial gradients according to claim 1, characterized in that resizing the grayscale images and partitioning the resized grayscale images into blocks includes the following steps:
(31) resizing the grayscale images to a fixed size preset by the system;
(32) partitioning each resized grayscale image into blocks, so that each resized grayscale image is divided into an N × M matrix of blocks.
6. The method for extracting video fingerprints based on temporal and spatial gradients according to claim 1, characterized in that after step (4) the method further includes the following step:
(5) judging the similarity between two videos by comparing their video fingerprints;
wherein judging the similarity between two videos by comparing their video fingerprints is specifically:
computing the difference value D(f1, f2) between the two videos according to the following formula:
D(f1, f2) = (1 / (N·M·K)) · Σ_{n=1..N} Σ_{m=1..M} Σ_{k=1..K} ( (1/3) · Σ_{d=1..3} (f1[n, m, k, d] − f2[n, m, k, d])² )
where f1 and f2 are the video fingerprints extracted from two different video segments, d takes different values to index the different elements of f[n, m, k] so that corresponding elements are compared separately, and K is the total number of sampled key frames.
CN201310698603.9A 2013-12-18 2013-12-18 Method for extracting video fingerprints based on temporal and spatial gradients Expired - Fee Related CN103646401B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310698603.9A CN103646401B (en) 2013-12-18 2013-12-18 Method for extracting video fingerprints based on temporal and spatial gradients

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310698603.9A CN103646401B (en) 2013-12-18 2013-12-18 Method for extracting video fingerprints based on temporal and spatial gradients

Publications (2)

Publication Number Publication Date
CN103646401A CN103646401A (en) 2014-03-19
CN103646401B true CN103646401B (en) 2016-09-14

Family

ID=50251611

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310698603.9A Expired - Fee Related CN103646401B (en) 2013-12-18 2013-12-18 Method for extracting video fingerprints based on temporal and spatial gradients

Country Status (1)

Country Link
CN (1) CN103646401B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104809248B (en) * 2015-05-18 2018-01-23 成都华栖云科技有限公司 Video finger print extracts and search method
CN112866800A (en) * 2020-12-31 2021-05-28 四川金熊猫新媒体有限公司 Video content similarity detection method, device, equipment and storage medium


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8995708B2 (en) * 2011-09-08 2015-03-31 Samsung Electronics Co., Ltd. Apparatus and method for robust low-complexity video fingerprinting

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1595425A (en) * 2004-07-13 2005-03-16 清华大学 Method for identifying multi-characteristic of fingerprint
CN103324944A (en) * 2013-06-26 2013-09-25 电子科技大学 Fake fingerprint detecting method based on SVM and sparse representation

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Robust Video Fingerprinting for Content-Based Video Identification;Sunil Lee et al.;《IEEE Transactions on Circuits and Systems for Video Technology》;20080731;第18卷(第7期);第983-988页 *
Scene Change Detection Using Motion Vectors and DC Components of Prediction Residual in H.264 Compressed Videos;Zhenyu Yu et al.;《2012 7th IEEE Conference on Industrial Electronics and Applications》;20120718;第990-995页 *
Video Fingerprinting Based on Centroids of Gradient Orientations;Sunil Lee et al.;《2006 IEEE International Conference on Acoustics, Speech and Signal Processing》;20060514;第2卷;第1-4页 *
Video segment retrieval method based on video fingerprints; Li Zezhou et al.; Computer Engineering; 2010-04-30; Vol. 36, No. 7; pp. 239-241 *

Also Published As

Publication number Publication date
CN103646401A (en) 2014-03-19

Similar Documents

Publication Publication Date Title
WO2020125216A1 (en) Pedestrian re-identification method, device, electronic device and computer-readable storage medium
Guo et al. Action recognition using sparse representation on covariance manifolds of optical flow
US8660385B2 (en) Feature-based signatures for image identification
JP5604256B2 (en) Human motion detection device and program thereof
US20110194779A1 (en) Apparatus and method for detecting multi-view specific object
US20180150714A1 (en) A method and a device for extracting local features of a three-dimensional point cloud
WO2018082308A1 (en) Image processing method and terminal
KR100944903B1 (en) Feature extraction apparatus of video signal and its extraction method, video recognition system and its identification method
KR101968921B1 (en) Apparatus and method for robust low-complexity video fingerprinting
US9047534B2 (en) Method and apparatus for detecting near-duplicate images using content adaptive hash lookups
EP2383990B1 (en) Time segment representative feature vector generation device
US8953852B2 (en) Method for face recognition
CN111598067B (en) Re-recognition training method, re-recognition method and storage device in video
CN105809182B (en) Image classification method and device
US9747521B2 (en) Frequency domain interest point descriptor
CN109697240B (en) Image retrieval method and device based on features
CN103646401B (en) The method that video finger print extracts is realized based on time gradient and spatial gradient
CN109726621B (en) Pedestrian detection method, device and equipment
CN111476070A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
WO2006009035A1 (en) Signal detecting method, signal detecting system, signal detecting program and recording medium on which the program is recorded
CN105791878B (en) Image error concealing method and system
CN109360199A (en) Blind detection method of image repetition region based on Watherstein histogram Euclidean measurement
JP4133246B2 (en) Image deformation information generation apparatus, image deformation information generation method, and image deformation information generation program
CN101056411B (en) Method for detecting the image displacement
Jameson et al. Extraction of arbitrary text in natural scene image based on stroke width transform

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160914

Termination date: 20201218

CF01 Termination of patent right due to non-payment of annual fee