CN101685496A - Video segment duplication detecting method - Google Patents
Abstract
The invention provides a video segment duplication detecting method comprising the following steps: (1) extracting spatio-temporal slices of an original video segment and a video segment to be detected; (2) calculating spatio-temporal joint features of the slices; and (3) judging, according to the similarity of the spatio-temporal joint features, whether the video segment to be detected is a copy of the original video segment. The method extracts concise spatio-temporal joint features from the video as the basis of analysis. Because these features are robust to the signal changes brought about by changes in video format, the influence of video format can be reduced when measuring the similarity of video content, improving the accuracy of video segment duplication detection.
Description
Technical field
The present invention relates to digital video technology, and in particular to a video segment duplication detecting method.
Background technology
Video segment duplication detection judges whether one video is a copy of the content of another, and it has important practical uses in visual information processing. First, in video information retrieval, copy detection is significant for filtering and ranking search results. Second, in media tracking, copy detection can automatically monitor the broadcast of videos with particular content. In addition, in digital video copyright protection, copy detection has begun to attract attention as an alternative to traditional digital watermarking, since it requires no additional information to be embedded in the media, and feature extraction can be performed after the media are released. Researchers have therefore proposed various video copy detection methods, but because of the diversity of video formats and content, how to choose effective features and detect video copies accurately remains an open question. Existing methods mostly extract video frames and match them by image features; since the image features of the extracted frames are affected by changes in the video coding format, the practicality of such methods is limited.
A video segment duplication detecting method is therefore urgently needed that satisfies two requirements simultaneously: on the one hand, robustness to the changes in video data produced by format transformation; on the other hand, sensitivity to the changes in video data produced by changes in video content.
Summary of the invention
The technical problem to be solved by the present invention is to provide a video segment duplication detecting method that is robust to the changes in video data produced by format transformation.
To achieve the above object, according to one aspect of the present invention, a video segment duplication detecting method is provided, comprising the following steps:
1) extracting spatio-temporal slices of the original video segment and the video segment to be detected;
2) calculating the spatio-temporal joint features of the slices;
3) judging, according to the similarity of the spatio-temporal joint features, whether the video segment to be detected is a copy of the original video segment.
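The three steps above can be sketched end to end. This is a minimal illustration with hypothetical names (extract_slices, slice_features, and is_copy are ours), and the per-column mean stands in for the DCT-based feature described later:

```python
import numpy as np

def extract_slices(frames):
    """Step 1: spatio-temporal slices -- stack one pixel row and one
    pixel column from each frame along the time axis."""
    frames = np.asarray(frames)          # shape (T, H, W), grayscale
    t, h, w = frames.shape
    horizontal = frames[:, h // 2, :]    # (T, W): middle row of each frame
    vertical = frames[:, :, w // 2]      # (T, H): middle column of each frame
    return horizontal, vertical

def slice_features(slc):
    """Step 2: stand-in for the DCT-based joint feature -- here just
    per-column means, to keep the sketch short."""
    return slc.mean(axis=1)

def is_copy(orig_frames, test_frames, threshold=0.4):
    """Step 3: compare feature similarity against a threshold."""
    oh, ov = extract_slices(orig_frames)
    th, tv = extract_slices(test_frames)
    fo = np.concatenate([slice_features(oh), slice_features(ov)])
    ft = np.concatenate([slice_features(th), slice_features(tv)])
    sim = 1.0 - np.abs(fo - ft).mean() / 255.0   # simple similarity in [0, 1]
    return sim > threshold

frames = np.random.default_rng(0).integers(0, 256, (8, 32, 32))
print(is_copy(frames, frames))  # an identical video is trivially a copy
```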
Wherein, step 1) comprises:
11) extracting a horizontal pixel line h_i from each original video frame and splicing the lines in chronological order to form the horizontal spatio-temporal slice S_V^H; extracting a vertical pixel line v_i and splicing the lines in chronological order to form the vertical spatio-temporal slice S_V^V;
12) extracting a horizontal pixel line h'_i from each video frame to be detected and splicing the lines in chronological order to form the horizontal slice S_V'^H; extracting a vertical pixel line v'_i and splicing the lines in chronological order to form the vertical slice S_V'^V; wherein the positions of h_i and v_i in the original video frame correspond respectively to the positions of h'_i and v'_i in the video frame to be detected.
Wherein, in step 12), h_i and h'_i are located at the vertical center of the original video frame and of the video frame to be detected, respectively.
Wherein, in step 12), v_i and v'_i are located at the horizontal center of the original video frame and of the video frame to be detected, respectively.
Wherein, step 2) comprises:
21) dividing the spatio-temporal slices into a plurality of processing units;
22) performing a discrete cosine transform on each processing unit and taking its low-frequency coefficients as the feature vector of the processing unit;
23) arranging the feature vectors of the processing units in chronological order to constitute the spatio-temporal joint feature.
Wherein, step 3) comprises:
31) calculating the similarity between the spatio-temporal joint feature of the original video segment and that of the video segment to be detected;
32) calculating, from the similarity of the spatio-temporal joint features, the similarity between the original video segment and the video segment to be detected;
33) judging whether the video segment to be detected is a copy of the original video segment according to whether this similarity is greater than a threshold value.
Wherein, in step 33), the threshold value lies in the range 0.35 to 0.45.
The effect of the present invention is as follows. Concise spatio-temporal joint features are extracted from the video as the basis of analysis. These features are robust to the changes that format conversion brings to the video signal, so the influence of video format can be reduced when measuring content similarity, improving the accuracy of copy detection. When the features are stored, the low and medium frequency components of the feature are selected and arranged from low to high frequency to obtain the feature vector. Features chosen in this way can represent the video content while being less affected by noise; they are a robust expression of the video content, and are therefore not easily disturbed by noise during content matching.
Description of drawings
Fig. 1 is a flow chart of a video segment duplication detecting method according to an embodiment of the present invention.
Embodiment
To make the purpose, technical scheme, and advantages of the present invention clearer, a video segment duplication detecting method according to an embodiment of the invention is further described below with reference to the accompanying drawing.
As shown in the flow chart of Fig. 1, the spatio-temporal slices of the original video segment and the video segment to be detected are extracted; the resulting slices are analyzed to calculate spatio-temporal joint features related to the video content; and whether one segment is a copy of the other is judged by whether their spatio-temporal joint features match. According to a specific embodiment of the present invention, each step is described in detail as follows.
First, a fixed position is determined in each original video frame I. Taking the top-left corner of the picture as the origin, the rightward direction as positive X, and the downward direction as positive Y, a coordinate system I_XOY is established. A fixed value y can then be chosen and all pixels whose Y coordinate equals y taken, yielding a horizontal line one pixel wide. Alternatively, a fixed value x can be chosen and all pixels whose X coordinate equals x taken, yielding a vertical line one pixel wide. Those skilled in the art will understand that lines several pixels wide can also be taken and merged into a line one pixel wide; preferably, the pixels are adjacent.
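The pixel-line extraction just described can be sketched as follows (numpy, grayscale frames assumed; the function names are ours, and averaging is only one plausible reading of "merging" several lines into one):

```python
import numpy as np

def horizontal_line(frame, y):
    """All pixels whose Y coordinate equals y: one row, one pixel wide."""
    return frame[y, :]

def vertical_line(frame, x):
    """All pixels whose X coordinate equals x: one column, one pixel wide."""
    return frame[:, x]

def merged_horizontal_line(frame, y, n=3):
    """Merge n adjacent rows around y into a single one-pixel-wide line.
    Averaging is an assumption; the patent only says 'merge'."""
    lo = max(0, y - n // 2)
    return frame[lo:lo + n, :].mean(axis=0)

frame = np.arange(12, dtype=float).reshape(3, 4)  # toy 3x4 frame, origin top-left
print(horizontal_line(frame, 1))  # row y=1: values 4, 5, 6, 7
print(vertical_line(frame, 2))    # column x=2: values 2, 6, 10
```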
The pixel lines taken from each video frame are spliced in chronological order to obtain the horizontal spatio-temporal slice S_V^H and the vertical spatio-temporal slice S_V^V of the original video segment V, where S_V^H is the slice generated by splicing the horizontal pixel lines h_i of V, h_i being the pixel line of the i-th frame, and S_V^V is the slice generated by splicing the vertical pixel lines v_i of V, v_i being the pixel line of the i-th frame.
Similarly to the extraction for the original video segment, for the video segment to be detected V' the pixel lines h'_i at Y coordinate y' are extracted to form the horizontal spatio-temporal slice S_V'^H, and the pixel lines v'_i at X coordinate x' to form the vertical spatio-temporal slice S_V'^V. The values y and y' correspond: the proportion of y to the height of the original video frame equals the proportion of y' to the height of the video frame to be detected; likewise, x and x' correspond. Preferably, so that the slices contain more video content, this proportion is 1/2, that is, h_i and h'_i lie at the vertical center of their frames and v_i and v'_i at the horizontal center.
Then spatio-temporal joint features are extracted from the analysis objects obtained above, that is, from the spatio-temporal slices. Those skilled in the art will understand that a slice could be treated as an ordinary image from which features such as color, edges, or gradients are extracted, but such features are sensitive to the video coding. According to a preferred embodiment of the present invention, the comparatively stable frequency information is selected instead; it is less affected by noise and is a robust expression of the video content.
The extracted spatio-temporal slice is divided into processing units as follows. Taking the extent of the gray-scale slice along the time axis as its width W and the other direction as its height H, the slice is divided, in time-axis order from left to right, into units of size H × H; each unit is called a slice cell block Sub.
A discrete cosine transform (DCT) is applied to each Sub block, and the low-frequency AC coefficients are taken in Zig-Zag order to generate a vector; for example, the first 16 low-frequency coefficients may be taken as the feature vector of the Sub block. The feature vectors of all Sub blocks, arranged in time order, constitute the feature vector of the whole video content.
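A sketch of the Sub-block feature: a hand-rolled orthonormal 2-D DCT, one common zig-zag convention (the patent does not fix the exact variant), and the first 16 AC coefficients per H × H block; all function names are ours:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)
    return m

def zigzag_indices(n):
    """(row, col) pairs of an n x n block in a zig-zag scan order."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[1] if (rc[0] + rc[1]) % 2 else rc[0]))

def sub_block_feature(block, p=16):
    """2-D DCT of one H x H Sub block, then the first p low-frequency
    AC coefficients in zig-zag order (the DC term is skipped)."""
    n = block.shape[0]
    d = dct_matrix(n)
    coeffs = d @ block @ d.T              # 2-D DCT: transform rows, then columns
    zz = [coeffs[r, c] for r, c in zigzag_indices(n)]
    return np.array(zz[1:1 + p])

def slice_feature(slc, h):
    """Cut a slice (spatial rows x time columns) into H x H blocks in
    time order and concatenate the per-block feature vectors."""
    return np.concatenate([sub_block_feature(slc[:, t:t + h])
                           for t in range(0, slc.shape[1] - h + 1, h)])

slc = np.random.default_rng(2).integers(0, 256, (8, 32)).astype(float)
print(slice_feature(slc, 8).shape)  # 4 blocks x 16 coefficients -> (64,)
```

A constant block yields all-zero AC coefficients, which is one quick sanity check on the transform.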
The discrete cosine transform (DCT) adopted here is a two-dimensional DCT applied to the slice image. The two-dimensional DCT matrix expresses the frequency with which the image varies along the coordinate axes; since the coordinate directions of a Sub block are time and space respectively, its DCT matrix expresses the variation frequency of the image sequence in time and in space.
Finally, the similarity between the spatio-temporal joint features of the original video segment V and the video segment to be detected V' is calculated, to judge whether a copy relationship exists between the videos. This step is implemented as follows.
The similarities between the spatio-temporal slices S_V^H and S_V'^H and between S_V^V and S_V'^V are calculated. The following symbols are introduced: SUB_V and SUB_V' are the Sub sequences of the original video segment V and the video segment to be detected V'; Sub_V^i and Sub_V'^i are the i-th Sub blocks of SUB_V and SUB_V'; SubV_V^i and SubV_V'^i are the feature vectors generated by taking, in Zig-Zag order, the first P DCT AC coefficients after applying the DCT to Sub_V^i and Sub_V'^i; and SUBV_V and SUBV_V' are the feature vector groups formed by arranging these vectors in block order. The similarity of the spatio-temporal slices is computed from these feature vector groups by the formula given in the source (reproduced there only as an image), in which d denotes the distance between vectors and abs denotes taking the absolute value.
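Since the similarity formula itself appears only as an image in the source, the following is one plausible distance-based form consistent with the surrounding description (per-block vector distances d, averaged and mapped into [0, 1]). It is an assumption, not the patented formula:

```python
import numpy as np

def slice_similarity(feats_a, feats_b):
    """One plausible slice similarity (the patent's exact formula is an
    image not reproduced here): average the per-block feature-vector
    distances and map the result into [0, 1]."""
    dists = [np.abs(a - b).mean() for a, b in zip(feats_a, feats_b)]
    return 1.0 / (1.0 + np.mean(dists))   # identical feature groups -> 1.0

a = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
b = [np.array([2.0, 2.0]), np.array([3.0, 4.0])]
print(slice_similarity(a, a))  # identical groups give similarity 1.0
print(slice_similarity(a, b))  # different groups give a value below 1.0
```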
From the similarity of the spatio-temporal slices, the similarity between the two videos is calculated. Let SUBV_V^H denote the feature vector group of the horizontal slice of video V, SUBV_V^V that of its vertical slice, SUBV_V'^H that of the horizontal slice of video V', and SUBV_V'^V that of its vertical slice. The similarity of videos V and V' is then defined by the formula given in the source (reproduced there only as an image), in which max denotes taking the maximum.
When Simi(V', V) exceeds a given threshold τ, video V' is judged to be a copy of video V; the decision function COPY<V', V> is given by the formula in the source. Preferably, the threshold lies in the range 0.35 to 0.45, and in particular 0.4 may be taken.
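The max-combination and threshold test can be sketched directly (the slice similarities passed in are placeholder values; τ = 0.4 follows the preferred value in the text):

```python
def is_copy(sim_h, sim_v, tau=0.4):
    """COPY<V', V>: the video similarity is the maximum of the horizontal
    and vertical slice similarities; exceeding tau means V' is a copy."""
    simi = max(sim_h, sim_v)
    return simi > tau

print(is_copy(0.55, 0.30))  # -> True  (max is 0.55, which exceeds 0.4)
print(is_copy(0.20, 0.35))  # -> False (max is 0.35, which does not)
```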
This judgement determines whether a copy relationship exists between the videos.
Those of ordinary skill in the art will appreciate that the similarity judgement above is a conventional method of comparing feature sequences; because the features selected in the present invention are more effective for video copy detection than traditional features, this judgement method can obtain good results. Other methods of comparing feature sequences, such as computing the cosine of the angle between feature vectors, can also be used to judge the similarity of the spatio-temporal joint features and hence the similarity between videos.
It should be noted and understood that various modifications and improvements can be made to the invention described in detail above without departing from the spirit and scope of the invention as defined by the accompanying claims. Accordingly, the scope of the claimed technical scheme is not limited by any particular exemplary teaching given.
Claims (7)
1. A video segment duplication detecting method, comprising the following steps:
1) extracting spatio-temporal slices of the original video segment and the video segment to be detected;
2) calculating the spatio-temporal joint features of the slices;
3) judging, according to the similarity of the spatio-temporal joint features, whether the video segment to be detected is a copy of the original video segment.
2. The method according to claim 1, characterized in that step 1) comprises:
11) extracting a horizontal pixel line h_i from each original video frame and splicing the lines in chronological order to form the horizontal spatio-temporal slice S_V^H; extracting a vertical pixel line v_i and splicing the lines in chronological order to form the vertical spatio-temporal slice S_V^V;
12) extracting a horizontal pixel line h'_i from each video frame to be detected and splicing the lines in chronological order to form the horizontal slice S_V'^H; extracting a vertical pixel line v'_i and splicing the lines in chronological order to form the vertical slice S_V'^V; wherein the positions of h_i and v_i in the original video frame correspond respectively to the positions of h'_i and v'_i in the video frame to be detected.
3. The method according to claim 2, characterized in that, in step 12), h_i and h'_i are located at the vertical center of the original video frame and of the video frame to be detected, respectively.
4. The method according to claim 2, characterized in that, in step 12), v_i and v'_i are located at the horizontal center of the original video frame and of the video frame to be detected, respectively.
5. The method according to claim 1, characterized in that step 2) comprises:
21) dividing the spatio-temporal slices into a plurality of processing units;
22) performing a discrete cosine transform on each processing unit and taking its low-frequency coefficients as the feature vector of the processing unit;
23) arranging the feature vectors of the processing units in chronological order to constitute the spatio-temporal joint feature.
6. The method according to claim 1, characterized in that step 3) comprises:
31) calculating the similarity between the spatio-temporal joint feature of the original video segment and that of the video segment to be detected;
32) calculating, from the similarity of the spatio-temporal joint features, the similarity between the original video segment and the video segment to be detected;
33) judging whether the video segment to be detected is a copy of the original video segment according to whether this similarity is greater than a threshold value.
7. The method according to claim 6, characterized in that, in step 33), the threshold value lies in the range 0.35 to 0.45.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2008102232366A CN101685496B (en) | 2008-09-27 | 2008-09-27 | Video segment duplication detecting method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101685496A true CN101685496A (en) | 2010-03-31 |
CN101685496B CN101685496B (en) | 2011-10-19 |
Family
ID=42048652
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2008102232366A Expired - Fee Related CN101685496B (en) | 2008-09-27 | 2008-09-27 | Video segment duplication detecting method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN101685496B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102208026A (en) * | 2011-05-27 | 2011-10-05 | 电子科技大学 | Method for extracting digital video fingerprints |
CN103347197A (en) * | 2013-07-15 | 2013-10-09 | 中国科学院自动化研究所 | Compressed domain video copy blind detecting method based on DCT coefficient |
CN105931270B (en) * | 2016-04-27 | 2018-03-27 | 石家庄铁道大学 | Video key frame extracting method based on gripper path analysis |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100440255C (en) * | 2006-07-20 | 2008-12-03 | 中山大学 | Image zone duplicating and altering detecting method of robust |
- 2008-09-27: CN application CN2008102232366A granted as patent CN101685496B; status not active (Expired - Fee Related)
Also Published As
Publication number | Publication date |
---|---|
CN101685496B (en) | 2011-10-19 |
Legal Events

Date | Code | Title | Description
---|---|---|---
| C06 | Publication |
| PB01 | Publication |
| C10 | Entry into substantive examination |
| SE01 | Entry into force of request for substantive examination |
| C14 | Grant of patent or utility model |
| GR01 | Patent grant |
| CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20111019; Termination date: 20210927