CN101685496B - Video segment duplication detecting method - Google Patents

Video segment duplication detecting method

Info

Publication number
CN101685496B
CN101685496B / CN2008102232366A / CN200810223236A
Authority
CN
China
Prior art keywords
video
video segment
detected
space
former
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN2008102232366A
Other languages
Chinese (zh)
Other versions
CN101685496A (en)
Inventor
潘雪峰 (Pan Xuefeng)
张勇东 (Zhang Yongdong)
李锦涛 (Li Jintao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Computing Technology of CAS
Original Assignee
Institute of Computing Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Computing Technology of CAS filed Critical Institute of Computing Technology of CAS
Priority to CN2008102232366A
Publication of CN101685496A
Application granted
Publication of CN101685496B
Legal status: Expired - Fee Related (current)
Anticipated expiration


Landscapes

  • Television Signal Processing For Recording (AREA)

Abstract

The invention provides a video segment duplication detecting method comprising the following steps: (1) extracting spatio-temporal slices of an original video segment and of a video segment to be detected; (2) calculating spatio-temporal joint features of the spatio-temporal slices; and (3) judging, according to the similarity of the spatio-temporal joint features, whether the video segment to be detected is a copy of the original video segment. In the invention, concise spatio-temporal joint features extracted from the video serve as the basis of the analysis. These features are robust to the signal changes introduced by video format conversion, so the influence of the video format is reduced when measuring the similarity of video content and the accuracy of video segment duplication detection is improved.

Description

Video segment duplication detecting method
Technical field
The present invention relates to digital video technology, and in particular to a video segment duplication detecting method.
Background art
Video segment duplication detection judges whether the content of one video is a copy of another, and it has important practical uses in visual information processing. First, in video information retrieval, copy detection is valuable for filtering and ranking search results. Second, in media tracking, copy detection can be used to automatically monitor when and where video content is broadcast. In addition, in digital video copyright protection, copy detection has begun to attract attention because, compared with traditional digital watermarking, it requires no additional information to be embedded in the media and has other desirable properties, for example that feature extraction can be carried out after the media has been released. Researchers have therefore proposed various video copy detection methods, but owing to the diversity of video formats and content, how to choose effective features and detect video copies accurately remains an open problem. Existing video copy detection methods mostly extract video frames and match them on the basis of image features; because the image features of the extracted frames are affected by changes in the video coding format, the practicality of such methods is poor.
Therefore there is a pressing need for a video segment duplication detection method that satisfies the following two requirements simultaneously: on the one hand it must be robust to video data changes caused by format conversion, and on the other hand it must remain sensitive to video data changes caused by changes in video content.
Summary of the invention
The technical problem to be solved by the present invention is to provide a video segment duplication detecting method that is robust to video data changes caused by format conversion.
To achieve the above object, according to one aspect of the present invention, a video segment duplication detecting method is provided, comprising the following steps:
1) space-time that extracts former video segment and video segment to be detected is cut into slices;
2) calculate the space-time unite feature that described space-time is cut into slices;
3), judge whether described video segment to be detected is the copy of described former video segment according to the similarity of described space-time unite feature.
Wherein, said step 1) comprises:
11) extracting from each original video frame a horizontal pixel line h_i and splicing these lines in chronological order to form a horizontal spatio-temporal slice S_V^H, and extracting a vertical pixel line v_i and splicing these lines in chronological order to form a vertical spatio-temporal slice S_V^V;
12) extracting from each video frame to be detected a horizontal pixel line h'_i and splicing these lines in chronological order to form a horizontal slice S_V'^H, and extracting a vertical pixel line v'_i and splicing these lines in chronological order to form a vertical slice S_V'^V;
wherein the positions of said h_i and said v_i in the original video frame correspond respectively to the positions of said h'_i and said v'_i in the video frame to be detected.
Wherein, in said step 12), said h_i and said h'_i are located respectively at the vertical midpoint of the original video frame and of the video frame to be detected.
Wherein, in said step 12), said v_i and said v'_i are located respectively at the horizontal midpoint of the original video frame and of the video frame to be detected.
Wherein, said step 2) comprises:
21) dividing said spatio-temporal slices into a plurality of processing units;
22) applying a discrete cosine transform to each processing unit and taking the low-frequency coefficients of the discrete cosine transform as the feature vector of that processing unit;
23) arranging the feature vectors of said processing units in chronological order to constitute said spatio-temporal joint features.
Wherein, said step 3) comprises:
31) calculating the similarity between the spatio-temporal joint features of said original video segment and those of said video segment to be detected;
32) calculating, according to the similarity of said spatio-temporal joint features, the similarity between said original video segment and said video segment to be detected;
33) judging whether said video segment to be detected is a copy of said original video segment according to whether the similarity between said original video segment and said video segment to be detected exceeds a threshold value.
Wherein, in said step 33) the threshold value lies in the range 0.35-0.45.
The effect of the present invention is as follows: concise spatio-temporal joint features are extracted from the video and used as the basis of the analysis. These features are robust to the changes that format conversion introduces into the video signal, so the influence of the video format can be reduced when measuring the similarity of video content, and the accuracy of copy detection is improved. When the features are stored, the medium and low frequency components carried in the features are selected and arranged from low to high frequency to obtain the feature vector; features chosen in this way can represent the video content while being less affected by noise, so they are a robust expression of the video content and are not easily disturbed by noise during content matching.
Description of drawings
Fig. 1 is a flowchart of a video segment duplication detecting method according to an embodiment of the present invention.
Embodiment
In order to make the object, technical scheme and advantages of the present invention clearer, a video segment duplication detecting method according to an embodiment of the present invention is further described below in conjunction with the accompanying drawing.
As shown in the flowchart of Fig. 1, the spatio-temporal slices of the original video segment and of the video segment to be detected are extracted; the resulting slices are analyzed to compute spatio-temporal joint features that characterize the video content; and whether one segment is a copy of the other is judged according to whether their spatio-temporal joint features match. According to a specific embodiment of the present invention, each step is described in detail as follows.
First, a fixed position is determined in each original video frame I. With the upper-left corner of the picture as the origin, the rightward direction as the positive X axis and the downward direction as the positive Y axis, a coordinate system XOY is established. A fixed value y may be chosen and all pixels whose Y coordinate equals y are taken, giving a horizontal line one pixel thick. Alternatively, a fixed value x may be chosen and all pixels whose X coordinate equals x are taken, giving a vertical line one pixel thick. Those skilled in the art will understand that lines several pixels thick may also be taken and merged into a line one pixel thick; preferably, the pixels so merged are adjacent.
The pixel lines taken from each video frame are spliced in chronological order, giving the horizontal spatio-temporal slice S_V^H and the vertical spatio-temporal slice S_V^V of the original video segment V, where S_V^H = <h_1, ..., h_L> is the slice generated by splicing the horizontal pixel lines h_i of V, h_i being the pixel line of the i-th frame, and S_V^V = <v_1, ..., v_L> is the slice generated by splicing the vertical pixel lines v_i of V, v_i being the pixel line of the i-th frame.
In the same way as for the original video segment, the pixel lines of the video segment V' to be detected whose Y coordinate equals y' are extracted to form the horizontal spatio-temporal slice S_V'^H, and the pixel lines whose X coordinate equals x' are extracted to form the vertical spatio-temporal slice S_V'^V. Here the value y corresponds to the value y', that is, the ratio of y to the Y-direction extent of the original video frame equals the ratio of y' to the Y-direction extent of the video frame to be detected; likewise the value x corresponds to the value x'. Preferably, so that the slices contain more of the video content, this ratio is 1/2, that is, h_i and h'_i lie at the vertical midpoint of the frame and v_i and v'_i lie at the horizontal midpoint of the frame.
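The following Python sketch is not part of the patent; it merely illustrates the slice extraction above, assuming the frames are equally sized grayscale numpy arrays of shape (height, width) and that the middle row and middle column are used, as in the preferred embodiment. The function name extract_slices is hypothetical.

```python
import numpy as np

def extract_slices(frames):
    """Build the horizontal and vertical spatio-temporal slices of a video segment.

    frames: list of grayscale frames, each a numpy array of shape (height, width).
    Returns (horizontal_slice, vertical_slice); each slice stacks one pixel line
    per frame along the time axis.
    """
    mid_row = frames[0].shape[0] // 2   # y at half the frame height
    mid_col = frames[0].shape[1] // 2   # x at half the frame width
    horizontal_slice = np.stack([f[mid_row, :] for f in frames])  # shape (L, width)
    vertical_slice = np.stack([f[:, mid_col] for f in frames])    # shape (L, height)
    return horizontal_slice, vertical_slice
```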
Next, spatio-temporal joint features are extracted from the spatio-temporal analysis objects obtained above, that is, from the slices. Those skilled in the art will understand that a slice could be treated as an ordinary image from which color, edge, gradient and similar features are extracted, but such features are sensitive to the video coding. According to a preferred embodiment of the present invention, frequency information is chosen instead as a more stable feature: it is less affected by noise and is a robust expression of the video content.
The extracted spatio-temporal slice is divided into processing units. Treating the grayscale slice as an image whose width W runs along the time-axis direction and whose height is H in the other direction, the slice is divided, from left to right along the time axis, into units of size H x H; each unit is called a slice cell block Sub.
A discrete cosine transform (DCT) is applied to each extracted Sub block, and the low-frequency AC coefficients of the block are read out in Zig-Zag order to form a vector; for example, the first 16 low-frequency coefficients may be taken as the feature vector of the Sub block. The feature vectors of the Sub blocks are arranged in chronological order to constitute the feature vector of the whole video content.
The discrete cosine transform (DCT) employed here is a two-dimensional DCT applied to the slice image. The two-dimensional DCT coefficient matrix expresses how quickly the image changes along each coordinate axis; since the coordinate directions of a Sub block are time and space respectively, its DCT matrix expresses the change frequency of the image sequence in both time and space.
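As a hedged illustration of the block division and DCT feature extraction described above (not the patent's own code; the function names, the use of scipy.fftpack.dct and the default of 16 coefficients are assumptions), a slice can be processed as follows:

```python
import numpy as np
from scipy.fftpack import dct

def zigzag_indices(n):
    """(row, col) pairs of an n x n matrix in Zig-Zag order, DC coefficient first."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[1] if (rc[0] + rc[1]) % 2 == 0 else rc[0]))

def slice_features(slice_img, num_coeffs=16):
    """Feature vectors of the H x H Sub blocks of a slice of shape (L, H)."""
    length, h = slice_img.shape                    # time axis first, spatial axis second
    order = zigzag_indices(h)[1:1 + num_coeffs]    # low-frequency AC terms, DC skipped
    features = []
    for start in range(0, length - h + 1, h):      # H x H blocks along the time axis
        block = slice_img[start:start + h, :].astype(np.float64)
        coeffs = dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')  # 2-D DCT
        features.append(np.array([coeffs[r, c] for r, c in order]))
    return features
```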
Finally, the similarity between the spatio-temporal joint features of the original video segment V and of the video segment V' to be detected is calculated, and from it whether a copy relationship exists between the videos is judged. This step is implemented as follows: the similarity between the slices S_V^H and S_V'^H, and between S_V^V and S_V'^V, is calculated. The following notation is introduced: SUB_V and SUB_V' are the Sub sequences of the original video segment V and of the video segment V' to be detected respectively; Sub_V^i and Sub_V'^i are the i-th Sub blocks in SUB_V and SUB_V' respectively; Sub_V^i = <Sub_V^i[0], ..., Sub_V^i[P-1]> and Sub_V'^i = <Sub_V'^i[0], ..., Sub_V'^i[P-1]> are the feature vectors formed by taking, in Zig-Zag order, the first P DCT AC coefficients after applying the DCT to Sub_V^i and Sub_V'^i respectively; SUBV_V = <Sub_V^0, Sub_V^1, ..., Sub_V^L> and SUBV_V' = <Sub_V'^0, Sub_V'^1, ..., Sub_V'^L> are the feature vector groups formed by arranging these vectors in block order. The similarity of the spatio-temporal slices is calculated as:
VFVS(SUBV_V', SUBV_V) = 1 - [ Σ_{i=1}^{L} d(Sub_V'^i, Sub_V^i) ] / [ Σ_{i=1}^{L} ( abs(Sub_V'^i) + abs(Sub_V^i) ) ];
where d denotes the distance between vectors and abs denotes taking the absolute value.
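A minimal sketch of this similarity measure, assuming each feature group is a list of equal-length numpy vectors (for example the output of the hypothetical slice_features above) and interpreting d as the Euclidean distance and abs as the Euclidean norm; these interpretations are assumptions rather than choices stated in the patent:

```python
import numpy as np

def vfvs(sub_a, sub_b):
    """Slice similarity VFVS in [0, 1] for two equally long feature-vector sequences."""
    num = sum(np.linalg.norm(a - b) for a, b in zip(sub_a, sub_b))
    den = sum(np.linalg.norm(a) + np.linalg.norm(b) for a, b in zip(sub_a, sub_b))
    return 1.0 - num / den if den > 0 else 1.0
```

Because the distance between two vectors never exceeds the sum of their norms, the result always lies between 0 and 1, with 1 meaning identical feature sequences.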
From the above slice similarity, the similarity between the two videos is calculated. Denote by SUBV_V^H the feature vector group of the horizontal slice of video V and by SUBV_V^V that of its vertical slice; denote by SUBV_V'^H the feature vector group of the horizontal slice of video V' and by SUBV_V'^V that of its vertical slice. The similarity of video V and video V' is then defined as:
Simi(V', V) = max( VFVS(SUBV_V'^H, SUBV_V^H), VFVS(SUBV_V'^V, SUBV_V^V) );
where max denotes taking the maximum value.
When Simi(V', V) exceeds a given threshold τ, video V' is judged to be a copy video of video V. The decision function COPY<V', V> is as follows; preferably the threshold lies in the range 0.35-0.45, and in particular 0.4 is used:
COPY<V', V> = 1 if Simi(V', V) > τ, and COPY<V', V> = 0 otherwise.
The above decision yields the determination of whether a copy relationship exists between the videos.
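The final decision can be sketched as follows (illustrative only; the function name is hypothetical), with the video similarity taken as the maximum of the horizontal-slice and vertical-slice similarities and the threshold set to 0.4 within the preferred 0.35-0.45 range:

```python
def is_copy(sim_horizontal, sim_vertical, tau=0.4):
    """Return True when V' is judged a copy of V, given the two VFVS slice similarities."""
    simi = max(sim_horizontal, sim_vertical)   # Simi(V', V)
    return simi > tau                          # COPY<V', V> = 1 when similarity exceeds tau
```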
One of ordinary skill in the art will appreciate that the above similarity decision is a conventional method of comparing feature sequences; because the features chosen by the present invention are more effective for video copy detection than traditional features, this similarity decision method can obtain good results. Other methods of comparing feature sequences, for example computing the cosine of the angle between feature vectors, may also be used to judge the similarity of the spatio-temporal joint features and hence the similarity between the videos.
It should be noted and understood that various modifications and improvements may be made to the present invention described in detail above without departing from the spirit and scope of the invention as claimed in the appended claims. Therefore, the scope of the claimed technical solution is not limited by any particular exemplary teachings given herein.

Claims (5)

1. A video segment duplication detecting method, comprising the following steps:
1) extracting from each original video frame in the original video segment a horizontal pixel line h_i and splicing these lines in chronological order to form a horizontal spatio-temporal slice S_V^H; extracting from each original video frame in the original video segment a vertical pixel line v_i and splicing these lines in chronological order to form a vertical spatio-temporal slice S_V^V; extracting from each video frame to be detected in the video segment to be detected a horizontal pixel line h'_i and splicing these lines in chronological order to form a horizontal slice S_V'^H; extracting from each video frame to be detected in the video segment to be detected a vertical pixel line v'_i and splicing these lines in chronological order to form a vertical slice S_V'^V; wherein the positions of said h_i and said v_i in the original video frame correspond respectively to the positions of said h'_i and said v'_i in the video frame to be detected;
2) dividing said spatio-temporal slices, from left to right along the time axis, into a plurality of processing units of equal area; applying a discrete cosine transform to each processing unit; taking the low-frequency AC coefficients of the discrete cosine transform as the feature vector of that processing unit; and arranging the feature vectors of the processing units in chronological order to constitute spatio-temporal joint features;
3) calculating the similarity between the spatio-temporal joint features of said original video segment and those of said video segment to be detected, and judging, according to the similarity of said spatio-temporal joint features, whether said video segment to be detected is a copy of said original video segment.
2. The method according to claim 1, characterized in that said h_i and said h'_i are located respectively at the vertical midpoint of the original video frame and of the video frame to be detected.
3. The method according to claim 1, characterized in that said v_i and said v'_i are located respectively at the horizontal midpoint of the original video frame and of the video frame to be detected.
4. The method according to claim 1, characterized in that said step 3) comprises:
31) calculating the similarity between the spatio-temporal joint features of said original video segment and those of said video segment to be detected;
32) calculating, according to the similarity of said spatio-temporal joint features, the similarity between said original video segment and said video segment to be detected;
33) judging whether said video segment to be detected is a copy of said original video segment according to whether the similarity between said original video segment and said video segment to be detected exceeds a threshold value.
5. The method according to claim 4, characterized in that in said step 33) the threshold value lies in the range 0.35-0.45.
CN2008102232366A 2008-09-27 2008-09-27 Video segment duplication detecting method Expired - Fee Related CN101685496B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2008102232366A CN101685496B (en) 2008-09-27 2008-09-27 Video segment duplication detecting method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2008102232366A CN101685496B (en) 2008-09-27 2008-09-27 Video segment duplication detecting method

Publications (2)

Publication Number Publication Date
CN101685496A CN101685496A (en) 2010-03-31
CN101685496B (en) 2011-10-19

Family

ID=42048652

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008102232366A Expired - Fee Related CN101685496B (en) 2008-09-27 2008-09-27 Video segment duplication detecting method

Country Status (1)

Country Link
CN (1) CN101685496B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102208026A (en) * 2011-05-27 2011-10-05 电子科技大学 Method for extracting digital video fingerprints
CN103347197A (en) * 2013-07-15 2013-10-09 中国科学院自动化研究所 Compressed domain video copy blind detecting method based on DCT coefficient
CN105931270B (en) * 2016-04-27 2018-03-27 石家庄铁道大学 Video key frame extracting method based on gripper path analysis

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1900970A (en) * 2006-07-20 2007-01-24 中山大学 Image zone duplicating and altering detecting method of robust

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1900970A (en) * 2006-07-20 2007-01-24 中山大学 Image zone duplicating and altering detecting method of robust

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Xiao Wu et al. A Hierarchical Scheme for Rapid Video Copy Detection. Applications of Computer Vision (WACV 2008), IEEE Workshop on, 2008, 1-6. *
Xiao Wu et al. Spatio-Temporal Visual Consistency for Video Copy Detection. Visual Information Engineering (VIE 2008), 5th International Conference on, 2008, 414-419. *
Li Yong et al. A method for extracting camera motion based on spatio-temporal slices. Video Engineering, 2004, (11), 75-78. *
Jin Yan'an. Research on content-based video copy detection. Journal of Computer Applications, 2008, 28(8), 2021-2023. *

Also Published As

Publication number Publication date
CN101685496A (en) 2010-03-31

Similar Documents

Publication Publication Date Title
Tang et al. Median filtering detection of small-size image based on CNN
CN101291392B (en) Apparatus and method of processing image as well as apparatus and method of generating reproduction information
EP2198376B1 (en) Media fingerprints that reliably correspond to media content
CN101601287B (en) Apparatus and methods of producing photorealistic image thumbnails
CN101650740B (en) Method and device for detecting television advertisements
CN103034853B (en) A kind of jpeg image general steganalysis method
CN102393900B (en) Video copying detection method based on robust hash
CN104331450B (en) Video copying detection method based on multi-mode feature and tensor resolution
CN101453575A (en) Video subtitle information extracting method
CN101276417A (en) Method for filtering internet cartoon medium rubbish information based on content
CN104661037B (en) The detection method and system that compression image quantization table is distorted
CN102419816B (en) Video fingerprint method for same content video retrieval
CN102147912A (en) Adaptive difference expansion-based reversible image watermarking method
CN100593792C (en) Text tracking and multi-frame reinforcing method in video
CN102883179A (en) Objective evaluation method of video quality
CN1969294A (en) Searching for a scaling factor for watermark detection
KR101191516B1 (en) Enhanced image identification
CN102301697B (en) Video identifier creation device
CN101685496B (en) Video segment duplication detecting method
CN106709915B (en) Image resampling operation detection method
CN102737240A (en) Method of analyzing digital document images
Hu et al. Effective forgery detection using DCT+ SVD-based watermarking for region of interest in key frames of vision-based surveillance
CN1936956A (en) Recessive writing detection method in the light of DCT zone LSB recessive writing
CN102292724B (en) Matching weighting information extracting device
CN114519689A (en) Image tampering detection method, device, equipment and computer readable storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20111019

Termination date: 20210927

CF01 Termination of patent right due to non-payment of annual fee