CN105578198A - Video homologous Copy-Move detection method based on time offset characteristic - Google Patents

Video homologous Copy-Move detection method based on time offset characteristic

Info

Publication number
CN105578198A
CN105578198A CN201510934885.7A
Authority
CN
China
Prior art keywords
optical flow
video
block
matrix
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510934885.7A
Other languages
Chinese (zh)
Other versions
CN105578198B (en
Inventor
蒋兴浩
孙锬锋
王康
彭湃
彭瀚琳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHANGHAI RUNWU INFORMATION TECHNOLOGY Co Ltd
Shanghai Jiaotong University
Original Assignee
SHANGHAI RUNWU INFORMATION TECHNOLOGY Co Ltd
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHANGHAI RUNWU INFORMATION TECHNOLOGY Co Ltd, Shanghai Jiaotong University filed Critical SHANGHAI RUNWU INFORMATION TECHNOLOGY Co Ltd
Priority to CN201510934885.7A priority Critical patent/CN105578198B/en
Publication of CN105578198A publication Critical patent/CN105578198A/en
Application granted granted Critical
Publication of CN105578198B publication Critical patent/CN105578198B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N19/89Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving methods or arrangements for detection of transmission errors at the decoder

Abstract

The invention provides a video homologous Copy-Move detection method based on the time-offset characteristic. The method comprises the following steps: spatio-temporal block preprocessing is performed on a test video, and the optical-flow sequence of the partitioned video is computed with the Lucas-Kanade (LK) optical flow method to obtain the regional optical-flow matrix of the video; a regional time-domain offset matrix is computed from that matrix by phase correlation; the test video is partitioned into blocks and the Histogram of Oriented Gradients (HOG) feature of each block is computed to build an HOG feature matrix; finally, the offset matrix is used to compare the HOG-feature similarity of paired regions, producing a judgement matrix that is compared against the ground truth to yield an accuracy rate. By novelly exploiting the temporal characteristics of video, the method detects Copy-Move tampering with improved accuracy.

Description

Video homologous Copy-Move detection method based on time-offset feature
Technical field
The present invention relates to intra-frame tamper detection for video, and in particular to a video homologous Copy-Move detection method based on the time-offset feature.
Background technology
With the progress of video-processing technology, the cost of tampering with video has fallen correspondingly, and tampered videos are now commonplace. Surveillance video, long accepted as forensic evidence, can no longer be taken at face value as it once was. At the same time, the maturity of Internet technology has sharply amplified the social impact of tampered videos while lowering the cost of committing such offences. Under these circumstances, content supervision on the Web needs to be strengthened on the one hand, and effective detection of tampered video is needed on the other.
At present, video tampering is divided into intra-frame tampering and inter-frame tampering. Inter-frame tampering comprises frame-level atomic operations such as frame deletion and frame insertion. Intra-frame tampering alters an object or region within video frames (for example, removing an object that appears in the video); concrete operations include insertion, covering, inpainting, and so on. The target of this kind of tampering is a region of a video frame, similar to image tampering, and the tamperer must process the edges of the tampered region to avoid leaving visible traces. Because an object may persist in the video for some time, the tampering may have to be carried out over many consecutive frames. In short, inter-frame tampering operates in units of whole video frames, whereas intra-frame tampering operates essentially in units of pixels.
A search of existing intra-frame Copy-Move tamper detection techniques finds Chinese patent CN103945228A, published 23 July 2014, "Intra-frame copy-move tamper detection method for video based on spatio-temporal relationship", which uses whole-frame phase correlation and block matching to decide whether copy-move tampering exists between pairs of frames. However, that method targets the copy-move of moving objects and ignores copy-move tampering of the background or of stationary objects, which limits it for intra-frame detection. A literature search further finds that the Singaporean scholar Subramanyam uses HOG features to detect copy-move, while Italian scholars use cross-correlation to detect copied blocks in video; whichever approach is taken, all essentially rely on spatial-feature matching and ignore temporal characteristics.
Summary of the invention
In view of the defects of the prior art, the object of the present invention is to provide a video homologous Copy-Move detection method based on the time-offset feature.
The video homologous Copy-Move detection method based on the time-offset feature provided by the invention comprises the following steps:
Step 1: perform spatio-temporal block preprocessing on the test video to obtain a partitioned video, and compute the optical-flow sequence of the partitioned video with the LK optical flow method to obtain the regional optical-flow matrix of the video;
Step 2: from the regional optical-flow matrix obtained in step 1, compute the regional time-domain offset matrix by phase correlation; each entry of the offset matrix is the spatio-temporal coordinate of the block most suspicious with respect to the corresponding position, comprising the spatial position and the frame number;
Step 3: partition the test video into blocks and compute the Histogram of Oriented Gradients (HOG) feature of each block to generate the HOG feature matrix;
Step 4: from the suspicious-block positions computed in step 2 for each spatial block of each frame, compute the HOG-feature difference between each block and its suspicious counterpart to generate the judgement matrix, and compare the judgement matrix against the ground truth to produce the accuracy rate.
Preferably, said step 1 comprises the following steps:
Step 1.1: perform spatial partitioning on each frame of the test video;
Step 1.2: compute the optical flow of every pixel of every video frame with the LK optical flow method;
Step 1.3: following the partitioning obtained in step 1.1, take the sum of the optical-flow magnitudes within each block as that block's optical-flow value and store it in the optical-flow matrix O, computed as:
O(bx, by, t0) = Σ_{(x,y)∈(bx,by)} | of(x, y) |
where bx, by are the block's spatial X and Y coordinates, t0 is the starting frame of this flow, O(bx, by, t0) is the optical-flow value of block (bx, by) at frame t0, (x, y) ∈ (bx, by) ranges over all pixels of block (bx, by), and of(x, y) is the optical-flow vector at pixel (x, y).
Preferably, in said step 1.1 spatial partitioning is performed on each frame of the test video; specifically, a block size of 16 × 16 pixels is chosen and the blocks overlap one another, with a mutual coverage of 50%, so the overlapping area between neighbouring blocks is 8 × 16 pixels.
Preferably, said step 1.2 comprises the following steps:
Step 1.2.1: apply Gaussian filtering to the video frame to obtain a smoothed image;
Step 1.2.2: build the image pyramid of the smoothed image at different scales and compute the optical flow at every pyramid level;
Step 1.2.3: merge the per-level optical-flow values into the complete optical-flow field.
Preferably, said step 2 comprises the following steps:
Step 2.1: slice the test video in the time domain so that adjacent slices cover 25% of each other's frames; that is, with a slice size of 8 frames, the overlap length is 2 frames;
Step 2.2: take the sliced block optical flow out of the optical-flow matrix as a sample and compute its phase correlation with the complete optical-flow sequence of every other spatial-position block to decide whether an offset exists; if it does, record the offset parameter (x', y', t') at position (bx, by, t0) of the offset matrix, where bx, by, t0 are the sample's spatial coordinates and frame number and x', y', t' are the spatial coordinates and frame number of the block that is phase-correlated with the sample; the slice flow of each frame is correlated with the optical-flow matrix sequence O in turn, and each offset-matrix position is set to the offset parameter if one exists and to 0 otherwise.
Preferably, the phase correlation in said step 2.2 is defined as follows:
Taking the time series of optical-flow magnitudes as a signal, each spatial position yields a sequence of length N, where N is the number of video frames. Let O be the computed optical-flow sequence:
O = { of(x, y, t) | 1 ≤ t ≤ F, 1 ≤ x ≤ W, 1 ≤ y ≤ H }
where t is the frame number, (x, y) is the spatial block position at which the flow is computed, F is the total number of frames, W and H are the numbers of blocks along the horizontal and vertical axes of the frame, and of is the computed optical-flow magnitude. The optical-flow sequence of length Δt at spatial coordinate (x0, y0) and time coordinate t0 is taken as the signal template g(t0).
Taking the complete optical-flow sequence O as the original signal and g(t0) as the measured signal, the power spectrum δ(t_m) is computed as:
δ(t_m) = max2_{(x,y)∈V} F⁻¹[ O_{x,y}(ω) · G*(ω) / | O_{x,y}(ω) · G*(ω) | ]
where O_{x,y}(ω) and G*(ω) are the Fourier transforms of O_{x,y} and g(t0) (the latter conjugated), F⁻¹ denotes the inverse Fourier transform, O_{x,y} is the optical-flow sequence of the region block at spatial coordinate (x, y), and V denotes any block. Because the sample is also compared against itself, the maximum always corresponds to alignment with the block's own flow, so max2 takes the second-largest value. By finding the time offset t_m of an optical-flow sequence identical to g(t0), a tampered region is detected through the duplicate sequence created by the copy, while an untampered region, for which no identical sequence exists, is judged original.
Because the optical-flow sequence of a copy-paste tampered region has a matching counterpart elsewhere, phase-correlating a flow sequence of a given time length (as input) against the complete flow sequences of all spatial coordinates searches out possible offsets, i.e. finds potential tampering. The offset matrix is computed as:
P(bx, by, t0) = (x', y', t_m)
where δ(t_m) denotes the phase correlation between the Δt-long optical-flow sequence at position (bx, by) and the full-length optical-flow sequences of all blocks in a frame, t_m is the frame position of the second-largest similarity in the phase correlation, and t0 is the frame number of the current frame; if no offset is found between the current block (bx, by, t0) and any other block, the entry is set to 0, yielding the per-frame block offset matrix P.
Preferably, said step 3 comprises the following steps:
Step 3.1: partition the video using the spatial partitioning method of step 1;
Step 3.2: compute the HOG feature descriptor of each block in the video and place it at the block's spatio-temporal coordinate (x, y, t) to generate the HOG matrix H:
H(x, y, t) = hog(B(x, y, t)),  1 ≤ x ≤ BW, 1 ≤ y ≤ BH
where BW and BH are the total numbers of blocks along the x and y axes, B(x, y, t) is the block at that coordinate, and hog is the function computing the HOG descriptor.
Preferably, said step 4 comprises the following steps:
Step 4.1: from the time-domain offset matrix obtained in step 2, extract the sequence pairs for which an offset exists;
Step 4.2: compute the difference between the HOG descriptors of each sequence pair from step 4.1, using Euclidean distance to measure dissimilarity; the decision process produces the judgement matrix G:
G(B, t) = 1 if ED(B, B') < T, else 0
where ED denotes the Euclidean distance between the HOG descriptors of the two blocks and T is a preset threshold: if the distance between blocks B and B' is below T, the judgement-matrix value for this block at this instant is 1, otherwise 0.
Compared with the prior art, the present invention has the following beneficial effects:
1. The video homologous Copy-Move detection method based on the time-offset feature provided by the invention is not restricted by the manner of tampering: tampered regions in both foreground and background are detected, with high accuracy.
2. The method makes full use of the temporal characteristics of video, remedying the deficiency of spatial-feature matching, and can detect the otherwise overlooked copy-move tampering of backgrounds or stationary objects.
Brief description of the drawings
Other features, objects and advantages of the present invention will become more apparent upon reading the detailed description of non-limiting embodiments with reference to the following drawings:
Fig. 1 is the flow chart for computing the offset matrix according to the invention;
Fig. 2 is the model framework diagram of the invention.
Embodiment
The present invention is described in detail below with reference to specific embodiments. The following embodiments will help those skilled in the art to further understand the invention, but do not limit it in any form. It should be pointed out that, without departing from the inventive concept, those skilled in the art may make various variations and improvements, all of which fall within the protection scope of the invention.
The invention proposes an intra-frame copy-move tamper detection method based on the time-domain offset characteristic. The method first computes the block-region optical-flow sequences of the test video with the LK optical flow method; it then builds the offset matrix by phase-correlating the regional flow sequences to compute their offset parameters. This offset matrix records, for each block, the spatio-temporal position of the paired block suspected of copy-move. Finally, the Euclidean distance of the spatial texture features between suspect blocks is measured with HOG features; if the distance is below the threshold, the two blocks are judged to contain a copy-move tampering operation.
Because this manner of tampering necessarily translates the pixels of a region as a whole across time, it creates a time-domain offset, and this property makes detection by phase correlation in the frequency domain possible; the invention therefore exploits this characteristic, combining optical flow with phase correlation for tamper detection. Compared with the prior art, the method novelly employs the temporal characteristics of video, detects tampered regions regardless of whether they lie in foreground or background, and improves pixel-level detection accuracy.
Specifically, as shown in Fig. 2, the video homologous copy-move detection method based on the time-offset feature provided by the invention comprises the following steps:
Step 1: perform spatio-temporal block preprocessing on the test video and compute its optical-flow sequence with the LK optical flow method to obtain the regional optical-flow matrix of the video;
Step 2: from the regional optical-flow matrix obtained in step 1, compute the regional time-domain offset matrix by phase correlation;
Step 3: partition the test video into blocks and compute the Histogram of Oriented Gradients (HOG) feature of each block to generate the HOG feature matrix;
Step 4: use the offset matrix obtained in step 2 to compare the HOG-feature similarity of paired regions, generate the judgement matrix, and compare it against the ground truth to produce the accuracy rate.
Said step 1 comprises the following steps:
Step 1.1: perform spatial partitioning on each frame of the test video; here a block size of 16 × 16 pixels is chosen. At the same time, the blocks are made to overlap one another, with 50% mutual coverage selected, so the overlapping area between neighbouring blocks is 8 × 16 pixels;
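The overlapped spatial partitioning of step 1.1 can be sketched as follows (a minimal illustration; the function name `block_grid` and the (rows, cols, height, width) array layout are assumptions, not part of the patent):

```python
import numpy as np

def block_grid(frame, block=16, overlap=0.5):
    """Split a 2-D frame into overlapping square blocks (step 1.1 sketch).

    With block=16 and overlap=0.5 the stride is 8 pixels, so neighbouring
    blocks share 50% of their area, matching the 8 x 16 overlap region
    described in the patent.  Returns shape (n_rows, n_cols, block, block).
    """
    stride = int(block * (1 - overlap))
    h, w = frame.shape
    rows = range(0, h - block + 1, stride)
    cols = range(0, w - block + 1, stride)
    return np.stack([
        np.stack([frame[r:r + block, c:c + block] for c in cols])
        for r in rows
    ])

# A 64 x 64 frame yields a 7 x 7 grid of 16 x 16 blocks at stride 8.
frame = np.arange(64 * 64, dtype=float).reshape(64, 64)
grid = block_grid(frame)
```
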
Step 1.2: compute the optical flow of every pixel of every video frame with the LK optical flow method: first apply Gaussian filtering to the video frame to obtain a smoothed image, then build the image pyramid of the smoothed image at different scales, compute the optical flow at every pyramid level, and finally merge the levels into the complete optical-flow field;
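A single-level sketch of the LK computation in step 1.2 (the Gaussian pre-filtering, pyramid construction, and merging are omitted; the function name is hypothetical):

```python
import numpy as np

def lk_flow_window(prev, curr):
    """Single-window Lucas-Kanade step (single-level sketch of step 1.2;
    a full implementation runs this per pyramid level and merges).

    Spatial gradients Ix, Iy and the temporal difference It are formed,
    and the least-squares system A v = b with A = [Ix Iy], b = -It is
    solved for the flow vector (u, v).
    """
    prev = prev.astype(float)
    curr = curr.astype(float)
    Iy, Ix = np.gradient(prev)   # np.gradient returns (d/drow, d/dcol)
    It = curr - prev
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v                     # (u, v) flow in pixels

# A horizontal ramp shifted one pixel to the right gives u = 1, v = 0.
ramp = np.tile(np.arange(32, dtype=float), (32, 1))
u, v = lk_flow_window(ramp, ramp - 1.0)
```
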
Step 1.3: following the partitioning method of step 1.1, and ignoring the flow-direction attribute for ease of computation, accumulate the optical-flow magnitudes within each block as that block's optical-flow value and store it in the optical-flow matrix O, computed as:
O(bx, by, t0) = Σ_{(x,y)∈(bx,by)} | of(x, y) |
where bx, by are the block's spatial X and Y coordinates, t0 is the starting frame of this flow, O(bx, by, t0) is the optical-flow value of block (bx, by) at frame t0, (x, y) ∈ (bx, by) ranges over all pixels of block (bx, by), and of(x, y) is the optical-flow vector at pixel (x, y).
Said step 2 comprises the following steps:
Step 2.1: slice the test video in the time domain so that adjacent slices cover 25% of each other's frames; here, based on feasibility experiments, the slice size is fixed at 8 frames with an overlap of 2 frames, i.e. a step of 6 frames. The block flow sequence extracted per slice length is the regional optical flow, characterising the temporal feature of the block over that period;
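The slicing arithmetic of step 2.1 can be sketched as follows (the function name is an assumption):

```python
def time_slices(n_frames, clip=8, overlap=2):
    """Step 2.1 temporal slicing sketch: 8-frame slices overlapping by
    2 frames, i.e. a stride of 6 frames, giving the 25% mutual coverage
    described above.  Returns (start, end) frame ranges."""
    step = clip - overlap
    return [(s, s + clip) for s in range(0, n_frames - clip + 1, step)]

# 26 frames give slices starting at 0, 6, 12, 18.
slices = time_slices(26)
```
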
Step 2.2: take the sliced block optical flow out of the optical-flow matrix as a sample and compute its phase correlation with the complete optical-flow sequence of every other spatial-position block to decide whether an offset exists; if it does, record the offset parameter (x', y', t') at position (bx, by, t0) of the offset matrix, where bx, by, t0 are the sample's spatial coordinates and frame number and x', y', t' are the spatial coordinates and frame number of the block that is phase-correlated with the sample. Proceeding in this way, the slice flow of each frame is correlated with the optical-flow matrix O; each offset-matrix position is set to the offset parameter if one exists and to 0 otherwise.
The phase correlation in said step 2.2 is defined as follows:
Taking the time series of optical-flow magnitudes as a signal, each spatial position yields a sequence of length N, where N is the number of video frames. Let O be the computed optical-flow sequence:
O = { of(x, y, t) | 1 ≤ t ≤ F, 1 ≤ x ≤ W, 1 ≤ y ≤ H }
where t is the frame number, (x, y) is the spatial block position at which the flow is computed, F is the total number of frames, W and H are the numbers of blocks along the horizontal and vertical axes of the frame, and of is the computed optical-flow magnitude. The optical-flow sequence of length Δt at spatial coordinate (x0, y0) and time coordinate t0 is taken as the signal template g(t0).
Taking the complete optical-flow sequence O as the original signal and g(t0) as the measured signal, the power spectrum δ(t_m) is computed as:
δ(t_m) = max2_{(x,y)∈V} F⁻¹[ O_{x,y}(ω) · G*(ω) / | O_{x,y}(ω) · G*(ω) | ]
where O_{x,y}(ω) and G*(ω) are the Fourier transforms of O_{x,y} and g(t0) (the latter conjugated), and F⁻¹ denotes the inverse Fourier transform. Because the sample is also compared against itself, the maximum always corresponds to alignment with the block's own flow, so max2 takes the second-largest value. By finding the time offset t_m of an optical-flow sequence identical to g(t0), a tampered region is detected through the duplicate sequence created by the copy, while an untampered region, for which no identical sequence exists, is judged original.
Because the optical-flow sequence of a Copy-Move region has a matching counterpart elsewhere, phase-correlating a flow sequence of a given time length (as input) against the complete flow sequences of all spatial coordinates searches out possible offsets, i.e. finds potential tampering. The offset matrix is computed as:
P(bx, by, t0) = (x', y', t_m)
where δ(t_m) denotes the phase correlation between the Δt-long optical-flow sequence at position (bx, by) of frame t0 and the full-length optical-flow sequences of all blocks, and t_m is the frame position of the second-largest similarity in the phase correlation; if no offset is found between the current block (bx, by, t0) and any other block, the entry is set to zero, yielding the per-frame block offset matrix P.
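The phase correlation above can be sketched in one dimension as follows (an illustrative sketch that assumes circular correlation via the FFT; the whitening constant 1e-12 merely guards against division by zero):

```python
import numpy as np

def phase_correlate(signal, template):
    """1-D phase correlation between a block's full optical-flow sequence
    and a slice-length template, as in step 2.2 (function name assumed).

    The template is zero-padded to the sequence length N, the cross power
    spectrum is whitened so only phase remains, and the inverse FFT gives
    a correlation surface whose peaks mark candidate temporal offsets.
    """
    n = len(signal)
    g = np.zeros(n)
    g[:len(template)] = template
    cross = np.fft.fft(signal) * np.conj(np.fft.fft(g))
    cross /= np.abs(cross) + 1e-12   # whitening; keep phase only
    return np.real(np.fft.ifft(cross))

# A flow segment copied from frames 8..23 to frames 32..47 produces
# correlation peaks at both the source and the destination offsets.
rng = np.random.default_rng(0)
seq = rng.normal(size=64)
seq[32:48] = seq[8:24]
delta = phase_correlate(seq, seq[8:24])
offset = int(np.argmax(delta))
```

As in the patent, a real detector would discard the self-alignment peak and keep the second-largest one.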
Specifically, Fig. 1 shows the extraction flow of the offset matrix:
First step: according to the block position in the current frame, extract the optical-flow sequence from the optical-flow matrix;
Second step: compute the phase correlation as defined above and judge whether the peak of the power spectrum exceeds the threshold; if so, record the offset parameter (offset frame number and offset spatial coordinates) in the offset matrix; if not, set the offset-matrix position to 0;
Third step: check whether the current frame still has uncomputed blocks; if so, repeat the first step, otherwise proceed to the next frame.
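The Fig. 1 loop can be sketched end to end as follows (a sketch only: the peak threshold `peak_T`, function name, and array layout are assumptions, not values from the patent):

```python
import numpy as np

def build_offset_matrix(flow, clip=8, peak_T=0.3):
    """Sketch of the Fig. 1 extraction flow.

    flow has shape (H, W, N): per-block optical-flow magnitude over N
    frames.  For each block and start frame, the clip-length sample is
    phase-correlated against every block's full sequence; if the best
    non-self correlation peak exceeds peak_T, its (x', y', t') offset
    is recorded, otherwise the entry stays 0.
    """
    H, W, N = flow.shape
    P = np.zeros((H, W, N - clip + 1, 3), dtype=int)
    for by in range(H):
        for bx in range(W):
            for t0 in range(N - clip + 1):
                g = np.zeros(N)
                g[:clip] = flow[by, bx, t0:t0 + clip]
                G = np.fft.fft(g)
                best_val, best_pos = peak_T, None
                for vy in range(H):
                    for vx in range(W):
                        cross = np.fft.fft(flow[vy, vx]) * np.conj(G)
                        d = np.real(np.fft.ifft(cross / (np.abs(cross) + 1e-12)))
                        if (vy, vx) == (by, bx):
                            d[t0] = 0.0   # suppress the trivial self match
                        tm = int(np.argmax(d))
                        if d[tm] > best_val:
                            best_val, best_pos = d[tm], (vx, vy, tm)
                if best_pos is not None:
                    P[by, bx, t0] = best_pos
    return P

# Two blocks over 32 frames; block (1,0) repeats frames 2..9 of block (0,0).
rng = np.random.default_rng(1)
flow = rng.normal(size=(1, 2, 32))
flow[0, 1, 12:20] = flow[0, 0, 2:10]
P = build_offset_matrix(flow)
```
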
Said step 3 comprises the following steps:
Step 3.1: partition the video using the spatial partitioning method of step 1;
Step 3.2: compute the HOG feature descriptor of each block in the video and place it at the block's spatio-temporal coordinate (x, y, t) to generate the HOG matrix H:
H(x, y, t) = hog(B(x, y, t)),  1 ≤ x ≤ BW, 1 ≤ y ≤ BH
where BW and BH are the total numbers of blocks along the x and y axes, B(x, y, t) is the block at that coordinate, and hog is the function computing the HOG descriptor.
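The per-block descriptor can be sketched as follows (an illustrative gradient-orientation histogram, not the full Dalal-Triggs HOG pipeline that the patent's hog function may use; the function name is assumed):

```python
import numpy as np

def hog_descriptor(block, n_bins=9):
    """Minimal HOG-style descriptor for one block: unsigned gradient
    orientations over [0, pi) are binned into n_bins, weighted by
    gradient magnitude, and the histogram is L2-normalised."""
    gy, gx = np.gradient(block.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # unsigned orientation
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# A block with a pure horizontal gradient puts all weight in bin 0.
block = np.tile(np.arange(16, dtype=float), (16, 1))
h = hog_descriptor(block)
```
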
Said step 4 comprises the following steps:
Step 4.1: from the time-domain offset matrix obtained in step 2, extract the sequence pairs for which an offset exists;
Step 4.2: compute the difference between the HOG descriptors of each sequence pair from step 4.1, using Euclidean distance to measure dissimilarity; the decision process produces the judgement matrix G:
G(B, t) = 1 if ED(B, B') < T, else 0
where ED denotes the Euclidean distance between the HOG descriptors of the two blocks and T is a preset threshold: if the distance between blocks B and B' is below T, the judgement-matrix value for this block at this instant is 1, otherwise 0. Here, based on feasibility tests, T is set to 0.5.
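The decision rule is a one-liner (the function name and the example descriptors are illustrative; T = 0.5 is the value the patent reports):

```python
import numpy as np

def judge(hog_a, hog_b, T=0.5):
    """Step 4.2 decision: return 1 when the Euclidean distance between
    two HOG descriptors is below threshold T, marking the pair as a
    suspected copy; otherwise 0."""
    return int(np.linalg.norm(np.asarray(hog_a) - np.asarray(hog_b)) < T)

a = np.array([0.2, 0.4, 0.4])
b = np.array([0.2, 0.5, 0.3])    # distance about 0.14 -> copy pair
c = np.array([0.9, 0.05, 0.05])  # distance about 0.86 -> distinct
```
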
Specific embodiments of the invention have been described above. It is to be understood that the invention is not limited to the above particular embodiments; those skilled in the art can make various variations or modifications within the scope of the claims, and this does not affect the substance of the invention.

Claims (8)

1. A video homologous Copy-Move detection method based on the time-offset feature, characterised by comprising the following steps:
Step 1: perform spatio-temporal block preprocessing on the test video to obtain a partitioned video, and compute the optical-flow sequence of the partitioned video with the LK optical flow method to obtain the regional optical-flow matrix of the video;
Step 2: from the regional optical-flow matrix obtained in step 1, compute the regional time-domain offset matrix by phase correlation; each entry of the offset matrix is the spatio-temporal coordinate of the block most suspicious with respect to the corresponding position, comprising the spatial position and the frame number;
Step 3: partition the test video into blocks and compute the Histogram of Oriented Gradients (HOG) feature of each block to generate the HOG feature matrix;
Step 4: from the suspicious-block positions computed in step 2 for each spatial block of each frame, compute the HOG-feature difference between each block and its suspicious counterpart to generate the judgement matrix, and compare the judgement matrix against the ground truth to produce the accuracy rate.
2. The video homologous Copy-Move detection method based on the time-offset feature according to claim 1, characterised in that said step 1 comprises the following steps:
Step 1.1: perform spatial partitioning on each frame of the test video;
Step 1.2: compute the optical flow of every pixel of every video frame with the LK optical flow method;
Step 1.3: following the partitioning obtained in step 1.1, take the sum of the optical-flow magnitudes within each block as that block's optical-flow value and store it in the optical-flow matrix O, computed as:
O(bx, by, t0) = Σ_{(x,y)∈(bx,by)} | of(x, y) |
where bx, by are the block's spatial X and Y coordinates, t0 is the starting frame of this flow, O(bx, by, t0) is the optical-flow value of block (bx, by) at frame t0, (x, y) ∈ (bx, by) ranges over all pixels of block (bx, by), and of(x, y) is the optical-flow vector at pixel (x, y).
3. The video homologous Copy-Move detection method based on the time-offset feature according to claim 2, characterised in that in said step 1.1 spatial partitioning is performed on each frame of the test video; specifically, a block size of 16 × 16 pixels is chosen and the blocks overlap one another, with a mutual coverage of 50%, so the overlapping area between neighbouring blocks is 8 × 16 pixels.
4. The video homologous Copy-Move detection method based on the time-offset feature according to claim 2, characterised in that said step 1.2 comprises the following steps:
Step 1.2.1: apply Gaussian filtering to the video frame to obtain a smoothed image;
Step 1.2.2: build the image pyramid of the smoothed image at different scales and compute the optical flow at every pyramid level;
Step 1.2.3: merge the per-level optical-flow values into the complete optical-flow field.
5. The video homologous Copy-Move detection method based on the time-offset feature according to claim 1, characterised in that said step 2 comprises the following steps:
Step 2.1: slice the test video in the time domain so that adjacent slices cover 25% of each other's frames; that is, with a slice size of 8 frames, the overlap length is 2 frames;
Step 2.2: take the sliced block optical flow out of the optical-flow matrix as a sample and compute its phase correlation with the complete optical-flow sequence of every other spatial-position block to decide whether an offset exists; if it does, record the offset parameter (x', y', t') at position (bx, by, t0) of the offset matrix, where bx, by, t0 are the sample's spatial coordinates and frame number and x', y', t' are the spatial coordinates and frame number of the block that is phase-correlated with the sample; the slice flow of each frame is correlated with the optical-flow matrix sequence O in turn, and each offset-matrix position is set to the offset parameter if one exists and to 0 otherwise.
6. The video homologous Copy-Move detection method based on the time-offset feature according to claim 5, characterised in that the phase correlation in said step 2.2 is defined as follows:
taking the time series of optical-flow magnitudes as a signal, each spatial position yields a sequence of length N, where N is the number of video frames; let O be the computed optical-flow sequence:
O = { of(x, y, t) | 1 ≤ t ≤ F, 1 ≤ x ≤ W, 1 ≤ y ≤ H }
where t is the frame number, (x, y) is the spatial block position at which the flow is computed, F is the total number of frames, W and H are the numbers of blocks along the horizontal and vertical axes of the frame, and of is the computed optical-flow magnitude; the optical-flow sequence of length Δt at spatial coordinate (x0, y0) and time coordinate t0 is taken as the signal template g(t0);
taking the complete optical-flow sequence O as the original signal and g(t0) as the measured signal, the power spectrum δ(t_m) is computed as:
δ(t_m) = max2_{(x,y)∈V} F⁻¹[ O_{x,y}(ω) · G*(ω) / | O_{x,y}(ω) · G*(ω) | ]
where O_{x,y}(ω) and G*(ω) are the Fourier transforms of O_{x,y} and g(t0) (the latter conjugated), F⁻¹ denotes the inverse Fourier transform, O_{x,y} is the optical-flow sequence of the region block at spatial coordinate (x, y), and V denotes any block; because the sample is also compared against itself, the maximum always corresponds to alignment with the block's own flow, so max2 takes the second-largest value; by finding the time offset t_m of an optical-flow sequence identical to g(t0), a tampered region is detected through the duplicate sequence created by the copy, while an untampered region, for which no identical sequence exists, is judged original;
because the optical-flow sequence of a copy-paste tampered region has a matching counterpart elsewhere, phase-correlating a flow sequence of a given time length (as input) against the complete flow sequences of all spatial coordinates searches out possible offsets, i.e. finds potential tampering; the offset matrix is computed as:
P(bx, by, t0) = (x', y', t_m)
where δ(t_m) denotes the phase correlation between the Δt-long optical-flow sequence at position (bx, by) and the full-length optical-flow sequences of all blocks in a frame, t_m is the frame position of the second-largest similarity in the phase correlation, and t0 is the frame number of the current frame; if no offset is found between the current block (bx, by, t0) and any other block, the entry is set to 0, yielding the per-frame block offset matrix P.
7. The video homologous Copy-Move detection method based on the time offset characteristic according to claim 1, characterized in that said step 3 comprises the following steps:
Step 3.1: partition the video into blocks using the spatial-domain partitioning method adopted in step 1;
Step 3.2: compute the HOG feature descriptor of each block in the video and place it at the space-time coordinates (x, y, t) occupied by that block, generating the HOG matrix, denoted H; the computing formula is as follows:
H(x, y, t) = hog(block(x, y, t)), 1 ≤ x ≤ BW, 1 ≤ y ≤ BH, 1 ≤ t ≤ F
In the formula, BW and BH are respectively the total numbers of blocks along the x- and y-axes, and hog is the function that computes the HOG descriptor.
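Step 3.2 can be sketched as follows. A minimal gradient-orientation histogram stands in for a full HOG implementation here; the block size of 16 pixels and the 9 orientation bins are illustrative assumptions, not values from the patent.

```python
import numpy as np

def hog_descriptor(block, n_bins=9):
    """Simplified HOG for one image block: gradient magnitudes
    accumulated into unsigned-orientation bins, L2-normalized."""
    gy, gx = np.gradient(block.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)              # unsigned orientation
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

def hog_matrix(frames, block=16):
    """H[x, y, t] as in the claim: the descriptor of each spatial
    block of each frame, indexed by block coordinates and frame."""
    F = len(frames)
    height, width = frames[0].shape
    BW, BH = width // block, height // block
    H = np.zeros((BW, BH, F, 9))
    for t, frame in enumerate(frames):
        for x in range(BW):
            for y in range(BH):
                patch = frame[y*block:(y+1)*block, x*block:(x+1)*block]
                H[x, y, t] = hog_descriptor(patch)
    return H
```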
8. The video homologous Copy-Move detection method based on the time offset characteristic according to claim 5, characterized in that said step 4 comprises the following steps:
Step 4.1: according to the time-domain offset matrix obtained in step 2, extract the sequence pairs for which an offset exists;
Step 4.2: compute the difference between the HOG descriptors of the sequence pairs from step 4.1, using the Euclidean distance to measure the difference; the decision process produces a decision matrix G,
G = 1 if ED(H_B, H_B') < T, and G = 0 otherwise.
In the formula, ED denotes the computed Euclidean distance between the HOG descriptors of two blocks, and T is a preset threshold; if the distance between two blocks B and B' is less than the threshold T, the decision-matrix value for that block at that moment is 1, and otherwise it is set to 0.
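The decision rule of step 4.2 amounts to the following sketch; the threshold value T is chosen arbitrarily for illustration, and the pair representation is an assumption.

```python
import numpy as np

def decision_matrix(hog_pairs, T=0.25):
    """G[i] = 1 if the HOG descriptors of the i-th offset pair are
    closer than the preset threshold T (Euclidean distance), else 0."""
    G = []
    for h_b, h_b2 in hog_pairs:
        ed = np.linalg.norm(np.asarray(h_b) - np.asarray(h_b2))
        G.append(1 if ed < T else 0)
    return np.array(G)
```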
CN201510934885.7A 2015-12-14 2015-12-14 Video homologous Copy-Move detection method based on time offset characteristic Active CN105578198B (en)


Publications (2)

Publication Number Publication Date
CN105578198A true CN105578198A (en) 2016-05-11
CN105578198B CN105578198B (en) 2019-01-11

Family

ID=55887795



Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101650830A (en) * 2009-08-06 2010-02-17 中国科学院声学研究所 Compressed domain video lens mutation and gradient union automatic segmentation method and system
CN102156702A (en) * 2010-12-17 2011-08-17 南方报业传媒集团 Fast positioning method for video events from rough state to fine state
EP2517176A1 (en) * 2009-12-21 2012-10-31 ST-Ericsson (France) SAS Method for regenerating the background of digital images of a video stream
CN103390040A (en) * 2013-07-17 2013-11-13 南京邮电大学 Video copy detection method
CN103945228A (en) * 2014-03-28 2014-07-23 上海交通大学 Video intra-frame copy-move tampering detection method based on space and time relevance
CN105141968A (en) * 2015-08-24 2015-12-09 武汉大学 Video same-source copy-move tampering detection method and system


Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108573269A (en) * 2017-10-24 2018-09-25 北京金山云网络技术有限公司 Image characteristic point matching method, coalignment, electronic equipment and storage medium
CN108573269B (en) * 2017-10-24 2021-02-05 北京金山云网络技术有限公司 Image feature point matching method, matching device, electronic device and storage medium
CN111083492A (en) * 2018-10-22 2020-04-28 北京字节跳动网络技术有限公司 Gradient computation in bi-directional optical flow
US11838539B2 (en) 2018-10-22 2023-12-05 Beijing Bytedance Network Technology Co., Ltd Utilization of refined motion vector
CN111083492B (en) * 2018-10-22 2024-01-12 北京字节跳动网络技术有限公司 Gradient computation in bidirectional optical flow
US11889108B2 (en) 2018-10-22 2024-01-30 Beijing Bytedance Network Technology Co., Ltd Gradient computation in bi-directional optical flow
US11956449B2 (en) 2018-11-12 2024-04-09 Beijing Bytedance Network Technology Co., Ltd. Simplification of combined inter-intra prediction
US11956465B2 (en) 2018-11-20 2024-04-09 Beijing Bytedance Network Technology Co., Ltd Difference calculation based on partial position
US11930165B2 (en) 2019-03-06 2024-03-12 Beijing Bytedance Network Technology Co., Ltd Size dependent inter coding
CN112584146A (en) * 2019-09-30 2021-03-30 复旦大学 Method and system for evaluating interframe similarity
CN112584146B (en) * 2019-09-30 2021-09-28 复旦大学 Method and system for evaluating interframe similarity
CN113704551A (en) * 2021-08-24 2021-11-26 广州虎牙科技有限公司 Video retrieval method, storage medium and equipment


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant