CN105578198B - Homologous video Copy-Move detection method based on temporal offset features - Google Patents

Homologous video Copy-Move detection method based on temporal offset features Download PDF

Info

Publication number
CN105578198B
CN105578198B CN201510934885.7A CN201510934885A
Authority
CN
China
Prior art keywords
optical flow
video
block
matrix
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510934885.7A
Other languages
Chinese (zh)
Other versions
CN105578198A (en)
Inventor
蒋兴浩
孙锬锋
王康
彭湃
彭瀚琳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHANGHAI RUNWU INFORMATION TECHNOLOGY Co Ltd
Shanghai Jiaotong University
Original Assignee
SHANGHAI RUNWU INFORMATION TECHNOLOGY Co Ltd
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHANGHAI RUNWU INFORMATION TECHNOLOGY Co Ltd, Shanghai Jiaotong University filed Critical SHANGHAI RUNWU INFORMATION TECHNOLOGY Co Ltd
Priority to CN201510934885.7A priority Critical patent/CN105578198B/en
Publication of CN105578198A publication Critical patent/CN105578198A/en
Application granted granted Critical
Publication of CN105578198B publication Critical patent/CN105578198B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N19/89Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving methods or arrangements for detection of transmission errors at the decoder

Abstract

The present invention provides a homologous video Copy-Move detection method based on temporal offset features, comprising the following steps: perform spatio-temporal block partitioning preprocessing on the test video, compute the optical flow sequence of the partitioned video with the Lucas-Kanade (LK) optical flow method, and obtain the regional optical flow matrix of the video; from the regional optical flow matrix, compute the temporal offset matrix of each region by phase correlation; partition the test video into blocks and compute the Histogram of Oriented Gradients (HOG) feature of each block to generate a HOG feature matrix; use the temporal offset matrix to compare the HOG feature similarity of the two paired regions, generate a judgement matrix, and compare the judgement matrix against the ground truth to produce a detection accuracy. The invention innovatively detects Copy-Move forgeries using the temporal characteristics of the video and improves detection accuracy.

Description

Homologous video Copy-Move detection method based on temporal offset features
Technical field
The present invention relates to intra-frame video tampering detection methods, and in particular to a homologous video Copy-Move detection method based on temporal offset features.
Background technique
With advances in video processing technology, the technical cost of tampering with video has fallen accordingly, and tampered videos are now commonplace. Surveillance video, which in the past could serve as forensic evidence, no longer appears as trustworthy as it once did. At the same time, the maturation of Internet technology has allowed the social impact of tampered videos to grow rapidly while the cost of video-tampering offences declines. Under these circumstances, content supervision on the Web needs to be strengthened on the one hand, and tampered videos need to be detected effectively on the other.
At present, video tampering is broadly divided into intra-frame and inter-frame tampering. Inter-frame tampering operates at the frame level and includes atomic operations such as frame deletion and frame insertion. Intra-frame tampering alters some object or region within the video frames (for example, removing an object that appears in the video); the specific operations may include insertion, covering, and inpainting. The target of such tampering is a region of the video frame, similar to image tampering, and the forger must process the edges of the tampered region to avoid leaving visible traces. Since an object may persist in the video for some time, the tampering may have to be carried out over many consecutive frames. In short, inter-frame tampering operates in units of whole video frames, while intra-frame tampering operates essentially in units of pixels.
A search of existing intra-frame Copy-Move tampering detection techniques finds Chinese patent publication No. CN103945228A, published on July 23, 2014, which describes "a spatio-temporal-relationship-based detection method for copy-move tampering within video frames". That method uses whole-frame phase correlation and block matching to determine whether copy-move tampering exists between pairs of frames. However, it targets the copy-move behaviour of moving objects and ignores copy-move tampering of the background or of stationary objects, which limits its applicability to intra-frame tampering detection. A search of the literature finds that the Singaporean scholar Subramanyam detects copy-move using HOG features, while Italian scholars use cross-correlation to detect copied blocks in video; both approaches rely essentially on spatial feature matching and ignore temporal characteristics.
Summary of the invention
In view of the defects in the prior art, the object of the present invention is to provide a homologous video Copy-Move detection method based on temporal offset features.
The homologous video Copy-Move detection method based on temporal offset features provided according to the present invention comprises the following steps:
Step 1: perform spatio-temporal block partitioning preprocessing on the test video to obtain a partitioned video, compute the optical flow sequence of the partitioned video with the LK optical flow method, and obtain the regional optical flow matrix of the video;
Step 2: from the regional optical flow matrix obtained in step 1, compute the temporal offset matrix of each region by phase correlation; the value at each spatial position of the offset matrix is the spatio-temporal coordinate most suspicious with respect to that position, comprising the spatial position and the frame number;
Step 3: partition the test video into blocks and compute the Histogram of Oriented Gradients (HOG) feature of each block to generate a HOG feature matrix;
Step 4: from the suspicious-block positions computed in step 2 for each spatial-coordinate block of every frame, compute the HOG feature difference between each block and its suspicious counterpart, generate a judgement matrix, and compare the judgement matrix against the ground truth to produce a detection accuracy.
Preferably, step 1 comprises the following steps:
Step 1.1: spatially partition each frame of the test video into blocks;
Step 1.2: compute the optical flow of every pixel of every frame of the video with the LK optical flow method;
Step 1.3: for the blocks obtained in step 1.1, take the sum of the optical flow magnitudes within each block as the optical flow value of that block and place it in the optical flow matrix, obtaining the optical flow matrix O, computed as follows:

O(bx, by, t0) = Σ_{(x,y)∈(bx,by)} |of(x, y)|

where bx, by are the spatial X and Y coordinates of the block, t0 is the start frame of the flow, O(bx, by, t0) denotes the optical flow value of block (bx, by) in frame t0, (x, y) ∈ (bx, by) ranges over all pixels in block (bx, by), and of(x, y) denotes the optical flow vector at pixel (x, y).
Preferably, in step 1.1 each frame of the test video is spatially partitioned into blocks; specifically, a block size of 16 × 16 pixels is selected and the blocks are made to overlap one another with a mutual overlap of 50% of the area, so that the overlapping region between adjacent blocks is 8 × 16 pixels.
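As a rough illustration of this overlapping spatial partition, the following Python sketch (helper names are ours; the 16 × 16 block size and 8-pixel stride follow the text, and sliding in both directions is one reading of the 50% overlap) enumerates the block positions and extracts the block contents:

```python
import numpy as np

def block_positions(height, width, block=16, stride=8):
    """Top-left corners of overlapping blocks; each block shares an
    8 x 16 (or 16 x 8) strip with its horizontal (vertical) neighbour."""
    ys = range(0, height - block + 1, stride)
    xs = range(0, width - block + 1, stride)
    return [(y, x) for y in ys for x in xs]

def extract_blocks(frame, block=16, stride=8):
    """Stack the contents of every overlapping block of one grayscale frame."""
    pos = block_positions(*frame.shape, block=block, stride=stride)
    return np.stack([frame[y:y + block, x:x + block] for y, x in pos]), pos

frame = np.arange(64 * 64, dtype=float).reshape(64, 64)
blocks, pos = extract_blocks(frame)
```

For a 64 × 64 frame this yields 7 × 7 = 49 overlapping block positions.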
Preferably, step 1.2 comprises the following steps:
Step 1.2.1: apply Gaussian filtering to the video frame to obtain a smoothed image;
Step 1.2.2: build the multi-scale pyramid images of the smoothed image and compute optical flow at each pyramid level;
Step 1.2.3: merge the optical flow values of all levels into a complete optical flow field.
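A minimal single-scale Lucas-Kanade solver conveys the idea behind these steps (the patent additionally smooths the frames and iterates over a pyramid, which this numpy sketch omits; all function names and the synthetic data are ours):

```python
import numpy as np

def box(a, k):
    """Sum over each k x k neighbourhood (odd k), same shape as the input."""
    pad = k // 2
    ap = np.pad(a, pad, mode="edge")
    S = np.pad(ap, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    return S[k:, k:] - S[:-k, k:] - S[k:, :-k] + S[:-k, :-k]

def lk_flow(I1, I2, win=5, eps=1e-6):
    """Single-scale Lucas-Kanade: per-pixel least-squares flow over a window."""
    Ix = np.gradient(I1, axis=1)
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1
    A11, A12, A22 = box(Ix * Ix, win), box(Ix * Iy, win), box(Iy * Iy, win)
    b1, b2 = -box(Ix * It, win), -box(Iy * It, win)
    det = A11 * A22 - A12 * A12
    det = np.where(np.abs(det) < eps, np.inf, det)  # untextured: flow -> 0
    u = (A22 * b1 - A12 * b2) / det                 # 2x2 Cramer solve
    v = (A11 * b2 - A12 * b1) / det
    return u, v

# Synthetic pair: the second frame is the first shifted right by one pixel.
y, x = np.mgrid[0:64, 0:64]
I1 = np.sin(x / 3.0) + np.cos(y / 4.0)
I2 = np.roll(I1, 1, axis=1)
u, v = lk_flow(I1, I2)
```

On this smooth synthetic pair the recovered horizontal flow is close to 1 and the vertical flow close to 0 away from the borders.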
Preferably, step 2 comprises the following steps:
Step 2.1: temporally partition the test video into clips such that adjacent clips overlap by 25% of their frames; that is, with a clip size of 8 frames, the overlap length is 2 frames;
Step 2.2: take the block optical flow of each clip out of the optical flow matrix as a sample, compute the phase correlation between the sample and the complete optical flow sequence of every other spatial block position, and determine whether an offset exists; if it does, record the offset parameter (x', y', t') at position P(bx, by, t0) of the offset matrix, where bx, by, t0 are respectively the spatial coordinates of the sample and its frame number, and x', y', t' are respectively the spatial coordinates and frame number of the block whose optical flow is phase-correlated with the sample; compute the phase correlation of the clip optical flow of every frame with the optical flow matrix sequence O, and if an offset parameter exists, set the corresponding position of the offset matrix to that parameter, otherwise set it to 0.
Preferably, the phase correlation in step 2.2 is defined as follows:
Taking the time series of optical flow magnitudes as the signal, each spatial position has a sequence of length N, where N is the number of video frames. Let O be the computed optical flow sequence; then

O = { of(x, y, t) | t = 1, …, F; x = 1, …, W; y = 1, …, H }

where t is the frame number, (x, y) is the spatial position of the block whose flow is computed, F is the total number of frames, W and H are respectively the numbers of blocks along the abscissa and ordinate of the video frame, and of is the computed optical flow magnitude. The optical flow sequence of length Δt at spatial coordinate (x0, y0) and time coordinate t0 is taken as the signal template g(t0);
Taking the complete optical flow sequence O as the original signal and g(t0) as the measured signal, the power spectrum δ(t_m) is computed as follows:

δ(t) = F⁻¹[ O_{x,y}(ω)·G*(ω) / |O_{x,y}(ω)·G*(ω)| ],  t_m = argmax⁽²⁾_{t∈V} δ(t)

where O_{x,y}(ω) and G*(ω) are respectively the Fourier transform of O_{x,y} and the complex conjugate of the Fourier transform of g(t0), F⁻¹ denotes the inverse Fourier transform, O_{x,y} is the optical flow sequence of the region block at spatial coordinate (x, y), and V denotes any block. Because the self-comparison case exists, the maximum value is always the shift that aligns the sequence with its own optical flow, so argmax⁽²⁾ takes the second-largest value. By finding the temporal offset t_m of an optical flow sequence identical to g(t0), a tampered region is detected because the copy produces an identical sequence, while a non-tampered region is judged original because no identical sequence can be found;
Since the optical flow sequence of a copy-paste tampered region has another matching optical flow sequence, taking an optical flow sequence of a certain time length as input and computing its phase correlation with the complete optical flow sequence of each spatial coordinate locates the possible offset and thus the potential tampering. The offset matrix P is computed as follows:

P(bx, by, t0) = (x', y', t_m) if a phase-correlation peak is found, and 0 otherwise

where δ(t_m) denotes the phase correlation between the optical flow sequence of length Δt at position (bx, by) in a frame and the complete optical flow sequences of all times, t_m denotes the frame position of the second-largest similarity in the phase correlation, and t0 denotes the frame number of the current frame; if the current block (bx, by, t0) finds no offset against any other block, the position is set to 0, yielding the block offset matrix P for each frame.
Preferably, step 3 comprises the following steps:
Step 3.1: partition the video into blocks using the spatial partitioning method of step 1;
Step 3.2: compute the HOG feature descriptor of each block in the video and place it at the block's spatio-temporal coordinate (x, y, t) to generate the HOG matrix, denoted H, computed as follows:

H(x, y, t) = hog(B(x, y, t)), x = 1, …, BW; y = 1, …, BH

where BW and BH are respectively the total numbers of blocks along the x and y axes, B(x, y, t) is the block at (x, y, t), and hog is the function that computes the HOG descriptor.
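A simplified single-cell HOG descriptor illustrates the hog function (the full Dalal-Triggs descriptor divides the block into cells with block normalization; this numpy sketch keeps one 9-bin unsigned-orientation histogram, and the names are ours):

```python
import numpy as np

def hog_descriptor(block, bins=9):
    """9-bin histogram of unsigned gradient orientations, weighted by
    gradient magnitude and L2-normalized."""
    gx = np.gradient(block, axis=1)
    gy = np.gradient(block, axis=0)
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0          # unsigned angle
    idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)
    hist = np.array([mag[idx == b].sum() for b in range(bins)])
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# A block whose intensity rises left to right has a purely horizontal
# gradient: every pixel votes into orientation bin 0.
block = np.tile(np.arange(16.0), (16, 1))
desc = hog_descriptor(block)
```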
Preferably, step 4 comprises the following steps:
Step 4.1: from the temporal offset matrix obtained in step 2, extract the sequence pairs for which an offset exists;
Step 4.2: compute the difference between the HOG descriptors of each sequence pair from step 4.1, measuring the difference by Euclidean distance; the decision process generates the judgement matrix G:

G(B, B', t) = 1 if ED(H_B, H_{B'}) < T, and 0 otherwise

where ED denotes the Euclidean distance between the HOG descriptors of the two blocks and T is a preset threshold; if the distance between blocks B and B' is less than the threshold T, the judgement matrix value for that block at that moment is 1, otherwise 0.
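The decision rule of step 4.2 is a one-liner; the sketch below (names ours, with the threshold T = 0.5 used in the embodiment as the default) returns the judgement value for one block pair:

```python
import numpy as np

def judge(h_b, h_b_prime, T=0.5):
    """Judgement value for one block pair: 1 (copy-move suspected) when the
    Euclidean distance between the two HOG descriptors is below T."""
    ed = np.linalg.norm(np.asarray(h_b, float) - np.asarray(h_b_prime, float))
    return int(ed < T)
```

Identical descriptors give 1; clearly different descriptors give 0.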
Compared with the prior art, the present invention has the following beneficial effects:
1. The homologous video Copy-Move detection method based on temporal offset features provided by the invention is not limited by the tampering mode: it detects both the foreground and the background parts of the tampered region with high accuracy.
2. The method provided by the invention makes full use of the temporal characteristics of the video, compensating for the shortcomings of spatial-feature-matching approaches, and can detect the otherwise ignored copy-move tampering of the background or of stationary objects.
Brief description of the drawings
Other features, objects and advantages of the invention will become more apparent upon reading the detailed description of non-limiting embodiments with reference to the following drawings:
Fig. 1 is the flow chart for computing the offset matrix provided by the invention;
Fig. 2 is the model framework diagram provided by the invention.
Specific embodiments
The present invention is described in detail below in combination with specific embodiments. The following embodiments will help those skilled in the art to further understand the invention, but do not limit it in any form. It should be pointed out that those of ordinary skill in the art can make various modifications and improvements without departing from the inventive concept; these all belong to the protection scope of the present invention.
The invention proposes an intra-frame copy-move tampering detection method based on temporal offset characteristics. The method first computes the block-region optical flow sequences of the test video with the LK optical flow method, then computes the offset parameters of the optical flow sequences of all regions by phase correlation to build the offset matrix; this offset matrix records, for each block, the spatio-temporal position of the paired block suspected of copy-move. Finally, HOG features are used to measure the Euclidean distance of the spatial texture features between suspected blocks; if the distance is below the threshold, the two blocks are judged to contain a copy-move tampering operation.
Since this mode of tampering necessarily translates the pixels of a region as a whole across time, it produces a temporal offset, and this property makes phase-correlation-based detection possible in the frequency domain. The present invention therefore exploits this characteristic, combining optical flow with phase correlation for tampering detection. Compared with the prior art, the method innovatively uses the temporal characteristics of the video, detects tampered regions regardless of whether they lie in the foreground or background, and improves pixel-level detection accuracy.
Specifically, as shown in Fig. 2, the homologous video copy-move detection method based on temporal offset features provided according to the present invention comprises the following steps:
Step 1: perform spatio-temporal block partitioning preprocessing on the test video, compute its optical flow sequence with the LK optical flow method, and obtain the regional optical flow matrix of the video;
Step 2: from the regional optical flow matrix obtained in step 1, compute the temporal offset matrix of each region by phase correlation;
Step 3: partition the test video into blocks and compute the Histogram of Oriented Gradients (HOG) feature of each block to generate a HOG feature matrix;
Step 4: use the offset matrix obtained in step 2 to compare the HOG feature similarity of the two paired regions, generate a judgement matrix, and compare the judgement matrix against the ground truth to produce a detection accuracy.
Step 1 comprises the following steps:
Step 1.1: spatially partition each frame of the test video; here a block size of 16 × 16 pixels is selected. The blocks are made to overlap one another; here a mutual overlap of 50% of the area is chosen, so that the overlapping region between adjacent blocks is 8 × 16 pixels;
Step 1.2: compute the optical flow of every pixel of every frame of the video with the LK optical flow method: first apply Gaussian filtering to the video frame to obtain a smoothed image, then build the multi-scale pyramid images of the smoothed image, compute optical flow at each pyramid level, and finally merge the levels into a complete optical flow field;
Step 1.3: according to the partitioning of step 1.1, and ignoring the direction attribute of the optical flow for convenience of computation, take the sum of the optical flow magnitudes within each block as the optical flow value of that block and place it in the optical flow matrix, obtaining the optical flow matrix O, computed as follows:

O(bx, by, t0) = Σ_{(x,y)∈(bx,by)} |of(x, y)|

where bx, by are the spatial X and Y coordinates of the block, t0 is the start frame of the flow, O(bx, by, t0) denotes the optical flow value of block (bx, by) in frame t0, (x, y) ∈ (bx, by) ranges over all pixels in block (bx, by), and of(x, y) denotes the optical flow vector at pixel (x, y).
Step 2 comprises the following steps:
Step 2.1: temporally partition the test video into clips with a mutual frame overlap of 25%; here, according to feasibility experiments, the clip size is set to 8 frames with an overlap of 2 frames, i.e. a step of 6 frames and a clip size of 8 frames. Given the clip length, the extracted block optical flow sequence is the regional optical flow, which characterises the temporal features of the block over that period;
Step 2.2: take the block optical flow of each clip out of the optical flow matrix as a sample, compute the phase correlation between the sample and the complete optical flow sequence of every other spatial block position, and determine whether an offset exists; if it does, record the offset parameter (x', y', t') at position P(bx, by, t0) of the offset matrix, where bx, by, t0 are respectively the spatial coordinates of the sample and its frame number, and x', y', t' are respectively the spatial coordinates and frame number of the block whose optical flow is phase-correlated with the sample; proceeding in this way, compute the phase correlation of the clip optical flow of every frame with the optical flow matrix sequence O; if an offset parameter exists, set the corresponding position of the offset matrix to that parameter, otherwise set it to 0.
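The temporal clip partition of step 2.1 (8-frame clips every 6 frames, so consecutive clips share 2 frames) can be sketched as:

```python
def clip_starts(n_frames, size=8, step=6):
    """Start frames of the temporal clips: size-frame clips every `step`
    frames, so consecutive clips share size - step = 2 frames (25%)."""
    return list(range(0, n_frames - size + 1, step))

starts = clip_starts(20)   # a 20-frame toy video
```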
The phase correlation in step 2.2 is defined as follows:
Taking the time series of optical flow magnitudes as the signal, each spatial position has a sequence of length N, where N is the number of video frames. Let O be the computed optical flow sequence; then

O = { of(x, y, t) | t = 1, …, F; x = 1, …, W; y = 1, …, H }

where t is the frame number, (x, y) is the spatial position of the block whose flow is computed, F is the total number of frames, W and H are respectively the numbers of blocks along the abscissa and ordinate of the video frame, and of is the computed optical flow magnitude. The optical flow sequence of length Δt at spatial coordinate (x0, y0) and time coordinate t0 is taken as the signal template g(t0);
Taking the complete optical flow sequence O as the original signal and g(t0) as the measured signal, the power spectrum is computed as follows:

δ(t) = F⁻¹[ O_{x,y}(ω)·G*(ω) / |O_{x,y}(ω)·G*(ω)| ],  t_m = argmax⁽²⁾_{t∈V} δ(t)

where O_{x,y}(ω) and G*(ω) are respectively the Fourier transform of O_{x,y} and the complex conjugate of the Fourier transform of g(t0), and F⁻¹ denotes the inverse Fourier transform. Because the self-comparison case exists, the maximum value is always the shift that aligns the sequence with its own optical flow, so argmax⁽²⁾ takes the second-largest value. By finding the temporal offset t_m of an optical flow sequence identical to g(t0), a tampered region is located through the identical sequence existing in the video, while a non-tampered region is judged original because no identical sequence can be found;
Since the optical flow sequence of the Copy-Move region has another matching optical flow sequence, taking an optical flow sequence of a certain time length as input and computing its phase correlation with the complete optical flow sequence of each spatial coordinate locates the possible offset and thus the potential tampering. The offset matrix is computed as follows:

P(bx, by, t0) = (x', y', t_m) if a phase-correlation peak is found, and 0 otherwise

where δ(t_m) denotes the phase correlation between the optical flow sequence of length Δt at position (bx, by) in frame t0 and the complete optical flow sequences of all times, and t_m denotes the frame position of the second-largest similarity in the phase correlation; if the current block (bx, by, t0) finds no offset against any other block, the position is set to zero, yielding the block offset matrix P for each frame.
Specifically, as shown in Fig. 1, the offset matrix is extracted as follows:
Step 1: extract the optical flow sequence from the optical flow matrix according to the block position of the current frame;
Step 2: compute the phase correlation as described above and judge whether the peak of the energy spectrum exceeds the threshold; if it does, record the offset parameters, namely the offset frame number and the offset spatial coordinate, in the offset matrix; if no peak exceeds the threshold, set the offset matrix position to 0;
Step 3: judge whether the current frame still has uncomputed blocks; if so, repeat the first step, otherwise compute the next frame.
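Putting this procedure together, a toy numpy sketch (the block grid, data, and names are all ours; peak_thresh stands in for the unspecified energy-spectrum threshold) builds the offset matrix P by phase-correlating each block's clip against every block's full sequence and keeping the strongest non-self peak:

```python
import numpy as np

def phase_corr(signal, template):
    """Normalized cross-power spectrum, back in the lag domain."""
    n = len(signal)
    cross = np.fft.fft(signal) * np.conj(np.fft.fft(template, n))
    cross = cross / (np.abs(cross) + 1e-12)
    return np.fft.ifft(cross).real

def offset_matrix(flow, t0, dt=8, peak_thresh=0.5):
    """flow: array of shape (W, H, F) holding each block's flow-magnitude
    sequence. For every block, correlate its [t0, t0+dt) clip against all
    block sequences; record (x', y', t_m) of the strongest non-self peak,
    or 0 when no peak clears the threshold."""
    W, H, F = flow.shape
    P = np.empty((W, H), dtype=object)
    for bx in range(W):
        for by in range(H):
            tpl = flow[bx, by, t0:t0 + dt]
            best_val, best_pos = 0.0, 0
            for x in range(W):
                for y in range(H):
                    corr = phase_corr(flow[x, y], tpl)
                    if (x, y) == (bx, by):
                        corr[t0] = -np.inf     # suppress the self-match peak
                    tm = int(np.argmax(corr))
                    if corr[tm] > best_val:
                        best_val, best_pos = corr[tm], (x, y, tm)
            P[bx, by] = best_pos if best_val > peak_thresh else 0
    return P

# Toy check: the clip of block (0, 0) starting at frame 4 reappears in
# block (1, 1) at frame 20 -- a copy-move trace in the flow sequences.
rng = np.random.default_rng(1)
flow = np.zeros((2, 2, 32))
flow[0, 0, 4:12] = rng.random(8)
flow[1, 1, 20:28] = flow[0, 0, 4:12]
P = offset_matrix(flow, t0=4)
```

In the toy data, P pairs block (0, 0) at frame 4 with block (1, 1) at frame 20, while blocks with no matching sequence stay 0.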
Step 3 comprises the following steps:
Step 3.1: partition the video into blocks using the spatial partitioning method of step 1;
Step 3.2: compute the HOG feature descriptor of each block in the video and place it at the block's spatio-temporal coordinate (x, y, t) to generate the HOG matrix, denoted H, computed as follows:

H(x, y, t) = hog(B(x, y, t)), x = 1, …, BW; y = 1, …, BH

where BW and BH are respectively the total numbers of blocks along the x and y axes, B(x, y, t) is the block at (x, y, t), and hog is the function that computes the HOG descriptor.
Step 4 comprises the following steps:
Step 4.1: spatially partition each frame of the test video, selecting a block size of 16 × 16 pixels; the blocks overlap one another by 50% of the area, so that the overlapping region between adjacent blocks is 8 × 16 pixels;
Step 4.2: compute the difference between the HOG descriptors of each sequence pair from step 4.1, measuring the difference by Euclidean distance; the decision process generates the judgement matrix G:

G(B, B', t) = 1 if ED(H_B, H_{B'}) < T, and 0 otherwise

where ED denotes the Euclidean distance between the HOG descriptors of the two blocks and T is a preset threshold; if the distance between blocks B and B' is less than the threshold T, the judgement matrix value for that block at that moment is 1, otherwise 0. Here, according to feasibility tests, T is set to 0.5.
Specific embodiments of the present invention have been described above. It should be understood that the invention is not limited to the above particular embodiments; those skilled in the art can make various deformations or amendments within the scope of the claims without affecting the substantive content of the invention.

Claims (5)

1. A homologous video Copy-Move detection method based on temporal offset features, characterised by comprising the following steps:
Step 1: perform spatio-temporal block partitioning preprocessing on the test video to obtain a partitioned video, compute the optical flow sequence of the partitioned video with the LK optical flow method, and obtain the regional optical flow matrix of the video;
Step 2: from the regional optical flow matrix obtained in step 1, compute the temporal offset matrix of each region by phase correlation; the value at each spatial position of the offset matrix is the spatio-temporal coordinate paired with that position, comprising the spatial position and the frame number;
Step 3: partition the test video into blocks and compute the Histogram of Oriented Gradients (HOG) feature of each block to generate a HOG feature matrix;
Step 4: from the suspicious-block positions computed in step 2 for each spatial-coordinate block of every frame, compute the HOG feature difference between each block and its suspicious counterpart, generate a judgement matrix, and compare the judgement matrix against the ground truth to produce a detection accuracy;
Step 1 comprises the following steps:
Step 1.1: spatially partition each frame of the test video into blocks;
Step 1.2: compute the optical flow of every pixel of every frame of the video with the LK optical flow method;
Step 1.3: for the blocks obtained in step 1.1, take the sum of the optical flow magnitudes within each block as the optical flow value of that block and place it in the optical flow matrix, obtaining the optical flow matrix O, computed as follows:

O(bx, by, t0) = Σ_{(x,y)∈(bx,by)} |of(x, y)|

where bx, by are the spatial X and Y coordinates of the block, t0 is the start frame of the flow, O(bx, by, t0) denotes the optical flow value of block (bx, by) in frame t0, (x, y) ∈ (bx, by) ranges over all pixels in block (bx, by), and of(x, y) denotes the optical flow vector at pixel (x, y);
Step 2 comprises the following steps:
Step 2.1: temporally partition the test video into clips such that adjacent clips overlap by 25% of their frames; that is, with a clip size of 8 frames, the overlap length is 2 frames;
Step 2.2: take the block optical flow of each clip out of the optical flow matrix as a sample, compute the phase correlation between the sample and the complete optical flow sequence of every other spatial block position, and determine whether an offset exists; if it does, record the offset parameter (x', y', t') at position P(bx, by, t0) of the offset matrix, where bx, by, t0 are respectively the spatial coordinates of the sample and its frame number, and x', y', t' are respectively the spatial coordinates and frame number of the block whose optical flow is phase-correlated with the sample; compute the phase correlation of the clip optical flow of every frame with the optical flow matrix sequence O, and if an offset parameter exists, set the corresponding position of the offset matrix to that parameter, otherwise set it to 0;
The phase correlation in step 2.2 is defined as follows:
Taking the time series of optical flow magnitudes as the signal, each spatial position has a sequence of length N, where N is the number of video frames. Let O be the computed optical flow sequence; then

O = { of(x, y, t) | t = 1, …, F; x = 1, …, W; y = 1, …, H }

where t is the frame number, (x, y) is the spatial position of the block whose flow is computed, F is the total number of frames, W and H are respectively the numbers of blocks along the abscissa and ordinate of the video frame, and of is the computed optical flow magnitude. The optical flow sequence of length Δt at spatial coordinate (x0, y0) and time coordinate t0 is taken as the signal template g(t0);
Taking the complete optical flow sequence O as the original signal and g(t0) as the measured signal, the power spectrum δ(t_m) is computed as follows:

δ(t) = F⁻¹[ O_{x,y}(ω)·G*(ω) / |O_{x,y}(ω)·G*(ω)| ],  t_m = argmax⁽²⁾_{t∈V} δ(t)

where O_{x,y}(ω) and G*(ω) are respectively the Fourier transform of O_{x,y} and the complex conjugate of the Fourier transform of g(t0), F⁻¹ denotes the inverse Fourier transform, O_{x,y} is the optical flow sequence of the region block at spatial coordinate (x, y), and V denotes any block. The optical flow sequence under test is compared with all other possible optical flow sequences, so the maximum value is always the shift that aligns the sequence with its own optical flow; argmax⁽²⁾ therefore takes the second-largest value, excluding the self-comparison case. By finding the temporal offset t_m of an optical flow sequence identical to g(t0), a tampered region is detected because the copy produces an identical sequence, while a non-tampered region is judged original because no identical sequence can be found;
Since the optical flow sequence of a copy-paste tampered region has another matching optical flow sequence, taking an optical flow sequence of a certain time length as input and computing its phase correlation with the complete optical flow sequence of each spatial coordinate locates the possible offset and thus the potential tampering. The offset matrix P is computed as follows:

P(bx, by, t0) = (x', y', t_m) if a phase-correlation peak is found, and 0 otherwise

where δ(t_m) denotes the phase correlation between the optical flow sequence of length Δt at position (bx, by) in a frame and the complete optical flow sequences of all times, t_m denotes the frame position of the second-largest similarity in the phase correlation, and t0 denotes the frame number of the current frame; if the current block (bx, by, t0) finds no offset against any other block, the position is set to 0, yielding the block offset matrix P for each frame.
2. The homologous video Copy-Move detection method based on temporal offset features according to claim 1, characterised in that in step 1.1 each frame of the test video is spatially partitioned into blocks; specifically, a block size of 16 × 16 pixels is selected and the blocks are made to overlap one another with a mutual overlap of 50% of the area, so that the overlapping region between adjacent blocks is 8 × 16 pixels.
3. it is according to claim 1 based on when inclined feature the homologous Copy-Move detection method of video, which is characterized in that The step 1.2 includes the following steps:
Step 1.2.1: apply Gaussian filtering to the video frame to obtain a smoothed image;
Step 1.2.2: extract pyramid-model images of the smoothed image at different scales, and compute the optical flow for each pyramid layer;
Step 1.2.3: merge the optical-flow values of all layers into a complete optical-flow field.
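As a rough illustration of steps 1.2.1 and 1.2.2 (the patent does not specify the smoothing kernel or pyramid depth; a separable 1-2-1 kernel and three levels are assumed here), a Gaussian pyramid might be built as follows; per-level optical flow and the coarse-to-fine merge of step 1.2.3 would then operate on these levels.

```python
import numpy as np

def gaussian_blur(img, kernel=(0.25, 0.5, 0.25)):
    """Step 1.2.1: separable 1-2-1 Gaussian smoothing (assumed kernel)."""
    k = np.asarray(kernel)
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, img)

def gaussian_pyramid(frame, levels=3):
    """Step 1.2.2: pyramid-model images at different scales; each level is
    the previous one smoothed and downsampled by a factor of two."""
    pyr = [np.asarray(frame, dtype=float)]
    for _ in range(levels - 1):
        pyr.append(gaussian_blur(pyr[-1])[::2, ::2])
    return pyr
```

A pyramidal flow method (e.g. pyramidal Lucas-Kanade) would estimate flow at the coarsest level first, then upsample and refine it level by level.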
4. The video homologous Copy-Move detection method based on time-offset features according to claim 1, characterized in that step 3 comprises the following steps:
Step 3.1: partition the video into blocks using the spatial-domain partitioning method adopted in step 1;
Step 3.2: compute the HOG feature descriptor of each block in the video and place it at the space-time coordinate (x, y, t) of that block, generating the HOG matrix, denoted H, computed as follows:
In the formula, BW and BH are the numbers of blocks along the x and y axes respectively, and hog is the function that computes the HOG descriptor.
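The patent's `hog` function is not spelled out; a common simplification (a magnitude-weighted orientation histogram per block, without cell/block normalization) is sketched below, together with the descriptor grid H of claim 4. All names are illustrative.

```python
import numpy as np

def hog_descriptor(block, bins=9):
    """Minimal HOG-style descriptor for one block: a histogram of unsigned
    gradient orientations weighted by gradient magnitude, L2-normalized."""
    gy, gx = np.gradient(np.asarray(block, dtype=float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)        # unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    n = np.linalg.norm(hist)
    return hist / n if n > 0 else hist

def hog_matrix(frame, block=16, stride=8, bins=9):
    """Descriptor grid H for one frame, indexed by block coordinates; over a
    video, stacking these per frame gives H at coordinates (x, y, t)."""
    h, w = frame.shape
    bh = (h - block) // stride + 1                 # BH: blocks along y
    bw = (w - block) // stride + 1                 # BW: blocks along x
    H = np.zeros((bh, bw, bins))
    for i in range(bh):
        for j in range(bw):
            y, x = i * stride, j * stride
            H[i, j] = hog_descriptor(frame[y:y + block, x:x + block], bins)
    return H
```

On a purely horizontal intensity ramp, every block's gradient points along x, so all histogram mass falls in the first orientation bin.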
5. The video homologous Copy-Move detection method based on time-offset features according to claim 1, characterized in that step 4 comprises the following steps:
Step 4.1: according to the time-domain offset matrix obtained in step 2, extract the sequence pairs for which an offset exists;
Step 4.2: compute the difference between the HOG descriptors of each sequence pair from step 4.1, using Euclidean distance to describe the dissimilarity; the decision process produces a judgment matrix G,
In the formula, ED denotes the Euclidean distance between the HOG descriptors of two blocks, and T is a preset threshold; if the distance between two blocks B and B' is less than T, the judgment-matrix value at the time instant of block B is set to 1, otherwise it is set to 0.
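The thresholding of step 4.2 can be sketched as follows; the patent gives no value for T, so the default here is an arbitrary placeholder, and the dictionary-based judgment structure is an illustrative choice.

```python
import numpy as np

def judgment_matrix(pairs, H, T=0.25):
    """Step 4.2: for each candidate pair (a, b) of block indices that
    exhibited a time offset, mark G[a] = 1 when the Euclidean distance
    ED(B, B') between their HOG descriptors is below threshold T
    (threshold value assumed), otherwise 0."""
    G = {}
    for (a, b) in pairs:
        ed = np.linalg.norm(H[a] - H[b])   # ED between the two descriptors
        G[a] = 1 if ed < T else 0
    return G
```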
CN201510934885.7A 2015-12-14 2015-12-14 Video homologous Copy-Move detection method based on time-offset features Active CN105578198B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510934885.7A CN105578198B (en) 2015-12-14 2015-12-14 Video homologous Copy-Move detection method based on time-offset features


Publications (2)

Publication Number Publication Date
CN105578198A CN105578198A (en) 2016-05-11
CN105578198B true CN105578198B (en) 2019-01-11

Family

ID=55887795

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510934885.7A Active CN105578198B (en) 2015-12-14 2015-12-14 Video homologous Copy-Move detection method based on time-offset features

Country Status (1)

Country Link
CN (1) CN105578198B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108573269B (en) * 2017-10-24 2021-02-05 北京金山云网络技术有限公司 Image feature point matching method, matching device, electronic device and storage medium
CN111083489A (en) 2018-10-22 2020-04-28 北京字节跳动网络技术有限公司 Multiple iteration motion vector refinement
WO2020098647A1 (en) 2018-11-12 2020-05-22 Beijing Bytedance Network Technology Co., Ltd. Bandwidth control methods for affine prediction
EP3861742A4 (en) 2018-11-20 2022-04-13 Beijing Bytedance Network Technology Co., Ltd. Difference calculation based on patial position
EP3915259A4 (en) 2019-03-06 2022-03-30 Beijing Bytedance Network Technology Co., Ltd. Usage of converted uni-prediction candidate
CN112584146B (en) * 2019-09-30 2021-09-28 复旦大学 Method and system for evaluating interframe similarity
CN113704551A (en) * 2021-08-24 2021-11-26 广州虎牙科技有限公司 Video retrieval method, storage medium and equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101650830A (en) * 2009-08-06 2010-02-17 中国科学院声学研究所 Compressed domain video lens mutation and gradient union automatic segmentation method and system
CN102156702A (en) * 2010-12-17 2011-08-17 南方报业传媒集团 Fast positioning method for video events from rough state to fine state
EP2517176A1 (en) * 2009-12-21 2012-10-31 ST-Ericsson (France) SAS Method for regenerating the background of digital images of a video stream
CN103390040A (en) * 2013-07-17 2013-11-13 南京邮电大学 Video copy detection method
CN103945228A (en) * 2014-03-28 2014-07-23 上海交通大学 Video intra-frame copy-move tampering detection method based on space and time relevance
CN105141968A (en) * 2015-08-24 2015-12-09 武汉大学 Video same-source copy-move tampering detection method and system



Similar Documents

Publication Publication Date Title
CN105578198B (en) Video homologous Copy-Move detection method based on time-offset features
CN106096577B (en) A kind of target tracking method in camera distribution map
CN105528619B (en) SAR remote sensing image variation detection method based on wavelet transformation and SVM
Pan et al. Robust abandoned object detection using region-level analysis
CN104517095B (en) A kind of number of people dividing method based on depth image
Ju et al. RETRACTED ARTICLE: Moving object detection based on smoothing three frame difference method fused with RPCA
Li et al. Abnormal event detection based on sparse reconstruction in crowded scenes
Li et al. Fast cat-eye effect target recognition based on saliency extraction
Wang et al. Prohibited items detection in baggage security based on improved YOLOv5
Akyüz Photographically Guided Alignment for HDR Images.
CN104504162B (en) A kind of video retrieval method based on robot vision platform
Hassan et al. Illumination invariant stationary object detection
Bostanci et al. Feature coverage for better homography estimation: an application to image stitching
Heng et al. High accuracy flashlight scene determination for shot boundary detection
Sankar et al. Feature based classification of computer graphics and real images
Khatoon et al. A robust and enhanced approach for human detection in crowd
Qin et al. Gesture recognition from depth images using motion and shape features
CN105141968A (en) Video same-source copy-move tampering detection method and system
Chittapur et al. Video forgery detection using motion extractor by referring block matching algorithm
CN105205826B (en) A kind of SAR image azimuth of target method of estimation screened based on direction straight line
CN104933739A (en) Flame detection method based on I1I2I3 color space
Chen et al. Early fire detection using HEP and space-time analysis
Yu Localization and extraction of the four clock-digits using the knowledge of the digital video clock
Lu et al. Research on an improved visual background extraction algorithm
Osborne et al. Temporally stable feature clusters for maritime object tracking in visible and thermal imagery

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant