CN104469361A - Video frame deletion evidence obtaining method with motion self-adaptability - Google Patents

Video frame deletion evidence obtaining method with motion self-adaptability

Info

Publication number
CN104469361A
CN104469361A (application CN201410843795.2A)
Authority
CN
China
Prior art keywords
frame
video
motion
sequence
hist
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410843795.2A
Other languages
Chinese (zh)
Other versions
CN104469361B (en)
Inventor
徐正全
冯春晖
张文婷
贾姗
徐彦彦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN201410843795.2A priority Critical patent/CN104469361B/en
Publication of CN104469361A publication Critical patent/CN104469361A/en
Application granted granted Critical
Publication of CN104469361B publication Critical patent/CN104469361B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention provides a motion-adaptive video frame-deletion forensic method. The method comprises the following steps: an encoder with the P-frame intra-prediction modes removed is generated; the modified encoder is used to preprocess the video under test, removing intra-frame coding, to obtain a preprocessed video sequence; the fluctuation intensity of the motion-residual data of the P frames in the preprocessed sequence is quantized to obtain a fluctuation-intensity sequence; a set of candidate frame-deletion points is located from the fluctuation-intensity sequence; and color balance and the average gradient are used to remove the interference frames caused by abrupt illumination changes and by focus jitter, yielding the final detection result. By quantizing the degree of residual fluctuation, the method removes the main interference frames in low- and medium-motion regions while locating the tampered frames; by preprocessing the video under test with a modified video encoder that removes intra prediction, it strengthens the robustness of the fluctuation feature in strong-motion regions. The method can accurately locate frame-deletion positions and remove the jitter interference frames that are ubiquitous in video, and is highly practical.

Description

A motion-adaptive video frame-deletion forensic method
Technical field
The present invention relates to the field of visual media information security, and more specifically to a motion-adaptive video frame-deletion forensic method.
Background technology
Malicious tampering with visual media can cause serious social and legal problems. Fabricated images or videos may be used as false evidence in court, in misleading or inflammatory news reports, or as forged proof for insurance claims. Digital watermarking is one effective way to detect tampering of visual media, but it requires embedding watermark information at capture time, which restricts its application to special capture devices. Unlike digital watermarking, visual media forensics embeds no additional information in the media; it determines the credibility of media data solely by analyzing the inherent features present in the data, recovering the acquisition and post-processing history of the visual media. Visual media forensics has emerged as a research field over roughly the past decade and comprises digital image forensics and digital video forensics. Image forensics started earlier and is relatively mature, while, with the broadening and deepening of digital video applications, digital video forensics is becoming a focus of visual media forensic research. A typical video-tampering operation is to delete some frames from a video sequence to hide the content those frames recorded: for example, deleting the scene of a player's foul from footage of a football penalty, or deleting frames from surveillance video to fabricate evidence that a suspect was absent. Such situations make video frame-deletion detection quite important.
H.264 and earlier video coding standards mainly define three classes of pictures: I frames, P frames, and B frames. An I frame is a key frame and uses intra coding only; a P frame is inter-predicted from a preceding key frame or P frame; a B frame is inter-coded with reference to both preceding and following frames. To improve coding efficiency, P and B frames may contain both inter-coded and intra-coded macroblocks. In the baseline profile of the coding algorithm, generally only I and P frames are used.
Under existing coding standards, current frame-deletion forensic methods fall into two classes. The first class uses the secondary effects produced by frame-deletion tampering as detection evidence; the second class analyzes statistical features of the deletion-point frame itself. In the first class, the secondary effect refers to the residual-activity effect produced when an I frame of the first compression, after frame deletion, is re-encoded as a P frame in the second compression; such a frame is here called a relocated I frame. The relocated-I-frame effect is effective in some circumstances but fails in the following ones. First, when the video content moves strongly, the effect weakens or even disappears. Second, when the sequence has few frames and the group-of-pictures (GOP) length is large, the effect becomes sparse, degrading the precision of its period estimation. Third, when an integer multiple of the GOP is deleted and the GOP structures of the first and second compressions are identical, the I frames of the first compression are not relocated to P frames at all, so the effect does not exist. Compared with the first class, the second class is unaffected by sequence length, GOP length, or deletion position; it exploits the essential characteristic that the frame at the deletion point (the deletion-point frame for short) differs strongly in time from its reference frame, and locates frames with strong frame-difference saliency as deletion points. Specific algorithms of this class analyze, for example, the continuity of the optical flow or of the motion field of the video sequence. These methods, however, do not analyze how changes in the motion intensity of the content affect the forensic algorithm, and they cannot remove the transient noise points, ubiquitous in video sequences, that are not deletion points.
Summary of the invention
To overcome the above deficiencies of the prior art, the present invention provides a frame-deletion forensic method that automatically adapts to video content of different motion intensities.
To achieve the above object, the technical solution adopted by the present invention is a motion-adaptive video frame-deletion forensic method comprising the following steps.
Step 1: generate an encoder with the P-frame intra-prediction modes removed;
Step 2: use the modified encoder to preprocess the video under test S_TAM, removing intra-frame coding, to obtain the preprocessed video sequence S_TAM';
Step 3: quantize the fluctuation intensity of the P-frame motion-residual data in the preprocessed video sequence S_TAM' to obtain a fluctuation-intensity sequence;
Step 4: locate, from the fluctuation-intensity sequence, the set C_FDP of candidate deletion-point frames, realized as follows:
for the k-th frame of the fluctuation-intensity sequence, compute the mean r̄(k) of the fluctuation intensities of its 2W neighboring frames, where W is a preset window length;
compare the fluctuation intensity r(k) of the k-th frame with the mean r̄(k) to obtain the ratio
y(k) = r(k) / r̄(k) - 1;
if y(k) > THR_R, mark the current frame as a candidate deletion-point frame;
Step 5: use color balance and the average gradient to remove, from the set C_FDP, the interference frames caused by abrupt illumination changes and by focus jitter, yielding the final detection result, realized as follows.
Use color balance to remove the abrupt-illumination interference frames in C_FDP, analyzing each frame of C_FDP in turn as the frame under test:
a) let the frame under test and its reference frame be I and I-1, with gray-level histograms hist_I and hist_{I-1}; apply color balancing to I and I-1 to obtain I' and (I-1)', with gray-level histograms hist_{I'} and hist_{(I-1)'};
b) compute the gray-level-histogram differences of I and I-1 before and after color balance:
Δ_I = |hist_I - hist_{I'}|
Δ_{I-1} = |hist_{I-1} - hist_{(I-1)'}|;
c) compute the Bhattacharyya distance of the two histogram differences; if
d_Bhattacharyya(Δ_I, Δ_{I-1}) > α
then mark I as an illumination-change frame, where d_Bhattacharyya denotes the function computing the Bhattacharyya distance and α is a corresponding preset threshold.
Use the average gradient to remove the focus-jitter interference frames in C_FDP, analyzing each frame of C_FDP in turn as the frame under test:
let the average gradient of the frame image I under test be G_I; if G_I < β, mark I as a focus-jitter frame, where β is a corresponding preset threshold.
Moreover, step 3 comprises the following substeps.
Step 3.1: partially decode the video sequence S_TAM' and extract the motion-residual matrix sequence;
Step 3.2: for the motion-residual matrix of each P frame, first compute the standard deviation of the motion residuals per encoding block:
e_n = (e_{n,1}, e_{n,2}, ..., e_{n,C}), n ∈ [1, N]
s_n = σ(e_n)
s = (s_1, s_2, ..., s_N)
where e_n is the motion-residual matrix of the n-th encoding block, C is the number of motion residuals an encoding block contains, e_{n,1}, e_{n,2}, ..., e_{n,C} are the 1st, 2nd, ..., C-th motion residuals of the n-th block, and N is the number of encoding blocks in a frame; s_n is the standard deviation σ(e_n) of the n-th block's residual matrix e_n, and s is the vector formed by the standard deviations of all encoding blocks in the frame;
Step 3.3: for each P frame, compute the relative smoothness of all elements of the vector s; the result r is the quantized fluctuation intensity of the frame's motion residuals:
R = 1 - 1/(1 + σ²)
r = R(s)
where σ denotes the standard-deviation operator, R the relative-smoothness operator, and R(s) the relative smoothness of all elements of the vector s.
Moreover, in step 4, for the k-th frame of the fluctuation-intensity sequence, the mean r̄(k) of the fluctuation intensities of the 2W neighboring frames is computed as follows:
r̄(k) = (r(3) + r(4)) / 2, for k = 1
r̄(k) = (r(k-1) + r(k+1)) / 2, for k ∈ (1, W+1) ∪ (T-W, T)
r̄(k) = (Σ_{i=1}^{W} (r(k+i) + r(k-i))) / (2W), for k ∈ [W+1, T-W]
r̄(k) = (r(T-1) + r(T-2)) / 2, for k = T
where k is a positive integer, T is the number of frames of the video sequence, and W is the preset window length.
Compared with the prior art, the beneficial effects of the invention are as follows. The invention locates deletion points from the perspective of frame difference and analyzes the interference frames that may be present; it proposes using the fluctuation feature of the frame motion residuals to distinguish deletion points from their main interference in low- and medium-motion regions; it proposes a preprocessing method that removes P-frame intra prediction, strengthening the robustness of the fluctuation feature in strong-motion regions and thereby adapting to videos of varying motion intensity; and it combines color balance and the average gradient with the detection method as additional forensic tools to remove secondary interference frames, further reducing the false-detection rate. The invention can accurately locate frame-deletion positions in videos whose motion intensity varies over time and removes the jitter interference frames ubiquitous in video, and thus has strong practicality.
Accompanying drawing explanation
Fig. 1 is the overall flowchart of the embodiment of the present invention;
Fig. 2 compares the motion-residual mean statistics of deletion-point frames and of their main interference frames, the original I frames, in the embodiment;
Fig. 3 compares the motion-residual distributions of an original I frame and a deletion-point frame in the embodiment, where Fig. 3a is the motion-residual distribution of the original I frame and Fig. 3b is that of a deletion-point frame with a similar motion-residual mean;
Fig. 4 compares the motion-residual mean statistics and the motion-residual fluctuation-intensity statistics of deletion-point frames and original I frames in the embodiment, where Fig. 4a is the motion-residual mean histogram of a frame-deletion-tampered video and Fig. 4b is the motion-residual fluctuation histogram of the same video sequence;
Fig. 5 compares the deletion-point fluctuation-intensity feature before and after the intra-prediction-removal preprocessing in the embodiment, where Fig. 5a is the fluctuation-intensity histogram before preprocessing and Fig. 5b is the fluctuation-intensity histogram of the same sequence after preprocessing.
Embodiment
The technical solution of the present invention is further described below with reference to the drawings and embodiments.
The method provided by the technical solution can be run automatically as computer software. As shown in Fig. 1, the motion-adaptive video frame-deletion forensic method provided by the embodiment of the present invention comprises the following steps.
S1: Generate an encoder with the P-frame intra-prediction modes removed. This comprises modifying the P-frame macroblock mode-selection mechanism of a video encoder to disable its intra-prediction modes, producing the modified encoder. The method adopted by the embodiment is:
(1) In the source code of the video encoder (the coding standard may be H.264 or an earlier standard in which P frames use both inter- and intra-prediction modes), modify the P-frame macroblock mode-selection process: exclude the intra-prediction modes from the coding-cost computation, so that mode selection is restricted to the different inter-prediction modes.
(2) Compile the modified source code to generate the modified video encoder.
S2: Use the modified encoder to preprocess the video under test S_TAM, removing intra-frame coding, to obtain S_TAM'. The concrete steps adopted by the embodiment are:
(1) Decode the video sequence under test S_TAM. In practice, S_TAM, produced by a prior-art video encoder, can be decoded with the corresponding prior-art decoder.
(2) Re-encode the decoded sequence with the modified video encoder. The coding parameters are set to: baseline profile, adaptive GOP length disabled, and GOP length set to the maximum the encoder itself can reach. This yields the preprocessed video sequence S_TAM', whose P frames contain only inter-prediction modes.
Setting the coding parameters to the baseline profile makes the video contain only I and P frames. Disabling the adaptive GOP-length option and setting the GOP length to the maximum reduces as far as possible the probability of generating an I frame at a deletion point, because the detection algorithm does not handle the case where an I frame coincides with a deletion point.
In H.264 and earlier video coding standards, a P frame may simultaneously contain macroblocks coded in intra- and inter-prediction modes. When the motion intensity of the video content increases, more macroblocks are intra-coded to improve coding efficiency, so the number of inter-predicted macroblocks falls and the motion residuals no longer fully reflect the temporal difference between each frame and its reference. Modifying the encoder as above lets the motion residuals fully characterize the frame-difference saliency between a deletion-point frame and its reference, strengthening the robustness of the motion-residual fluctuation feature in strong-motion regions. As shown in Fig. 5, where Fig. 5a is the sequence before preprocessing and Fig. 5b the sequence after preprocessing, with frame number on the abscissa and fluctuation intensity on the ordinate, the saliency of the deletion point is clearly enhanced after preprocessing.
S3: Compute the motion-residual fluctuation-intensity sequence (fluctuation-intensity sequence for short) of the preprocessed video, that is, quantize the fluctuation intensity of the P-frame motion-residual data in the video sequence S_TAM'. The method adopted by the embodiment is:
(1) Partially decode the video sequence S_TAM' and extract the motion-residual matrix sequence; the specific implementation can use the prior art and is not detailed in the present invention.
(2) For the motion-residual matrix of each P frame, first compute the standard deviation of the motion residuals per encoding block:
e_n = (e_{n,1}, e_{n,2}, ..., e_{n,C}), n ∈ [1, N]
s_n = σ(e_n)
s = (s_1, s_2, ..., s_N)
where e_n is the motion-residual matrix of the n-th encoding block, C is the number of motion residuals an encoding block contains, e_{n,1}, e_{n,2}, ..., e_{n,C} are the 1st, 2nd, ..., C-th motion residuals of the n-th block, and N is the number of encoding blocks in a frame. s_n is the standard deviation σ(e_n) of the n-th block's residual matrix e_n, and s is the vector formed by the standard deviations of all encoding blocks in the frame.
(3) For each P frame, compute the relative smoothness of all elements of the vector s; the result r is the quantized fluctuation intensity of the frame's motion residuals:
R = 1 - 1/(1 + σ²)
r = R(s)
where σ denotes the standard-deviation operator, R the relative-smoothness operator, and R(s) the relative smoothness of all elements of the vector s.
(4) Compute the r value of each P frame of the video sequence to obtain the motion-residual fluctuation-intensity sequence.
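As a concrete illustration, substeps (2) and (3) can be sketched in a few lines of NumPy. This is a minimal sketch under our own assumptions: the motion residuals are taken to be already extracted, one 1-D array per encoding block, and the function name is illustrative rather than taken from the patent.

```python
import numpy as np

def fluctuation_intensity(residual_blocks):
    """Quantize the motion-residual fluctuation intensity r of one P frame.

    residual_blocks: a sequence of 1-D arrays, the motion residuals e_n of
    each encoding block (this representation is our assumption).
    """
    # Substep (2): standard deviation s_n of each block's residuals
    s = np.array([np.std(block) for block in residual_blocks])
    # Substep (3): relative smoothness R = 1 - 1/(1 + sigma^2), where
    # sigma is the standard deviation over the vector s
    sigma = np.std(s)
    return 1.0 - 1.0 / (1.0 + sigma ** 2)
```

A frame whose blocks all have the same residual spread gives r = 0, while a frame whose blocks fluctuate very differently gives an r approaching 1, which matches the intended use of r as a saliency score.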
In a frame-deletion-tampered video sequence, many untampered frames exhibit residual-mean saliency similar to that of deletion points, interfering with frame-deletion detection; relocated I frames are the most common class of such interference. In low-motion regions of the video, the motion-residual means of both deletion points and original I frames are strongly salient, as shown in Fig. 2, where the abscissa is the frame number and the ordinate is the motion-residual mean. When the motion-residual means are similar, however, the motion-residual fluctuation intensity of an original I frame is significantly smaller than that of a deletion point. As shown in Fig. 3, where the abscissa is the encoding-block index and the ordinate is the motion-residual value, Fig. 3a is the motion-residual distribution of an original I frame with a residual mean of 1.52, and Fig. 3b is that of a deletion point with a similar residual mean of 1.55. Although both means are about 1.5, the residuals of the original I frame are distributed evenly and gently on both sides of the x-axis, while the residual distribution of the deletion point fluctuates markedly more. Quantizing the residual fluctuation as above removes the saliency of original I frames while preserving that of deletion points, effectively distinguishing the two, as shown in Fig. 4, where Fig. 4a is the motion-residual mean histogram of a frame-deletion-tampered video (frame number on the abscissa, mean on the ordinate) and Fig. 4b is the motion-residual fluctuation histogram of the same video sequence (frame number on the abscissa, fluctuation intensity on the ordinate).
S4: Use an adaptive-threshold detection algorithm to locate the frames with strong saliency in the fluctuation-intensity sequence, obtaining the set C_FDP of candidate deletion-point frames. The method is:
(1) For the k-th frame of the fluctuation-intensity sequence obtained in S3, compute the mean r̄(k) of the fluctuation intensities of its 2W neighboring frames (W is a preset window length that those skilled in the art may set empirically; 3 is preferred):
r̄(k) = (r(3) + r(4)) / 2, for k = 1
r̄(k) = (r(k-1) + r(k+1)) / 2, for k ∈ (1, W+1) ∪ (T-W, T)
r̄(k) = (Σ_{i=1}^{W} (r(k+i) + r(k-i))) / (2W), for k ∈ [W+1, T-W]
r̄(k) = (r(T-1) + r(T-2)) / 2, for k = T
where k is a positive integer and T is the number of frames of the video sequence.
(2) Compare the fluctuation intensity r(k) of the k-th frame with the mean r̄(k), for k = 1, 2, ..., T, to obtain the ratio
y(k) = r(k) / r̄(k) - 1.
(3) If y(k) > THR_R (THR_R is the fluctuation-saliency threshold; those skilled in the art can choose an empirical value in the range 0.2 to 2.0 according to the actual situation), the fluctuation feature of the current frame is considered significantly more salient than that of its neighbors, and the current frame is marked as a candidate deletion-point frame; the set of all candidate deletion-point frames is denoted C_FDP.
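The adaptive-threshold detection of S4 can be sketched as follows. This is a non-authoritative illustration: the function name, the 0-based list representation of the patent's 1-based sequence r(k), and the particular THR_R value are our choices within the ranges the patent suggests.

```python
def candidate_frames(r, W=3, thr=0.5):
    """Return the 1-based indices k whose fluctuation intensity r(k) stands
    out from the windowed mean rbar(k), i.e. r(k)/rbar(k) - 1 > thr.

    r is a Python list with r[0] == r(1); W=3 is the patent's preferred
    window, thr=0.5 one value inside the suggested THR_R range 0.2..2.0.
    Assumes the sequence is comfortably longer than 2W.
    """
    T = len(r)

    def rbar(k):  # k is 1-based, mirroring the patent's case analysis
        if k == 1:
            return (r[2] + r[3]) / 2            # (r(3) + r(4)) / 2
        if k == T:
            return (r[T - 2] + r[T - 3]) / 2    # (r(T-1) + r(T-2)) / 2
        if k < W + 1 or k > T - W:
            return (r[k - 2] + r[k]) / 2        # (r(k-1) + r(k+1)) / 2
        # interior frames: full symmetric window of 2W neighbors
        return sum(r[k - 1 + i] + r[k - 1 - i] for i in range(1, W + 1)) / (2 * W)

    return [k for k in range(1, T + 1) if r[k - 1] / rbar(k) - 1 > thr]
```

For a 10-frame sequence that is flat except for a spike at frame 5, the function returns [5]: the spike's intensity far exceeds the mean of its 2W neighbors, while every other frame's ratio stays near zero.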
S5: Use color balance (ACE) and the average gradient to further process each frame in the set C_FDP, removing the abrupt-illumination and focus-jitter interference frames, to obtain the final detection result. The concrete method comprises the following steps:
(1) Analyze each frame in C_FDP and use color balance to locate the abrupt-illumination frames, taking each frame of C_FDP in turn as the frame under test:
a) Let the frame under test and its reference frame be I and I-1, with gray-level histograms hist_I and hist_{I-1}; apply color balancing to the frame images I and I-1 to obtain I' and (I-1)', with gray-level histograms hist_{I'} and hist_{(I-1)'}.
b) Compute the gray-level-histogram differences of frames I and I-1 before and after color balance:
Δ_I = |hist_I - hist_{I'}|
Δ_{I-1} = |hist_{I-1} - hist_{(I-1)'}|
c) Compute the Bhattacharyya distance of the two histogram differences; if
d_Bhattacharyya(Δ_I, Δ_{I-1}) > α
then mark I as an abrupt-illumination frame, where d_Bhattacharyya denotes the function computing the Bhattacharyya distance and α is a corresponding preset threshold (which those skilled in the art may set empirically; values between 0.02 and 0.03 are suggested).
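Step c) relies on the Bhattacharyya distance between the two histogram-difference vectors. Below is a minimal sketch assuming the histograms are plain count vectors; the L1 normalization before comparison and the function names are our assumptions, since the patent only names the distance, and α = 0.025 is one value inside the suggested 0.02..0.03 range.

```python
import numpy as np

def bhattacharyya_distance(p, q):
    """d = -ln(BC) with BC = sum_i sqrt(p_i * q_i), after L1-normalizing
    both non-negative input vectors (normalization is our convention)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    return -np.log(np.sum(np.sqrt(p * q)))

def is_illumination_change(hist_I, hist_I_bal, hist_R, hist_R_bal, alpha=0.025):
    """Step c): flag frame I when the color-balance-induced histogram
    changes of I and of its reference R differ by more than alpha."""
    delta_I = np.abs(np.asarray(hist_I, float) - np.asarray(hist_I_bal, float))
    delta_R = np.abs(np.asarray(hist_R, float) - np.asarray(hist_R_bal, float))
    return bool(bhattacharyya_distance(delta_I, delta_R) > alpha)
```

Identical difference vectors give distance 0 (frame kept), and the distance grows as the two frames respond differently to color balancing, which is the cue the method uses for a genuine illumination change.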
(2) Analyze each frame in C_FDP and use the average-gradient method to locate focus-jitter frames: the sharpness of non-jitter frames in a video stays within a roughly constant range, while the sharpness of focus-jitter frames is clearly lower than that of normal frames. Therefore, compute the average gradient G_I of each frame image I under test in C_FDP (the average gradient can be used to measure image sharpness); if G_I < β, mark I as a focus-jitter frame, where β is a corresponding preset threshold (which those skilled in the art may set empirically; 0.027 is suggested).
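The average gradient used here can be sketched as follows. The forward-difference form sqrt((dx² + dy²)/2) averaged over pixels is one common definition of the average-gradient sharpness measure; the patent does not spell out its exact formula, so this choice, and the assumption that β = 0.027 applies to intensities normalized to [0, 1], are our own.

```python
import numpy as np

def average_gradient(img):
    """Average gradient of a grayscale image, a common sharpness measure:
    mean over pixels of sqrt((dx^2 + dy^2) / 2) with forward differences."""
    img = np.asarray(img, dtype=float)
    dx = img[:-1, 1:] - img[:-1, :-1]   # horizontal forward difference
    dy = img[1:, :-1] - img[:-1, :-1]   # vertical forward difference
    return np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0))

def is_focus_jitter(img, beta=0.027):
    # Frames whose sharpness falls below beta are treated as focus-jitter
    # frames; beta assumes intensities in [0, 1] (our assumption).
    return bool(average_gradient(img) < beta)
```

A perfectly flat frame has average gradient 0 and is flagged as focus jitter, while a frame with strong pixel-to-pixel contrast has a large average gradient and is kept.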
(3) Remove all illumination and focus-jitter frames from the candidate-frame set C_FDP to obtain the final frame-deletion tampering-detection result.
The specific embodiments described herein merely illustrate the spirit of the present invention. Those skilled in the art may make various modifications or supplements to the described embodiments, or substitute them in similar ways, without departing from the spirit of the invention or exceeding the scope defined by the appended claims.

Claims (3)

1. A motion-adaptive video frame-deletion forensic method, characterized by comprising the following steps:
Step 1: generate an encoder with the P-frame intra-prediction modes removed;
Step 2: use the modified encoder to preprocess the video under test S_TAM, removing intra-frame coding, to obtain the preprocessed video sequence S_TAM';
Step 3: quantize the fluctuation intensity of the P-frame motion-residual data in the preprocessed video sequence S_TAM' to obtain a fluctuation-intensity sequence;
Step 4: locate, from the fluctuation-intensity sequence, the set C_FDP of candidate deletion-point frames, realized as follows:
for the k-th frame of the fluctuation-intensity sequence, compute the mean r̄(k) of the fluctuation intensities of its 2W neighboring frames, where W is a preset window length;
compare the fluctuation intensity r(k) of the k-th frame with the mean r̄(k) to obtain the ratio
y(k) = r(k) / r̄(k) - 1;
if y(k) > THR_R, mark the current frame as a candidate deletion-point frame;
Step 5: use color balance and the average gradient to remove, from the set C_FDP, the abrupt-illumination and focus-jitter interference frames, obtaining the final detection result, realized as follows:
use color balance to remove the abrupt-illumination interference frames in C_FDP, analyzing each frame of C_FDP in turn as the frame under test:
a) let the frame under test and its reference frame be I and I-1, with gray-level histograms hist_I and hist_{I-1}; apply color balancing to I and I-1 to obtain I' and (I-1)', with gray-level histograms hist_{I'} and hist_{(I-1)'};
b) compute the gray-level-histogram differences of I and I-1 before and after color balance:
Δ_I = |hist_I - hist_{I'}|
Δ_{I-1} = |hist_{I-1} - hist_{(I-1)'}|;
c) compute the Bhattacharyya distance of the two histogram differences; if
d_Bhattacharyya(Δ_I, Δ_{I-1}) > α
then mark I as an illumination-change frame, where d_Bhattacharyya denotes the function computing the Bhattacharyya distance and α is a corresponding preset threshold;
use the average gradient to remove the focus-jitter interference frames in C_FDP, analyzing each frame of C_FDP in turn as the frame under test:
let the average gradient of the frame image I under test be G_I; if G_I < β, mark I as a focus-jitter frame, where β is a corresponding preset threshold.
2. The motion-adaptive video frame-deletion forensic method according to claim 1, characterized in that step 3 comprises the following substeps:
Step 3.1: partially decode the video sequence S_TAM' and extract the motion-residual matrix sequence;
Step 3.2: for the motion-residual matrix of each P frame, first compute the standard deviation of the motion residuals per encoding block:
e_n = (e_{n,1}, e_{n,2}, ..., e_{n,C}), n ∈ [1, N]
s_n = σ(e_n)
s = (s_1, s_2, ..., s_N)
where e_n is the motion-residual matrix of the n-th encoding block, C is the number of motion residuals an encoding block contains, e_{n,1}, e_{n,2}, ..., e_{n,C} are the 1st, 2nd, ..., C-th motion residuals of the n-th block, and N is the number of encoding blocks in a frame; s_n is the standard deviation σ(e_n) of the n-th block's residual matrix e_n, and s is the vector formed by the standard deviations of all encoding blocks in the frame;
Step 3.3: for each P frame, compute the relative smoothness of all elements of the vector s; the result r is the quantized fluctuation intensity of the frame's motion residuals:
R = 1 - 1/(1 + σ²)
r = R(s)
where σ denotes the standard-deviation operator, R the relative-smoothness operator, and R(s) the relative smoothness of all elements of the vector s.
3. The motion-adaptive video frame-deletion forensic method according to claim 1 or 2, characterized in that, in step 4, the mean r̄(k) of the fluctuation intensities of the 2W frames neighboring the k-th frame of the fluctuation-intensity sequence is computed as follows:
r̄(k) = (r(3) + r(4)) / 2, for k = 1
r̄(k) = (r(k-1) + r(k+1)) / 2, for k ∈ (1, W+1) ∪ (T-W, T)
r̄(k) = (Σ_{i=1}^{W} (r(k+i) + r(k-i))) / (2W), for k ∈ [W+1, T-W]
r̄(k) = (r(T-1) + r(T-2)) / 2, for k = T
where k is a positive integer, T is the number of frames of the video sequence, and W is the preset window length.
CN201410843795.2A 2014-12-30 2014-12-30 Motion-adaptive video frame-deletion forensic method Expired - Fee Related CN104469361B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410843795.2A CN104469361B (en) 2014-12-30 2014-12-30 Motion-adaptive video frame-deletion forensic method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410843795.2A CN104469361B (en) 2014-12-30 2014-12-30 Motion-adaptive video frame-deletion forensic method

Publications (2)

Publication Number Publication Date
CN104469361A true CN104469361A (en) 2015-03-25
CN104469361B CN104469361B (en) 2017-06-09

Family

ID=52914634

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410843795.2A Expired - Fee Related CN104469361B (en) 2014-12-30 2014-12-30 Motion-adaptive video frame-deletion forensic method

Country Status (1)

Country Link
CN (1) CN104469361B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105141968A (en) * 2015-08-24 2015-12-09 武汉大学 Video same-source copy-move tampering detection method and system
CN109561316A (en) * 2018-10-26 2019-04-02 西安科锐盛创新科技有限公司 VR three-dimensional image processing method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060159172A1 (en) * 2005-01-18 2006-07-20 Canon Kabushiki Kaisha Video Signal Encoding Apparatus and Video Data Encoding Method
WO2010057027A1 (en) * 2008-11-14 2010-05-20 Transvideo, Inc. Method and apparatus for splicing in a compressed video bitstream
CN101835040A (en) * 2010-03-17 2010-09-15 天津大学 Digital video source evidence forensics method
CN103533377A (en) * 2013-09-23 2014-01-22 中山大学 Frame deletion manipulation detection method based on H.264/AVC (advanced video coding) video
CN104093033A (en) * 2014-06-12 2014-10-08 中山大学 H264/AVC video frame deletion identification method and deleted frame quantity estimation method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Xu Junyu: "Research on Digital Video Passive Forensics Technology", China Masters' Theses Full-text Database *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105141968A (en) * 2015-08-24 2015-12-09 武汉大学 Video same-source copy-move tampering detection method and system
CN105141968B (en) * 2015-08-24 2016-08-17 Wuhan University Video homologous copy-move tampering detection method and system
CN109561316A (en) * 2018-10-26 2019-04-02 西安科锐盛创新科技有限公司 A VR three-dimensional image processing method

Also Published As

Publication number Publication date
CN104469361B (en) 2017-06-09

Similar Documents

Publication Publication Date Title
Chen et al. Automatic detection of object-based forgery in advanced video
Sitara et al. Digital video tampering detection: An overview of passive techniques
Feng et al. Motion-adaptive frame deletion detection for digital video forensics
Su et al. A practical design of digital video watermarking in H.264/AVC for content authentication
CN103369349B (en) A kind of digital video-frequency quality control method and device thereof
CN103533367B (en) A kind of no-reference video quality evaluating method and device
US20130208942A1 (en) Digital video fingerprinting
Al-Sanjary et al. Detection of video forgery: A review of literature
CN103384331A (en) Video inter-frame forgery detection method based on light stream consistency
CN103067702B (en) Video concentration method used for video with still picture
Feng et al. Automatic location of frame deletion point for digital video forensics
JP4951521B2 (en) Video fingerprint system, method, and computer program product
CN101859440A (en) Block-based motion region detection method
CN101835040A (en) Digital video source evidence forensics method
Akbari et al. A new forensic video database for source smartphone identification: Description and analysis
CN104853186A (en) Improved video steganalysis method based on motion vector reply
Bakas et al. Mpeg double compression based intra-frame video forgery detection using cnn
Tan et al. GOP based automatic detection of object-based forgery in advanced video
CN104469361A (en) Video frame deletion evidence obtaining method with motion self-adaptability
CN102016879A (en) Flash detection
Xu et al. Detection of video transcoding for digital forensics
Sharma et al. A review of passive forensic techniques for detection of copy-move attacks on digital videos
Ouyang et al. The comparison and analysis of extracting video key frame
CN104735409A (en) Single-optical-path surveillance video watermark physical hiding device and digital detection method thereof
US8553995B2 (en) Method and device for embedding a binary sequence in a compressed video stream

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170609

Termination date: 20201230