CN103618899A - Video frame interpolation detecting method and device based on light intensity information - Google Patents

Video frame interpolation detecting method and device based on light intensity information

Info

Publication number
CN103618899A
CN103618899A CN201310651629.8A
Authority
CN
China
Prior art keywords
brightness
frame image
average
Euclidean distance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310651629.8A
Other languages
Chinese (zh)
Other versions
CN103618899B (en)
Inventor
黄添强
吴铁浩
卓华
邱源峰
陈云锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
FUJIAN SANAO INFORMATION TECHNOLOGY Co Ltd
Fujian Normal University
Original Assignee
FUJIAN SANAO INFORMATION TECHNOLOGY Co Ltd
Fujian Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by FUJIAN SANAO INFORMATION TECHNOLOGY Co Ltd, Fujian Normal University filed Critical FUJIAN SANAO INFORMATION TECHNOLOGY Co Ltd
Priority to CN201310651629.8A priority Critical patent/CN103618899B/en
Publication of CN103618899A publication Critical patent/CN103618899A/en
Application granted granted Critical
Publication of CN103618899B publication Critical patent/CN103618899B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention provides a video frame interpolation detection method and device based on light intensity information. Six illumination brightness features are extracted from each frame of a video, standardized and fused; suspicious inserted frames are then separated according to a threshold applied to the relationship between each frame's illumination-intensity distance and the mean distance over all frames. Frame interpolation from the same camera at a different time, from a different camera at the same time, and from a different camera at a different time can thus all be detected effectively.

Description

Video frame interpolation tampering detection method and device based on light intensity information
Technical field
The present invention relates to the field of video image analysis, and in particular to a video frame interpolation tampering detection method and device based on light intensity information.
Background technology
In the era of ubiquitous digital multimedia, imaging devices of all kinds are used routinely in every field. With the emergence and continuous upgrading of advanced video editing software, modifying a picture or a piece of video has become easier and easier. Once this technology is exploited by criminals, it may have unintended consequences for society. Research on digital video tampering detection has therefore become an important and urgently needed technology.
At present, research on digital video tampering forensics at home and abroad falls mainly into two classes. The first class targets features of the video coding process (such as MPEG encoding). Because a tampered digital video has usually been re-compressed, and re-compression disturbs the inherent regularities of the frame sequence, tampering can be detected by analyzing whether those regularities of a specific coding format have been destroyed. For MPEG-coded video sequences, Wang et al. proposed exploiting the periodic variation of the prediction residual before and after tampering. Building on Wang's work, and addressing its remaining problems, Huang et al. proposed a digital video tampering forensics algorithm based on bidirectional motion vectors, which greatly improved detection accuracy. The second class builds on existing digital image tampering forensics algorithms, analyzing the pattern noise introduced during video capture, or the continuity and other statistical properties of the frame sequence. Hsu et al. proposed a video tampering detection algorithm based on pattern-noise correlation: the pattern noise of each frame is first extracted, the frames containing only pattern noise are divided into blocks, and the temporal correlation between adjacent blocks is computed and modeled with a Gaussian mixture model to decide whether the video has been tampered with. Wang et al. likewise use video pattern noise, judging the correlation between the noise of the frames to be identified and the reference pattern noise against an experimentally determined threshold to locate tampered regions. In addition, Huang proposed a video tampering detection method using pattern-noise cluster analysis, which applies data-mining algorithms to video tampering detection and achieves good detection accuracy.
However, both classes of algorithms have limitations. First, algorithms that target the video coding process can achieve good detection results, but they are tied to a particular video format and therefore lack generality. Second, although algorithms based on pattern-noise forensics break free of the format restriction, they can only detect tampering in which the inserted frames come from a different camera; once the inserted frames were shot by the same camera, such algorithms cannot detect the tampering accurately.
Summary of the invention
The object of the present invention is to overcome the above defects by providing a video frame interpolation tampering detection method and device based on light intensity information.
The object of the present invention is achieved as follows: a video frame interpolation tampering detection method based on light intensity information, comprising the steps of:
A) inputting the digital video to be detected;
B) converting the digital video to be detected into a sequence of frame images;
C) extracting brightness information from each frame image; the brightness information comprises the brightness mean and brightness variance of the frame image, and the saturation mean, saturation variance, value (brightness) mean and value (brightness) variance of the frame image in the HSV color space;
D) processing the brightness information to generate a fused brightness value;
E) calculating the Euclidean distance between the fused brightness of each frame image and the fused brightness values of all the other frame images;
F) calculating the mean Euclidean distance over all frame images, comparing each frame image's deviation of its Euclidean distance from the mean Euclidean distance, and, when the deviation exceeds a threshold, locating that frame image as an abnormal inserted frame;
In the above method, in step C, let the brightness mean of the frame image in the brightness information be EH and the brightness variance be VarH^2, let the saturation mean of the frame image in the HSV color space be ES and the saturation variance be VarS^2, and let the value (brightness) mean be EV and the value variance be VarV^2; then:
EH = \frac{1}{NM}\sum_{x=1}^{N}\sum_{y=1}^{M} H(x,y),
VarH^2 = \frac{1}{NM}\sum_{x=1}^{N}\sum_{y=1}^{M}\bigl(H(x,y)-EH\bigr)^2,
ES = \frac{1}{NM}\sum_{x=1}^{N}\sum_{y=1}^{M} HSV(:,:,2)(x,y),
VarS^2 = \frac{1}{NM}\sum_{x=1}^{N}\sum_{y=1}^{M}\bigl(HSV(:,:,2)(x,y)-ES\bigr)^2,
EV = \frac{1}{NM}\sum_{x=1}^{N}\sum_{y=1}^{M} HSV(:,:,3)(x,y),
VarV^2 = \frac{1}{NM}\sum_{x=1}^{N}\sum_{y=1}^{M}\bigl(HSV(:,:,3)(x,y)-EV\bigr)^2,
where H(x, y) is the maximum of the R, G and B primary-color values of the frame image at (x, y), N and M are respectively the number of pixels along the length and width of the frame image, HSV(:,:,2) denotes the saturation channel of the HSV color space extracted from the RGB color space, and HSV(:,:,3) denotes the value (brightness) channel of the HSV color space extracted from the RGB color space;
In the above method, let the fused brightness of step D be LC and let i be the frame index in the frame image sequence; the fused brightness LC(i) of each frame image can then be obtained from the following formula:
LC(i) = \bigl(EH(i)+ES(i)+EV(i)\bigr)\cdot e^{-\left(\frac{VarH^2(i)}{2}+\frac{VarS^2(i)}{2}+\frac{VarV^2(i)}{2}\right)}, where:
EH(i) is the brightness mean of the i-th frame image;
VarH^2(i) is the brightness variance of the i-th frame image;
ES(i) is the saturation mean of the i-th frame image in the HSV color space;
VarS^2(i) is the saturation variance of the i-th frame image in the HSV color space;
EV(i) is the value (brightness) mean of the i-th frame image in the HSV color space;
VarV^2(i) is the value (brightness) variance of the i-th frame image in the HSV color space;
In the above method, in step E, let the Euclidean distance be LDur; it can be obtained from the following formula:
LDur(i) = \sqrt{\sum_{j=1}^{N}\bigl(LC(i)-LC(j)\bigr)^2}.
In the above method, in step F, let the mean Euclidean distance be ELDur; it can be obtained from the following formula:
ELDur = \frac{1}{N}\sum_{i=1}^{N} LDur(i);
In the above method, step F compares, for a preset number of consecutive frame images, the deviation of each frame image's Euclidean distance from the mean Euclidean distance, and, when all of these deviations exceed the threshold, locates those consecutive frame images as abnormal inserted frames;
In the above method, in step F, the preset number of consecutive frame images is no less than 12; the threshold of the deviation is no less than 2.8.
The present invention also provides a video frame interpolation tampering detection device based on light intensity information, comprising:
an input module, for inputting the digital video to be detected and forwarding it to the conversion module;
a conversion module, for converting the digital video to be detected into a sequence of frame images and forwarding it to the extraction module;
an extraction module, for extracting the brightness information of each frame image and forwarding it to the fusion module; the brightness information comprises the brightness mean and brightness variance of the frame image, and the saturation mean, saturation variance, value (brightness) mean and value (brightness) variance of the frame image in the HSV color space;
a fusion module, for processing the brightness information to generate a fused brightness value and forwarding it to the computing module;
a computing module, for calculating the Euclidean distance between the fused brightness of each frame image and the fused brightness values of all the other frame images, and forwarding the result to the anomaly locating module;
an anomaly locating module, for calculating the mean Euclidean distance over all frame images, comparing each frame image's deviation of its Euclidean distance from the mean Euclidean distance, and, when the deviation exceeds the threshold, locating that frame image as an abnormal inserted frame;
In the above device, the anomaly locating module is specifically configured to compare, for a preset number of consecutive frame images, the deviation of each frame image's Euclidean distance from the mean Euclidean distance, and, when all of these deviations exceed the threshold, to locate those consecutive frame images as abnormal inserted frames;
In the above device, in the anomaly locating module, the preset number of consecutive frame images is no less than 12; the threshold of the deviation is no less than 2.8.
The beneficial effect of the present invention is to provide a detection algorithm for digital video frame interpolation tampering based on illumination brightness information: six illumination brightness features are extracted from every frame of the video, standardized and fused, and suspicious inserted frames are finally isolated according to a threshold applied to the relationship between each frame's illumination-intensity distance and the mean distance. Frame interpolation tampering from the same camera at a different time, from a different camera at the same time, and from a different camera at a different time can thereby be detected effectively.
Brief description of the drawings
The specific structure of the present invention is described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of the method of the present invention;
Fig. 2 shows the HSV color space model;
Fig. 3(a) shows a source video frame in the application example;
Fig. 3(b) shows an inserted video frame;
Fig. 4 is the LDur curve obtained after the application example is processed by the method of the present invention.
Embodiment
To describe in detail the technical content, structural features, objects and effects of the present invention, the following detailed explanation is given in conjunction with embodiments and the accompanying drawings.
To address the shortcomings of the two classes of algorithms described above, a digital video frame interpolation tampering detection algorithm based on light intensity information is proposed herein.
A video frame interpolation tampering detection method based on light intensity information comprises the steps of:
A) inputting the digital video to be detected;
B) converting the digital video to be detected into a sequence of frame images;
C) extracting brightness information from each frame image; the brightness information comprises the brightness mean and brightness variance of the frame image, and the saturation mean, saturation variance, value (brightness) mean and value (brightness) variance of the frame image in the HSV color space;
D) processing the brightness information to generate a fused brightness value;
E) calculating the Euclidean distance between the fused brightness of each frame image and the fused brightness values of all the other frame images;
F) calculating the mean Euclidean distance over all frame images, comparing each frame image's deviation of its Euclidean distance from the mean Euclidean distance, and, when the deviation exceeds a threshold, locating that frame image as an abnormal inserted frame.
For video shot outdoors, environmental and other factors cause the illumination brightness of the video captured by a camera to change smoothly. Video shot during two different time periods will differ in brightness even when it is captured by the same camera, and the same holds for different cameras. Moreover, because of the dual (wave-particle) nature of light, the illumination intensity over a continuous period does not vary along a straight line but fluctuates within a certain range. The illumination intensity of the same video content shot at different points in time therefore fluctuates within different ranges, which may overlap. The technical solution of the present invention makes full use of illumination intensity as an important piece of information about the video scene: to better express the integrity of the video and to isolate abnormal frames, the present solution uses the Euclidean distance to measure the illumination-intensity distance between each video frame and the other frames, and then locates abnormal inserted frames according to a defined threshold on this relationship. Because the light intensity characteristics of each video scene change as the shooting time shifts, for a video that has been tampered with by frame insertion, the scene light intensity characteristics of the composite video will show a discontinuity as long as the inserted frames were shot at a different point in time, regardless of whether they originate from the original video. The technical solution of the present invention realizes tampering detection around exactly this principle.
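For illustration only, the following is a minimal sketch of the overall workflow of steps A to F, assuming Python with OpenCV and NumPy. The function names detect_inserted_frames, rgb_brightness_stats, hsv_brightness_stats, fused_brightness and locate_inserted_frames are illustrative, not part of the patent; the latter four are sketched in the embodiments below.

```python
# Illustrative sketch only; the helper functions are sketched later in this
# description and are not part of the patented disclosure itself.
import cv2

def detect_inserted_frames(video_path, k=2.8, n=12):
    cap = cv2.VideoCapture(video_path)   # steps A/B: video -> frame image sequence
    lc = []
    while True:
        ok, frame_bgr = cap.read()
        if not ok:
            break
        frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
        eh, var_h = rgb_brightness_stats(frame_rgb)              # step C, RGB part
        es, var_s, ev, var_v = hsv_brightness_stats(frame_bgr)   # step C, HSV part
        lc.append(fused_brightness(eh, var_h, es, var_s, ev, var_v))  # step D
    cap.release()
    return locate_inserted_frames(lc, k=k, n=n)                  # steps E and F
```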
As an embodiment, for the values in the brightness information of step C above, let the brightness mean of the frame image be EH and the brightness variance be VarH^2, let the saturation mean of the frame image in the HSV color space be ES and the saturation variance be VarS^2, and let the value (brightness) mean of the frame image in the HSV color space be EV and the value variance be VarV^2.
To compute the above six-dimensional brightness features, both the brightness information of the RGB color space and the brightness information of the HSV color space are required. They are described in detail one by one below.
RGB brightness information:
Generally speaking, an image can be expressed as the product of the object's reflection coefficient and the illumination, namely:
I(x,y)=R(x,y)L(x,y),
where R(x, y) is the reflection coefficient, determined mainly by factors such as the material, shape and pose of the object and unrelated to the illumination, and L(x, y) represents the illumination intensity.
Extracting the reflection R(x, y) and the illumination intensity L(x, y) from the original image I(x, y) is difficult. However, if the illumination were removed entirely and the object placed in darkness, the captured image would be completely black. The illumination characteristics of an image are therefore clearly related to its gray values, and they can be approximately represented by the brightness distribution of the image.
The brightness H(x, y) of a frame image at (x, y) is defined as the maximum of the R, G and B primary-color values:
H(x,y)=max{R(x,y),G(x,y),B(x,y)}
where R(x, y), G(x, y) and B(x, y) respectively represent the gray values of the red, green and blue channels of frame image i at (x, y). From this H(x, y), EH can be obtained, and from EH, VarH^2 can be obtained:
EH = \frac{1}{NM}\sum_{x=1}^{N}\sum_{y=1}^{M} H(x,y),
VarH^2 = \frac{1}{NM}\sum_{x=1}^{N}\sum_{y=1}^{M}\bigl(H(x,y)-EH\bigr)^2,
where N and M are respectively the length and width (in pixels) of the frame image.
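A minimal sketch of this RGB brightness computation, assuming Python with NumPy; the function name rgb_brightness_stats is illustrative, and rescaling H(x, y) to [0, 1] is an added assumption standing in for the standardization mentioned in the abstract rather than part of the formula above.

```python
import numpy as np

def rgb_brightness_stats(frame_rgb):
    # H(x, y) = max{R(x, y), G(x, y), B(x, y)} at each pixel; the /255
    # rescaling is an assumption, not part of the patent's formula.
    h = frame_rgb.astype(np.float64).max(axis=2) / 255.0
    eh = h.mean()                    # EH: mean over all N*M pixels
    var_h = ((h - eh) ** 2).mean()   # VarH^2: variance around EH
    return eh, var_h
```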
HSV brightness information
HSV is a color space created according to the intuitive characteristics of color. Unlike the RGB color space, the color parameters in this model are hue (H), saturation (S) and value (brightness, V):
Hue (H): measured as an angle with a range of 0° to 360°, counted counterclockwise starting from red, as can be seen from Fig. 2; red is 0°, green is 120° and blue is 240°. Their complementary colors are yellow at 60°, cyan at 180° and magenta at 300°.
Saturation (S): the purity of the color; the higher the saturation, the purer the color, while low saturation shades gradually into gray. Its range is 0.0 to 1.0.
Value (V): the brightness of the color, with a range of 0.0 (black) to 1.0 (white). The value itself does not represent the illumination intensity.
It is worth mentioning that these three parameters are independent components. The HSV color space model is shown in Fig. 2.
In the HSV color space, only hue and saturation carry color information. Saturation is related to the lightness of a given hue: a spectrally pure color is fully saturated, and saturation decreases gradually as white light is added.
From the above introduction to the HSV color space model, the hue in the HSV color space can to some extent represent the color of the light; by the discussion above, hue should therefore not be used as illumination information. Saturation and value, however, can reflect the brightness in the image. It can be expected that, for a video shot continuously and steadily, the brightness of the video content will change as the illumination intensity changes. If a camera shoots the same scene at two different points in time, then even if the device parameters remain unchanged, a change in the illumination intensity of the scene itself will ultimately change the brightness of the captured video content. Therefore, although the HSV value and saturation of a single frame image cannot represent illumination intensity, for a video shot outdoors the continuity of value and saturation changes with the illumination intensity, so the HSV value and saturation can be used as brightness features for video authenticity detection. Here, the saturation and value of the HSV color space model are extracted from each frame image, and their means and variances are used as brightness information. The formulas are as follows:
ES = \frac{1}{NM}\sum_{x=1}^{N}\sum_{y=1}^{M} HSV(:,:,2)(x,y),
VarS^2 = \frac{1}{NM}\sum_{x=1}^{N}\sum_{y=1}^{M}\bigl(HSV(:,:,2)(x,y)-ES\bigr)^2,
EV = \frac{1}{NM}\sum_{x=1}^{N}\sum_{y=1}^{M} HSV(:,:,3)(x,y),
VarV^2 = \frac{1}{NM}\sum_{x=1}^{N}\sum_{y=1}^{M}\bigl(HSV(:,:,3)(x,y)-EV\bigr)^2,
where HSV(:,:,2) denotes the saturation channel of the HSV color space extracted from the RGB color space, HSV(:,:,3) denotes the value (brightness) channel of the HSV color space extracted from the RGB color space, and N and M are respectively the length and width of the frame image.
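A minimal sketch of the HSV feature computation above, assuming Python with OpenCV and NumPy and a BGR frame as returned by cv2; hsv_brightness_stats is an illustrative name, and the division by 255 simply maps OpenCV's 8-bit S and V channels onto the 0.0 to 1.0 ranges used above.

```python
import cv2
import numpy as np

def hsv_brightness_stats(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV).astype(np.float64)
    s = hsv[:, :, 1] / 255.0   # HSV(:,:,2): saturation, rescaled to [0, 1]
    v = hsv[:, :, 2] / 255.0   # HSV(:,:,3): value (brightness), rescaled to [0, 1]
    es, ev = s.mean(), v.mean()      # ES, EV
    var_s = ((s - es) ** 2).mean()   # VarS^2
    var_v = ((v - ev) ** 2).mean()   # VarV^2
    return es, var_s, ev, var_v
```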
As an embodiment, let the above fused brightness be LC and let i be the frame index in the frame image sequence; the fused brightness LC(i) of each frame image can then be obtained from the following formula:
LC(i) = \bigl(EH(i)+ES(i)+EV(i)\bigr)\cdot e^{-\left(\frac{VarH^2(i)}{2}+\frac{VarS^2(i)}{2}+\frac{VarV^2(i)}{2}\right)}, where:
EH(i) is the brightness mean of the i-th frame image;
VarH^2(i) is the brightness variance of the i-th frame image;
ES(i) is the saturation mean of the i-th frame image in the HSV color space;
VarS^2(i) is the saturation variance of the i-th frame image in the HSV color space;
EV(i) is the value (brightness) mean of the i-th frame image in the HSV color space;
VarV^2(i) is the value (brightness) variance of the i-th frame image in the HSV color space.
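A minimal sketch of this fusion step, assuming Python with NumPy and the six per-frame features computed as above; fused_brightness is an illustrative name.

```python
import numpy as np

def fused_brightness(eh, var_h, es, var_s, ev, var_v):
    # LC(i) = (EH(i) + ES(i) + EV(i)) * exp(-(VarH^2(i)/2 + VarS^2(i)/2 + VarV^2(i)/2))
    return (eh + es + ev) * np.exp(-(var_h / 2.0 + var_s / 2.0 + var_v / 2.0))
```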
As an embodiment, in step E above, let the Euclidean distance be LDur; it can be obtained from the following formula:
LDur(i) = \sqrt{\sum_{j=1}^{N}\bigl(LC(i)-LC(j)\bigr)^2}
As this formula shows, LDur expresses the relative relationship, in terms of illumination intensity, between each frame of the video frame sequence and the remaining frames.
In the above method, in step F, let the mean Euclidean distance be ELDur; it can be obtained from the following formula:
ELDur = \frac{1}{N}\sum_{i=1}^{N} LDur(i);
Since a video tampered with by frame insertion usually contains a fairly large number of inserted frames (at least ten or more), after extensive experiments, step F as an embodiment compares, for a preset number of consecutive frame images, the deviation of each frame image's Euclidean distance from the mean Euclidean distance, and locates those consecutive frame images as abnormal inserted frames when the deviations exceed the threshold.
Then, with the mean ELDur of LDur taken as a reference value, if several consecutive values in the LDur sequence of the video to be detected all deviate from ELDur by more than the deviation threshold, the frames corresponding to these values can be judged to be inserted frames.
As an embodiment, the preset number of consecutive frame images compared is not less than 12, and the deviation threshold k should be not less than 2.8.
Combining the two thresholds set above, the decision method for the deviation value is specifically as follows:
The deviation LDur(i)/ELDur is compared with the set threshold k; if n consecutive frames all satisfy the condition of being greater than the set deviation threshold k, then the frame images from i to i+n are all judged to be inserted frames.
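A minimal sketch of steps E and F as decided above, assuming Python with NumPy and a sequence lc of fused brightness values; locate_inserted_frames is an illustrative name, and the default run length n = 12 and threshold k = 2.8 follow the embodiment above.

```python
import numpy as np

def locate_inserted_frames(lc, k=2.8, n=12):
    lc = np.asarray(lc, dtype=np.float64)
    # LDur(i): Euclidean distance between LC(i) and all the other fused values
    ldur = np.sqrt(((lc[:, None] - lc[None, :]) ** 2).sum(axis=1))
    eldur = ldur.mean()                # ELDur: mean Euclidean distance
    deviant = ldur / eldur > k         # per-frame deviation test LDur(i)/ELDur > k
    flagged = np.zeros(len(lc), dtype=bool)
    run_start = None
    for i, d in enumerate(list(deviant) + [False]):   # trailing False closes the last run
        if d and run_start is None:
            run_start = i
        elif not d and run_start is not None:
            if i - run_start >= n:                    # only runs of at least n frames
                flagged[run_start:i] = True
            run_start = None
    return flagged
```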
The present invention also provides a video frame interpolation tampering detection device based on light intensity information, comprising:
an input module, for inputting the digital video to be detected and forwarding it to the conversion module;
a conversion module, for converting the digital video to be detected into a sequence of frame images and forwarding it to the extraction module;
an extraction module, for extracting the brightness information of each frame image and forwarding it to the fusion module; the brightness information comprises the brightness mean and brightness variance of the frame image, and the saturation mean, saturation variance, value (brightness) mean and value (brightness) variance of the frame image in the HSV color space;
a fusion module, for processing the brightness information to generate a fused brightness value and forwarding it to the computing module;
a computing module, for calculating the Euclidean distance between the fused brightness of each frame image and the fused brightness values of all the other frame images, and forwarding the result to the anomaly locating module;
an anomaly locating module, for calculating the mean Euclidean distance over all frame images, comparing each frame image's deviation of its Euclidean distance from the mean Euclidean distance, and, when the deviation exceeds the threshold, locating that frame image as an abnormal inserted frame.
In this device, each parameter is computed in the same way as in the above method, and the details are not repeated here.
As an embodiment, the above anomaly locating module is specifically configured to compare, for a preset number of consecutive frame images, the deviation of each frame image's Euclidean distance from the mean Euclidean distance, and, when all of these deviations exceed the threshold, to locate those consecutive frame images as abnormal inserted frames.
As an embodiment, in the above anomaly locating module, the preset number of consecutive frame images is no less than 12; the threshold of the deviation is no less than 2.8.
Application example:
An experimental video (size 720*576) was selected; as shown in Fig. 3(a) and Fig. 3(b), it is a composite of segments of the same scene shot by the same camera during two different time periods. The video has 433 frames in total, of which frames 1 to 150 and frames 181 to 433 were shot at 9:14 and frames 151 to 180 were shot at 10:44. Detecting this video with the technical solution of this patent, the resulting ELDur is 3.5112×10⁻⁶, and the LDur curve is shown in Fig. 4:
As can be clearly seen, the experimental result shows an abnormal bulge in the LDur values at positions 51 to 60. With ELDur equal to 3.5112×10⁻⁶, the ratio of every value in the bulge to ELDur is greater than the threshold 2.8. According to the record of the experimental video, frames 151 to 180 are a segment shot by the same camera during a different time period. Because the experiment sampled the video at a rate of one frame in every three, the anomaly should, by calculation, appear exactly at sequence positions 51 to 60. The experimental result agrees with the prediction, and the detection rate is 100%.
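For illustration, a hypothetical reproduction of this experiment using the sketches above: one frame out of every three is sampled, as in the experiment. The file name is made up, and lowering the run-length parameter to 10 is an added assumption, made because the 30 inserted frames become only 10 consecutive samples at this sampling rate.

```python
import cv2

cap = cv2.VideoCapture("experiment_720x576.avi")   # hypothetical file name
lc, stride, idx = [], 3, 0
while True:
    ok, frame_bgr = cap.read()
    if not ok:
        break
    if idx % stride == 0:                           # sample one frame in every three
        frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
        eh, var_h = rgb_brightness_stats(frame_rgb)
        es, var_s, ev, var_v = hsv_brightness_stats(frame_bgr)
        lc.append(fused_brightness(eh, var_h, es, var_s, ev, var_v))
    idx += 1
cap.release()
flagged = locate_inserted_frames(lc, k=2.8, n=10)   # n=10 is an assumption, see above
print([i for i in range(len(flagged)) if flagged[i]])   # expected: roughly the samples around 51 to 60
```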
The foregoing is merely embodiments of the present invention and does not thereby limit the scope of the claims of the present invention. Any equivalent structure or equivalent process transformation made using the content of the specification and accompanying drawings of the present invention, or any direct or indirect use in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (10)

1. A video frame interpolation tampering detection method based on light intensity information, characterized in that it comprises the steps of:
A) inputting the digital video to be detected;
B) converting the digital video to be detected into a sequence of frame images;
C) extracting brightness information from each frame image, the brightness information comprising the brightness mean and brightness variance of the frame image, and the saturation mean, saturation variance, value (brightness) mean and value (brightness) variance of the frame image in the HSV color space;
D) processing the brightness information to generate a fused brightness value;
E) calculating the Euclidean distance between the fused brightness of each frame image and the fused brightness values of all the other frame images;
F) calculating the mean Euclidean distance over all frame images, comparing each frame image's deviation of its Euclidean distance from the mean Euclidean distance, and, when the deviation exceeds a threshold, locating that frame image as an abnormal inserted frame.
2. The video frame interpolation tampering detection method based on light intensity information according to claim 1, characterized in that, in step C, the brightness mean of the frame image in the brightness information is denoted EH and the brightness variance VarH^2, the saturation mean of the frame image in the HSV color space is denoted ES and the saturation variance VarS^2, and the value (brightness) mean is denoted EV and the value variance VarV^2; wherein:
EH = \frac{1}{NM}\sum_{x=1}^{N}\sum_{y=1}^{M} H(x,y),
VarH^2 = \frac{1}{NM}\sum_{x=1}^{N}\sum_{y=1}^{M}\bigl(H(x,y)-EH\bigr)^2,
ES = \frac{1}{NM}\sum_{x=1}^{N}\sum_{y=1}^{M} HSV(:,:,2)(x,y),
VarS^2 = \frac{1}{NM}\sum_{x=1}^{N}\sum_{y=1}^{M}\bigl(HSV(:,:,2)(x,y)-ES\bigr)^2,
EV = \frac{1}{NM}\sum_{x=1}^{N}\sum_{y=1}^{M} HSV(:,:,3)(x,y),
VarV^2 = \frac{1}{NM}\sum_{x=1}^{N}\sum_{y=1}^{M}\bigl(HSV(:,:,3)(x,y)-EV\bigr)^2,
where H(x, y) is the maximum of the R, G and B primary-color values of the frame image at (x, y), N and M are respectively the number of pixels along the length and width of the frame image, HSV(:,:,2) denotes the saturation channel of the HSV color space extracted from the RGB color space, and HSV(:,:,3) denotes the value (brightness) channel of the HSV color space extracted from the RGB color space.
3. The video frame interpolation tampering detection method based on light intensity information according to claim 2, characterized in that the fused brightness is denoted LC and i is the frame index in the frame image sequence; the fused brightness LC(i) of each frame image can then be obtained from the following formula:
LC(i) = \bigl(EH(i)+ES(i)+EV(i)\bigr)\cdot e^{-\left(\frac{VarH^2(i)}{2}+\frac{VarS^2(i)}{2}+\frac{VarV^2(i)}{2}\right)}, where:
EH(i) is the brightness mean of the i-th frame image;
VarH^2(i) is the brightness variance of the i-th frame image;
ES(i) is the saturation mean of the i-th frame image in the HSV color space;
VarS^2(i) is the saturation variance of the i-th frame image in the HSV color space;
EV(i) is the value (brightness) mean of the i-th frame image in the HSV color space;
VarV^2(i) is the value (brightness) variance of the i-th frame image in the HSV color space.
4. The video frame interpolation tampering detection method based on light intensity information according to claim 3, characterized in that, in step E, the Euclidean distance is denoted LDur and can be obtained from the following formula:
LDur(i) = \sqrt{\sum_{j=1}^{N}\bigl(LC(i)-LC(j)\bigr)^2}.
5. The video frame interpolation tampering detection method based on light intensity information according to claim 4, characterized in that, in step F, the mean Euclidean distance is denoted ELDur and can be obtained from the following formula:
ELDur = \frac{1}{N}\sum_{i=1}^{N} LDur(i).
6. The video frame interpolation tampering detection method based on light intensity information according to any one of claims 1-5, characterized in that step F is specifically:
comparing, for a preset number of consecutive frame images, the deviation of each frame image's Euclidean distance from the mean Euclidean distance, and, when all of these deviations exceed the threshold, locating those consecutive frame images as abnormal inserted frames.
7. The video frame interpolation tampering detection method based on light intensity information according to any one of claims 1-5, characterized in that, in step F, the preset number of consecutive frame images is no less than 12; the threshold of the deviation is no less than 2.8.
8. A video frame interpolation tampering detection device based on light intensity information, characterized in that it comprises:
an input module, for inputting the digital video to be detected and forwarding it to the conversion module;
a conversion module, for converting the digital video to be detected into a sequence of frame images and forwarding it to the extraction module;
an extraction module, for extracting the brightness information of each frame image and forwarding it to the fusion module, the brightness information comprising the brightness mean and brightness variance of the frame image, and the saturation mean, saturation variance, value (brightness) mean and value (brightness) variance of the frame image in the HSV color space;
a fusion module, for processing the brightness information to generate a fused brightness value and forwarding it to the computing module;
a computing module, for calculating the Euclidean distance between the fused brightness of each frame image and the fused brightness values of all the other frame images and forwarding the result to the anomaly locating module;
an anomaly locating module, for calculating the mean Euclidean distance over all frame images, comparing each frame image's deviation of its Euclidean distance from the mean Euclidean distance, and, when the deviation exceeds the threshold, locating that frame image as an abnormal inserted frame.
9. The video frame interpolation tampering detection device based on light intensity information according to claim 8, characterized in that the anomaly locating module is specifically configured to compare, for a preset number of consecutive frame images, the deviation of each frame image's Euclidean distance from the mean Euclidean distance, and, when all of these deviations exceed the threshold, to locate those consecutive frame images as abnormal inserted frames.
10. The video frame interpolation tampering detection device according to claim 8 or 9, characterized in that, in the anomaly locating module, the preset number of consecutive frame images is no less than 12; the threshold of the deviation is no less than 2.8.
CN201310651629.8A 2013-12-05 2013-12-05 Video frame interpolation tampering detection method and device based on light intensity information Expired - Fee Related CN103618899B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310651629.8A CN103618899B (en) 2013-12-05 2013-12-05 Video frame interpolation tampering detection method and device based on light intensity information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310651629.8A CN103618899B (en) 2013-12-05 2013-12-05 Video frame interpolation tampering detection method and device based on light intensity information

Publications (2)

Publication Number Publication Date
CN103618899A true CN103618899A (en) 2014-03-05
CN103618899B CN103618899B (en) 2016-08-17

Family

ID=50169603

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310651629.8A Expired - Fee Related CN103618899B (en) 2013-12-05 2013-12-05 Video frame interpolation tampering detection method and device based on light intensity information

Country Status (1)

Country Link
CN (1) CN103618899B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106878704A (en) * 2017-02-14 2017-06-20 福建师范大学 Turn altering detecting method on video frame rate based on light stream cyclophysis
CN110049205A (en) * 2019-04-26 2019-07-23 湖南科技大学 The detection method that video motion compensation frame interpolation based on Chebyshev matrix is distorted
CN110418129A (en) * 2019-07-19 2019-11-05 长沙理工大学 Digital video interframe altering detecting method and system
CN116546269A (en) * 2023-05-12 2023-08-04 应急管理部大数据中心 Network traffic cleaning method, system and equipment for media stream frame insertion

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101835040B (en) * 2010-03-17 2012-07-04 天津大学 Digital video source evidence forensics method
CN103313142B (en) * 2013-05-26 2016-02-24 中国传媒大学 The video content safety responsibility identification of triple play oriented

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106878704A (en) * 2017-02-14 2017-06-20 福建师范大学 Turn altering detecting method on video frame rate based on light stream cyclophysis
CN110049205A (en) * 2019-04-26 2019-07-23 湖南科技大学 The detection method that video motion compensation frame interpolation based on Chebyshev matrix is distorted
CN110418129A (en) * 2019-07-19 2019-11-05 长沙理工大学 Digital video interframe altering detecting method and system
CN110418129B (en) * 2019-07-19 2021-03-02 长沙理工大学 Digital video interframe tampering detection method and system
CN116546269A (en) * 2023-05-12 2023-08-04 应急管理部大数据中心 Network traffic cleaning method, system and equipment for media stream frame insertion
CN116546269B (en) * 2023-05-12 2024-01-30 应急管理部大数据中心 Network traffic cleaning method, system and equipment for media stream frame insertion

Also Published As

Publication number Publication date
CN103618899B (en) 2016-08-17

Similar Documents

Publication Publication Date Title
CN103400150B (en) A kind of method and device that road edge identification is carried out based on mobile platform
CN110322522B (en) Vehicle color recognition method based on target recognition area interception
WO2005066896A3 (en) Detection of sky in digital color images
CN103839255B (en) Video keying altering detecting method and device
CN103747271B (en) Video tamper detection method and device based on mixed perceptual hashing
CN106412619A (en) HSV color histogram and DCT perceptual hash based lens boundary detection method
CN106951898B (en) Vehicle candidate area recommendation method and system and electronic equipment
CN103618899A (en) Video frame interpolation detecting method and device based on light intensity information
Singh et al. Detection and localization of copy-paste forgeries in digital videos
CN110009621B (en) Tamper video detection method, tamper video detection device, tamper video detection equipment and readable storage medium
CN104766071A (en) Rapid traffic light detection algorithm applied to pilotless automobile
CN101493937A (en) Method for detecting content reliability of digital picture by utilizing gradient local entropy
CN104143077B (en) Pedestrian target search method and system based on image
Bagiwa et al. Chroma key background detection for digital video using statistical correlation of blurring artifact
CN103544703A (en) Digital image stitching detecting method
CN111815528A (en) Bad weather image classification enhancement method based on convolution model and feature fusion
CN110263693A (en) In conjunction with the traffic detection recognition method of inter-frame difference and Bayes classifier
CN102567987A (en) Method for detecting manual fuzzy operation trace in image synthesis tampering
TW201032180A (en) Method and device for keeping image background by multiple gauss models
CN112104869A (en) Video big data storage and transcoding optimization system
CN102088539A (en) Method and system for evaluating pre-shot picture quality
CN104021518B (en) Reversible data hiding method for high dynamic range image
Cavallaro et al. Accurate video object segmentation through change detection
CN103544692A (en) Blind detection method for tamper with double-compressed JPEG (joint photographic experts group) images on basis of statistical judgment
CN106559714A (en) A kind of extraction method of key frame towards digital video copyright protection

Legal Events

Date Code Title Description
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160817

Termination date: 20161205