CN109740546A - Forged-video detection method for a tampered region that has undergone geometric transformation - Google Patents

Forged-video detection method for a tampered region that has undergone geometric transformation Download PDF

Info

Publication number
CN109740546A
CN109740546A CN201910014466.XA CN201910014466A CN109740546A
Authority
CN
China
Prior art keywords
frame
point
video
characteristic point
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910014466.XA
Other languages
Chinese (zh)
Inventor
苏立超
王石平
罗欢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fuzhou University
Original Assignee
Fuzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuzhou University filed Critical Fuzhou University
Priority to CN201910014466.XA priority Critical patent/CN109740546A/en
Publication of CN109740546A publication Critical patent/CN109740546A/en
Pending legal-status Critical Current

Abstract

The present invention relates to a forged-video detection method for a tampered region that has undergone geometric transformation. The method mainly comprises: extracting feature points with an improved MI-SIFT algorithm; determining matched feature points from the ratio of the nearest-neighbor to the next-nearest-neighbor Euclidean distance; eliminating mismatched points by exploiting intrinsic properties of tampered regions; and locating the tampered region in subsequent frames with a spatio-temporal context tracking algorithm. The invention achieves effective detection of region copy-move tampering that has undergone various geometric transformations such as translation, rotation, scaling and mirroring, with markedly improved detection accuracy and time efficiency, and thus provides an effective means of forensic detection of video region tampering.

Description

Forged-video detection method for a tampered region that has undergone geometric transformation
Technical field
The present invention relates to the field of Internet and multimedia technology, and in particular to a forged-video detection method for a tampered region that has undergone geometric transformation.
Background technique
With the rapid development of the Internet and multimedia technology and the widespread use of digital video, the authenticity of digital video content has become an increasingly serious problem. Many powerful video-processing tools keep emerging, so that even laypeople can easily modify video content and pass the fake off as genuine. After tampering, the content and meaning of a video usually change; if such videos are used illegally, they may pervert the course of justice and even threaten social stability. Authenticating the source and content of digital video is therefore increasingly urgent, and digital video forensics has become one of the most important research topics in the field of information security.
Region copy-move is a common operation in video tampering: by adding or occluding existing regions in a video, it changes the content the video expresses. In general, a forger may apply certain transformations (such as translation, rotation, enlargement, reduction or mirroring) to the tampered region as needed, to make the forgery more convincing and to reduce the probability that the tampering is discovered.
The massive volume, high dimensionality and non-linearity of digital video data pose enormous challenges to research on video-tampering forensics. Although many scholars at home and abroad have carried out related studies and proposed a variety of tampering-detection methods, these methods still suffer from low detection accuracy, high complexity and strong limitations, and fall considerably short of practical requirements. How to reduce the volume of extracted feature data and the complexity of the detection algorithm, and thereby improve detection efficiency, while guaranteeing detection accuracy and algorithm robustness, has therefore become a key problem in research on passive digital-video forensics.
Summary of the invention
In view of this, the purpose of the present invention is to propose a forged-video detection method for a tampered region that has undergone geometric transformation, capable of effectively detecting region copy-move tampering that has undergone various geometric transformations such as translation, rotation, scaling and mirroring.
The present invention is realized by the following scheme: a forged-video detection method for a tampered region that has undergone geometric transformation, specifically comprising the following steps:
Step S1: extract feature points from the current frame of the video under test;
Step S2: obtain the feature-point matching sets;
Step S3: eliminate mismatched points from the feature-point matching sets;
Step S4: judge whether the current frame is a tampered frame; if so, go to step S5, otherwise return to step S1;
Step S5: locate the suspected tampered region in the subsequent frames, and verify the suspected tampered region of each subsequent frame in turn;
Step S6: judge whether the current frame is the last frame; if so, stop detection, otherwise return to step S1.
Further, in step S1, an improved MI-SIFT feature-extraction algorithm with good invariance to translation, rotation, scale and mirroring is used, so that the method not only resists a variety of post-processing operations on the tampered region but also effectively reduces the feature dimensionality, improving the time efficiency of the algorithm.
Further, step S1 specifically comprises the following steps:
Step S11: before feature extraction, shrink the video frame to 80% of its original size;
Step S12: denote the current frame of the video under test as I_current, and extract the feature points of I_current with the MI-SIFT algorithm to obtain a feature matrix MX, each feature point carrying 128 components, where MX is given by:
MX = [mx_1; mx_2; ...; mx_n; ...], with mx_n = (f_n1, f_n2, ..., f_n128);
In the formula, f_nb denotes the b-th component of the n-th feature point mx_n;
Step S13: apply principal component analysis to reduce the dimensionality of MX, lowering the feature dimensionality of each feature point to 32.
Further, in step S2, matched feature points are determined from the ratio of the nearest-neighbor to the next-nearest-neighbor distance, which overcomes the poor matching obtained with a single global threshold; specifically:
Step S21: for each feature point, compute its Euclidean distances to all other feature points and sort them in ascending order, written as:
MD_n = {md_n1, md_n2, ..., md_ni, ...}, 0 < n < b-1, 1 < i < b;
In the formula, MD_n is the vector of Euclidean distances from the n-th feature point to the other feature points, sorted in ascending order; md_n1 is the distance from the n-th feature point to its nearest neighbor, md_n2 the distance to its next-nearest neighbor, and so on;
Step S22: let md_n1 be the distance from the n-th point to its nearest feature point and md_n2 the distance to its next-nearest feature point; for the n-th point and its nearest neighbor to form a matching pair, they must satisfy:
md_n1 / md_n2 < T_md, where T_md ∈ (0, 1);
In the formula, T_md is the preset Euclidean-distance ratio threshold;
Step S23: repeat steps S21 and S22 until every feature point of the current frame has been through the above matching, finally obtaining the feature-point matching sets MQ and MW of the current frame, i.e. MQ = {mq_1, mq_2, ..., mq_i, ...} and MW = {mw_1, mw_2, ..., mw_i, ...}, where mq_i and mw_i are a corresponding pair of matched points.
Further, step S3 specifically comprises the following steps:
Step S31: exploiting the fact that the source region and the tampered region usually lie a certain distance apart, set a minimum distance and declare any matching pair closer than this minimum distance a mismatch;
Step S32: exploiting the fact that a tampered region usually has a significant size, declare feature points that lie in regions smaller than a preset size to be mismatches;
Step S33: classify the feature points retained after steps S31-S32 with a clustering algorithm, and delete matching pairs whose two points fall in the same class.
Further, step S4 is specifically: count the number of retained matching pairs; if the number is greater than 5, judge the current frame to be a tampered frame; otherwise judge it to be a genuine frame.
Further, in step S5, exploiting the facts that tampered frames occur consecutively and that the tampered regions of adjacent tampered frames change little, the suspected tampered region of the subsequent frames is located with a spatio-temporal context tracking algorithm.
Further, step S5 specifically comprises the following steps:
Step S51: suppose the current frame is the k-th frame of the video sequence; take one feature-point matching set MQ of the k-th frame as the initial target position t*;
Step S52: compute the spatial context model according to:
h_k^sc(t) = F^-1( F( b · exp(−|(t − t*)/α|^β) ) / F( I(t) · ω_σ(t − t*) ) );
In the formula, F denotes the fast Fourier transform and F^-1 its inverse; ω_σ is a Gaussian weighting function; t* is the target position; b is a normalization constant; α is a scale parameter, with α = 2.25 and β = 1; and I(t) is the image gray value at t;
Step S53: update the spatio-temporal context model needed for the next frame:
H_{k+1}^stc = (1 − ρ) · H_k^stc + ρ · h_k^sc;
In the formula, ρ is a learning parameter, ρ = 0.075; in the first frame of the tampered video sequence, the spatio-temporal context model H_1^stc and the spatial context model h_1^sc are initialized to be equal: H_1^stc = h_1^sc;
Step S54: compute the confidence map according to:
c_{k+1}(t) = F^-1( F(H_{k+1}^stc(t)) · F(I_{k+1}(t) · ω_σ(t − t_k*)) );
Step S55: maximize the confidence map according to:
t_{k+1}* = arg max_t c_{k+1}(t);
the position of the maximum is the target position;
Step S56: once tracking with the feature-point matching set MQ as the initial target position is finished and one suspected location α1 of the tampered video sequence has been obtained, take the other feature-point matching set MW as the initial target position and repeat steps S52 to S55 to obtain the other suspected location α2 of the tampered sequence;
Step S57: perform feature-point matching between the two suspected locations of each frame; if the number of matched points is greater than 5, judge the frame to be a tampered frame; otherwise judge it to be a genuine frame.
Further, when any one or more of the following situations is encountered, the operation stops, and the frame number at the stopping point is recorded as stop:
Situation 1: the tracked region goes beyond the bounds of the video frame;
Situation 2: the radius of the tracked region is smaller than the threshold T_nei;
Situation 3: the shortest straight-line distance between the two tracked regions is smaller than the threshold T_d;
Situation 4: the last frame of the video is passed.
Compared with the prior art, the present invention has the following beneficial effects: it achieves effective detection of region copy-move tampering in video that has undergone various geometric transformations such as translation, rotation, scaling and mirroring, with markedly improved detection accuracy and time efficiency, and thus provides an effective means of forensic detection of video region tampering.
Detailed description of the invention
Fig. 1 is a flow diagram of an embodiment of the present invention.
Fig. 2 is a flowchart of tampered-region detection in subsequent frames in an embodiment of the present invention.
Specific embodiment
The present invention will be further described below with reference to the accompanying drawings and embodiments.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the present application. Unless otherwise indicated, all technical and scientific terms used herein have the meanings commonly understood by those of ordinary skill in the art to which the present application belongs.
It should also be noted that the terminology used herein is merely for describing particular embodiments and is not intended to limit the exemplary embodiments of the present application. As used herein, unless the context clearly indicates otherwise, singular forms are intended to include plural forms as well; it should further be understood that the terms "comprising" and/or "including", when used in this specification, indicate the presence of the stated features, steps, operations, devices, components and/or combinations thereof.
As shown in Fig. 1, the present embodiment provides a forged-video detection method for a tampered region that has undergone geometric transformation, specifically comprising the following steps:
Step S1: extract feature points from the current frame of the video under test;
Step S2: obtain the feature-point matching sets;
Step S3: eliminate mismatched points from the feature-point matching sets;
Step S4: judge whether the current frame is a tampered frame; if so, go to step S5, otherwise return to step S1;
Step S5: locate the suspected tampered region in the subsequent frames, and verify the suspected tampered region of each subsequent frame in turn;
Step S6: judge whether the current frame is the last frame; if so, stop detection, otherwise return to step S1.
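The patent publishes no reference code; the following Python sketch only illustrates the S1-S6 control flow of the embodiment. The four callables stand in for the per-step algorithms detailed below, and all identifiers (including the frame-skip parameter MT, introduced later in the description) are illustrative, not the patent's.

```python
def detect_tampering(frames, extract_features, match_points, prune_mismatches,
                     track_subsequent, MT=16):
    """Skeleton of the S1-S6 detection loop (illustrative sketch).

    The four callables stand in for steps S1, S2, S3 and S5; MT is the
    frame-skip interval applied after a frame judged genuine.
    """
    results, k = [], 0
    while k < len(frames):                                    # S6: stop at last frame
        pts = extract_features(frames[k])                     # S1: feature points
        MQ, MW = match_points(pts)                            # S2: matching sets
        MQ, MW = prune_mismatches(MQ, MW)                     # S3: drop mismatches
        if len(MQ) > 5:                                       # S4: tampered frame?
            results.append(track_subsequent(frames, k, MQ, MW))  # S5: track region
            k += 1
        else:
            k += MT                                           # genuine: skip MT frames
    return results
```

The `len(MQ) > 5` test mirrors the embodiment's rule that more than 5 retained matching pairs marks a tampered frame.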
In the present embodiment, in step S1, an improved MI-SIFT (Mirror and Inversion Invariant Generalization for SIFT Descriptor) feature-extraction algorithm with good invariance to translation, rotation, scale and mirroring is used, so that the method not only resists a variety of post-processing operations on the tampered region but also effectively reduces the feature dimensionality, improving the time efficiency of the algorithm.
In the present embodiment, step S1 specifically comprises the following steps:
Step S11: before feature extraction, shrink the video frame to 80% of its original size;
Step S12: denote the current frame of the video under test as I_current, and extract the feature points of I_current with the MI-SIFT algorithm to obtain a feature matrix MX, each feature point carrying 128 components, where MX is given by:
MX = [mx_1; mx_2; ...; mx_n; ...], with mx_n = (f_n1, f_n2, ..., f_n128);
In the formula, f_nb denotes the b-th component of the n-th feature point mx_n;
Step S13: apply principal component analysis to reduce the dimensionality of MX, lowering the feature dimensionality of each feature point to 32.
Preferably, the dimensionality reduction with principal component analysis proceeds as follows:
Step S131: compute the correlation matrix PR according to:
PR = (pr_ij)_{b×b}, with pr_ij = cov(mx_i, mx_j) / sqrt( var(mx_i) · var(mx_j) );
In the formula, mx_i and mx_j are the i-th and j-th feature points in MX, and mx_ki denotes the k-th component of the i-th feature point;
pr_ij (i, j = 1, 2, ..., b) is the correlation coefficient of mx_i and mx_j, and pr_ij = pr_ji;
Step S132: solve the characteristic equation |λI − PR| = 0 to obtain the eigenvalues λ of PR and their corresponding eigenvectors l;
Step S133: compute the contribution rate and cumulative contribution rate of each component according to:
contribution rate η_i = λ_i / (λ_1 + λ_2 + ... + λ_b); cumulative contribution rate H_i = η_1 + η_2 + ... + η_i;
Step S134: sort the contribution rates in descending order, select the eigenvalues λ_1, λ_2, ..., λ_m corresponding to the top 32 contribution rates together with their eigenvectors, and compute the corresponding principal components. In this way the feature dimensionality of each feature point is finally reduced to 32, so as to improve the detection efficiency of the algorithm.
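As a concrete, unofficial reading of steps S131-S134, the numpy sketch below reduces 128-dimensional descriptors to 32 dimensions with PCA on a correlation matrix; function and variable names are ours, not the patent's.

```python
import numpy as np

def pca_reduce(MX, out_dim=32):
    """Reduce descriptors to out_dim dimensions via PCA on the correlation
    matrix, following steps S131-S134 (illustrative sketch)."""
    # S131: correlation matrix of the descriptor components
    PR = np.corrcoef(MX, rowvar=False)
    # S132: eigenvalues and eigenvectors of PR (eigh returns ascending order)
    eigvals, eigvecs = np.linalg.eigh(PR)
    # S133/S134: rank by contribution rate (descending) and keep the top components
    order = np.argsort(eigvals)[::-1][:out_dim]
    components = eigvecs[:, order]
    # project the standardized descriptors onto the selected principal components
    Z = (MX - MX.mean(axis=0)) / MX.std(axis=0)
    return Z @ components
```

Called on an (n_points × 128) feature matrix, this returns an (n_points × 32) matrix, matching the embodiment's target dimensionality.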
In measuring feature matching, the similarity of any two feature points is measured by the Euclidean distance, computed as:
d(mx_i, mx_j) = sqrt( (f_i1 − f_j1)² + (f_i2 − f_j2)² + ... );
A smaller Euclidean distance indicates greater similarity between the two feature points, and thus a greater possibility that they lie in the source region and the tampered region of the video frame. However, simply using one global threshold to decide whether two feature points form a matching pair cannot achieve satisfactory results, because the optimal threshold often differs between videos of different content. The present invention therefore determines matched feature points from the ratio of the nearest-neighbor to the next-nearest-neighbor distance. Specifically, in step S2, matched feature points are determined from this ratio, which overcomes the poor matching obtained with a single global threshold:
Step S21: for each feature point, compute its Euclidean distances to all other feature points and sort them in ascending order, written as:
MD_n = {md_n1, md_n2, ..., md_ni, ...}, 0 < n < b-1, 1 < i < b;
In the formula, MD_n is the vector of Euclidean distances from the n-th feature point to the other feature points, sorted in ascending order; md_n1 is the distance from the n-th feature point to its nearest neighbor, md_n2 the distance to its next-nearest neighbor, and so on;
Step S22: let md_n1 be the distance from the n-th point to its nearest feature point and md_n2 the distance to its next-nearest feature point; for the n-th point and its nearest neighbor to form a matching pair, they must satisfy:
md_n1 / md_n2 < T_md, where T_md ∈ (0, 1);
In the formula, T_md is the preset Euclidean-distance ratio threshold; after extensive experiments, the present embodiment chooses T_md = 0.6;
Step S23: repeat steps S21 and S22 until every feature point of the current frame has been through the above matching, finally obtaining the feature-point matching sets MQ and MW of the current frame, i.e. MQ = {mq_1, mq_2, ..., mq_i, ...} and MW = {mw_1, mw_2, ..., mw_i, ...}, where mq_i and mw_i are a corresponding pair of matched points. Loc(mq_i) and Loc(mw_i) denote the positions of mq_i and mw_i in the current frame, respectively.
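A minimal numpy sketch of the ratio test of steps S21-S23, matching a frame's descriptors against themselves (as copy-move detection requires). The threshold T_md = 0.6 is the value the embodiment reports choosing experimentally; all identifiers are illustrative.

```python
import numpy as np

def ratio_match(desc, T_md=0.6):
    """Nearest-to-next-nearest Euclidean distance ratio test (steps S21-S23).

    desc: (n, d) array of descriptors of one frame.
    Returns index lists MQ, MW of matched pairs (illustrative sketch).
    """
    MQ, MW = [], []
    # pairwise Euclidean distances between all descriptors
    d = np.linalg.norm(desc[:, None, :] - desc[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)                  # exclude self-distance
    for i in range(len(desc)):
        order = np.argsort(d[i])                 # S21: ascending distances
        md1, md2 = d[i, order[0]], d[i, order[1]]
        if md2 > 0 and md1 / md2 < T_md:         # S22: ratio test
            MQ.append(i)
            MW.append(int(order[0]))
    return MQ, MW
```

A point whose nearest neighbor is far closer than its second-nearest neighbor is a strong duplicate candidate, which is exactly the situation a copied-and-pasted region creates.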
In the present embodiment, owing to factors such as noise, MQ and MW may contain some mismatched points. To reject them and further improve the detection accuracy of the algorithm, the present embodiment determines the tampered region of the current frame in the following three steps; step S3 specifically comprises:
Step S31: in actual region copy-move tampering, the source region and the tampered region usually lie a certain distance apart; if the two are too close, viewers can easily spot the tampered region through intuitive visual cues such as shadow or illumination conflicts. Exploiting this, a minimum distance is set and any matching pair closer than this minimum distance is declared a mismatch. Specifically, every matching pair mq_i and mw_i in MQ and MW must satisfy:
dis(mq_i, mw_i) > T_d, where dis(mq_i, mw_i) = ||Loc(mq_i) − Loc(mw_i)||;
After extensive experiments, the present embodiment sets T_d = 20; if a pair does not satisfy the above formula, mq_i and mw_i are declared a mismatched pair and deleted from MQ and MW;
Step S32: in region copy-move tampering, the tampered region usually has a significant size; if the tampered region were too small, the tampering would become meaningless. Exploiting this, feature points lying in regions smaller than a preset size are declared mismatches. Specifically: every point m_i in MQ and MW (m_i ∈ MQ ∪ MW) must have at least 3 other matched points within a radius of T_nei = 30; otherwise m_i and its corresponding matched point are deleted.
Step S33: classify the feature points retained after steps S31-S32 with the k-means clustering algorithm, and delete matching pairs whose two points appear in the same class. That is: after classification, the sets MQ' and MW' are obtained; to ensure that the numbers of elements in MQ' and MW' are equal, for any matched point mq_i and its corresponding point mw_i, if (mq_i, mw_i) satisfies:
mq_i ∈ MQ' and mw_i ∈ MQ'
or
mq_i ∈ MW' and mw_i ∈ MW'
then (mq_i, mw_i) is deleted; otherwise it is retained.
After these three steps, the mismatched points in MQ and MW can be considered removed.
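The distance filter (S31) and the neighborhood-size filter (S32) can be sketched as below, under our own naming; the k-means step S33 is omitted for brevity. T_d = 20, T_nei = 30 and the 3-neighbor minimum are the experimentally chosen values the text gives.

```python
import numpy as np

def filter_mismatches(loc_q, loc_w, T_d=20.0, T_nei=30.0, min_neighbors=3):
    """Mismatch elimination for steps S31-S32 (illustrative sketch).

    loc_q, loc_w: (n, 2) arrays of matched point positions in the frame.
    Returns a boolean mask of pairs that survive both filters.
    """
    loc_q = np.asarray(loc_q, float)
    loc_w = np.asarray(loc_w, float)
    # S31: source and tampered regions must lie more than T_d pixels apart
    keep = np.linalg.norm(loc_q - loc_w, axis=1) > T_d
    # S32: every surviving point needs >= min_neighbors other matched points
    # within radius T_nei (a tampered region has significant size)
    pts = np.vstack([loc_q, loc_w])
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    nei = (d <= T_nei).sum(axis=1) - 1           # exclude the point itself
    n = len(loc_q)
    keep &= (nei[:n] >= min_neighbors) & (nei[n:] >= min_neighbors)
    return keep
```

Dense clusters of matches that are far from their counterparts survive; isolated or too-close pairs are discarded.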
In the present embodiment, step S4 is specifically: count the number of retained matching pairs; if the number is greater than 5, judge the current frame to be a tampered frame; otherwise judge it to be a genuine frame.
Owing to the visual characteristics of the human eye, a video looks smooth only when the frame rate exceeds 16 fps. Hence, for most videos, it can be assumed that the number of tampered frames is at least 16, since otherwise the purpose and effect of the tampering would not be achieved. Based on this, and to improve detection efficiency, a detection interval parameter MT is further defined: if the current frame is judged genuine, set current = current + MT, i.e. take the (current + MT)-th frame as the new current frame and return to the previous steps for a new round of detection; if the current frame is judged tampered, proceed to the following steps to locate the tampered region in subsequent frames.
In the present embodiment, in step S5, if the current frame is judged to be a tampered frame, the subsequent frames are highly likely to be tampered as well, and the tampered regions of adjacent tampered frames change little. Thanks to this property, the algorithm need not run the above detection and judgment frame by frame on the sequence after the tampered frame; the spatio-temporal context (STC) tracking algorithm can instead be used to locate the tampered region in subsequent frames, improving detection efficiency.
In the present embodiment, step S5 specifically comprises the following steps:
Step S51: suppose the current frame is the k-th frame of the video sequence; take one feature-point matching set MQ of the k-th frame as the initial target position t*;
Step S52: compute the spatial context model according to:
h_k^sc(t) = F^-1( F( b · exp(−|(t − t*)/α|^β) ) / F( I(t) · ω_σ(t − t*) ) );
In the formula, F denotes the fast Fourier transform and F^-1 its inverse; ω_σ is a Gaussian weighting function; t* is the target position; b is a normalization constant; α is a scale parameter, with α = 2.25 and β = 1; and I(t) is the image gray value at t;
Step S53: update the spatio-temporal context model needed for the next frame:
H_{k+1}^stc = (1 − ρ) · H_k^stc + ρ · h_k^sc;
In the formula, ρ is a learning parameter, ρ = 0.075; in the first frame of the tampered video sequence, the spatio-temporal context model H_1^stc and the spatial context model h_1^sc are initialized to be equal: H_1^stc = h_1^sc;
Step S54: compute the confidence map according to:
c_{k+1}(t) = F^-1( F(H_{k+1}^stc(t)) · F(I_{k+1}(t) · ω_σ(t − t_k*)) );
Step S55: maximize the confidence map according to:
t_{k+1}* = arg max_t c_{k+1}(t);
the position of the maximum is the target position;
Step S56: once tracking with the feature-point matching set MQ as the initial target position is finished and one suspected location α1 of the tampered video sequence has been obtained, take the other feature-point matching set MW as the initial target position and repeat steps S52 to S55 to obtain the other suspected location α2 of the tampered sequence;
Step S57: perform feature-point matching between the two suspected locations of each frame; if the number of matched points is greater than 5, judge the frame to be a tampered frame; otherwise judge it to be a genuine frame.
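Steps S54-S55 amount to one FFT-based convolution followed by an argmax. The numpy sketch below implements that single tracking step; it is an illustrative reading of the standard STC update, not the patent's code, and the Gaussian width sigma is an assumed parameter.

```python
import numpy as np

def stc_step(H_stc, frame, t_star, sigma=10.0):
    """One spatio-temporal-context tracking step (steps S54-S55, sketch).

    H_stc: learned context model (same shape as frame).
    frame: gray-value image of the next frame.
    t_star: (row, col) of the previous target position.
    Returns the (row, col) of the confidence-map maximum.
    """
    h, w = frame.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Gaussian weight ω_σ centred on the previous target position t*
    omega = np.exp(-((yy - t_star[0]) ** 2 + (xx - t_star[1]) ** 2) / (2 * sigma ** 2))
    context = frame * omega
    # S54: confidence map c(t) = F^-1( F(H_stc) · F(I(t) ω_σ(t − t*)) )
    conf = np.real(np.fft.ifft2(np.fft.fft2(H_stc) * np.fft.fft2(context)))
    # S55: the new target position is the argmax of the confidence map
    return np.unravel_index(np.argmax(conf), conf.shape)
```

Working in the Fourier domain turns the dense spatial convolution into an element-wise product, which is the efficiency gain the STC tracker is built on.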
In the present embodiment, when any one or more of the following situations is encountered, the operation stops, and the frame number at the stopping point is recorded as stop:
Situation 1: the tracked region goes beyond the bounds of the video frame;
Situation 2: the radius of the tracked region is smaller than the threshold T_nei;
Situation 3: the shortest straight-line distance between the two tracked regions is smaller than the threshold T_d;
Situation 4: the last frame of the video is passed.
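The four stop conditions can be combined into one predicate; this sketch uses illustrative names and the thresholds T_nei = 30 and T_d = 20 given earlier in the embodiment.

```python
def should_stop(region_center, region_radius, other_center, frame_shape,
                frame_idx, last_frame, T_nei=30.0, T_d=20.0):
    """Check the tracking stop conditions, situations 1-4 (illustrative sketch)."""
    y, x = region_center
    h, w = frame_shape
    out_of_frame = not (0 <= y < h and 0 <= x < w)             # situation 1
    too_small = region_radius < T_nei                          # situation 2
    dy, dx = other_center[0] - y, other_center[1] - x
    regions_touch = (dy * dy + dx * dx) ** 0.5 < T_d           # situation 3
    past_end = frame_idx > last_frame                          # situation 4
    return out_of_frame or too_small or regions_touch or past_end
```

When the predicate fires, the tracker records the current frame number as stop and hands control back to the last-frame judgment described next.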
In the present embodiment, the last-frame judgment is: when stop is greater than the index of the last frame of the video, the entire detection process stops; otherwise the detection process is repeated.
Fig. 2 shows the flowchart of tampered-region detection in subsequent frames.
The foregoing are merely preferred embodiments of the present invention; all equivalent changes and modifications made within the scope of the patent claims of the present invention shall be covered by the present invention.

Claims (9)

1. A forged-video detection method for a tampered region that has undergone geometric transformation, characterized by comprising the following steps:
Step S1: extract feature points from the current frame of the video under test;
Step S2: obtain the feature-point matching sets;
Step S3: eliminate mismatched points from the feature-point matching sets;
Step S4: judge whether the current frame is a tampered frame; if so, go to step S5, otherwise return to step S1;
Step S5: locate the suspected tampered region in the subsequent frames, and verify the suspected tampered region of each subsequent frame in turn;
Step S6: judge whether the current frame is the last frame; if so, stop detection, otherwise return to step S1.
2. The forged-video detection method for a tampered region that has undergone geometric transformation according to claim 1, characterized in that: in step S1, the feature points are extracted with an improved MI-SIFT algorithm.
3. The forged-video detection method for a tampered region that has undergone geometric transformation according to claim 2, characterized in that step S1 specifically comprises the following steps:
Step S11: before feature extraction, shrink the video frame to 80% of its original size;
Step S12: denote the current frame of the video under test as I_current, and extract the feature points of I_current with the MI-SIFT algorithm to obtain a feature matrix MX, each feature point carrying 128 components, where MX is given by:
MX = [mx_1; mx_2; ...; mx_n; ...], with mx_n = (f_n1, f_n2, ..., f_n128);
In the formula, f_nb denotes the b-th component of the n-th feature point mx_n;
Step S13: apply principal component analysis to reduce the dimensionality of MX, lowering the feature dimensionality of each feature point to 32.
4. The forged-video detection method for a tampered region that has undergone geometric transformation according to claim 1, characterized in that: in step S2, matched feature points are determined from the ratio of the nearest-neighbor to the next-nearest-neighbor distance, specifically:
Step S21: for each feature point, compute its Euclidean distances to all other feature points and sort them in ascending order, written as:
MD_n = {md_n1, md_n2, ..., md_ni, ...}, 0 < n < b-1, 1 < i < b;
In the formula, MD_n is the vector of Euclidean distances from the n-th feature point to the other feature points, sorted in ascending order; md_n1 is the distance from the n-th feature point to its nearest neighbor, md_n2 the distance to its next-nearest neighbor, and so on;
Step S22: let md_n1 be the distance from the n-th point to its nearest feature point and md_n2 the distance to its next-nearest feature point; for the n-th point and its nearest neighbor to form a matching pair, they must satisfy:
md_n1 / md_n2 < T_md, where T_md ∈ (0, 1);
In the formula, T_md is the preset Euclidean-distance ratio threshold;
Step S23: repeat steps S21 and S22 until every feature point of the current frame has been through the above matching, finally obtaining the feature-point matching sets MQ and MW of the current frame, i.e. MQ = {mq_1, mq_2, ..., mq_i, ...} and MW = {mw_1, mw_2, ..., mw_i, ...}, where mq_i and mw_i are a corresponding pair of matched points.
5. The forged-video detection method for a tampered region that has undergone geometric transformation according to claim 1, characterized in that step S3 specifically comprises the following steps:
Step S31: set a minimum distance, and declare any matching pair closer than this minimum distance a mismatch;
Step S32: declare feature points lying in regions smaller than a preset size to be mismatches;
Step S33: classify the feature points retained after steps S31-S32 with a clustering algorithm, and delete matching pairs whose two points appear in the same class.
6. The forged-video detection method for a tampered region that has undergone geometric transformation according to claim 1, characterized in that step S4 is specifically: count the number of retained matching pairs; if the number is greater than 5, judge the current frame to be a tampered frame; otherwise judge it to be a genuine frame.
7. The forged-video detection method for a tampered region that has undergone geometric transformation according to claim 1, characterized in that: in step S5, the suspected tampered region of the subsequent frames is located with a spatio-temporal context tracking algorithm.
8. a kind of forgery video detecting method of the tampered region according to claim 7 Jing Guo geometric transformation, feature exist In: step S5 specifically includes the following steps:
Step S51: it is assumed that present frame is the kth frame in video sequence, using a Feature Points Matching collection MQ of kth frame as target Initial position t;
Step S52: spatial context model is calculated according to the following formula:
In formula, F indicates fast Flourier operation, F-1Indicate fast Flourier inverse operation, ωσFor gaussian weighing function, t* is mesh Cursor position, b are normaliztion constant, and α is scale parameter, α=2.25, β=1, and I (t) indicates the gray value of image at t;
Step S53: space-time context model needed for updating next frame target:
In formula, ρ is learning parameter, ρ=0.075;In the first frame for being tampered video sequence, space-time context modelThe space and Context modelBe initialized as:
Step S54: confidence map is calculated according to the following formula:
Step S55: maximizing the confidence map according to the following formula:

t*_{k+1} = argmax_t c_{k+1}(t)

the position of the maximum value being the target position;
Step S56: after the tracking that uses feature point matching set MQ as the initial target position has finished and one suspected location α1 of the tampered video sequence has been obtained, using another feature point matching set MW as the initial target position and repeating steps S52 to S55 to obtain another suspected location α2 of the tampered sequence;
Step S57: performing feature point matching on the two suspected locations of each frame; if the number of matched feature points is greater than 5, the frame is determined to be a tampered frame; otherwise, the frame is determined to be an authentic frame.
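Steps S52 to S55 follow the spatio-temporal context (STC) tracking formulation. A rough numpy sketch of one tracking update under the parameter values quoted above (α = 2.25, β = 1, ρ = 0.075), with an assumed Gaussian width σ and illustrative normalization, might look like:

```python
import numpy as np

ALPHA, BETA, RHO = 2.25, 1.0, 0.075  # values given in the claim
SIGMA = 5.0                          # assumed Gaussian width for omega_sigma

def _context_prior(frame, center):
    """I(t) * omega_sigma(t - t*): gray values under a Gaussian weight."""
    h, w = frame.shape
    yy, xx = np.mgrid[0:h, 0:w]
    omega = np.exp(-((yy - center[0]) ** 2 + (xx - center[1]) ** 2) / (2 * SIGMA ** 2))
    return frame * omega / omega.sum()

def spatial_context(frame, center):
    """Step S52: h_sc = F^-1( F(b*exp(-|t - t*|/alpha)) / F(I * omega) )."""
    h, w = frame.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(yy - center[0], xx - center[1])
    conf = np.exp(-((dist / ALPHA) ** BETA))  # normalization constant b folded in
    eps = 1e-6                                # guard against division by zero
    prior = _context_prior(frame, center)
    return np.real(np.fft.ifft2(np.fft.fft2(conf) / (np.fft.fft2(prior) + eps)))

def track_step(H_stc, frame, center):
    """Steps S53-S55: update the model, build the confidence map, find its max."""
    H_stc = (1 - RHO) * H_stc + RHO * spatial_context(frame, center)       # S53
    prior = _context_prior(frame, center)
    conf = np.real(np.fft.ifft2(np.fft.fft2(H_stc) * np.fft.fft2(prior)))  # S54
    return H_stc, np.unravel_index(np.argmax(conf), conf.shape)            # S55
```

Per the first-frame rule of step S53, the model would be initialized as `H_stc = spatial_context(first_frame, center)` on the first tampered frame and then fed through `track_step` frame by frame.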
9. The method for detecting forged video with a tampered region subjected to geometric transformation according to claim 7, characterized in that when any one or more of the following situations is encountered, the tracking operation stops, and the frame number Stop at the moment of stopping is recorded:
Situation one: the tracked region extends beyond the bounds of the video frame;
Situation two: the radius of a tracked region is less than the threshold T_nei;
Situation three: the shortest straight-line distance between the two tracked regions is less than the threshold T_d;
Situation four: the last frame of the video has been passed.
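The four stop situations can be collected into a single predicate. The region layout and the thresholds T_nei and T_d below are illustrative assumptions, since the claim does not fix their values:

```python
import math

T_NEI = 3.0   # assumed threshold on the tracked region radius (situation two)
T_D = 10.0    # assumed threshold on the distance between the regions (situation three)

def should_stop(region_a, region_b, frame_idx, frame_shape, last_frame):
    """Return True if tracking must stop; a region is (center_y, center_x, radius)."""
    h, w = frame_shape
    for cy, cx, r in (region_a, region_b):
        if cy - r < 0 or cx - r < 0 or cy + r > h or cx + r > w:
            return True       # situation one: region leaves the video frame
        if r < T_NEI:
            return True       # situation two: region radius below T_nei
    gap = (math.hypot(region_a[0] - region_b[0], region_a[1] - region_b[1])
           - region_a[2] - region_b[2])
    if gap < T_D:
        return True           # situation three: regions closer than T_d
    return frame_idx > last_frame  # situation four: past the last frame
```

The frame index at which this predicate first fires would be the recorded frame number Stop.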
CN201910014466.XA 2019-01-07 2019-01-07 A method for detecting forged video with a tampered region subjected to geometric transformation Pending CN109740546A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910014466.XA CN109740546A (en) 2019-01-07 2019-01-07 A method for detecting forged video with a tampered region subjected to geometric transformation

Publications (1)

Publication Number Publication Date
CN109740546A true CN109740546A (en) 2019-05-10

Family

ID=66363747

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910014466.XA Pending CN109740546A (en) A method for detecting forged video with a tampered region subjected to geometric transformation

Country Status (1)

Country Link
CN (1) CN109740546A (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010081214A (en) * 2008-09-25 2010-04-08 Hitachi Ltd Document feature extraction apparatus and method
CN106060568A (en) * 2016-06-28 2016-10-26 电子科技大学 Video tampering detecting and positioning method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LICHAO SU ET AL.: "A novel passive forgery detection algorithm for video region duplication", 《MULTIDIMENSIONAL SYSTEMS AND SIGNAL PROCESSING》 *
SU, Lichao et al.: "A video copy-move tampering detection method based on feature extraction and tracking", Journal of Fuzhou University (Natural Science Edition) *
SU, Lichao: "Research on passive forensics of intra-frame tampering in digital video", China Doctoral Dissertations Full-text Database, Information Science and Technology *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112200001A (en) * 2020-09-11 2021-01-08 Deepfake video identification method in a specified scene
CN112614111A (en) * 2020-12-24 2021-04-06 南开大学 Video tampering operation detection method and device based on reinforcement learning
CN112614111B (en) * 2020-12-24 2023-09-05 南开大学 Video tampering operation detection method and device based on reinforcement learning

Similar Documents

Publication Publication Date Title
CN105095856B (en) Face identification method is blocked based on mask
CN104517104B (en) A kind of face identification method and system based under monitoring scene
Chen et al. Face-mask recognition for fraud prevention using Gaussian mixture model
CN104866829B (en) A kind of across age face verification method based on feature learning
CN101980242B (en) Human face discrimination method and system and public safety system
CN108229330A (en) Face fusion recognition methods and device, electronic equipment and storage medium
CN107230267B (en) Intelligence In Baogang Kindergarten based on face recognition algorithms is registered method
CN109255289B (en) Cross-aging face recognition method based on unified generation model
CN104504383B (en) A kind of method for detecting human face based on the colour of skin and Adaboost algorithm
CN110070090A (en) A kind of logistic label information detecting method and system based on handwriting identification
CN106096517A (en) A kind of face identification method based on low-rank matrix Yu eigenface
CN104156690B (en) A kind of gesture identification method based on image space pyramid feature bag
CN105117708A (en) Facial expression recognition method and apparatus
CN108537143B (en) A kind of face identification method and system based on key area aspect ratio pair
CN107220598A (en) Iris Texture Classification based on deep learning feature and Fisher Vector encoding models
CN106599834A (en) Information pushing method and system
CN107784263A (en) Based on the method for improving the Plane Rotation Face datection for accelerating robust features
CN109740546A (en) A kind of forgery video detecting method of tampered region Jing Guo geometric transformation
Ryu et al. Adversarial attacks by attaching noise markers on the face against deep face recognition
CN103605993B (en) Image-to-video face identification method based on distinguish analysis oriented to scenes
CN108509825A (en) A kind of Face tracking and recognition method based on video flowing
Sharma et al. Deep convolutional neural network with ResNet-50 learning algorithm for copy-move forgery detection
Qin et al. Multi-scaling detection of singular points based on fully convolutional networks in fingerprint images
Zhang et al. Face occlusion detection using cascaded convolutional neural network
Méndez-Llanes et al. On the use of local fixations and quality measures for deep face recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190510