CN110457996A - Video moving object tampering forensics method based on VGG-11 convolutional neural network - Google Patents


Info

Publication number
CN110457996A
CN110457996A (application CN201910561127.3A)
Authority
CN
China
Prior art keywords
vgg
neural networks
convolutional neural
frame
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910561127.3A
Other languages
Chinese (zh)
Other versions
CN110457996B (en
Inventor
甘艳芬 (Gan Yanfen)
钟君柳 (Zhong Junliu)
杨继翔 (Yang Jixiang)
赖文达 (Lai Wenda)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southland Business College Of Guangdong University Of Foreign Studies
Original Assignee
Southland Business College Of Guangdong University Of Foreign Studies
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southland Business College Of Guangdong University Of Foreign Studies filed Critical Southland Business College Of Guangdong University Of Foreign Studies
Priority to CN201910561127.3A priority Critical patent/CN110457996B/en
Publication of CN110457996A publication Critical patent/CN110457996A/en
Application granted granted Critical
Publication of CN110457996B publication Critical patent/CN110457996B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Abstract

The invention discloses a video moving object tampering forensics method based on a VGG-11 convolutional neural network, comprising the steps of: computing, by means of an aggregation operation, the motion residuals between forged frames and unforged frames in a video, and classifying frames as forged or unforged; extracting motion-residual-map features based on the motion residuals; constructing a VGG-11-based convolutional neural network; training the VGG-11-based convolutional neural network with the motion-residual-map features; and using the VGG-11-based convolutional neural network to determine whether a moving object in the video has been tampered with. Compared with the prior art, the invention can better automatically identify the forged frames in a tampered video.

Description

Video moving object tampering forensics method based on a VGG-11 convolutional neural network
Technical field
The present invention relates to video tampering detection technology, and in particular to a video moving object tampering forensics method based on a VGG-11 convolutional neural network.
Background technique
In the current Internet era, with the continuous development of computer multimedia technology, more and more images, audio and video have become network resources shared by netizens. Digital video in particular, being intuitive, convenient and rich in information, has become the main carrier of information on the network as well as an important data source for many social-networking applications. When necessary, such video files may serve as evidence of material facts in fields such as news, politics, insurance claims, defense and law. However, the wide availability of powerful multimedia editing tools such as Adobe Photoshop, Adobe Premiere, Lightworks, Video Edit Magic and Cinelerra allows even laymen to modify video content easily, and some forged videos are difficult even for experts to distinguish from genuine ones. This has led people to doubt the credibility of digital video content. Effective forensic technologies are therefore urgently needed to verify the authenticity, originality and integrity of video content.
Digital video tampering is broadly divided into inter-frame tampering and intra-frame tampering. Inter-frame tampering modifies video content in units of whole image frames; common inter-frame tampering modes are frame deletion, frame insertion and frame duplication. Intra-frame tampering modifies partial regions of video frames, that is, tampering that acts on the temporal and spatial domains of the video simultaneously; its main modes are intra-frame copy-paste tampering, object removal tampering and video compositing tampering. Digital video forensics for both kinds of tampering is divided into active forensics and passive forensics. Active forensics embeds verification information, such as a digital fingerprint or digital watermark, into the digital video in advance; at forensics time, whether the embedded verification information is intact is used to judge whether the video has been tampered with. Passive forensics, by contrast, requires no pre-embedded authentication information: it detects tampering mainly from differences in characteristics of the digital video itself, such as coding features and statistical features, and is therefore more widely applicable. As research deepens, many scholars have proposed passive forensics methods for inter-frame or intra-frame video tampering.
For inter-frame tampering, Zhang Wei, Sun Maofeng and Jiang Xinghao [A video tampering detection method based on MBNV features of P and B frames. Information Technology, 2016(141): 1-4] and Zhang Xueli, Huang Tianqiang, Lin Jing, et al. [Video tampering detection based on non-negative tensor decomposition. Chinese Journal of Network and Information Security, 2017(06): 46-53] studied inter-frame forgery features. For passive forensics of intra-frame object tampering, studies include Bagiwa M A, Wahab A W A, Idris M Y I, et al. [Digital Video Inpainting Detection Using Correlation of Hessian Matrix. Malaysian Journal of Computer Science, 2016, 29(3): 179-195], Wang Bin, Wang Rangding, Li Qian, et al. [A video object removal tampering detection algorithm based on high-frequency component difference. Data Communication, 2017(1): 23-28] and Chen Shengda, et al. [Automatic Detection of Object-Based Forgery in Advanced Video. IEEE Transactions on Circuits & Systems for Video Technology 26.11 (2016): 2138-2151].
However, the object-based video tampering forensics algorithms described above are mostly based on traditional image processing and classifier methods and do not involve deep learning. The reason is that the objects in video frames are numerous and tampered objects do not lend themselves to direct feature learning by deep-learning networks, so research combining deep learning with intra-frame video forensics had not yet been carried out.
Summary of the invention
To overcome the problems of existing video moving object tampering detection methods, the present invention proposes a video moving object tampering forensics method based on a VGG-11 convolutional neural network, which can automatically detect and identify forged frames tampered on the basis of a target object.
The technical scheme of the present invention is realized as follows:
A video moving object tampering forensics method based on a VGG-11 convolutional neural network, comprising the steps of:
S1: computing, by means of an aggregation operation, the motion residuals between forged frames and unforged frames in the video, and classifying frames as forged or unforged;
S2: extracting motion-residual-map features based on the motion residuals;
S3: constructing a VGG-11-based convolutional neural network;
S4: training the VGG-11-based convolutional neural network with the motion-residual-map features;
S5: using the VGG-11-based convolutional neural network to determine whether a moving object in the video has been tampered with.
Further, the motion-residual-map feature extraction in step S2 comprises extracting four features: the 548-dimensional CC-PEV, the 686-dimensional SPAM, the 2510-dimensional CC-JRM and the 7850-dimensional CF.
Further, step S3 also includes step S31: adding a fully-connected layer before the input of the VGG-11 network, which converts features of different dimensions into features of a fixed dimension, so that feature maps of identical size can be constructed conveniently for training and testing the VGG-11 network.
Further, step S3 comprises the steps of:
S31: randomly selecting feature data from the feature set and feeding it into the first fully-connected layer to obtain a 1024-dimensional feature, from which a 32 × 32 × 1 feature image is constructed;
S32: taking the 32 × 32 × 1 image as input and processing it successively through the convolutional and pooling layers of the convolutional blocks, the output being a 1 × 1 × 512 tensor;
S33: taking the final 1 × 1 × 512 output of the convolutional layer sequence as input, passing it successively through two fully-connected layers, and finally outputting the classification result through the SoftMax classification layer.
Further, the training of the VGG-11-based convolutional neural network in step S4 is optimized with stochastic gradient descent, with the momentum parameter fixed at 0.8, the initial learning rate set to 0.01, the learning-rate adjustment factor set to 0.96 and the number of iterations set to 1000; the parameters of the fully-connected layers and the SoftMax classification layer are initialized randomly, and recognition accuracy is chosen as the evaluation index of model training.
The beneficial effect of the present invention is that, compared with the prior art, it can better automatically identify the forged frames in a tampered video.
Detailed description of the invention
Fig. 1 is a flowchart of the video moving object tampering forensics method based on a VGG-11 convolutional neural network of the present invention;
Fig. 2 is a structural schematic diagram of the VGG-11 convolutional neural network in one embodiment of the invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will now be described clearly and completely with reference to the drawings. The described embodiments are obviously only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the present invention without creative effort shall fall within the scope of protection of the present invention.
Referring to Fig. 1, the video moving object tampering forensics method based on a VGG-11 convolutional neural network comprises the steps of:
S1: computing, by means of an aggregation operation, the motion residuals between forged frames and unforged frames in the video, and classifying frames as forged or unforged;
S2: extracting motion-residual-map features based on the motion residuals;
S3: constructing a VGG-11-based convolutional neural network;
S4: training the VGG-11-based convolutional neural network with the motion-residual-map features;
S5: using the VGG-11-based convolutional neural network to determine whether a moving object in the video has been tampered with.
In step S1, because tampering with a moving object only affects the content of some frames of the video, it causes an abrupt change of content between the forged frames and the adjacent unforged frames. The statistical characteristics of this abrupt change are similar to those studied in steganalysis and can be extracted from the motion-residual map; these statistical features can then be used to classify frames as forged or unforged.
A video frame sequence of length N is defined as
Seq = {F_1, F_2, F_3, ..., F_{N-1}, F_N}, N ∈ Z  (1)
The k-th decoded video frame F_k is then an 8-bit grayscale still image of size n_1 × n_2, i.e. F_k(i, j) ∈ {0, 1, ..., 255}. Aggregation is performed over a local temporal window of size L = 2 × L_h + 1 centered on frame F_k (L_h being the number of neighbour frames to the left and to the right of F_k), defined as:
Col_k(i, j) = Agg { F_l(i, j) : l ∈ {k − L_h, ..., k + L_h}, l ≠ k }  (2)(3)
where Agg is an aggregation function that takes, for each pixel coordinate (i, j), the minimum (or maximum, or median) over the corresponding pixels of all neighbour frames in the time window; Col_k is obtained from formulas (2) and (3) and describes the motion of objects around the k-th frame within the temporal aggregation window, while MR_k is the measure of the motion residual. The motion residual of frame F_k can therefore be defined as:
MR_k = |F_k − Col_k|  (4)
that is, at each coordinate (i, j):
MR_k(i, j) = |F_k(i, j) − Col_k(i, j)|  (5)
Since F_k(i, j) and Col_k(i, j) both lie in {0, 1, ..., 255}, it follows from (5) that MR_k(i, j) ∈ {0, 1, ..., 255}  (6)(7)
so the motion-residual map MR_k can likewise be regarded as an 8-bit grayscale still image.
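The aggregation and residual computation of formulas (1) through (7) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the patented implementation: the choice Agg = min and the window half-width L_h = 2 are assumptions made for the example.

```python
import numpy as np

# Illustrative sketch of formulas (1)-(7): Col_k aggregates the temporal
# neighbours of frame F_k pixel-wise (Agg = min here; max or median are the
# alternatives mentioned in the text), and MR_k = |F_k - Col_k| is the
# motion-residual map. The half-width Lh = 2 is an assumed value.
def motion_residual(frames, k, Lh=2, agg=np.min):
    """frames: (N, H, W) uint8 grayscale sequence; returns MR_k as uint8."""
    lo, hi = max(0, k - Lh), min(len(frames), k + Lh + 1)
    neighbours = np.delete(np.arange(lo, hi), k - lo)  # window minus F_k itself
    col_k = agg(frames[neighbours].astype(np.int16), axis=0)  # Col_k, (2)-(3)
    return np.abs(frames[k].astype(np.int16) - col_k).astype(np.uint8)  # (4)

# Toy usage: a static sequence gives a zero residual everywhere except where
# a frame was altered, which is exactly the cue the classifier exploits.
seq = np.full((5, 4, 4), 100, dtype=np.uint8)
seq[2, 1, 1] = 200                      # simulate a tampered pixel in frame 2
print(motion_residual(seq, 2)[1, 1])    # 100
```

The `int16` cast avoids uint8 wrap-around before the absolute difference; the final cast back to uint8 matches the observation above that MR_k is itself an 8-bit grayscale image.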
In step S2, the embodiment of the present invention applies four feature-extraction algorithms to the motion-residual maps respectively: the 548-dimensional CC-PEV, the 686-dimensional SPAM, the 2510-dimensional CC-JRM and the 7850-dimensional CF.
The VGG-11-based convolutional neural network constructed in step S3 is shown in Fig. 2. The VGG-11 convolutional neural network contains 11 weight layers: 8 convolutional layers and 3 fully-connected layers. Moreover, in the VGG-11 network a pooling layer does not follow every convolutional layer; instead, the 5 pooling layers are distributed after different convolutional layers. Each pooling layer uses a 2 × 2 window with stride 2, which reduces the size of the feature maps after convolution and helps ensure the translation invariance of the model. Classification is finally performed by a SoftMax classifier.
The feature-classification part uses the VGG-11 convolutional neural network, which is divided into convolutional layers, pooling layers, fully-connected layers and a SoftMax classification layer. A fully-connected layer is added before the input of the VGG-11 network to convert features of different dimensions into features of a fixed dimension, so that feature maps of identical size can be constructed conveniently for training and testing the VGG-11 network. The activation function of the model is the ReLU function; the specific parameters are shown in Table 1.
Table 1: VGG-11 network structure
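The body of Table 1 is not reproduced in this text extraction. The sketch below therefore assumes the standard VGG-11 ("configuration A") convolutional stack, with 3 × 3 convolutions (padding 1, which preserves height and width) and five 2 × 2 max-pools of stride 2, and traces the feature-map shape from the 32 × 32 × 1 input of step S31 to the 1 × 1 × 512 output of step S32.

```python
# Assumed standard VGG-11 convolutional stack; "M" marks a 2x2/stride-2 pool.
VGG11_CFG = [64, "M", 128, "M", 256, 256, "M", 512, 512, "M", 512, 512, "M"]

def trace_shapes(h, w, c, cfg):
    """Return the (height, width, channels) after each layer in cfg."""
    shapes = []
    for layer in cfg:
        if layer == "M":
            h, w = h // 2, w // 2   # 2x2 pool, stride 2: halves H and W
        else:
            c = layer               # 3x3 conv, padding 1: only channels change
        shapes.append((h, w, c))
    return shapes

print(trace_shapes(32, 32, 1, VGG11_CFG)[-1])  # (1, 1, 512)
```

The configuration has exactly 8 convolutional layers and 5 pooling layers, matching the counts stated in the text, and the five halvings take the 32 × 32 input down to the 1 × 1 × 512 tensor of step S32.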
Step S3 comprises the steps of:
S31: randomly selecting feature data from the feature set and feeding it into the first fully-connected layer to obtain a 1024-dimensional feature, from which a 32 × 32 × 1 feature image is constructed;
S32: taking the 32 × 32 × 1 image obtained in step S31 as input and processing it successively through the convolutional and pooling layers of the convolutional blocks of Table 1, the output being a 1 × 1 × 512 tensor;
S33: taking the final 1 × 1 × 512 output of the convolutional layer sequence as input, passing it successively through two fully-connected layers, and finally outputting the classification result through the SoftMax classification layer.
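Step S31 can be illustrated as follows. The random projection below stands in for the learned weights of the extra fully-connected layer, which the patent does not specify; the only property demonstrated is that every feature dimension used by the method (548, 686, 2510, 7850) is mapped to the same 32 × 32 × 1 input shape.

```python
import numpy as np

# Illustrative sketch of step S31: a fully-connected layer maps a steganalytic
# feature vector of any of the four dimensions to 1024 values, which are
# reshaped into the 32x32x1 "feature image" fed to VGG-11. Random weights
# stand in for learned parameters; only the shape behaviour is demonstrated.
rng = np.random.default_rng(0)

def feature_to_image(feat, out_dim=1024):
    W = rng.standard_normal((out_dim, feat.shape[0])) / np.sqrt(feat.shape[0])
    hidden = np.maximum(W @ feat, 0.0)   # ReLU activation, as in the model
    return hidden.reshape(32, 32, 1)     # the 32 x 32 x 1 feature image

for dim in (548, 686, 2510, 7850):       # the four feature dimensions
    print(dim, feature_to_image(rng.standard_normal(dim)).shape)
```

This is what makes a single fixed VGG-11 input layer usable with all four feature sets: the variable-length feature vector never reaches the convolutional stack directly.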
In one embodiment of the invention, the training database contains 100 original videos and 100 forged videos. 50% of the video clips are randomly selected from the original videos and, together with their corresponding forged versions, constitute the training set; the remaining 50% of the video clips are used for testing. All experiments are repeated 50 times and the average results are reported. All original video clips are taken from static commercial surveillance cameras; each clip is 3 Mbit/s, 1280 × 720 (720p), H.264/MPEG-4 encoded, at a frame rate of 25 frames/s, and is about 11 seconds (roughly 300 frames) long. The 100 forged videos are clips in which one or two segments of 1 to 5 seconds have been tampered with on the basis of the original videos; the forged videos in this database show almost no visible trace of tampering.
When training the forensics model, optimization uses stochastic gradient descent with the momentum parameter fixed at 0.8, an initial learning rate of 0.01, a learning-rate adjustment factor of 0.96 and 1000 iterations; the parameters of the fully-connected layers and the SoftMax classification layer are initialized randomly, and recognition accuracy is chosen as the evaluation index of model training. The four steganalytic feature sample sets are fed into the model for training separately in order to select the steganographic feature best suited to the model, and the performance of the data samples of the different features is evaluated with classification accuracy as the target.
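The optimizer settings above (SGD, momentum 0.8, initial learning rate 0.01, decay factor 0.96, 1000 iterations) can be sketched as plain Python. The text does not say how often the 0.96 factor is applied; once per iteration is assumed here, and a toy objective stands in for the network.

```python
# Illustrative sketch of the described optimiser: SGD with momentum 0.8,
# initial learning rate 0.01, multiplicative decay 0.96, 1000 iterations.
# The decay schedule (per iteration) and the toy objective (minimise w^2)
# are assumptions for the example.
def lr_at(step, lr0=0.01, decay=0.96):
    return lr0 * decay ** step

def sgd_momentum_step(w, v, grad, lr, momentum=0.8):
    """One SGD-with-momentum update: v <- m*v - lr*g, w <- w + v."""
    v = momentum * v - lr * grad
    return w + v, v

w, v = 1.0, 0.0
for t in range(1000):
    w, v = sgd_momentum_step(w, v, 2.0 * w, lr_at(t))  # gradient of w^2 is 2w
print(abs(w) < 1.0)  # the iterate is pulled toward the minimum at 0
```

With this schedule the learning rate has decayed to a negligible value long before iteration 1000, so almost all of the movement happens early in training.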
Compared with the prior art, the comprehensive evaluation of the recognition rate of the tampering forensics model based on steganalytic feature extraction of the present invention meets the requirements of video moving object tampering forensics, and the present invention is suitable for intra-frame object tampering forensics of surveillance video.
All four features, CC-JRM (2510-dimensional), CC-PEV (548-dimensional), SPAM (686-dimensional) and CF (7850-dimensional), can effectively improve the classification accuracy of the forensics model constructed by the present invention, with CC-JRM (2510-dimensional) performing best.
The above are preferred embodiments of the present invention. It should be noted that those skilled in the art may make various improvements and modifications without departing from the principle of the present invention, and such improvements and modifications shall also be regarded as falling within the scope of protection of the present invention.

Claims (5)

1. A video moving object tampering forensics method based on a VGG-11 convolutional neural network, characterized by comprising the steps of:
S1: computing, by means of an aggregation operation, the motion residuals between forged frames and unforged frames in the video, and classifying frames as forged or unforged;
S2: extracting motion-residual-map features based on the motion residuals;
S3: constructing a VGG-11-based convolutional neural network;
S4: training the VGG-11-based convolutional neural network with the motion-residual-map features;
S5: using the VGG-11-based convolutional neural network to determine whether a moving object in the video has been tampered with.
2. The video moving object tampering forensics method based on a VGG-11 convolutional neural network as claimed in claim 1, characterized in that the motion-residual-map feature extraction in step S2 comprises extracting four features: the 548-dimensional CC-PEV, the 686-dimensional SPAM, the 2510-dimensional CC-JRM and the 7850-dimensional CF.
3. The video moving object tampering forensics method based on a VGG-11 convolutional neural network as claimed in claim 1, characterized in that step S3 also includes step S31: adding a fully-connected layer before the input of the VGG-11 network, which converts features of different dimensions into features of a fixed dimension, so that feature maps of identical size can be constructed conveniently for training and testing the VGG-11 network.
4. The video moving object tampering forensics method based on a VGG-11 convolutional neural network as claimed in claim 1, characterized in that step S3 comprises the steps of:
S31: randomly selecting feature data from the feature set and feeding it into the first fully-connected layer to obtain a 1024-dimensional feature, from which a 32 × 32 × 1 feature image is constructed;
S32: taking the 32 × 32 × 1 image as input and processing it successively through the convolutional and pooling layers of the convolutional blocks, the output being a 1 × 1 × 512 tensor;
S33: taking the final 1 × 1 × 512 output of the convolutional layer sequence as input, passing it successively through two fully-connected layers, and finally outputting the classification result through the SoftMax classification layer.
5. The video moving object tampering forensics method based on a VGG-11 convolutional neural network as claimed in claim 1, characterized in that the training of the VGG-11-based convolutional neural network in step S4 is optimized with stochastic gradient descent, with the momentum parameter fixed at 0.8, the initial learning rate set to 0.01, the learning-rate adjustment factor set to 0.96 and the number of iterations set to 1000; the parameters of the fully-connected layers and the SoftMax classification layer are initialized randomly, and recognition accuracy is chosen as the evaluation index of model training.
CN201910561127.3A 2019-06-26 2019-06-26 Video moving object tampering evidence obtaining method based on VGG-11 convolutional neural network Active CN110457996B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910561127.3A CN110457996B (en) 2019-06-26 2019-06-26 Video moving object tampering evidence obtaining method based on VGG-11 convolutional neural network


Publications (2)

Publication Number Publication Date
CN110457996A true CN110457996A (en) 2019-11-15
CN110457996B CN110457996B (en) 2023-05-02

Family

ID=68481071

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910561127.3A Active CN110457996B (en) 2019-06-26 2019-06-26 Video moving object tampering evidence obtaining method based on VGG-11 convolutional neural network

Country Status (1)

Country Link
CN (1) CN110457996B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111144314A (en) * 2019-12-27 2020-05-12 北京中科研究院 Method for detecting tampered face video
CN111325687A (en) * 2020-02-14 2020-06-23 上海工程技术大学 Smooth filtering evidence obtaining method based on end-to-end deep network
WO2021097771A1 (en) * 2019-11-21 2021-05-27 Suzhou Aqueti Technology Co., Ltd. Ics-frame transformation method and apparatus for cv analysis
CN113627285A (en) * 2021-07-26 2021-11-09 长沙理工大学 Video forensics method, system, and medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107527337A (en) * 2017-08-07 2017-12-29 杭州电子科技大学 A kind of object video based on deep learning removes altering detecting method
CN107622489A (en) * 2017-10-11 2018-01-23 广东工业大学 A kind of distorted image detection method and device
CN108985165A (en) * 2018-06-12 2018-12-11 东南大学 A kind of video copy detection system and method based on convolution and Recognition with Recurrent Neural Network
CN109191444A (en) * 2018-08-29 2019-01-11 广东工业大学 Video area based on depth residual error network removes altering detecting method and device
CN109348211A (en) * 2018-08-06 2019-02-15 中国科学院声学研究所 The general information of interframe encode hides detection method in a kind of video frame
CN109446923A (en) * 2018-10-10 2019-03-08 北京理工大学 Depth based on training characteristics fusion supervises convolutional neural networks Activity recognition method
CN109635791A (en) * 2019-01-28 2019-04-16 深圳大学 A kind of video evidence collecting method based on deep learning
CN109754393A (en) * 2018-12-19 2019-05-14 众安信息技术服务有限公司 A kind of tampered image identification method and device based on deep learning
CN109902202A (en) * 2019-01-08 2019-06-18 国家计算机网络与信息安全管理中心 A kind of video classification methods and device


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021097771A1 (en) * 2019-11-21 2021-05-27 Suzhou Aqueti Technology Co., Ltd. Ics-frame transformation method and apparatus for cv analysis
CN113170160A (en) * 2019-11-21 2021-07-23 无锡安科迪智能技术有限公司 ICS frame transformation method and device for computer vision analysis
CN111144314A (en) * 2019-12-27 2020-05-12 北京中科研究院 Method for detecting tampered face video
CN111144314B (en) * 2019-12-27 2020-09-18 北京中科研究院 Method for detecting tampered face video
CN111325687A (en) * 2020-02-14 2020-06-23 上海工程技术大学 Smooth filtering evidence obtaining method based on end-to-end deep network
CN111325687B (en) * 2020-02-14 2022-10-14 上海工程技术大学 Smooth filtering evidence obtaining method based on end-to-end deep network
CN113627285A (en) * 2021-07-26 2021-11-09 长沙理工大学 Video forensics method, system, and medium

Also Published As

Publication number Publication date
CN110457996B (en) 2023-05-02

Similar Documents

Publication Publication Date Title
Wu et al. Busternet: Detecting copy-move image forgery with source/target localization
Zheng et al. A survey on image tampering and its detection in real-world photos
Alkawaz et al. Detection of copy-move image forgery based on discrete cosine transform
Maze et al. Iarpa janus benchmark-c: Face dataset and protocol
Guo et al. Fake colorized image detection
CN110457996A (en) Moving Objects in Video Sequences based on VGG-11 convolutional neural networks distorts evidence collecting method
Iakovidou et al. Content-aware detection of JPEG grid inconsistencies for intuitive image forensics
Kang et al. Robust median filtering forensics using an autoregressive model
Mushtaq et al. Digital image forgeries and passive image authentication techniques: a survey
CN106610969A (en) Multimodal information-based video content auditing system and method
Yang et al. Spatiotemporal trident networks: detection and localization of object removal tampering in video passive forensics
Su et al. A novel forgery detection algorithm for video foreground removal
Bayar et al. Towards order of processing operations detection in jpeg-compressed images with convolutional neural networks
Chen et al. SNIS: A signal noise separation-based network for post-processed image forgery detection
Alamro et al. Copy-move forgery detection using integrated DWT and SURF
CN113536972A (en) Self-supervision cross-domain crowd counting method based on target domain pseudo label
Singh et al. Copy move forgery detection on digital images
Conotter et al. Detecting photographic and computer generated composites
Li et al. An improved PCB defect detector based on feature pyramid networks
Abdulqader et al. Detection of tamper forgery image in security digital mage
Lu et al. Digital image forensics using statistical features and neural network classifier
CN111563531A (en) Video tampering detection method, system, storage medium, computer program, and terminal
Wilscy Pretrained convolutional neural networks as feature extractor for image splicing detection
Zhang et al. Face occlusion detection using cascaded convolutional neural network
Yu et al. A multi-scale feature selection method for steganalytic feature GFR

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant