CN110457996B - Video moving object tampering evidence obtaining method based on VGG-11 convolutional neural network - Google Patents

Video moving object tampering evidence obtaining method based on VGG-11 convolutional neural network

Info

Publication number
CN110457996B
CN110457996B (application number CN201910561127.3A)
Authority
CN
China
Prior art keywords
vgg
video
frame
neural network
convolutional neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910561127.3A
Other languages
Chinese (zh)
Other versions
CN110457996A (en
Inventor
甘艳芬
钟君柳
杨继翔
赖文达
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China Business College of Guangdong University of Foreign Studies
Original Assignee
South China Business College of Guangdong University of Foreign Studies
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China Business College of Guangdong University of Foreign Studies filed Critical South China Business College of Guangdong University of Foreign Studies
Priority to CN201910561127.3A priority Critical patent/CN110457996B/en
Publication of CN110457996A publication Critical patent/CN110457996A/en
Application granted granted Critical
Publication of CN110457996B publication Critical patent/CN110457996B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a video moving object tampering evidence obtaining method based on a VGG-11 convolutional neural network, which comprises the following steps: calculating motion residuals between forged frames and non-forged frames in the video by an aggregation operation, and classifying the forged frames and non-forged frames; extracting motion residual map features based on the motion residuals; constructing a convolutional neural network based on VGG-11; training the VGG-11 based convolutional neural network using the motion residual map features; and determining whether a video moving object has been tampered with, using the trained VGG-11 based convolutional neural network. Compared with the prior art, the invention can better and automatically identify forged frames in a tampered video.

Description

Video moving object tampering evidence obtaining method based on VGG-11 convolutional neural network
Technical Field
The invention relates to a video tampering detection technology, in particular to a video moving object tampering evidence obtaining method based on a VGG-11 convolutional neural network.
Background
In the present Internet age, with the continuous development of computer multimedia technology, more and more images, audio and video have become network resources shared by users. Digital video in particular, because of its intuitiveness, convenience and rich information content, is an important data source for many social network applications and has become a main information-bearing form on the network. When required, such video files serve as evidence in news, politics, insurance claims, forensics and legal proceedings. However, the widespread availability of powerful multimedia editing tools such as Adobe Photoshop, Adobe Premiere, Lightworks, Video Edit Magic and Cinelerra makes it easy for non-professionals to modify video content, and some forged videos are difficult even for professionals to identify. This casts doubt on the trustworthiness of digital video content. There is therefore an urgent need for effective forensic techniques to verify the authenticity, originality and integrity of video content.
Digital video tampering is mainly divided into inter-frame tampering and intra-frame tampering. Inter-frame tampering modifies video content with the image frame as the tampering unit; common forms include frame deletion, frame insertion and frame copying. Intra-frame tampering takes a partial region of a video frame as the tampering object and modifies the temporal and spatial domains of the video simultaneously; its main forms include intra-frame copy-paste tampering, object deletion tampering and video synthesis tampering. Digital video forensic techniques for these two kinds of tampering are divided into active forensics and passive forensics. Active forensics embeds verification information, such as a digital fingerprint or digital watermark, into the digital video in advance, and judges whether the video has been tampered with by verifying that the embedded information is intact. Passive forensics, in contrast, requires no pre-embedded authentication information and detects tampering mainly from differences in characteristic values of the digital video, such as coding characteristics and statistical characteristics; it is therefore widely applied to the detection of digital video tampering. With continued research, many scholars have proposed passive forensic methods for inter-frame or intra-frame video tampering.
For inter-frame tampering, inter-frame forgery features were studied in Zhang Wei, Sun Feng, Jiang Xinghao. Video tampering detection method based on P, B frame MBNV feature [J]. Information Technology, 2016(141): 1-4, and in Zhang Xueli, Huang Tianjiang, Lin Ning, et al. Video tampering detection method based on non-negative tensor decomposition [J]. Journal of Network and Information Security, 2017(06): 46-53. Passive forensics of intra-frame object tampering was studied in Bagiwa M A, Wahab A W A, Idris M Y I, et al. Digital Video Inpainting Detection Using Correlation of Hessian Matrix [J]. Malaysian Journal of Computer Science, 2016, 29(3): 179-195; in Wang, Wang Rangding, Li Qian, et al. Video object removal detection algorithm based on high frequency component variance change [J]. Data Communication, 2017(1): 23-28; and in Chen Shengda, et al. Automatic Detection of Object-Based Forgery in Advanced Video [J]. IEEE Transactions on Circuits & Systems for Video Technology, 26(11) (2016): 2138-2151.
However, the above video tampering forensic algorithms based on target objects mostly rely on traditional image processing methods and classifiers, and do not involve deep learning. Because a video frame contains numerous objects, tampered objects are not suitable for direct feature learning with a deep learning network, so research combining deep learning with intra-frame video forensics has not yet been carried out.
Disclosure of Invention
In order to solve the problems of the video moving object tampering detection method in the prior art, the invention provides a video moving object tampering evidence obtaining method based on a VGG-11 convolutional neural network, which can automatically detect and identify fake frames based on target object tampering.
The technical scheme of the invention is realized as follows:
A video moving object tampering evidence obtaining method based on a VGG-11 convolutional neural network comprises the following steps:
S1: calculating motion residuals between forged frames and non-forged frames in the video by an aggregation operation, and classifying the forged frames and non-forged frames;
S2: extracting motion residual map features based on the motion residuals;
S3: constructing a convolutional neural network based on VGG-11;
S4: training the VGG-11 based convolutional neural network using the motion residual map features;
S5: determining whether a video moving object has been tampered with, using the trained VGG-11 based convolutional neural network.
Further, the motion residual map feature extraction in step S2 comprises extracting four features: 548-dimensional CC-PEV, 686-dimensional SPAM, 2510-dimensional CC-JRM and 7850-dimensional CF.
Further, step S3 further comprises step S31: before input to the VGG-11 network, a fully connected layer is added to convert features of different dimensions into features of a fixed dimension, so that feature maps of the same size can be constructed, facilitating training and testing of the VGG-11 network.
Further, step S3 comprises the following steps:
S31: randomly selecting feature data from the feature set and passing it through the first fully connected layer to obtain a 1024-dimensional feature, from which a feature image of size 32×32×1 is constructed;
S32: using the 32×32×1 image as input, sequentially applying the convolution and pooling layers of the convolution blocks, and outputting a 1×1×512 result;
S33: taking the final 1×1×512 output of the convolutional layer sequence as input, passing it through two fully connected layers, and finally outputting the classification result through a SoftMax classification layer.
Further, in step S4, the VGG-11 based convolutional neural network is trained using stochastic gradient descent for optimization, with the momentum parameter set to a fixed value of 0.8, an initial learning rate of 0.01, a learning rate adjustment factor of 0.96 and 1000 iterations; the parameters of the fully connected layers and the SoftMax classification layer are initialized by a random method, and recognition accuracy is selected as the evaluation index of model training.
Compared with the prior art, the invention can better and automatically identify the falsified frames in the falsified video.
Drawings
FIG. 1 is a flow chart of a video moving object tampering evidence obtaining method based on a VGG-11 convolutional neural network;
FIG. 2 is a schematic diagram of the structure of a VGG-11 convolutional neural network in one embodiment of the invention.
Detailed Description
The embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, embodiments of the invention. All other embodiments obtained by those skilled in the art based on these embodiments without inventive effort fall within the scope of the invention.
Referring to FIG. 1, a video moving object tampering evidence obtaining method based on a VGG-11 convolutional neural network comprises the following steps:
S1: calculating motion residuals between forged frames and non-forged frames in the video by an aggregation operation, and classifying the forged frames and non-forged frames;
S2: extracting motion residual map features based on the motion residuals;
S3: constructing a convolutional neural network based on VGG-11;
S4: training the VGG-11 based convolutional neural network using the motion residual map features;
S5: determining whether a video moving object has been tampered with, using the trained VGG-11 based convolutional neural network.
In step S1, tampering with a moving object affects the content of only some frames in the video, causing abrupt content changes between consecutive forged and non-forged frames. Because the statistical characteristics of these abrupt changes are similar to those introduced by steganography, they can be extracted from the motion residual map, and their statistical characteristics can be used to classify forged and non-forged frames.
A video frame sequence of length N is defined as

Seq = {F_1, F_2, F_3, ..., F_{N-1}, F_N}, N ∈ Z (1)

The kth decompressed video frame is then F_k, an 8-bit grayscale still image of size n_1 × n_2. Centered on frame F_k, an aggregation operation is performed in a local time window of size L = 2 × L_h + 1 (L_h is the number of left or right neighbor frames of F_k), defined as:

Col_k = C(F_{k-L_h}, ..., F_{k-1}, F_{k+1}, ..., F_{k+L_h}) (2)

Col_k(i, j) = agg{ F_m(i, j) : k - L_h ≤ m ≤ k + L_h, m ≠ k } (3)

where agg is an aggregation function that takes the minimum (or maximum, or median) of the pixels at the corresponding coordinates (i, j) of all neighboring frames in the time window; Col_k is thus obtained from formulas (2) and (3). Col_k represents the motion of the moving object in the time aggregation window of the kth frame, and MR_k is the measure of the motion residual. The motion residual of frame F_k can therefore be defined as:

MR_k = |F_k - Col_k| (4)

that is, at the corresponding coordinates (i, j),

MR_k(i, j) = |F_k(i, j) - Col_k(i, j)| (5)

The minimum residual map MR_k is therefore obtained from equation (5). Its pixel range follows from

0 ≤ F_k(i, j) ≤ 255 (6)

0 ≤ Col_k(i, j) ≤ 255 (7)

From formulas (6) and (7) it can be seen that 0 ≤ |F_k(i, j) - Col_k(i, j)| ≤ 255, and then 0 ≤ MR_k(i, j) ≤ 255. Thus the minimum residual map MR_k can also be considered an 8-bit grayscale still image.
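For illustration, the aggregation and residual computation of formulas (1) to (5) can be sketched in a few lines of NumPy. This is a minimal sketch, not the patented implementation; the function name, the clipping of the window at the ends of the sequence, and the integer handling are assumptions made here.

```python
import numpy as np

def motion_residual(frames: np.ndarray, k: int, L_h: int = 2, agg: str = "min") -> np.ndarray:
    """Motion residual MR_k = |F_k - Col_k| of the k-th frame (formula (4)).

    frames : (N, n1, n2) uint8 array of decompressed grayscale frames.
    L_h    : number of neighbor frames on each side (window L = 2 * L_h + 1).
    agg    : pixel-wise aggregation over the neighbors: "min", "max" or "median".
    """
    lo, hi = max(0, k - L_h), min(len(frames), k + L_h + 1)
    # All frames of the time window except F_k itself, as in formulas (2)-(3).
    neighbors = np.stack([frames[m] for m in range(lo, hi) if m != k]).astype(np.int16)
    agg_fn = {"min": np.min, "max": np.max, "median": np.median}[agg]
    col_k = agg_fn(neighbors, axis=0)                    # Col_k(i, j)
    mr_k = np.abs(frames[k].astype(np.int16) - col_k)    # formulas (4)-(5)
    return mr_k.astype(np.uint8)                         # stays in [0, 255] by (6)-(7)
```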
In step S2, the embodiment of the invention applies four feature extraction algorithms to the motion residual map: 548-dimensional CC-PEV, 686-dimensional SPAM, 2510-dimensional CC-JRM and 7850-dimensional CF.
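As an illustration of one of these extractors, the sketch below computes a simplified, single-direction version of the second-order SPAM features of a residual map (truncated differences and their conditional transition probabilities). The full 686-dimensional SPAM feature with T = 3 averages such 343-value matrices over eight scan directions in two groups; the exact feature implementations used by the invention are those of the cited steganalysis literature, and this sketch only conveys the idea.

```python
import numpy as np

def spam_one_direction(residual: np.ndarray, T: int = 3) -> np.ndarray:
    """Second-order SPAM-style transition probabilities along one direction.

    residual : 2-D uint8 motion residual map MR_k.
    Returns the (2T+1)^3 = 343 values P(d3 | d2, d1) over truncated
    horizontal difference triples; full SPAM averages eight directions.
    """
    x = residual.astype(np.int32)
    d = np.clip(x[:, :-1] - x[:, 1:], -T, T) + T          # differences shifted to [0, 2T]
    d1, d2, d3 = d[:, :-2].ravel(), d[:, 1:-1].ravel(), d[:, 2:].ravel()
    K = 2 * T + 1
    counts = np.zeros((K, K, K))
    np.add.at(counts, (d1, d2, d3), 1)                    # joint histogram of triples
    totals = counts.sum(axis=2, keepdims=True)
    # Conditional distributions P(d3 | d2, d1); empty bins stay zero.
    return np.divide(counts, totals, out=np.zeros_like(counts), where=totals > 0).ravel()
```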
The VGG-11 based convolutional neural network constructed in step S3 is shown in FIG. 2. The VGG-11 convolutional neural network comprises 11 weight layers: 8 convolutional layers and 3 fully connected layers. The 5 pooling layers do not follow every convolutional layer but are distributed after selected convolutional layers. Each pooling layer has a window size of 2 × 2 and a stride of 2, which reduces the size of the convolved feature image and ensures translational invariance of the model. Classification is finally performed by a SoftMax classifier.
The feature classification part adopts the VGG-11 convolutional neural network, whose framework is divided into convolutional layers, pooling layers, fully connected layers and a SoftMax classification layer. Before input to the VGG-11 network, a fully connected layer is added to convert features of different dimensions into features of a fixed dimension, so that feature maps of the same size can be constructed, facilitating training and testing of the network. The activation function of the model is the ReLU function; the specific parameters are shown in Table 1.
Table 1: VGG-11 network structure
(The layer-by-layer parameters of Table 1 are reproduced only as an image in the original publication and are not recoverable as text.)
Step S3 comprises the following steps:
S31: feature data are randomly selected from the feature set and passed through the first fully connected layer to obtain a 1024-dimensional feature, from which a feature image of size 32×32×1 is constructed;
S32: the 32×32×1 image obtained in step S31 is taken as input and sequentially processed by the convolution and pooling layers of the convolution blocks according to Table 1, producing a 1×1×512 output;
S33: the final 1×1×512 output of the convolutional layer sequence is taken as input and passed through two fully connected layers, and the SoftMax classification layer finally outputs the classification result, as sketched below.
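The pipeline of steps S31 to S33 can be rendered as a short PyTorch sketch under the constraints stated above (a fully connected input layer to 1024 dimensions, 8 convolutional layers interleaved with 5 max-pooling layers of window 2 × 2 and stride 2, a 1 × 1 × 512 trunk output, two fully connected layers and a SoftMax classifier). The class name, the per-block channel counts (taken from the standard VGG-11 configuration) and the 512-unit hidden layer are assumptions, since Table 1 is not recoverable as text.

```python
import torch
import torch.nn as nn

class VGG11Forensics(nn.Module):
    """Sketch of the forensic classifier: feature vector -> 32x32x1 image -> VGG-11."""

    # Standard VGG-11 layout: 8 conv layers, "M" = 2x2 max pooling with stride 2.
    CFG = [64, "M", 128, "M", 256, 256, "M", 512, 512, "M", 512, 512, "M"]

    def __init__(self, feature_dim: int, num_classes: int = 2):
        super().__init__()
        self.input_fc = nn.Linear(feature_dim, 1024)     # e.g. 686-D SPAM -> 1024-D
        layers, in_ch = [], 1
        for v in self.CFG:
            if v == "M":
                layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
            else:
                layers += [nn.Conv2d(in_ch, v, kernel_size=3, padding=1),
                           nn.ReLU(inplace=True)]        # ReLU activations, as stated
                in_ch = v
        self.features = nn.Sequential(*layers)           # S32: 1x32x32 -> 512x1x1
        self.classifier = nn.Sequential(                 # S33: two fully connected layers
            nn.Linear(512, 512), nn.ReLU(inplace=True),
            nn.Linear(512, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.input_fc(x).view(-1, 1, 32, 32)         # S31: build the feature image
        x = self.features(x)
        return self.classifier(x.flatten(1))             # logits; softmax at inference
```

At inference, torch.softmax(logits, dim=1) yields the two-class output corresponding to the SoftMax classification layer.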
In one embodiment of the invention, the training database contains 100 original videos and 100 forged videos. 50% of the video clips are randomly selected from the original videos and, together with their corresponding forged versions, form the training set; the remaining 50% are used for testing. All experiments were repeated 50 times and the average results reported. All original video clips were captured by a stationary commercial surveillance camera at 3 Mbit/s, 1280 × 720 (720p), H.264/MPEG-4 encoding and a frame rate of 25 frames/s; each clip is about 11 seconds, or roughly 300 frames, long. The 100 forged video clips were produced by tampering with 1-2 segments of 1-5 seconds in length in the original videos. The forged videos in the database show almost no visible traces of manipulation.
When training the forensic model, stochastic gradient descent is used for optimization, with the momentum parameter set to a fixed value of 0.8, an initial learning rate of 0.01, a learning rate adjustment factor of 0.96 and 1000 iterations; the parameters of the fully connected layers and the SoftMax classification layer are initialized by a random method, and recognition accuracy is selected as the evaluation index of model training. The four feature sample sets extracted as steganalysis features are each input into the model for training, in order to screen out the steganographic feature best suited to the model; with classification accuracy as the target, the performance of the data samples with the different features is tested.
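A minimal training configuration matching these hyper-parameters, reusing the VGG11Forensics sketch above, might look as follows. The random tensors stand in for a real loader of feature samples and labels, and applying the 0.96 decay factor once per iteration is an assumption, as the patent does not state the decay period.

```python
import torch

model = VGG11Forensics(feature_dim=686)                    # e.g. the SPAM feature set
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.8)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.96)
criterion = torch.nn.CrossEntropyLoss()                    # applies the SoftMax internally

for step in range(1000):                                   # 1000 iterations
    feats = torch.randn(32, 686)                           # stand-in batch of features
    labels = torch.randint(0, 2, (32,))                    # 0 = pristine, 1 = forged
    optimizer.zero_grad()
    loss = criterion(model(feats), labels)
    loss.backward()
    optimizer.step()
    scheduler.step()                                       # learning-rate factor 0.96
```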
Compared with the prior art, the comprehensive evaluation of the recognition rate of this tampering forensic model based on steganalysis feature extraction shows that it meets the requirements of video moving object tampering forensics and is suitable for intra-frame object tampering forensics of surveillance video.
The four features used by the invention, CC-JRM (2510 dimensions), CC-PEV (548 dimensions), SPAM (686 dimensions) and CF (7850 dimensions), can all effectively improve classification accuracy; CC-JRM (2510 dimensions) in particular is best suited to the forensic model constructed by the invention.
While the foregoing is directed to the preferred embodiments of the present invention, it will be appreciated by those skilled in the art that changes and modifications may be made without departing from the principles of the invention, such changes and modifications are also intended to be within the scope of the invention.

Claims (5)

1. A video moving object tampering evidence obtaining method based on a VGG-11 convolutional neural network, characterized by comprising the following steps:
S1: calculating motion residuals between forged frames and non-forged frames in a video by an aggregation operation, and classifying the forged frames and the non-forged frames, wherein S1 comprises the following: since tampering with a moving object affects the content of only some frames in the video, it causes abrupt content changes between consecutive forged and non-forged frames; because the statistical characteristics of these abrupt changes are similar to those of steganography, they can be extracted from the motion residual map, and the forged frames and non-forged frames can be classified by their statistical characteristics:

a video frame sequence of length N is defined as

Seq = {F_1, F_2, F_3, ..., F_{N-1}, F_N}, N ∈ Z (1)

the kth decompressed video frame is then F_k, an 8-bit grayscale still image of size n_1 × n_2; centered on frame F_k, an aggregation operation is performed in a local time window of size L = 2 × L_h + 1, where L_h is the number of left or right neighbor frames of F_k, defined as:

Col_k = C(F_{k-L_h}, ..., F_{k-1}, F_{k+1}, ..., F_{k+L_h}) (2)

Col_k(i, j) = agg{ F_m(i, j) : k - L_h ≤ m ≤ k + L_h, m ≠ k } (3)

wherein agg is an aggregation function that takes the minimum, maximum or median of the pixels at the corresponding coordinates (i, j) of all neighboring frames in the time window; Col_k is thus obtained from formulas (2) and (3); Col_k represents the motion of the moving object in the time aggregation window of the kth frame, and MR_k is the measure of the motion residual; therefore, the motion residual of frame F_k can be defined as:

MR_k = |F_k - Col_k| (4)

that is, at the corresponding coordinates (i, j),

MR_k(i, j) = |F_k(i, j) - Col_k(i, j)| (5)

the minimum residual map MR_k is therefore obtained from equation (5), wherein

0 ≤ F_k(i, j) ≤ 255 (6)

0 ≤ Col_k(i, j) ≤ 255 (7)

and from formulas (6) and (7) it can be seen that 0 ≤ |F_k(i, j) - Col_k(i, j)| ≤ 255, and then 0 ≤ MR_k(i, j) ≤ 255; thus the minimum residual map MR_k can also be considered an 8-bit grayscale still image;
S2: extracting motion residual map features based on the motion residuals;
S3: constructing a convolutional neural network based on VGG-11;
S4: training the VGG-11 based convolutional neural network using the motion residual map features;
S5: determining whether a video moving object has been tampered with, using the trained VGG-11 based convolutional neural network.
2. The method for tamper evidence collection of video moving objects based on VGG-11 convolutional neural network according to claim 1, wherein the motion residual map feature extraction in step S2 comprises extracting four features of 548-dimensional CC-PEV, 686-dimensional SPAM, 2510-dimensional CC-JRM and 7850-dimensional CF.
3. The video moving object tampering evidence obtaining method based on the VGG-11 convolutional neural network according to claim 1, wherein step S3 further comprises step S31: before input to the VGG-11 network, a fully connected layer is added to convert features of different dimensions into features of a fixed dimension, so that feature maps of the same size can be constructed, facilitating training and testing of the VGG-11 network.
4. The video moving object tampering evidence obtaining method based on VGG-11 convolutional neural network as set forth in claim 1, wherein the step S3 comprises the steps of:
S31: randomly selecting feature data from the feature set and passing it through the first fully connected layer to obtain a 1024-dimensional feature, from which a feature image of size 32×32×1 is constructed;
S32: using the 32×32×1 image as input, sequentially applying the convolution and pooling layers of the convolution blocks, and outputting a 1×1×512 result;
S33: taking the final 1×1×512 output of the convolutional layer sequence as input, passing it through two fully connected layers, and finally outputting the classification result through a SoftMax classification layer.
5. The video moving object tampering evidence obtaining method based on the VGG-11 convolutional neural network according to claim 1, wherein in step S4 the VGG-11 based convolutional neural network is trained using stochastic gradient descent for optimization, with the momentum parameter set to a fixed value of 0.8, an initial learning rate of 0.01, a learning rate adjustment factor of 0.96 and 1000 iterations; the parameters of the fully connected layers and the SoftMax classification layer are initialized by a random method, and recognition accuracy is selected as the evaluation index of model training.
CN201910561127.3A 2019-06-26 2019-06-26 Video moving object tampering evidence obtaining method based on VGG-11 convolutional neural network Active CN110457996B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910561127.3A CN110457996B (en) 2019-06-26 2019-06-26 Video moving object tampering evidence obtaining method based on VGG-11 convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910561127.3A CN110457996B (en) 2019-06-26 2019-06-26 Video moving object tampering evidence obtaining method based on VGG-11 convolutional neural network

Publications (2)

Publication Number Publication Date
CN110457996A CN110457996A (en) 2019-11-15
CN110457996B true CN110457996B (en) 2023-05-02

Family

ID=68481071

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910561127.3A Active CN110457996B (en) 2019-06-26 2019-06-26 Video moving object tampering evidence obtaining method based on VGG-11 convolutional neural network

Country Status (1)

Country Link
CN (1) CN110457996B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113170160B (en) * 2019-11-21 2022-06-14 无锡安科迪智能技术有限公司 ICS frame transformation method and device for computer vision analysis
CN111144314B (en) * 2019-12-27 2020-09-18 北京中科研究院 Method for detecting tampered face video
CN111325687B (en) * 2020-02-14 2022-10-14 上海工程技术大学 Smooth filtering evidence obtaining method based on end-to-end deep network
CN113627285A (en) * 2021-07-26 2021-11-09 长沙理工大学 Video forensics method, system, and medium

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107527337B (en) * 2017-08-07 2019-07-09 杭州电子科技大学 A kind of the video object removal altering detecting method based on deep learning
CN107622489B (en) * 2017-10-11 2020-06-09 广东工业大学 Image tampering detection method and device
CN108985165A (en) * 2018-06-12 2018-12-11 东南大学 A kind of video copy detection system and method based on convolution and Recognition with Recurrent Neural Network
CN109348211B (en) * 2018-08-06 2020-11-06 中国科学院声学研究所 General information hiding detection method for video intra-frame inter-frame coding
CN109191444A (en) * 2018-08-29 2019-01-11 广东工业大学 Video area based on depth residual error network removes altering detecting method and device
CN109446923B (en) * 2018-10-10 2021-09-24 北京理工大学 Deep supervision convolutional neural network behavior recognition method based on training feature fusion
CN109754393A (en) * 2018-12-19 2019-05-14 众安信息技术服务有限公司 A kind of tampered image identification method and device based on deep learning
CN109902202B (en) * 2019-01-08 2021-06-22 国家计算机网络与信息安全管理中心 Video classification method and device
CN109635791B (en) * 2019-01-28 2023-07-14 深圳大学 Video evidence obtaining method based on deep learning

Also Published As

Publication number Publication date
CN110457996A (en) 2019-11-15

Similar Documents

Publication Publication Date Title
CN110457996B (en) Video moving object tampering evidence obtaining method based on VGG-11 convolutional neural network
Wu et al. Busternet: Detecting copy-move image forgery with source/target localization
Walia et al. Digital image forgery detection: a systematic scrutiny
Shelke et al. A comprehensive survey on passive techniques for digital video forgery detection
Mushtaq et al. Digital image forgeries and passive image authentication techniques: a survey
Dua et al. Image forgery detection based on statistical features of block DCT coefficients
CN104661037B (en) The detection method and system that compression image quantization table is distorted
Gan et al. Video object forgery detection algorithm based on VGG-11 convolutional neural network
Khan et al. Robust method for detection of copy-move forgery in digital images
CN111445454A (en) Image authenticity identification method and application thereof in license identification
AlSawadi et al. Copy-move image forgery detection using local binary pattern and neighborhood clustering
CN104519361A (en) Video steganography analysis method based on space-time domain local binary pattern
Yao et al. Detecting copy-move forgery using non-negative matrix factorization
Samanta et al. Analysis of perceptual hashing algorithms in image manipulation detection
Hakimi et al. Image-splicing forgery detection based on improved lbp and k-nearest neighbors algorithm
Fadl et al. A proposed accelerated image copy-move forgery detection
CN105120294A (en) JPEG format image source identification method
Jarusek et al. Photomontage detection using steganography technique based on a neural network
Zhang et al. Multi-scale segmentation strategies in PRNU-based image tampering localization
CN107977964A (en) Slit cropping evidence collecting method based on LBP and extension Markov feature
Bhartiya et al. Forgery detection using feature-clustering in recompressed JPEG images
Qiao et al. Csc-net: Cross-color spatial co-occurrence matrix network for detecting synthesized fake images
Wang et al. Steganalysis of JPEG images by block texture based segmentation
Li et al. Distinguishing computer graphics from photographic images using a multiresolution approach based on local binary patterns
CN113850284B (en) Multi-operation detection method based on multi-scale feature fusion and multi-branch prediction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant