CN108537762A - Double JPEG compressed image forensics method based on a deep multi-scale network - Google Patents

Double JPEG compressed image forensics method based on a deep multi-scale network

Info

Publication number
CN108537762A
CN108537762A (application number CN201810315119.6A)
Authority
CN
China
Prior art keywords
dct coefficient
data block
image
neural network
histogram feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810315119.6A
Other languages
Chinese (zh)
Other versions
CN108537762B (en)
Inventor
邓成
李昭
赵泽雨
杨延华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University
Publication of CN108537762A
Application granted
Publication of CN108537762B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/40 - Image enhancement or restoration using histogram techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/084 - Backpropagation, e.g. using gradient descent
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/168 - Segmentation; Edge detection involving transform domain methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20048 - Transform domain processing
    • G06T2207/20052 - Discrete cosine transform [DCT]


Abstract

The present invention proposes a double JPEG compressed image forensics method based on a deep multi-scale network, intended to improve the accuracy of image forensics. The implementation steps are: extract N DCT coefficient histogram features of the JPEG image to be examined; train four deep neural networks; obtain the preliminary tampering detection result of the data block corresponding to one DCT coefficient histogram feature of the JPEG image to be examined; obtain the final tampering detection result of that data block; obtain the final tampering detection results of the data blocks corresponding to the other N-1 DCT coefficient histogram features; and obtain the forensic result map of the JPEG image to be examined. The present invention can be used in fields such as news photograph authentication, judicial appraisal, insurance appraisal, and bank electronic bill authentication.

Description

Double JPEG compressed image forensics method based on a deep multi-scale network
Technical field
The invention belongs to the technical field of image processing and relates to a double JPEG compressed image forensics method, in particular to a double JPEG compressed image tampering detection method based on a deep multi-scale network, which can be used in the field of double JPEG compressed image forensics.
Background technology
With the rapid development of image acquisition tools and the popularization of social media, digital images are widely used and have become the mainstream information carrier. Using various image processing tools, people can easily modify an image into any desired content. In many fields such as news, law, business, medical applications, and academic research, the credibility of visual images has been undermined by digital technology. Accordingly, digital image forensics, which aims to identify the original source of an image or to determine whether the image content has been altered, has become particularly important.
Since JPEG is the image format used by most digital devices, JPEG-related image forensics research has attracted wide attention. Because compression weakens certain traces of image tampering, many forensic techniques are not applicable to JPEG images. However, when a JPEG image is tampered with and saved again in JPEG format, it often leaves special marks of double compression. Existing image forensics techniques based on JPEG compression mainly detect traces of double JPEG compression by analyzing the statistical characteristics of the image's DCT coefficient histograms, so as to distinguish singly compressed regions from doubly compressed regions within one JPEG image and thereby localize the tampered region.
The main traditional approach to double JPEG compressed image forensics divides the JPEG image into several image blocks, uses statistical methods to estimate the first compression quality factor and a probability distribution model for each image block, and computes the probability that each image block has been tampered with, thereby determining the exact location of the tampered region. This approach requires extensive theoretical derivation and hand-designed features, and these features are often not applicable when the first compression quality factor of the JPEG image is greater than the second compression quality factor, so it cannot localize the tampered region accurately and automatically.
At present, deep learning has been applied to double JPEG compressed image forensics; it avoids extensive statistical derivation and can automatically classify tampered and non-tampered regions of an image. Deep learning is a representation-learning approach in machine learning whose advantage is replacing hand-crafted features with efficient unsupervised or semi-supervised feature learning and hierarchical feature extraction. For example, Q. Wang et al., in "Double JPEG compression forensics based on a convolutional neural network," published in EURASIP Journal on Information Security, vol. 2016, no. 1, 2016, disclose a double JPEG compressed image forensics method based on convolutional neural networks: the DCT coefficients in the JPEG image header file are first divided into several data blocks; then, according to the value of the second compression quality factor, one network is selected from eight different pre-trained convolutional neural networks to automatically extract features from the DCT coefficient histogram of each data block and output the probability that the data block has been tampered with; finally, tampering in the whole image is localized automatically. However, because this method does not fully take into account some statistical characteristics of double JPEG compression and simply extracts features with a single convolutional neural network, the extracted information is limited and the cases considered are not comprehensive enough; when the first compression quality factor of the JPEG image is greater than the second compression quality factor, it has no effective solution, and the image forensics accuracy is low.
Summary of the invention
The object of the present invention is to overcome the above shortcomings of the prior art by proposing a double JPEG compressed image forensics method based on a deep multi-scale network, intended to improve the accuracy of image forensics.
The technical idea of the present invention is as follows: first, several DCT coefficient histogram features of the JPEG image to be examined are extracted; then four deep neural networks are trained; using the three trained deep neural networks that extract multi-scale features, the preliminary tampering detection result of the data block corresponding to one DCT coefficient histogram feature is obtained; according to this probability it is then determined whether the remaining deep neural network needs to be used to assist detection, and the final tampering detection result of that data block is obtained; next, the final tampering detection results of the other data blocks of the JPEG image to be examined are obtained; finally, the forensic result map of the JPEG image to be examined is obtained from the final tampering detection results of all its data blocks.
To achieve the above object, the technical solution adopted by the present invention includes the following steps:
(1) Extract N DCT coefficient histogram features F of the JPEG image to be examined:
(1a) Read in the image header file of a JPEG image to be examined and extract the DCT coefficients from the image header file, obtaining a DCT coefficient matrix of size m × n, where m ≥ 32 and n ≥ 32;
(1b) Detect whether the numbers of rows and columns of the DCT coefficient matrix are both divisible by L; if so, execute step (1c); otherwise, pad the rightmost side of the DCT coefficient matrix with L·⌈n/L⌉ - n zero-valued columns and its lower side with L·⌈m/L⌉ - m zero-valued rows (that is, pad with zeros until both dimensions are multiples of L), and then execute step (1c), where 32 ≤ L ≤ 96 and L is a multiple of 8;
(1c) With a stride of 8 pixels, extract N DCT coefficient data blocks of size L × L from the DCT coefficient matrix in row-major order, forming a data block set;
(1d) Divide each of the N data blocks in the data block set into L²/64 sub-blocks of size 8 × 8, extract the 2nd to the 10th sub-blocks from each data block according to the zigzag ordering, obtaining 9N sub-blocks of size 8 × 8, and then extract from each sub-block the DCT coefficient histogram of length 31 over the positions {-15, -14, ..., 14, 15}, forming the N 279-dimensional DCT coefficient histogram features F of the JPEG image to be examined;
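As an illustration of steps (1a)-(1c), the following is a minimal NumPy sketch of the zero padding and sliding-window block extraction. It assumes the DCT coefficient matrix has already been read from the JPEG image header file (for example with a quantized-DCT reader such as jpegio); the function names and the random stand-in matrix are illustrative only.

import numpy as np

def pad_dct_matrix(dct, L=64):
    # Step (1b): zero-pad the right and bottom of the DCT coefficient matrix
    # so that both of its dimensions become multiples of L.
    m, n = dct.shape
    pad_rows = (-m) % L   # rows appended at the bottom
    pad_cols = (-n) % L   # columns appended on the right
    return np.pad(dct, ((0, pad_rows), (0, pad_cols)), mode="constant")

def extract_dct_blocks(dct, L=64, stride=8):
    # Step (1c): slide an L x L window over the (padded) DCT coefficient matrix
    # with a stride of 8, in row-major order.
    dct = pad_dct_matrix(dct, L)
    m, n = dct.shape
    blocks = [dct[r:r + L, c:c + L]
              for r in range(0, m - L, stride)
              for c in range(0, n - L, stride)]
    return np.stack(blocks)   # shape (N, L, L)

# Illustrative call on a random stand-in for a real DCT coefficient matrix;
# for a 1024 x 1024 matrix and L = 64 this yields the N = 14400 blocks
# reported in the embodiment below.
dct_matrix = np.random.randint(-128, 128, size=(1024, 1024))
blocks = extract_dct_blocks(dct_matrix, L=64)
print(blocks.shape)   # (14400, 64, 64)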
(2) Train four deep neural networks:
(2a) Build four deep neural networks, namely a first, second, third, and fourth deep neural network, each with the same basic structure consisting of, stacked in sequence, a first convolutional layer, a first pooling layer, a second convolutional layer, a second pooling layer, a first fully connected layer, a second fully connected layer, a third fully connected layer, and a Softmax layer;
(2b) Following the method of step (1), extract X1 non-tampered image data blocks and tampered image data blocks of size L1 × L1 from a JPEG image database; take the DCT coefficient histogram features F1 extracted from the tampered image data blocks as the positive sample set and the DCT coefficient histogram features F2 extracted from the non-tampered image data blocks as the negative sample set, and combine the positive and negative sample sets into the first training set, where X1 ≥ 10, 32 ≤ L1 ≤ 96, and L1 is a multiple of 8;
(2c) Following the method of step (1), extract X2 non-tampered image data blocks and tampered image data blocks of size L2 × L2 from the JPEG image database; take the DCT coefficient histogram features F3 extracted from the tampered image data blocks as the positive sample set and the DCT coefficient histogram features F4 extracted from the non-tampered image data blocks as the negative sample set, and combine the positive and negative sample sets into the second training set, where X2 ≥ 10, 96 < L2 ≤ 160, and L2 is a multiple of 8;
(2d) Following the method of step (1), extract X3 non-tampered image data blocks and tampered image data blocks of size L3 × L3 from the JPEG image database; take the DCT coefficient histogram features F5 extracted from the tampered image data blocks as the positive sample set and the DCT coefficient histogram features F6 extracted from the non-tampered image data blocks as the negative sample set, and combine the positive and negative sample sets into the third training set, where X3 ≥ 10, 160 ≤ L3 ≤ 256, and L3 is a multiple of 8;
(2e) Following the method of step (1), extract X4 non-tampered image data blocks of size L4 × L4 and tampered image data blocks with QF1 > QF2 from the JPEG image database; take the DCT coefficient histogram features F7 extracted from the tampered image data blocks as the positive sample set and the DCT coefficient histogram features F8 extracted from the non-tampered image data blocks as the negative sample set, and combine the positive and negative sample sets into the fourth training set, where X4 ≥ 10, 32 ≤ L4 ≤ 96, L4 is a multiple of 8, QF1 is the first compression quality factor, and QF2 is the second compression quality factor;
(2f) Train the first deep neural network with the first training set, the second deep neural network with the second training set, the third deep neural network with the third training set, and the fourth deep neural network with the fourth training set, obtaining the first deep neural network Net1, the second deep neural network Net2, the third deep neural network Net3, and the fourth deep neural network Net4;
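As a simple illustration of how the four training sets of steps (2b)-(2e) are assembled, the following sketch labels tampered-block features as the positive class and non-tampered-block features as the negative class; the feature arrays here are random placeholders rather than features actually extracted by step (1), and the counts are not the X1-X4 of the method.

import numpy as np

def build_training_set(tampered_feats, untampered_feats):
    # Steps (2b)-(2e): tampered-block features form the positive sample set
    # (label 1) and non-tampered-block features the negative sample set (label 0).
    X = np.concatenate([tampered_feats, untampered_feats], axis=0)
    y = np.concatenate([np.ones(len(tampered_feats), dtype=np.int64),
                        np.zeros(len(untampered_feats), dtype=np.int64)])
    return X, y

# Placeholder 279-dimensional histogram features from 64 x 64 blocks (first
# training set); the second, third and fourth training sets are built the same
# way from 128 x 128 blocks, 256 x 256 blocks, and 64 x 64 blocks whose first
# compression quality factor exceeds the second one (QF1 > QF2).
pos = np.random.rand(1000, 279)
neg = np.random.rand(1000, 279)
X_train1, y_train1 = build_training_set(pos, neg)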
(3) Obtain the preliminary tampering detection results S(1,1) and S(2,1) of the data block corresponding to one DCT coefficient histogram feature F of the JPEG image to be examined:
(3a) Input one DCT coefficient histogram feature F from step (1d) separately into the first deep neural network Net1, the second deep neural network Net2, and the third deep neural network Net3 from step (2f), obtaining the outputs s1(1,1) and s1(2,1) of Net1, s2(1,1) and s2(2,1) of Net2, and s3(1,1) and s3(2,1) of Net3;
(3b) Perform weighted fusion of s1(1,1), s1(2,1), s2(1,1), s2(2,1), s3(1,1), and s3(2,1) to obtain the probability that the data block corresponding to this DCT coefficient histogram feature F has been tampered with and the probability that it has not; take the tampered probability as the preliminary tampering detection result S(1,1) of the data block and the non-tampered probability as its preliminary tampering detection result S(2,1);
(4) Obtain the final tampering detection results Sl(1,1) and Sl(2,1) of the data block corresponding to one DCT coefficient histogram feature F of the JPEG image to be examined:
(4a) Compute |S(1,1) - S(2,1)| and set a threshold t;
(4b) Compare |S(1,1) - S(2,1)| with t: if |S(1,1) - S(2,1)| ≥ t, take S(1,1) and S(2,1) as the final tampering detection results Sl(1,1) and Sl(2,1) of the data block corresponding to this DCT coefficient histogram feature F; if |S(1,1) - S(2,1)| < t, input the DCT coefficient histogram feature F into the fourth deep neural network Net4, obtain its outputs s4(1,1) and s4(2,1), and take s4(1,1) and s4(2,1) as the final tampering detection results Sl(1,1) and Sl(2,1) of the data block;
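The per-block decision of steps (3b) and (4b) can be sketched as follows. The three multi-scale networks and the auxiliary network are represented as generic callables returning a (tampered, non-tampered) probability pair; the default weights and threshold shown are the values used later in the embodiment (w1 = 0.8, w2 = 0.1, w3 = 0.1, t = 0.3), and the dummy networks exist only to make the sketch runnable.

def detect_block(F, net1, net2, net3, net4, w=(0.8, 0.1, 0.1), t=0.3):
    # Step (3b): weighted fusion of the softmax outputs of Net1, Net2 and Net3.
    s1, s2, s3 = net1(F), net2(F), net3(F)            # each is (p_tampered, p_untampered)
    S1 = w[0] * s1[0] + w[1] * s2[0] + w[2] * s3[0]   # preliminary tampered probability S(1,1)
    S2 = w[0] * s1[1] + w[1] * s2[1] + w[2] * s3[1]   # preliminary non-tampered probability S(2,1)
    # Step (4b): keep the preliminary result if it is decisive, otherwise let Net4 decide.
    if abs(S1 - S2) >= t:
        return S1, S2
    return net4(F)

# Illustrative use with dummy networks returning fixed probabilities.
dummy_net = lambda F: (0.55, 0.45)
aux_net = lambda F: (0.90, 0.10)
print(detect_block(None, dummy_net, dummy_net, dummy_net, aux_net))   # -> (0.90, 0.10)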
(5) Obtain the final tampering detection results Sl k(1,1) and Sl k(2,1) of the data blocks corresponding to the other N-1 DCT coefficient histogram features Fk of the JPEG image to be examined:
(5a) Following the method of step (3), obtain the preliminary tampering detection results Sk(1,1) and Sk(2,1) of the data blocks corresponding to the other N-1 DCT coefficient histogram features Fk of the JPEG image to be examined, where k is an index and k = 1, 2, ..., N-1;
(5b) Following the method of step (4), obtain the final tampering detection results Sl k(1,1) and Sl k(2,1) of the data blocks corresponding to the other N-1 DCT coefficient histogram features Fk, where k is an index and k = 1, 2, ..., N-1;
(6) Obtain the forensic result map of the JPEG image to be examined:
(6a) Merge the final tampering detection results Sl(1,1) and Sl(2,1) of the data block corresponding to one DCT coefficient histogram feature F with the final tampering detection results Sl k(1,1) and Sl k(2,1) of the data blocks corresponding to the other N-1 DCT coefficient histogram features Fk, obtaining the final tampering detection results Sl p(1,1) and Sl p(2,1) of all N data blocks of the JPEG image to be examined, where k is an index with k = 1, 2, ..., N-1 and p is an index with p = 1, 2, ..., N;
(6b) Use the value of the final tampering detection result Sl p(1,1) of the p-th data block of the JPEG image to be examined to replace the pixel values of the 8 × 8 small image block at the center of the image block at the position corresponding to the p-th data block, where p is an index and p = 1, 2, ..., N, obtaining the tampering probability map of the JPEG image to be examined;
(6c) Binarize the tampering probability map of the JPEG image to be examined to obtain the forensic result map of the JPEG image.
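A sketch of step (6) is given below, assuming the final tampered probability Sl p(1,1) of every data block is available and that the blocks were extracted as in step (1c) (L × L windows with a stride of 8, row-major order). The choice of 0.5 as the binarization threshold is an assumption; the method only states that the probability map is binarized, and the black/white labelling follows the embodiment described later.

import numpy as np

def tamper_probability_map(block_probs, image_shape, L=64, stride=8):
    # Step (6b): write the final tampered probability of each data block into the
    # 8 x 8 small block at the centre of the corresponding image block.
    m, n = image_shape
    prob_map = np.zeros((m, n), dtype=np.float32)
    positions = [(r, c)
                 for r in range(0, m - L, stride)
                 for c in range(0, n - L, stride)]
    for (r, c), p in zip(positions, block_probs):
        cr, cc = r + L // 2 - 4, c + L // 2 - 4    # top-left corner of the central 8 x 8 patch
        prob_map[cr:cr + 8, cc:cc + 8] = p
    return prob_map

def binarize(prob_map, threshold=0.5):
    # Step (6c): tampered regions are labelled black (0) and non-tampered regions
    # white (255); the threshold value is an assumption, as the method does not fix it.
    return np.where(prob_map >= threshold, 0, 255).astype(np.uint8)

# Illustrative use with random per-block probabilities for a 1024 x 1024 image.
block_probs = np.random.rand(14400)
forensic_map = binarize(tamper_probability_map(block_probs, (1024, 1024)))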
Compared with the prior art, the present invention has the following advantages:
First, because the present invention extracts, for each DCT coefficient histogram feature of the JPEG image to be examined, characteristic information in spaces of different scales, realized by three deep neural networks with identical structures but different parameters, it fully takes into account the statistical characteristics of double JPEG compression. This overcomes the defect of the prior art, which extracts only limited information by simply using a single convolutional neural network, and yields features that better distinguish tampered from non-tampered regions, thereby improving the accuracy of image forensics.
Second, when obtaining the final tampering detection result of the data block corresponding to each DCT coefficient histogram feature of the JPEG image to be examined, the present invention compares the absolute difference between the tampered probability and the non-tampered probability in each data block's preliminary tampering detection result with a threshold and handles the two cases differently: when the absolute difference is greater than or equal to the threshold, the preliminary tampering detection result of the data block is taken directly as its final tampering detection result; when it is less than the threshold, a deep neural network designed for the case where the first compression quality factor of the JPEG image is greater than the second compression quality factor is used to assist detection, yielding the final tampering detection result of the data block. This avoids the prior art's problem of not considering the cases comprehensively enough and further improves the accuracy of image forensics.
Description of the drawings
Fig. 1 is a block diagram of the implementation flow of the present invention;
Fig. 2 is a line chart comparing the image forensics test accuracy of the present invention and of the existing double JPEG compression forensics method based on convolutional neural networks on the double JPEG compression localization image database released by the University of Florence.
Detailed description of the embodiments
The present invention is described in further detail below with reference to the drawings and a specific embodiment.
Referring to Fig. 1, the double JPEG compressed image forensics method based on a deep multi-scale network includes the following steps:
Step 1) Extract N DCT coefficient histogram features F of the JPEG image to be examined:
Step 1a) Read in a JPEG image to be examined of size 1024 × 1024 with a JPEG image toolkit, obtaining its image data and image header file, and extract the DCT coefficients from the image header file to obtain a DCT coefficient matrix of size m × n = 1024 × 1024;
Step 1b) Detect whether the numbers of rows and columns of the DCT coefficient matrix are both divisible by L = 64; if so, execute step (1c); otherwise, pad the rightmost side of the DCT coefficient matrix with L·⌈n/L⌉ - n zero-valued columns and its lower side with L·⌈m/L⌉ - m zero-valued rows, and then execute step (1c). In this embodiment m = 1024 and n = 1024 are divisible by L = 64, so step (1c) is executed directly;
Step 1c) With a stride of 8 pixels, extract N = 14400 DCT coefficient data blocks of size L × L = 64 × 64 from the DCT coefficient matrix in row-major order, forming a data block set;
Step 1d) Divide each of the N = 14400 data blocks in the data block set into L²/64 = 64 sub-blocks of size 8 × 8, and extract the 2nd to the 10th sub-blocks from each data block according to the zigzag ordering, that is, the sub-blocks at positions (1,2), (2,1), (3,1), (2,2), (1,3), (1,4), (2,3), (3,2), and (4,1) of each data block, obtaining 9N = 129600 sub-blocks of size 8 × 8; then extract from each sub-block the DCT coefficient histogram of length 31 over the positions {-15, -14, ..., 14, 15}, forming the N = 14400 279-dimensional DCT coefficient histogram features F of the JPEG image to be examined, whose expression is:
F = {Hi(-15), Hi(-14), ..., Hi(-2), Hi(-1), Hi(0), Hi(1), Hi(2), ..., Hi(14), Hi(15)}, i ∈ {2, 3, ..., 9, 10}
where Hi(x) denotes the DCT coefficient histogram, at position x, of the i-th 8 × 8 sub-block in the zigzag ordering;
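The 279-dimensional feature of step 1d) can be sketched as below. The code follows the step as literally described: the 8 × 8 sub-blocks at zigzag grid positions 2 to 10 of a data block are selected, and the coefficients of each are histogrammed over the 31 values {-15, ..., 15}. Coefficients outside that range are simply not counted, which is an assumption made for illustration, as is the random stand-in block.

import numpy as np

# Zigzag positions 2 to 10 in an 8 x 8 grid, i.e. (1,2), (2,1), (3,1), (2,2),
# (1,3), (1,4), (2,3), (3,2), (4,1) in the 1-based indexing used in step 1d).
ZIGZAG_2_TO_10 = [(0, 1), (1, 0), (2, 0), (1, 1), (0, 2),
                  (0, 3), (1, 2), (2, 1), (3, 0)]

def histogram_feature(block):
    # Step 1d): concatenate the nine 31-bin histograms into one 279-dimensional
    # DCT coefficient histogram feature F.
    feats = []
    for gr, gc in ZIGZAG_2_TO_10:
        sub = block[gr * 8:(gr + 1) * 8, gc * 8:(gc + 1) * 8]
        hist, _ = np.histogram(sub, bins=np.arange(-15.5, 16.5, 1.0))   # 31 bins over {-15, ..., 15}
        feats.append(hist)
    return np.concatenate(feats).astype(np.float32)   # length 9 * 31 = 279

# Illustrative call on a random stand-in for one 64 x 64 DCT coefficient data block.
block = np.random.randint(-20, 20, size=(64, 64))
F = histogram_feature(block)
print(F.shape)   # (279,)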
Step 2) Train four deep neural networks:
Step 2a) Build four deep neural networks, namely a first, second, third, and fourth deep neural network, each with the same basic structure consisting of, stacked in sequence, a first convolutional layer, a first pooling layer, a second convolutional layer, a second pooling layer, a first fully connected layer, a second fully connected layer, a third fully connected layer, and a Softmax layer. The first, second, and third deep neural networks are the three deep neural networks used to extract characteristic information in spaces of different scales, and the fourth deep neural network is the deep neural network used to assist detection. In each deep neural network, the kernel size of the convolutional layers is 3 × 1 with a stride of 1 and 100 feature maps, the kernel size of the pooling layers is 3 × 1 with a stride of 2, the fully connected layers have 1000 units, and the output size of the Softmax layer is 2;
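A PyTorch sketch of the layer stack of step 2a) follows. It assumes the 279-dimensional histogram feature is fed to the network as a one-channel 1-D sequence, so that the 3 × 1 kernels operate along the feature dimension; the ReLU activations and the convolution padding are assumptions added to make the sketch runnable, since the method does not specify them.

import torch
import torch.nn as nn

class MultiScaleNet(nn.Module):
    # Basic structure of step 2a): conv - pool - conv - pool - fc - fc - fc - Softmax,
    # with 3 x 1 convolution kernels (stride 1, 100 feature maps), 3 x 1 pooling
    # kernels (stride 2), 1000-unit fully connected layers and a 2-way Softmax output.
    def __init__(self, in_len=279):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 100, kernel_size=3, stride=1, padding=1), nn.ReLU(),
            nn.MaxPool1d(kernel_size=3, stride=2),
            nn.Conv1d(100, 100, kernel_size=3, stride=1, padding=1), nn.ReLU(),
            nn.MaxPool1d(kernel_size=3, stride=2),
        )
        flat = 100 * self._pooled_len(self._pooled_len(in_len))
        self.classifier = nn.Sequential(
            nn.Linear(flat, 1000), nn.ReLU(),
            nn.Linear(1000, 1000), nn.ReLU(),
            nn.Linear(1000, 2),
            nn.Softmax(dim=1),
        )

    @staticmethod
    def _pooled_len(n, k=3, s=2):
        # Output length of MaxPool1d(kernel_size=k, stride=s) without padding.
        return (n - k) // s + 1

    def forward(self, x):                  # x: (batch, 279)
        x = self.features(x.unsqueeze(1))  # add the channel dimension
        return self.classifier(x.flatten(1))

# Net1 to Net4 share this structure; they differ only in the parameters learned
# from their respective training sets.
net1 = MultiScaleNet()
out = net1(torch.randn(8, 279))            # (8, 2): tampered / non-tampered probabilities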
Step 2b) Following the method of step (1), select 6400 JPEG images for training from a JPEG image database and extract from these 6400 JPEG images X1 = 1638400 non-tampered image data blocks and tampered image data blocks of size L1 × L1 = 64 × 64; take the DCT coefficient histogram features F1 extracted from the tampered image data blocks as the positive sample set and the DCT coefficient histogram features F2 extracted from the non-tampered image data blocks as the negative sample set, and combine them into the first training set. The JPEG image database used in this embodiment is the double JPEG compression localization image database released by the University of Florence, which contains 10000 high-definition JPEG images of size 1024 × 1024 whose left half has undergone single compression and whose right half has undergone double compression;
Step 2c) Following the method of step (1), select 6400 JPEG images for training from the JPEG image database and extract from them X2 = 409600 non-tampered image data blocks and tampered image data blocks of size L2 × L2 = 128 × 128; take the DCT coefficient histogram features F3 extracted from the tampered image data blocks as the positive sample set and the DCT coefficient histogram features F4 extracted from the non-tampered image data blocks as the negative sample set, and combine them into the second training set;
Step 2d) Following the method of step (1), select 6400 JPEG images for training from the JPEG image database and extract from them X3 = 102400 non-tampered image data blocks and tampered image data blocks of size L3 × L3 = 256 × 256; take the DCT coefficient histogram features F5 extracted from the tampered image data blocks as the positive sample set and the DCT coefficient histogram features F6 extracted from the non-tampered image data blocks as the negative sample set, and combine them into the third training set;
Step 2e) Following the method of step (1), select 6400 JPEG images for training from the JPEG image database and extract from them X4 = 1638400 non-tampered image data blocks of size L4 × L4 = 64 × 64 and tampered image data blocks with QF1 > QF2; take the DCT coefficient histogram features F7 extracted from the tampered image data blocks as the positive sample set and the DCT coefficient histogram features F8 extracted from the non-tampered image data blocks as the negative sample set, and combine them into the fourth training set, where QF1 is the first compression quality factor and QF2 is the second compression quality factor;
Step 2f) Train the first deep neural network with the first training set, the second deep neural network with the second training set, the third deep neural network with the third training set, and the fourth deep neural network with the fourth training set, obtaining the first deep neural network Net1, the second deep neural network Net2, the third deep neural network Net3, and the fourth deep neural network Net4; these four deep neural networks have identical structures but different parameters;
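A minimal training-loop sketch for step 2f) follows, reusing the MultiScaleNet sketch given after step 2a). The optimizer, learning rate, batch size and number of epochs are assumptions, since the method only states that each network is trained on its own training set (by backpropagation, per the G06N3/084 classification); the random tensors stand in for a real training set.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def train_network(model, features, labels, epochs=10, lr=1e-3, batch_size=256):
    # Step 2f): train one network on one training set. The model already ends in a
    # Softmax layer, so the loss is the negative log-likelihood of the log of its
    # output, which is equivalent to cross-entropy on the underlying logits.
    loader = DataLoader(TensorDataset(features, labels),
                        batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.NLLLoss()
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            probs = model(x)
            loss = criterion(torch.log(probs + 1e-12), y)
            loss.backward()               # backpropagation
            optimizer.step()
    return model

# Illustrative call with random stand-ins for the first training set; Net2, Net3
# and Net4 are obtained by the same call on their own training sets.
X1 = torch.randn(2048, 279)
y1 = torch.randint(0, 2, (2048,))
net1 = train_network(MultiScaleNet(), X1, y1)   # MultiScaleNet from the step 2a) sketch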
Step 3) Obtain the preliminary tampering detection results S(1,1) and S(2,1) of the data block corresponding to one DCT coefficient histogram feature F of the JPEG image to be examined:
Step 3a) Input one DCT coefficient histogram feature F from step (1d) separately into the first deep neural network Net1, the second deep neural network Net2, and the third deep neural network Net3 from step (2f), obtaining the outputs s1(1,1) and s1(2,1) of Net1, s2(1,1) and s2(2,1) of Net2, and s3(1,1) and s3(2,1) of Net3. Taking s1(1,1) and s1(2,1) as an example, they are two numbers between 0 and 1 with s1(1,1) + s1(2,1) = 1; s1(1,1) represents the probability, output by the first deep neural network Net1, that the data block corresponding to this DCT coefficient histogram feature F has been tampered with, and s1(2,1) represents the probability that the data block has not been tampered with;
Step 3b) Perform weighted fusion of s1(1,1), s1(2,1), s2(1,1), s2(2,1), s3(1,1), and s3(2,1) to obtain the probability that the data block corresponding to this DCT coefficient histogram feature F has been tampered with and the probability that it has not; take the tampered probability as the preliminary tampering detection result S(1,1) of the data block and the non-tampered probability as its preliminary tampering detection result S(2,1), where S(1,1) and S(2,1) are two numbers between 0 and 1 with S(1,1) + S(2,1) = 1. The weighted fusion is performed as:
S(1,1) = w1 × s1(1,1) + w2 × s2(1,1) + w3 × s3(1,1),
S(2,1) = w1 × s1(2,1) + w2 × s2(2,1) + w3 × s3(2,1),
s.t. w1, w2, w3 ≤ 1, w1 + w2 + w3 = 1
where S(1,1) and S(2,1) are the preliminary tampering detection results obtained by weighted fusion of s1(1,1), s1(2,1), s2(1,1), s2(2,1), s3(1,1), and s3(2,1); w1 = 0.8 is the weight of the first deep neural network Net1, w2 = 0.1 is the weight of the second deep neural network Net2, and w3 = 0.1 is the weight of the third deep neural network Net3;
Step 4) Obtain the final tampering detection results Sl(1,1) and Sl(2,1) of the data block corresponding to one DCT coefficient histogram feature F of the JPEG image to be examined:
Step 4a) Compute |S(1,1) - S(2,1)| and set the threshold t = 0.3; this threshold is an empirical value;
Step 4b) Compare |S(1,1) - S(2,1)| with t. If |S(1,1) - S(2,1)| ≥ t, take S(1,1) and S(2,1) directly as the final tampering detection results Sl(1,1) and Sl(2,1) of the data block corresponding to this DCT coefficient histogram feature F. If |S(1,1) - S(2,1)| < t, the gap between S(1,1) and S(2,1) is small, that is, it is difficult to judge whether the data block corresponding to this DCT coefficient histogram feature F has been tampered with; this situation generally arises when the first compression quality factor of the JPEG image is greater than the second compression quality factor, and the fourth deep neural network Net4 is then needed to assist detection. In that case, input the DCT coefficient histogram feature F into the fourth deep neural network Net4, obtain its outputs s4(1,1) and s4(2,1), and take s4(1,1) and s4(2,1) as the final tampering detection results Sl(1,1) and Sl(2,1) of the data block;
Step 5) Obtain the final tampering detection results Sl k(1,1) and Sl k(2,1) of the data blocks corresponding to the other N-1 = 14399 DCT coefficient histogram features Fk of the JPEG image to be examined:
Step 5a) Following the method of step (3), obtain the preliminary tampering detection results Sk(1,1) and Sk(2,1) of the data blocks corresponding to the other N-1 = 14399 DCT coefficient histogram features Fk of the JPEG image to be examined, where k is an index and k = 1, 2, ..., N-1;
Step 5b) Following the method of step (4), obtain the final tampering detection results Sl k(1,1) and Sl k(2,1) of the data blocks corresponding to the other N-1 = 14399 DCT coefficient histogram features Fk, where k is an index and k = 1, 2, ..., N-1;
Step 6) Obtain the forensic result map of the JPEG image to be examined:
Step 6a) Merge the final tampering detection results Sl(1,1) and Sl(2,1) of the data block corresponding to one DCT coefficient histogram feature F with the final tampering detection results Sl k(1,1) and Sl k(2,1) of the data blocks corresponding to the other N-1 = 14399 DCT coefficient histogram features Fk, obtaining the final tampering detection results Sl p(1,1) and Sl p(2,1) of all N = 14400 data blocks of the JPEG image to be examined, where k is an index with k = 1, 2, ..., N-1 and p is an index with p = 1, 2, ..., N;
Step 6b) Use the value of the final tampering detection result Sl p(1,1) of the p-th data block of the JPEG image to be examined to replace the pixel values of the 8 × 8 small image block at the center of the image block at the position corresponding to the p-th data block, where p is an index and p = 1, 2, ..., N, obtaining the tampering probability map of the JPEG image to be examined; the values in this map are all numbers between 0 and 1;
Step 6c) Binarize the tampering probability map of the JPEG image to be examined, labeling tampered regions black and non-tampered regions white, to obtain the forensic result map of the JPEG image.
The technical effect of the present invention is further described below in conjunction with a simulation experiment.
1. Simulation conditions and content:
The computer environment of the simulation experiment of the present invention is an Intel(R) Core i5-3470 3.20 GHz processor, 8 GB of memory, and the Windows 7 operating system; the simulation software is MATLAB R2015b. The present invention is simulated on the double JPEG compression localization image database released by the University of Florence. The image database used contains 10000 high-definition JPEG images of size 1024 × 1024 whose left half has undergone single compression and whose right half has undergone double compression; 6400 of these JPEG images are selected for training and another 2000 JPEG images for testing. The values of L, L1, L2, L3, and L4 are 64, 64, 128, 256, and 64, respectively; the threshold t is 0.3; and the values of w1, w2, and w3 are 0.8, 0.1, and 0.1, respectively.
The comparison method for the method of the present invention is the double JPEG compression forensics method based on convolutional neural networks. The performance of image forensics is assessed by the image forensics accuracy of the comparison method and of the method of the present invention on the double JPEG compression localization image database released by the University of Florence. The results are shown in Fig. 2, a line chart of image forensics accuracy for different second compression quality factors; the horizontal axis in Fig. 2 indicates the second compression quality factor of the JPEG image, and the vertical axis indicates the image forensics accuracy.
2. Analysis of simulation results:
The simulation results in Fig. 2 show that, when the present invention is used to perform image forensics on JPEG images, the image forensics accuracy is clearly higher than that of the existing double JPEG compression forensics method based on convolutional neural networks; therefore, compared with the prior art, the present invention improves the accuracy of image forensics.

Claims (3)

1. A double JPEG compressed image forensics method based on a deep multi-scale network, characterized by comprising the following steps:
(1) extracting N DCT coefficient histogram features F of the JPEG image to be examined:
(1a) reading in the image header file of a JPEG image to be examined and extracting the DCT coefficients from the image header file to obtain a DCT coefficient matrix of size m × n, where m ≥ 32 and n ≥ 32;
(1b) detecting whether the numbers of rows and columns of the DCT coefficient matrix are both divisible by L; if so, executing step (1c); otherwise, padding the rightmost side of the DCT coefficient matrix with L·⌈n/L⌉ - n zero-valued columns and its lower side with L·⌈m/L⌉ - m zero-valued rows, and then executing step (1c), where 32 ≤ L ≤ 96 and L is a multiple of 8;
(1c) with a stride of 8 pixels, extracting N DCT coefficient data blocks of size L × L from the DCT coefficient matrix in row-major order to form a data block set;
(1d) dividing each of the N data blocks in the data block set into L²/64 sub-blocks of size 8 × 8, extracting the 2nd to the 10th sub-blocks from each data block according to the zigzag ordering to obtain 9N sub-blocks of size 8 × 8, and then extracting from each sub-block the DCT coefficient histogram of length 31 over the positions {-15, -14, ..., 14, 15} to form the N 279-dimensional DCT coefficient histogram features F of the JPEG image to be examined;
(2) training four deep neural networks:
(2a) building four deep neural networks, namely a first, second, third, and fourth deep neural network, each with the same basic structure consisting of, stacked in sequence, a first convolutional layer, a first pooling layer, a second convolutional layer, a second pooling layer, a first fully connected layer, a second fully connected layer, a third fully connected layer, and a Softmax layer;
(2b) following the method of step (1), extracting X1 non-tampered image data blocks and tampered image data blocks of size L1 × L1 from a JPEG image database, taking the DCT coefficient histogram features F1 extracted from the tampered image data blocks as a positive sample set and the DCT coefficient histogram features F2 extracted from the non-tampered image data blocks as a negative sample set, and combining the positive and negative sample sets into a first training set, where X1 ≥ 10, 32 ≤ L1 ≤ 96, and L1 is a multiple of 8;
(2c) following the method of step (1), extracting X2 non-tampered image data blocks and tampered image data blocks of size L2 × L2 from the JPEG image database, taking the DCT coefficient histogram features F3 extracted from the tampered image data blocks as a positive sample set and the DCT coefficient histogram features F4 extracted from the non-tampered image data blocks as a negative sample set, and combining the positive and negative sample sets into a second training set, where X2 ≥ 10, 96 < L2 ≤ 160, and L2 is a multiple of 8;
(2d) following the method of step (1), extracting X3 non-tampered image data blocks and tampered image data blocks of size L3 × L3 from the JPEG image database, taking the DCT coefficient histogram features F5 extracted from the tampered image data blocks as a positive sample set and the DCT coefficient histogram features F6 extracted from the non-tampered image data blocks as a negative sample set, and combining the positive and negative sample sets into a third training set, where X3 ≥ 10, 160 ≤ L3 ≤ 256, and L3 is a multiple of 8;
(2e) following the method of step (1), extracting X4 non-tampered image data blocks of size L4 × L4 and tampered image data blocks with QF1 > QF2 from the JPEG image database, taking the DCT coefficient histogram features F7 extracted from the tampered image data blocks as a positive sample set and the DCT coefficient histogram features F8 extracted from the non-tampered image data blocks as a negative sample set, and combining the positive and negative sample sets into a fourth training set, where X4 ≥ 10, 32 ≤ L4 ≤ 96, L4 is a multiple of 8, QF1 is the first compression quality factor, and QF2 is the second compression quality factor;
(2f) training the first deep neural network with the first training set, the second deep neural network with the second training set, the third deep neural network with the third training set, and the fourth deep neural network with the fourth training set, obtaining a first deep neural network Net1, a second deep neural network Net2, a third deep neural network Net3, and a fourth deep neural network Net4;
(3) obtaining the preliminary tampering detection results S(1,1) and S(2,1) of the data block corresponding to one DCT coefficient histogram feature F of the JPEG image to be examined:
(3a) inputting one DCT coefficient histogram feature F from step (1d) separately into the first deep neural network Net1, the second deep neural network Net2, and the third deep neural network Net3 from step (2f), obtaining the outputs s1(1,1) and s1(2,1) of Net1, s2(1,1) and s2(2,1) of Net2, and s3(1,1) and s3(2,1) of Net3;
(3b) performing weighted fusion of s1(1,1), s1(2,1), s2(1,1), s2(2,1), s3(1,1), and s3(2,1) to obtain the probability that the data block corresponding to this DCT coefficient histogram feature F has been tampered with and the probability that it has not, taking the tampered probability as the preliminary tampering detection result S(1,1) of the data block and the non-tampered probability as its preliminary tampering detection result S(2,1);
(4) obtaining the final tampering detection results Sl(1,1) and Sl(2,1) of the data block corresponding to one DCT coefficient histogram feature F of the JPEG image to be examined:
(4a) computing |S(1,1) - S(2,1)| and setting a threshold t;
(4b) comparing |S(1,1) - S(2,1)| with t: if |S(1,1) - S(2,1)| ≥ t, taking S(1,1) and S(2,1) as the final tampering detection results Sl(1,1) and Sl(2,1) of the data block corresponding to this DCT coefficient histogram feature F; if |S(1,1) - S(2,1)| < t, inputting the DCT coefficient histogram feature F into the fourth deep neural network Net4, obtaining its outputs s4(1,1) and s4(2,1), and taking s4(1,1) and s4(2,1) as the final tampering detection results Sl(1,1) and Sl(2,1) of the data block;
(5) obtaining the final tampering detection results Sl k(1,1) and Sl k(2,1) of the data blocks corresponding to the other N-1 DCT coefficient histogram features Fk of the JPEG image to be examined:
(5a) following the method of step (3), obtaining the preliminary tampering detection results Sk(1,1) and Sk(2,1) of the data blocks corresponding to the other N-1 DCT coefficient histogram features Fk of the JPEG image to be examined, where k is an index and k = 1, 2, ..., N-1;
(5b) following the method of step (4), obtaining the final tampering detection results Sl k(1,1) and Sl k(2,1) of the data blocks corresponding to the other N-1 DCT coefficient histogram features Fk, where k is an index and k = 1, 2, ..., N-1;
(6) obtaining the forensic result map of the JPEG image to be examined:
(6a) merging the final tampering detection results Sl(1,1) and Sl(2,1) of the data block corresponding to one DCT coefficient histogram feature F with the final tampering detection results Sl k(1,1) and Sl k(2,1) of the data blocks corresponding to the other N-1 DCT coefficient histogram features Fk, obtaining the final tampering detection results Sl p(1,1) and Sl p(2,1) of all N data blocks of the JPEG image to be examined, where k is an index with k = 1, 2, ..., N-1 and p is an index with p = 1, 2, ..., N;
(6b) using the value of the final tampering detection result Sl p(1,1) of the p-th data block of the JPEG image to be examined to replace the pixel values of the 8 × 8 small image block at the center of the image block at the position corresponding to the p-th data block, where p is an index and p = 1, 2, ..., N, obtaining the tampering probability map of the JPEG image to be examined;
(6c) binarizing the tampering probability map of the JPEG image to be examined to obtain the forensic result map of the JPEG image.
2. The double JPEG compressed image forensics method based on a deep multi-scale network according to claim 1, characterized in that the N 279-dimensional DCT coefficient histogram features F of the JPEG image to be examined described in step (1d) have the expression:
F = {Hi(-15), Hi(-14), ..., Hi(-2), Hi(-1), Hi(0), Hi(1), Hi(2), ..., Hi(14), Hi(15)}, i ∈ {2, 3, ..., 9, 10}
where Hi(x) denotes the DCT coefficient histogram, at position x, of the i-th 8 × 8 sub-block in the zigzag ordering.
3. The double JPEG compressed image forensics method based on a deep multi-scale network according to claim 1, characterized in that the weighted fusion described in step (3b) is performed as:
S(1,1) = w1 × s1(1,1) + w2 × s2(1,1) + w3 × s3(1,1),
S(2,1) = w1 × s1(2,1) + w2 × s2(2,1) + w3 × s3(2,1),
s.t. w1, w2, w3 ≤ 1, w1 + w2 + w3 = 1
where S(1,1) and S(2,1) are the preliminary tampering detection results obtained by weighted fusion of s1(1,1), s1(2,1), s2(1,1), s2(2,1), s3(1,1), and s3(2,1); w1 is the weight of the first deep neural network Net1, w2 is the weight of the second deep neural network Net2, and w3 is the weight of the third deep neural network Net3.
CN201810315119.6A 2017-12-29 2018-04-10 Depth multi-scale network-based secondary JPEG compressed image evidence obtaining method Active CN108537762B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2017114706574 2017-12-29
CN201711470657 2017-12-29

Publications (2)

Publication Number Publication Date
CN108537762A true CN108537762A (en) 2018-09-14
CN108537762B CN108537762B (en) 2019-12-24

Family

ID=63480646

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810315119.6A Active CN108537762B (en) 2017-12-29 2018-04-10 Depth multi-scale network-based secondary JPEG compressed image evidence obtaining method

Country Status (1)

Country Link
CN (1) CN108537762B (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103544692A (en) * 2012-07-13 2014-01-29 深圳市智信达软件有限公司 Blind detection method for tamper with double-compressed JPEG (joint photographic experts group) images on basis of statistical judgment
CN104661037A (en) * 2013-11-19 2015-05-27 中国科学院深圳先进技术研究院 Tampering detection method and system for compressed image quantization table
US20170091588A1 (en) * 2015-09-02 2017-03-30 Sam Houston State University Exposing inpainting image forgery under combination attacks with hybrid large feature mining

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Irene Amerini et al., "Localization of JPEG Double Compression Through Multi-domain Convolutional Neural Networks," 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110717851A (en) * 2019-10-18 2020-01-21 京东方科技集团股份有限公司 Image processing method and device, neural network training method and storage medium
WO2021073493A1 (en) * 2019-10-18 2021-04-22 京东方科技集团股份有限公司 Image processing method and device, neural network training method, image processing method of combined neural network model, construction method of combined neural network model, neural network processor and storage medium
CN110717851B (en) * 2019-10-18 2023-10-27 京东方科技集团股份有限公司 Image processing method and device, training method of neural network and storage medium
US11954822B2 (en) 2019-10-18 2024-04-09 Boe Technology Group Co., Ltd. Image processing method and device, training method of neural network, image processing method based on combined neural network model, constructing method of combined neural network model, neural network processor, and storage medium
CN112614116A (en) * 2020-12-28 2021-04-06 厦门市美亚柏科信息股份有限公司 Tamper detection method and system for digital image
CN112614116B (en) * 2020-12-28 2022-06-28 厦门市美亚柏科信息股份有限公司 Digital image tampering detection method and system

Also Published As

Publication number Publication date
CN108537762B (en) 2019-12-24


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant