CN107909536B - JPEG image-oriented steganalysis blind detection method - Google Patents

Info

Publication number
CN107909536B
CN107909536B (application CN201710742641.8A)
Authority
CN
China
Prior art keywords: features, image, block, absNJ, secret
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710742641.8A
Other languages
Chinese (zh)
Other versions
CN107909536A (en)
Inventor
王丽娜
王汉森
翟黎明
徐一波
任延珍
Current Assignee
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN201710742641.8A priority Critical patent/CN107909536B/en
Publication of CN107909536A publication Critical patent/CN107909536A/en
Application granted granted Critical
Publication of CN107909536B publication Critical patent/CN107909536B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 — General purpose image data processing
    • G06T 1/0021 — Image watermarking
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 — Pattern recognition
    • G06F 18/20 — Analysing
    • G06F 18/24 — Classification techniques
    • G06T 2201/00 — General purpose image data processing
    • G06T 2201/005 — Image watermarking
    • G06T 2201/0065 — Extraction of an embedded watermark; Reliable detection

Abstract

The invention discloses a blind steganalysis detection method oriented to JPEG images. Since JPEG steganography modifies DCT (discrete cosine transform) coefficients, the method combines the widely applied adjacent joint density feature extraction algorithm with a bilateral large-margin hypersphere classifier to train a universal detection model that can detect secret-carrying images generated by unknown steganographic algorithms. The advantages of the invention are: most current universal blind detection models are trained with one-class classifiers and have a low detection rate, while models trained with conventional two-class classifiers have difficulty detecting unknown algorithms; by using a two-class hypersphere classifier, the method detects unknown algorithms accurately and achieves a higher detection rate than one-class classifiers.

Description

JPEG image-oriented steganalysis blind detection method
Technical Field
The invention relates to the technical field of computer information hiding, and in particular to a blind steganalysis detection method and a method for building a universal detection model.
Background
With the rapid development of network, communication and multimedia signal processing technology, information hiding has become a new research hotspot in the field of information security. Steganography, an important branch of information hiding, studies how to hide information in openly transmitted multimedia data to realize covert communication. Steganalysis, in turn, studies attacks on steganography, i.e., how to detect, extract or destroy hidden secret information.
Driven by the development and application requirements of information hiding technology, many steganographic algorithms for JPEG images, such as F5, MB2 and MME, have been proposed and achieve good results. Although each of these algorithms can be detected effectively by a dedicated detector, in practice it is difficult to select an appropriate classification model when the embedding algorithm is unknown. Effectively detecting steganographic images generated by unknown steganographic algorithms is therefore an important problem in steganalysis.
Meanwhile, although many universal steganographic feature extraction algorithms have been proposed for blind steganalysis, in practical applications features must be extracted from confirmed secret-carrying images before a model can be trained. The number of secret-carrying images that can be obtained is limited while non-secret-carrying images are plentiful, and a model trained on such imbalanced data is biased; the resulting deviation in detection accuracy leads to a relatively high miss rate. One-class approaches can detect images generated by unknown steganographic algorithms relatively effectively, but their detection rate is low and often fails to meet practical requirements.
To address these problems, this patent proposes training a universal detection model with a widely used universal feature extraction algorithm and a two-class hypersphere classifier, thereby realizing universal blind steganalysis detection.
Disclosure of Invention
The invention aims to provide a method for training a universal blind steganalysis detector for JPEG images. The method extracts features with a universal feature extraction algorithm, the adjacent joint density algorithm, trains the model with a bilateral maximum-margin hypersphere classifier (SS2LM, Small Sphere and Two Large Margins), and searches for the optimal parameters by grid search, thereby training a universal detection model.
The technical scheme of the invention is a blind steganalysis detection algorithm comprising two parts: model training and model detection.
The process of model building comprises the following steps:
Step 1, extract the intra-block features of the DCT coefficient matrix of the image. The horizontal value absNJ_{1h} and vertical value absNJ_{1v} of the intra-block adjacent joint density matrix are calculated by the following formulas:

absNJ_{1h}(x, y) = \frac{\sum_{i=1}^{M} \sum_{j=1}^{N} \sum_{m=1}^{8} \sum_{n=1}^{7} \delta(|c_{ijmn}| = x, |c_{ij,m,n+1}| = y)}{56MN}

absNJ_{1v}(x, y) = \frac{\sum_{i=1}^{M} \sum_{j=1}^{N} \sum_{m=1}^{7} \sum_{n=1}^{8} \delta(|c_{ijmn}| = x, |c_{ij,m+1,n}| = y)}{56MN}

The image is quantized to obtain a DCT coefficient matrix, denoted by the variable F, which comprises M × N blocks; each block is denoted F_{ij} (i = 1, 2, ..., M; j = 1, 2, ..., N) and is an 8 × 8 matrix. We write c_{ijmn} for the DCT coefficient in row m and column n of block F_{ij}. In the two formulas above, δ = 1 if the condition in parentheses holds and δ = 0 otherwise.
In view of computational efficiency, absNJ_1, the average of the two directions, is defined as the intra-block adjacent joint density feature:

absNJ_1(x, y) = \frac{absNJ_{1h}(x, y) + absNJ_{1v}(x, y)}{2}
In this algorithm, x and y are integers in the interval [0, 5], so each takes 6 values and the feature contains 6 × 6 = 36 dimensions in total.
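As an illustration, the intra-block statistic above can be sketched in Python with NumPy. The (M, N, 8, 8) array layout and the function name are assumptions made for this sketch, not part of the patent:

```python
import numpy as np

def intra_block_absNJ1(coeffs, T=5):
    # coeffs: hypothetical (M, N, 8, 8) array of quantized DCT
    # coefficients, i.e. M x N blocks of 8 x 8 each.
    a = np.abs(coeffs)
    M, N = a.shape[:2]
    nj = np.zeros((T + 1, T + 1))
    for x in range(T + 1):
        for y in range(T + 1):
            # horizontal pairs (m, n) and (m, n+1) inside each block: 56 per block
            h = np.sum((a[..., :, :-1] == x) & (a[..., :, 1:] == y))
            # vertical pairs (m, n) and (m+1, n) inside each block: 56 per block
            v = np.sum((a[..., :-1, :] == x) & (a[..., 1:, :] == y))
            # average of the two directions, each normalized by 56MN
            nj[x, y] = (h + v) / (2.0 * 56 * M * N)
    return nj  # (6, 6) matrix; raveling it yields the 36-dim feature
```

For an all-zero coefficient array every pair falls in the (0, 0) bin, so the matrix sums to 1, which is a quick sanity check of the normalization.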
Step 2, extract the inter-block features of the DCT coefficient matrix of the image. The horizontal feature absNJ_{2h} and vertical feature absNJ_{2v} of the inter-block adjacent joint density are calculated by the following formulas:

absNJ_{2h}(x, y) = \frac{\sum_{i=1}^{M} \sum_{j=1}^{N-1} \sum_{m=1}^{8} \sum_{n=1}^{8} \delta(|c_{ijmn}| = x, |c_{i,j+1,mn}| = y)}{64M(N-1)}

absNJ_{2v}(x, y) = \frac{\sum_{i=1}^{M-1} \sum_{j=1}^{N} \sum_{m=1}^{8} \sum_{n=1}^{8} \delta(|c_{ijmn}| = x, |c_{i+1,j,mn}| = y)}{64(M-1)N}

As in step 1, the image is quantized to obtain a DCT coefficient matrix F comprising M × N blocks F_{ij} (i = 1, 2, ..., M; j = 1, 2, ..., N), each an 8 × 8 matrix, with c_{ijmn} denoting the DCT coefficient in row m and column n of block F_{ij}; δ = 1 if the condition in parentheses holds and δ = 0 otherwise.
The inter-block adjacent joint density feature absNJ_2 is defined as the average of the two directions:

absNJ_2(x, y) = \frac{absNJ_{2h}(x, y) + absNJ_{2v}(x, y)}{2}

As before, x and y take values in [0, 5], so absNJ_2 also contains 36 dimensions.
Step 3, concatenate the inter-block and intra-block adjacent joint density features to obtain a 72-dimensional adjacent joint density feature:

Feature = [absNJ_1(x, y), absNJ_2(x, y)], x, y = 0, 1, 2, 3, 4, 5
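The inter-block statistic and the 72-dimensional concatenation can be sketched in the same way. The (M, N, 8, 8) array layout and function names are again assumptions; the 36 intra-block dimensions are assumed to come from an analogous intra-block helper:

```python
import numpy as np

def inter_block_absNJ2(coeffs, T=5):
    # coeffs: hypothetical (M, N, 8, 8) array of quantized DCT coefficients.
    a = np.abs(coeffs)
    M, N = a.shape[:2]
    nj = np.zeros((T + 1, T + 1))
    for x in range(T + 1):
        for y in range(T + 1):
            # same (m, n) position in horizontally adjacent blocks (i, j), (i, j+1)
            h = np.sum((a[:, :-1] == x) & (a[:, 1:] == y)) / (64.0 * M * (N - 1))
            # same (m, n) position in vertically adjacent blocks (i, j), (i+1, j)
            v = np.sum((a[:-1, :] == x) & (a[1:, :] == y)) / (64.0 * (M - 1) * N)
            nj[x, y] = (h + v) / 2.0
    return nj

def combine_features(absNJ1, absNJ2):
    # concatenate two (6, 6) density matrices into the 72-dim feature vector
    return np.concatenate([absNJ1.ravel(), absNJ2.ravel()])
```

Each normalization divides by the number of coefficient pairs actually compared (64 positions per block pair), so an all-zero input again concentrates all mass in the (0, 0) bin.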
Step 4, label the 72-dimensional adjacent joint density features of the secret-carrying and non-secret-carrying images: features of non-secret-carrying images receive the label +1, features of secret-carrying images the label -1, and the labeled features are fed into the SS2LM classifier for training. The model formula of the classifier is as follows:
\min_{R, c, \rho, \xi} \; R^2 - \nu\rho^2 + C \sum_{i=1}^{s} \xi_i

The constraint conditions are: \|\phi(x_i) - c\|^2 \le R^2 - \delta\rho^2 + \xi_i, \quad i = 1, ..., m_1

\|\phi(x_i) - c\|^2 \ge R^2 + \rho^2 - \xi_i, \quad i = m_1 + 1, ..., s

\xi_i \ge 0, \quad i = 1, ..., s

where R and c denote the radius and center of the optimal hypersphere, ξ = [ξ_1, ξ_2, ..., ξ_s]^T ∈ R^s are the slack variables, ρ is the margin from the outer boundary, i.e., the abnormal data, to the surface of the hypersphere, and δ (0 ≤ δ ≤ ν) is the ratio between the inner and outer margins, so the margin from the inner boundary, i.e., the normal data, to the surface of the hypersphere can be represented by δρ.
The classifier effect is shown in fig. 1.
The feature model detection process comprises the following steps:
step 1, quantizing an image to be detected into a DCT coefficient matrix, and extracting adjacent joint density features, including 36-dimensional inter-block features and 36-dimensional intra-block features.
Step 2, combine the inter-block and intra-block features into a 72-dimensional adjacent joint density feature and add labels: the secret-carrying image receives a "-1" label and the non-secret-carrying image a "+1" label.
Step 3, classify the 72-dimensional labeled features with the model trained in the training stage; the decision function is:

f(x) = \mathrm{sign}\left(R^2 - \|\phi(x) - c\|^2\right)
the decision function classifies unknown new feature points x by comparing the distance from the new feature point x to the center c of the hyper-sphere and the radius R. By calculating the distance from each feature point to the center of the hyper sphere | | | φ (x) -c | |, the radius R and the distance are compared, and if the distance is smaller than the radius R, it can be classified as normal data, otherwise it can be classified as abnormal data. The normal data will be labeled as +1 and the abnormal data will be labeled as-1, according to the decision function set forth in the equation above. The classification process is shown in figure 3 below.
Drawings
FIG. 1 is a diagram illustrating the SS2LM classifier training process according to the present invention.
FIG. 2 is a flow chart of feature model training in accordance with the present invention.
Fig. 3 is a schematic diagram of the SS2LM classifier classification process according to the present invention.
FIG. 4 is a flow chart of feature model detection according to the present invention.
Detailed Description
The invention aims to provide a universal blind steganalysis detection method. Features are extracted from secret-carrying and non-secret-carrying images with the adjacent joint density feature extraction algorithm, and the two sets are then used as training data for model training with the SS2LM classifier. A model built in this way offers strong universality, a low miss rate and high recognition accuracy in blind detection, and remains correspondingly stable even when the training data are imbalanced.
The technical scheme of the invention is a method for universal blind steganalysis detection, and the overall recognition procedure comprises two processes: training and detection.
The training process comprises the following implementation steps:
step 1, quantizing a secret-carrying image and an non-secret-carrying image into DCT coefficient matrixes, and respectively extracting 36-dimensional intra-block features and 36-dimensional inter-block features by using an adjacent joint density feature extraction algorithm.
Step 2, combine the intra-block and inter-block features into 72-dimensional adjacent joint density features; add a "-1" label to features extracted from secret-carrying images as negative samples and a "+1" label to features from non-secret-carrying images as positive samples.
Step 3, using the positive and negative samples as training data, perform model training with the SS2LM classifier and tune the optimal parameters by grid search to obtain the optimal hypersphere model, which completes the training process.
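The grid search in step 3 can be sketched generically. The `train_fn` and `score_fn` callables below are hypothetical stand-ins for SS2LM training and validation scoring, since the patent does not fix an interface:

```python
from itertools import product

def grid_search(train_fn, score_fn, param_grid):
    # Exhaustively try every combination in param_grid (a dict mapping
    # parameter name -> list of candidate values) and keep the best.
    best_params, best_score = None, float("-inf")
    keys = sorted(param_grid)
    for values in product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = score_fn(train_fn(**params))  # e.g. held-out validation accuracy
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

In this setting the grid might range over, say, the penalty C, the kernel width, and the margin ratio δ, with `score_fn` returning accuracy on a held-out validation set.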
The detection process comprises the following implementation steps:
step 1, processing the image to be detected in the step 1 of the training process, and acquiring 36-dimensional intra-block features and 36-dimensional inter-block features.
Step 2, as in step 2 of the training process, combine the two 36-dimensional features into a 72-dimensional adjacent joint density feature, labeling secret-carrying and non-secret-carrying images with "-1" and "+1" respectively.
Step 3, using the labeled features obtained in step 2 as detection samples, classify them with the optimal hypersphere model obtained in step 3 of the training process, and judge from the classification result whether each sample is a secret-carrying image.
The specific implementations described herein are merely illustrative of the spirit of the invention. Those skilled in the art may make various modifications or additions to the described embodiments, or substitute alternatives, for example extracting features from the matrix of absolute values of the quantized DCT coefficients, determining the optimal classifier parameters in other ways, or modifying the SS2LM classifier or using fuzzy boundaries as the decision basis, without departing from the spirit of the invention or exceeding the scope defined by the appended claims.

Claims (1)

1. A JPEG image-oriented blind detection method for steganalysis is characterized by comprising the following steps:
training a feature model, specifically comprising:
step 1, extracting the intra-block features of the DCT coefficient matrix of the image, wherein the horizontal value absNJ_{1h} and the vertical value absNJ_{1v} of the intra-block adjacent joint density matrix are calculated by the following formulas:

absNJ_{1h}(x, y) = \frac{\sum_{i=1}^{M} \sum_{j=1}^{N} \sum_{m=1}^{8} \sum_{n=1}^{7} \delta(|c_{ijmn}| = x, |c_{ij,m,n+1}| = y)}{56MN}

absNJ_{1v}(x, y) = \frac{\sum_{i=1}^{M} \sum_{j=1}^{N} \sum_{m=1}^{7} \sum_{n=1}^{8} \delta(|c_{ijmn}| = x, |c_{ij,m+1,n}| = y)}{56MN}

wherein the image is quantized to obtain a DCT coefficient matrix, denoted by a variable F, comprising M × N blocks, each block being denoted F_{ij}, i = 1, 2, ..., M, j = 1, 2, ..., N, each block being an 8 × 8 matrix, and c_{ijmn} denoting the DCT coefficient in row m and column n of block F_{ij}; in the two formulas above, δ = 1 if the condition in parentheses holds, and δ = 0 otherwise;
in view of computational efficiency, absNJ_1 is defined as the intra-block adjacent joint density feature:

absNJ_1(x, y) = \frac{absNJ_{1h}(x, y) + absNJ_{1v}(x, y)}{2}

wherein x and y are integers in the interval [0, 5], each taking 6 values, so 36-dimensional features are included in total;
step 2, extracting the inter-block features of the DCT coefficient matrix of the image, wherein the horizontal feature absNJ_{2h} and the vertical feature absNJ_{2v} of the inter-block adjacent joint density are calculated by the following formulas:

absNJ_{2h}(x, y) = \frac{\sum_{i=1}^{M} \sum_{j=1}^{N-1} \sum_{m=1}^{8} \sum_{n=1}^{8} \delta(|c_{ijmn}| = x, |c_{i,j+1,mn}| = y)}{64M(N-1)}

absNJ_{2v}(x, y) = \frac{\sum_{i=1}^{M-1} \sum_{j=1}^{N} \sum_{m=1}^{8} \sum_{n=1}^{8} \delta(|c_{ijmn}| = x, |c_{i+1,j,mn}| = y)}{64(M-1)N}

wherein F, F_{ij}, c_{ijmn} and δ are as defined in step 1;
the inter-block adjacent joint density feature absNJ_2 is defined as:

absNJ_2(x, y) = \frac{absNJ_{2h}(x, y) + absNJ_{2v}(x, y)}{2}

similarly, x and y take values in [0, 5], so absNJ_2 also contains 36-dimensional features;
step 3, combining the inter-block and intra-block adjacent joint density features to obtain a 72-dimensional adjacent joint density feature:

Feature = [absNJ_1(x, y), absNJ_2(x, y)], x, y = 0, 1, 2, 3, 4, 5;
step 4, labeling the 72-dimensional adjacent joint density features of the secret-carrying and non-secret-carrying images, labeling the features of the non-secret-carrying images with +1 and the features of the secret-carrying images with -1, and feeding the labeled features into an SS2LM classifier for training; the model formula of the classifier is as follows:

\min_{R, c, \rho, \xi} \; R^2 - \nu\rho^2 + C \sum_{i=1}^{s} \xi_i

the constraint conditions being: \|\phi(x_i) - c\|^2 \le R^2 - \delta\rho^2 + \xi_i, \quad i = 1, ..., m_1

\|\phi(x_i) - c\|^2 \ge R^2 + \rho^2 - \xi_i, \quad i = m_1 + 1, ..., s

\xi_i \ge 0, \quad i = 1, ..., s

where R and c represent the radius and center of the optimal hypersphere, ξ = [ξ_1, ξ_2, ..., ξ_s]^T ∈ R^s represents the slack variables, ρ represents the margin from the outer boundary, i.e., the abnormal data, to the surface of the hypersphere, and δ, with 0 ≤ δ ≤ ν, is the ratio between the inner and outer margins, so that the margin from the inner boundary, i.e., the normal data, to the surface of the hypersphere can be represented by δρ;
the feature model detection step specifically comprises:

step 1, quantizing the image to be detected into a DCT coefficient matrix and extracting its adjacent joint density features, including the 36-dimensional inter-block features and the 36-dimensional intra-block features;

step 2, combining the inter-block and intra-block features into a 72-dimensional adjacent joint density feature and adding labels, the secret-carrying image receiving a "-1" label and the non-secret-carrying image a "+1" label;

step 3, classifying the 72-dimensional labeled features with the model trained in the training stage, wherein the decision function is:

f(x) = \mathrm{sign}\left(R^2 - \|\phi(x) - c\|^2\right)

the decision function classifies an unknown feature point x by comparing its distance from the hypersphere center c with the radius R; the distance \|\phi(x) - c\| is computed for each feature point and compared with R; if the distance is smaller than R, the point is classified as normal data, otherwise as abnormal data; according to the decision function above, normal data are labeled +1 and abnormal data -1.
CN201710742641.8A 2017-08-25 2017-08-25 JPEG image-oriented steganalysis blind detection method Active CN107909536B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710742641.8A CN107909536B (en) 2017-08-25 2017-08-25 JPEG image-oriented steganalysis blind detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710742641.8A CN107909536B (en) 2017-08-25 2017-08-25 JPEG image-oriented steganalysis blind detection method

Publications (2)

Publication Number Publication Date
CN107909536A CN107909536A (en) 2018-04-13
CN107909536B 2021-08-03

Family

ID=61840082

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710742641.8A Active CN107909536B (en) 2017-08-25 2017-08-25 JPEG image-oriented steganalysis blind detection method

Country Status (1)

Country Link
CN (1) CN107909536B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104301733A (en) * 2014-09-06 2015-01-21 南京邮电大学 Video steganalysis method based on feature fusions
CN106548445A (en) * 2016-10-20 2017-03-29 天津大学 Spatial domain picture general steganalysis method based on content

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US8965038B2 (en) * 2012-02-01 2015-02-24 Sam Houston University Steganalysis with neighboring joint density

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN104301733A (en) * 2014-09-06 2015-01-21 南京邮电大学 Video steganalysis method based on feature fusions
CN106548445A (en) * 2016-10-20 2017-03-29 天津大学 Spatial domain picture general steganalysis method based on content

Non-Patent Citations (1)

Title
A Partial-Order Markov JPEG Steganalysis Model Based on Adjacent-Coefficient Relation Pairs; Xiao Haisong, et al.; Journal of Wuhan University (Natural Science Edition); 2014-12-31; Vol. 60, No. 6; pp. 518-523 *

Also Published As

Publication number Publication date
CN107909536A (en) 2018-04-13

Similar Documents

Publication Publication Date Title
CN107704877B (en) Image privacy perception method based on deep learning
CN112396027B (en) Vehicle re-identification method based on graph convolution neural network
CN108228915B (en) Video retrieval method based on deep learning
CN107341463B (en) Face feature recognition method combining image quality analysis and metric learning
CN110866896B (en) Image saliency target detection method based on k-means and level set super-pixel segmentation
CN108537818B (en) Crowd trajectory prediction method based on cluster pressure LSTM
CN108682007B (en) JPEG image resampling automatic detection method based on depth random forest
CN107688829A (en) A kind of identifying system and recognition methods based on SVMs
CN104661037B (en) The detection method and system that compression image quantization table is distorted
CN111027377B (en) Double-flow neural network time sequence action positioning method
CN116052218B (en) Pedestrian re-identification method
CN116452862A (en) Image classification method based on domain generalization learning
CN116206327A (en) Image classification method based on online knowledge distillation
Guo et al. Exposing deepfake face forgeries with guided residuals
CN114842507A (en) Reinforced pedestrian attribute identification method based on group optimization reward
CN116912184B (en) Weak supervision depth restoration image tampering positioning method and system based on tampering area separation and area constraint loss
CN107909536B (en) JPEG image-oriented steganalysis blind detection method
CN111199199B (en) Action recognition method based on self-adaptive context area selection
Weng et al. Adaptive smoothness evaluation and multiple asymmetric histogram modification for reversible data hiding
CN112200831B (en) Dynamic template-based dense connection twin neural network target tracking method
CN108376413B (en) JPEG image recompression detection method based on frequency domain difference statistical characteristics
KR101367821B1 (en) video identification method and apparatus using symmetric information of hierachical image blocks
Sadddique et al. Robust video content authentication using video binary pattern and extreme learning machine
Quan JPEG Steganalysis Based on Local Dimension Estimation
Wang et al. Camera source identification of digital images based on sample selection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant