CN107909536A - JPEG image-oriented steganalysis blind detection method - Google Patents
JPEG image-oriented steganalysis blind detection method
- Publication number
- CN107909536A (application CN201710742641.8A)
- Authority
- CN
- China
- Prior art keywords
- feature
- absnj
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/0021—Image watermarking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2201/00—General purpose image data processing
- G06T2201/005—Image watermarking
- G06T2201/0065—Extraction of an embedded watermark; Reliable detection
Abstract
The present invention discloses a steganalysis blind detection method for JPEG images. Addressing the fact that steganography modifies the DCT coefficients of a JPEG image, the method combines the widely used neighboring joint density feature extraction algorithm with a two-large-margins hypersphere classifier to train a universal detection model, which is then used to detect stego images generated by unknown steganographic algorithms. The advantage of the invention is that most current universal blind detection models are trained with one-class classifiers, whose detection rate is relatively low, while models trained with conventional two-class classifiers have difficulty detecting unknown algorithms; by using a two-class hypersphere classifier, this method detects unknown algorithms comparatively accurately while achieving a higher detection rate than one-class classifiers.
Description
Technical field
The present invention relates to the field of computer information hiding, and in particular to a steganalysis blind detection method and a method for building a universal detection model.
Background technology
With the rapid development of network technology, communication technology and multimedia signal processing, information hiding has emerged as a new branch of cryptographic technique and become a research hotspot in information security. Steganography, an important branch of information hiding, studies how to embed information in publicly transmitted multimedia data to achieve covert communication. Correspondingly, steganalysis studies attacks on steganography, i.e., how to detect, extract or destroy the hidden secret information.
Driven by the development of and demand for information hiding techniques, many JPEG-based steganographic algorithms, such as F5, MB2 and MME, have been proposed and achieve good results. Although each algorithm has a corresponding detection method that detects it effectively, in practice it is difficult to select a suitable classification model. For the steganalysis field, therefore, effectively detecting stego images generated by unknown steganographic algorithms is of great importance.
Meanwhile, for steganalysis blind detection, although many universal steganographic feature extraction algorithms have been proposed, in practical applications features must be extracted from known stego images in order to train a model, and the number of stego images that can be obtained is limited while non-stego images are abundant; a model trained on such imbalanced data has a certain bias, with a larger deviation in detection accuracy and a higher missed-detection rate. There are also single-class models trained with one-class classifiers; although they can detect images generated by unknown steganographic algorithms relatively effectively, their detection rate is comparatively low and in many cases fails to meet requirements.
To address these problems, this patent proposes a method that trains a universal detection model with a widely used universal feature extraction algorithm and a two-class hypersphere classifier, thereby realizing universal steganalysis blind detection.
The content of the invention
The object of the present invention is to provide a method for training a universal steganalysis blind detection model for JPEG images. The method extracts features with a universal feature extraction algorithm, the neighboring joint density algorithm (Neighboring Joint Density), trains a model with a two-large-margins hypersphere classifier (SS2LM, Small Sphere and Two Large Margins), and searches for the optimal parameters by grid search, thereby obtaining a universal detection model with a low missed-detection rate, a relatively high detection rate and good versatility.
The technical scheme of the invention is a steganalysis blind detection algorithm comprising two parts: model training and model detection.
The modeling process comprises the following steps:
Step 1: extract the in-block features from the DCT coefficient matrix of the image. The horizontal value absNJ1h(x, y) and the vertical value absNJ1v(x, y) of the in-block neighboring joint density matrix are calculated respectively by:

absNJ1h(x, y) = [ Σ_{i=1}^{M} Σ_{j=1}^{N} Σ_{m=1}^{8} Σ_{n=1}^{7} δ(|c_{ijmn}| = x, |c_{ijm(n+1)}| = y) ] / (56MN)

absNJ1v(x, y) = [ Σ_{i=1}^{M} Σ_{j=1}^{N} Σ_{m=1}^{7} Σ_{n=1}^{8} δ(|c_{ijmn}| = x, |c_{ij(m+1)n}| = y) ] / (56MN)

After quantization the image yields a DCT coefficient matrix, denoted by the variable F, which consists of M × N blocks; each block F_{ij} (i = 1, 2, ..., M; j = 1, 2, ..., N) is an 8 × 8 matrix, and c_{ijmn} denotes the DCT coefficient in row m, column n of block F_{ij}. In the two formulas above, δ = 1 when both equations in the parentheses hold, and δ = 0 otherwise.
In view of computational efficiency, the in-block neighboring joint density feature absNJ1 is defined as:

absNJ1(x, y) = [ absNJ1h(x, y) + absNJ1v(x, y) ] / 2

In this algorithm, x and y are integers in the interval [0, 5], each taking 6 possible values, so the feature contains 36 dimensions in total.
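As an illustration, the in-block neighboring joint density of Step 1 can be sketched in Python with NumPy. The (M, N, 8, 8) array layout and the function name are assumptions for this sketch, not part of the patent:

```python
import numpy as np

def abs_nj1(F, T=5):
    """In-block neighboring joint density feature (Step 1).

    F: quantized DCT coefficients arranged as (M, N, 8, 8), i.e. an
    M x N grid of 8 x 8 blocks. Returns the 36-dim absNJ1 vector for T = 5.
    """
    M, N = F.shape[:2]
    a = np.abs(F)
    feat = np.zeros((T + 1, T + 1))
    for x in range(T + 1):
        for y in range(T + 1):
            # horizontal neighbours inside each block: 8 rows x 7 column pairs
            h = np.sum((a[:, :, :, :-1] == x) & (a[:, :, :, 1:] == y)) / (56.0 * M * N)
            # vertical neighbours inside each block: 7 row pairs x 8 columns
            v = np.sum((a[:, :, :-1, :] == x) & (a[:, :, 1:, :] == y)) / (56.0 * M * N)
            feat[x, y] = (h + v) / 2.0  # absNJ1 = (absNJ1h + absNJ1v) / 2
    return feat.ravel()
```

On an all-zero coefficient array every neighbouring pair is (0, 0), so the (0, 0) bin is 1 and all other bins are 0, which is a quick sanity check of the normalization constants.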
Step 2: extract the between-block features from the DCT coefficient matrix of the image. The between-block horizontal feature absNJ2h(x, y) and vertical feature absNJ2v(x, y) of the neighboring joint density are calculated by:

absNJ2h(x, y) = [ Σ_{m=1}^{8} Σ_{n=1}^{8} Σ_{i=1}^{M} Σ_{j=1}^{N−1} δ(|c_{ijmn}| = x, |c_{i(j+1)mn}| = y) ] / (64M(N − 1))

absNJ2v(x, y) = [ Σ_{m=1}^{8} Σ_{n=1}^{8} Σ_{i=1}^{M−1} Σ_{j=1}^{N} δ(|c_{ijmn}| = x, |c_{(i+1)jmn}| = y) ] / (64(M − 1)N)

Here F, F_{ij} and c_{ijmn} are defined as in Step 1, and δ = 1 when both equations in the parentheses hold, δ = 0 otherwise.
The between-block neighboring joint density feature absNJ2 is defined as:

absNJ2(x, y) = [ absNJ2h(x, y) + absNJ2v(x, y) ] / 2

Similarly, x and y take values in [0, 5], so absNJ2 also contains 36 dimensions.
Step 3: combine the between-block and in-block neighboring joint density features to obtain the 72-dimensional neighboring joint density feature:

Feature = [absNJ1(x, y), absNJ2(x, y)], x, y = 0, 1, 2, 3, 4, 5
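The between-block feature of Step 2 and the concatenation of Step 3 can be sketched the same way; again the (M, N, 8, 8) layout and the function names are illustrative assumptions:

```python
import numpy as np

def abs_nj2(F, T=5):
    """Between-block neighboring joint density feature (Step 2).

    F: quantized DCT coefficients as (M, N, 8, 8). Returns the 36-dim
    absNJ2 vector for T = 5.
    """
    M, N = F.shape[:2]
    a = np.abs(F)
    feat = np.zeros((T + 1, T + 1))
    for x in range(T + 1):
        for y in range(T + 1):
            # same (m, n) position in horizontally adjacent blocks
            h = np.sum((a[:, :-1] == x) & (a[:, 1:] == y)) / (64.0 * M * (N - 1))
            # same (m, n) position in vertically adjacent blocks
            v = np.sum((a[:-1, :] == x) & (a[1:, :] == y)) / (64.0 * (M - 1) * N)
            feat[x, y] = (h + v) / 2.0  # absNJ2 = (absNJ2h + absNJ2v) / 2
    return feat.ravel()

def combine(absnj1_vec, absnj2_vec):
    """Step 3: concatenate the two 36-dim vectors into the 72-dim feature."""
    return np.concatenate([absnj1_vec, absnj2_vec])
```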
Step 4: label the 72-dimensional neighboring joint density features of the stego and non-stego images, assigning label +1 to the features of non-stego images and label -1 to the features of stego images, and feed them into the SS2LM classifier for training. The classifier model is:

min_{R,c,ξ,ρ}  R² − vρ² + (1/(v_1 m_1)) Σ_{i=1}^{m_1} ξ_i + (1/(v_2 m_2)) Σ_{i=m_1+1}^{s} ξ_i

subject to: ||φ(x_i) − c||² ≤ R² − δρ² + ξ_i, i = 1, ..., m_1
||φ(x_i) − c||² ≥ R² + ρ² − ξ_i, i = m_1 + 1, ..., s
ξ_i ≥ 0, i = 1, ..., s

where R and c are the radius and center of the optimal hypersphere, ξ = [ξ_1, ξ_2, ..., ξ_s]ᵀ ∈ R^s is the slack variable, ρ is the distance from the outer boundary (the abnormal data) to the edge of the hypersphere, and δ (0 ≤ δ ≤ v) is the ratio of the outer boundary to the inner boundary; hence the distance from the inner boundary (the normal data) to the edge of the hypersphere can be expressed as δρ.
The classifier is illustrated in Fig. 1.
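The SS2LM model above is found by solving a quadratic program. As a rough, purely illustrative stand-in (not the patent's solver), the idea of a small sphere with slack penalties can be sketched by centering the sphere on the non-stego samples and picking the radius that minimizes a simplified objective on a 1-D grid; all names and the simplification are assumptions of this sketch:

```python
import numpy as np

def fit_sphere(X_pos, X_neg, v1=0.1, v2=0.1):
    """Toy stand-in for the SS2LM optimisation (NOT the real QP solver).

    Centre the sphere on the positive (+1, non-stego) samples, then choose
    the radius minimising R^2 plus the two weighted slack sums of the
    objective (the margin term is dropped in this simplification).
    """
    c = X_pos.mean(axis=0)                       # sphere centre
    d_pos = np.linalg.norm(X_pos - c, axis=1)    # distances of normal data
    d_neg = np.linalg.norm(X_neg - c, axis=1)    # distances of outliers
    best_R, best_obj = 0.0, np.inf
    for R in np.linspace(0.0, max(d_pos.max(), d_neg.max()), 200):
        xi_pos = np.maximum(0.0, d_pos - R)      # normal data outside sphere
        xi_neg = np.maximum(0.0, R - d_neg)      # stego data inside sphere
        obj = R**2 + xi_pos.sum() / (v1 * len(d_pos)) + xi_neg.sum() / (v2 * len(d_neg))
        if obj < best_obj:
            best_obj, best_R = obj, R
    return c, best_R
```

With well-separated toy data, the fitted radius encloses the positive cluster while leaving the negatives outside, mirroring the intended geometry of the hypersphere classifier.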
The characteristic model detection process comprises the following steps:
Step 1: quantize the image to be detected into a DCT coefficient matrix and extract the neighboring joint density features, including the 36-dimensional in-block feature and the 36-dimensional between-block feature.
Step 2: combine the between-block feature with the in-block feature into the 72-dimensional neighboring joint density feature and add labels, a stego image receiving the label "-1" and a non-stego image the label "+1".
Step 3: classify the labeled 72-dimensional features with the model trained in the training stage, where the decision function is:

f(x) = sign(R² − ||φ(x) − c||²)
     = sign(R² − ||c||² − K(x, x) + 2 Σ_{i=1}^{s} α_i y_i K(x_i, x))

This decision function classifies an unknown new feature point x by comparing its distance to the hypersphere center c with the radius R. The distance ||φ(x) − c|| from each feature point to the center is computed and compared with the radius R; if the distance is less than R, the point is classified as normal data, otherwise as abnormal data.
According to this decision function, normal data are labeled +1 and abnormal data -1. The classification process is shown in Fig. 3.
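The kernelised decision rule above can be written directly in code. The sketch below assumes a Gaussian kernel and that `support`, `alpha`, `y`, `R2` and `c_norm2` are outputs of an already-trained model; all of these names (and the gamma value) are illustrative assumptions:

```python
import numpy as np

def rbf(a, b, gamma=0.5):
    """Gaussian kernel K(a, b); gamma is an illustrative choice."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

def decide(x, support, alpha, y, R2, c_norm2, gamma=0.5):
    """Detection Step 3 decision rule:
    f(x) = sign(R^2 - ||c||^2 - K(x, x) + 2 * sum_i alpha_i y_i K(x_i, x)).
    Returns +1 (normal / non-stego) or -1 (abnormal / stego)."""
    s = sum(a_i * y_i * rbf(x_i, x, gamma)
            for x_i, a_i, y_i in zip(support, alpha, y))
    return 1 if R2 - rbf(x, x, gamma) - c_norm2 + 2 * s >= 0 else -1
```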
Brief description of the drawings
Fig. 1 is a schematic diagram of the SS2LM classifier training process of the present invention.
Fig. 2 is the feature model training flow chart of the present invention.
Fig. 3 is a schematic diagram of the SS2LM classifier classification process of the present invention.
Fig. 4 is the feature model detection flow chart of the present invention.
Embodiment
The object of the present invention is to provide a steganalysis blind detection method with good versatility. The method extracts features from stego and non-stego images with the neighboring joint density feature extraction algorithm, then uses them as training data for model training with the SS2LM classifier. The resulting model offers good versatility, a low missed-detection rate and a high recognition rate in blind detection, and also maintains corresponding stability when the training data are imbalanced.
The technical scheme of the invention is a universal steganalysis blind detection method whose overall recognition process comprises two stages: training and detection.
Training process implementation steps:
Step 1: quantize the stego and non-stego images into DCT coefficient matrices and, using the neighboring joint density feature extraction algorithm, extract the 36-dimensional in-block and 36-dimensional between-block features respectively.
Step 2: combine the between-block and in-block features into 72-dimensional neighboring joint density features; label the features extracted from stego images "-1" as negative samples, and label the features extracted from non-stego images "+1" as positive samples.
Step 3: taking the positive and negative samples as training data, train a model with the SS2LM classifier, tuning the optimal parameters by grid search to obtain the optimal hypersphere model; the training process is then complete.
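The grid search of training Step 3 is a generic exhaustive search over hyper-parameter combinations; a minimal sketch, in which the parameter names and scoring callback are illustrative rather than the patent's:

```python
from itertools import product

def grid_search(train_fn, score_fn, grid):
    """Exhaustive grid search over classifier hyper-parameters.

    train_fn(**params) trains one candidate model; score_fn(model) rates
    it (e.g. validation detection rate); grid maps names to value lists.
    Returns the best parameter dict and its score.
    """
    best_params, best_score = None, float("-inf")
    keys = sorted(grid)
    for values in product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        model = train_fn(**params)       # train one candidate model
        score = score_fn(model)          # evaluate the candidate
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score
```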
Detection process implementation steps:
Step 1: for the image to be detected, process it as in Step 1 of the training process to obtain the 36-dimensional in-block and 36-dimensional between-block features.
Step 2: as in Step 2 of the training process, combine them into the 72-dimensional neighboring joint density feature, and attach the "-1" and "+1" labels for stego and non-stego images respectively.
Step 3: taking the labeled features obtained in Step 2 as detection samples, classify them with the optimal hypersphere model obtained in Step 3 of the training process, and determine from the classification result whether an image is a stego image.
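For the special case of an identity feature map (φ(x) = x), the classification of detection Step 3 reduces to a plain distance test against the trained centre and radius; a minimal sketch, whose interface is an assumption rather than the patent's API:

```python
import numpy as np

def classify_features(features, center, radius):
    """Detection Step 3 with an identity feature map: points within
    `radius` of the trained hypersphere `center` are classed as normal
    (non-stego, +1), all others as abnormal (stego, -1)."""
    d = np.linalg.norm(features - center, axis=1)
    return np.where(d <= radius, 1, -1)
```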
The specific implementations described herein merely illustrate the spirit of the present invention. Those skilled in the art to which the present invention belongs may make various modifications, supplements or similar substitutions to the described embodiments, for example performing feature extraction on the absolute-value matrix of the quantized DCT coefficients, determining the optimal classifier parameters in other ways to build the model, improving the SS2LM classifier, or using fuzzy edges as the judgment basis, without departing from the spirit of the present invention or exceeding the scope defined by the appended claims.
Claims (1)
1. a kind of method of steganalysis blind Detecting towards jpeg image, it is characterised in that including:
Characteristic model training step, specifically includes:
Step 1, feature in the DCT coefficient matrix-block of image is extracted, the value in the horizontal direction of adjacent joint density matrix in block
absNJ1hWith the value absNJ in vertical direction1vIt is calculated respectively by following formula:
absNJ1h(x, y) = [ Σ_{i=1}^{M} Σ_{j=1}^{N} Σ_{m=1}^{8} Σ_{n=1}^{7} δ(|c_{ijmn}| = x, |c_{ijm(n+1)}| = y) ] / (56MN)

absNJ1v(x, y) = [ Σ_{i=1}^{M} Σ_{j=1}^{N} Σ_{m=1}^{7} Σ_{n=1}^{8} δ(|c_{ijmn}| = x, |c_{ij(m+1)n}| = y) ] / (56MN)
the image after quantization yields a DCT coefficient matrix, denoted by the variable F, consisting of M × N blocks; each block F_{ij} (i = 1, 2, ..., M; j = 1, 2, ..., N) is an 8 × 8 matrix, and c_{ijmn} denotes the DCT coefficient in row m, column n of block F_{ij}; in the two formulas above, δ = 1 when both equations in the parentheses hold, and δ = 0 otherwise;
in view of computational efficiency, absNJ1 is defined as the in-block neighboring joint density feature, as in the following formula:
absNJ1(x, y) = [ absNJ1h(x, y) + absNJ1v(x, y) ] / 2
in this algorithm, x and y are integers in the interval [0, 5], each taking 6 possible values, so the feature contains 36 dimensions in total;
Step 2: extracting the between-block features from the DCT coefficient matrix of the image, where the between-block horizontal feature absNJ2h(x, y) and vertical feature absNJ2v(x, y) of the neighboring joint density are calculated by the following formulas:
absNJ2h(x, y) = [ Σ_{m=1}^{8} Σ_{n=1}^{8} Σ_{i=1}^{M} Σ_{j=1}^{N−1} δ(|c_{ijmn}| = x, |c_{i(j+1)mn}| = y) ] / (64M(N − 1))

absNJ2v(x, y) = [ Σ_{m=1}^{8} Σ_{n=1}^{8} Σ_{i=1}^{M−1} Σ_{j=1}^{N} δ(|c_{ijmn}| = x, |c_{(i+1)jmn}| = y) ] / (64(M − 1)N)
the image after quantization yields a DCT coefficient matrix, denoted by the variable F, consisting of M × N blocks; each block F_{ij} (i = 1, 2, ..., M; j = 1, 2, ..., N) is an 8 × 8 matrix, and c_{ijmn} denotes the DCT coefficient in row m, column n of block F_{ij}; in the two formulas above, δ = 1 when both equations in the parentheses hold, and δ = 0 otherwise;
the between-block neighboring joint density feature is defined as absNJ2, calculated by the following formula:
absNJ2(x, y) = [ absNJ2h(x, y) + absNJ2v(x, y) ] / 2
similarly, x and y take values in [0, 5], so absNJ2 also contains 36 dimensions;
Step 3: combining the between-block and in-block neighboring joint density features to obtain a 72-dimensional neighboring joint density feature;

Feature = [absNJ1(x, y), absNJ2(x, y)], x, y = 0, 1, 2, 3, 4, 5
Step 4: adding labels to the 72-dimensional neighboring joint density features of the stego and non-stego images, label +1 for the features of non-stego images and label -1 for the features of stego images, and feeding them into the SS2LM classifier for training; the classifier model is as in the following formula:
min_{R,c,ξ,ρ}  R² − vρ² + (1/(v_1 m_1)) Σ_{i=1}^{m_1} ξ_i + (1/(v_2 m_2)) Σ_{i=m_1+1}^{s} ξ_i
with constraints: ||φ(x_i) − c||² ≤ R² − δρ² + ξ_i, i = 1, ..., m_1;
||φ(x_i) − c||² ≥ R² + ρ² − ξ_i, i = m_1 + 1, ..., s;
ξ_i ≥ 0, i = 1, ..., s;
where R and c are the radius and center of the optimal hypersphere, ξ = [ξ_1, ξ_2, ..., ξ_s]ᵀ ∈ R^s is the slack variable, ρ is the distance from the outer boundary (the abnormal data) to the edge of the hypersphere, and δ (0 ≤ δ ≤ v) is the ratio of the outer boundary to the inner boundary; hence the distance from the inner boundary (the normal data) to the edge of the hypersphere can be expressed as δρ;
a characteristic model detecting step, specifically including:
Step 1: quantizing the image to be detected into a DCT coefficient matrix and extracting the neighboring joint density features, including the 36-dimensional between-block feature and the 36-dimensional in-block feature;
Step 2: combining the between-block feature with the in-block feature into the 72-dimensional neighboring joint density feature and adding labels, a stego image receiving the label "-1" and a non-stego image the label "+1";
Step 3: classifying the labeled 72-dimensional features with the model trained in the training stage, where the decision function is as in the following formula:
f(x) = sign(R² − ||φ(x) − c||²)
     = sign(R² − ||c||² − K(x, x) + 2 Σ_{i=1}^{s} α_i y_i K(x_i, x))
this decision function classifies an unknown new feature point x by comparing its distance to the hypersphere center c with the radius R; the distance ||φ(x) − c|| from each feature point to the center of the hypersphere is computed and compared with the radius R; if the distance is less than R, the point is classified as normal data, otherwise as abnormal data; according to the decision function given above, normal data will be labeled +1 and abnormal data -1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710742641.8A CN107909536B (en) | 2017-08-25 | 2017-08-25 | JPEG image-oriented steganalysis blind detection method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107909536A true CN107909536A (en) | 2018-04-13 |
CN107909536B CN107909536B (en) | 2021-08-03 |
Family
ID=61840082
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710742641.8A Active CN107909536B (en) | 2017-08-25 | 2017-08-25 | JPEG image-oriented steganalysis blind detection method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107909536B (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130208941A1 (en) * | 2012-02-01 | 2013-08-15 | Qingzhong Liu | Steganalysis with neighboring joint density |
CN104301733A (en) * | 2014-09-06 | 2015-01-21 | 南京邮电大学 | Video steganalysis method based on feature fusions |
CN106548445A (en) * | 2016-10-20 | 2017-03-29 | 天津大学 | Spatial domain picture general steganalysis method based on content |
Non-Patent Citations (1)
Title |
---|
XIAO Haisong, et al.: "Partial-order Markov JPEG steganalysis model based on neighboring coefficient relation pairs", Journal of Wuhan University (Natural Science Edition) * |
Also Published As
Publication number | Publication date |
---|---|
CN107909536B (en) | 2021-08-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Kang et al. | Robust median filtering forensics using an autoregressive model | |
CN110852316B (en) | Image tampering detection and positioning method adopting convolution network with dense structure | |
CN108876780B (en) | Bridge crack image crack detection method under complex background | |
CN112907598B (en) | Method for detecting falsification of document and certificate images based on attention CNN | |
CN104504669B (en) | A kind of medium filtering detection method based on local binary patterns | |
CN109859091B (en) | Image steganography detection method based on Gabor filtering and convolutional neural network | |
CN103048329A (en) | Pavement crack detecting method based on active contour model | |
CN109615604B (en) | Part appearance flaw detection method based on image reconstruction convolutional neural network | |
CN107133955A (en) | A kind of collaboration conspicuousness detection method combined at many levels | |
CN103729856B (en) | A kind of Fabric Defects Inspection detection method utilizing S-transformation signal extraction | |
CN106548445A (en) | Spatial domain picture general steganalysis method based on content | |
CN104217388A (en) | Method and device of embedding and extracting image watermark based on FSSVM (Fuzzy Smooth Support Vector Machine) | |
CN104217389A (en) | Image watermark embedding and extracting method and device based on improved Arnold transform | |
Rhee | Detection of spliced image forensics using texture analysis of median filter residual | |
CN104217387A (en) | Image watermark embedding and extracting method and device based on quantization embedding | |
CN103325123A (en) | Image edge detection method based on self-adaptive neural fuzzy inference systems | |
CN113159052B (en) | Method for identifying failure mode of flexural reinforced concrete simply supported beam based on deep learning | |
CN110378433A (en) | The classifying identification method of bridge cable surface defect based on PSO-SVM | |
CN101493927B (en) | Image reliability detecting method based on edge direction characteristic | |
CN108537762B (en) | Depth multi-scale network-based secondary JPEG compressed image evidence obtaining method | |
CN107909536A (en) | A kind of method of steganalysis blind Detecting towards jpeg image | |
CN116912184A (en) | Weak supervision depth restoration image tampering positioning method and system based on tampering area separation and area constraint loss | |
CN106780547A (en) | Monitor video velocity anomaly mesh object detection method is directed to based on kinergety model | |
CN103440616B (en) | High volume reversible watermarking method based on self-adaptive prediction model | |
Agarwal et al. | Forensic analysis of colorized grayscale images using local binary pattern |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||