CN112785478B - Hidden information detection method and system based on generation of embedded probability map - Google Patents

Hidden information detection method and system based on generation of embedded probability map

Info

Publication number
CN112785478B
CN112785478B (application CN202110052916.1A)
Authority
CN
China
Prior art keywords
detected
image
map
probability map
embedding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110052916.1A
Other languages
Chinese (zh)
Other versions
CN112785478A (en)
Inventor
付章杰 (Zhangjie Fu)
陈君夫 (Junfu Chen)
李恩露 (Enlu Li)
孙星明 (Xingming Sun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology
Priority to CN202110052916.1A
Publication of CN112785478A
Application granted
Publication of CN112785478B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/0021 Image watermarking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/082 Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20112 Image segmentation details
    • G06T2207/20132 Image cropping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Editing Of Facsimile Originals (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a hidden information detection method and system based on a generated embedding probability map. The method generates an embedding probability map from the image to be detected; convolves the embedding probability map and the image, each with a plurality of high-pass filter kernels, to obtain a first residual map corresponding to the embedding probability map and a second residual map corresponding to the image; fuses the two residual maps into a fused image; and then uses a pre-trained steganalysis model to learn the fused image and output the probability that secret information is hidden in the image to be detected. Whether secret information is hidden is thereby detected with high detection accuracy and efficiency.

Description

Hidden information detection method and system based on generation of embedded probability map
Technical Field
The invention relates to the technical field of image analysis, in particular to a hidden information detection method and system based on generation of an embedded probability map.
Background
Image steganography is a technique that hides secret information in a carrier image for covert communication. Compared with traditional encrypted communication, image steganography focuses on masking the covert communication behavior itself, so that an eavesdropper cannot even notice the secret message.
In recent years, content-adaptive steganography algorithms such as S-UNIWARD, HILL and HUGO have emerged. By constructing an embedding probability map from their respective distortion functions, they guide the secret information toward texture-complex regions of the image, greatly improving the security of steganography while posing a serious challenge to image steganalysis. However, most existing steganalysis research improves detection performance only by refining the network structure, without considering the characteristics of content-adaptive steganography algorithms; as a result, the model cannot be guided during training, the input information is undifferentiated, the final detection accuracy is low, and training time is long.
Disclosure of Invention
To address these problems, the invention provides a hidden information detection method and system, based on generating an embedding probability map, that offer strong robustness and high training efficiency.
In order to achieve the purpose of the present invention, a hidden information detection method based on generating an embedding probability map is provided, comprising the following steps:
s10, generating an embedding probability map to be detected according to an image to be detected; the to-be-detected embedding probability map records the probability that each pixel point in the to-be-detected image carries hidden secret information;
s40, respectively convolving the to-be-detected embedding probability map and the to-be-detected image with a plurality of high-pass filter kernels to obtain a first residual map corresponding to the to-be-detected embedding probability map and a second residual map corresponding to the to-be-detected image;
s50, fusing the first residual error map and the second residual error map to obtain a fused image to be detected;
s60, learning the fusion image to be detected by adopting a steganography analysis model obtained through pre-training, and outputting the probability of whether the secret information is hidden in the image to be detected so as to judge whether the secret information is hidden in the image to be detected.
In one embodiment, when generating the embedding probability map to be detected from the image to be detected, the method further includes:
s20, carrying out random overturning and rotating operation on the image to be detected and the embedding probability map to be detected so as to enhance data;
s30, pruning operation is carried out on the embedded probability map generation module so as to remove high-dimensional semantic information interference.
In one embodiment, fusing the first residual map and the second residual map to obtain a fused image to be measured includes:
fusing the first residual error map and the second residual error map by adopting a Pseudo-Siamese structure to obtain a fused image to be detected;
and carrying out weight clipping on the fusion image to be detected after the fusion image is processed by an attention mechanism.
In one embodiment, the training process of the steganalysis model includes:
collecting a plurality of sample images and determining sample labels of the sample images; the sample tag includes a probability that the corresponding sample image includes secret information;
and acquiring fusion images of all sample images, taking the fusion images of all sample images as input, and taking sample labels of all sample images as output to train an initial neural network model so as to obtain a steganalysis model.
Specifically, acquiring a fused image of each sample image includes:
generating each sample embedding probability map according to each sample image; the sample embedding probability map records the probability that each pixel point in the corresponding sample image carries hidden secret information;
convolving the sample embedding probability map and the corresponding sample image with a plurality of high-pass filter kernels respectively to obtain a third residual map corresponding to each sample embedding probability map and a fourth residual map corresponding to each sample image;
and respectively fusing each third residual error map and the corresponding fourth residual error map to obtain fused images of each sample image.
A hidden information detection system based on generating an embedded probability map, comprising:
the generating module is used for generating an embedding probability map to be detected according to the image to be detected; the to-be-detected embedding probability map records the probability that each pixel point in the to-be-detected image carries hidden secret information;
the convolution module is used for respectively convolving the to-be-detected embedding probability graph and the to-be-detected image with a plurality of high-pass filter kernels to obtain a first residual graph corresponding to the to-be-detected embedding probability graph and a second residual graph corresponding to the to-be-detected image;
the fusion module is used for fusing the first residual error map and the second residual error map to obtain a fusion image to be detected;
and the output module is used for learning the fusion image to be detected by adopting a steganography analysis model obtained by training in advance and outputting the probability of whether the secret information is hidden in the image to be detected so as to judge whether the secret information is hidden in the image to be detected.
The hidden information detection method based on generating an embedding probability map has the following beneficial effects:
1. It is the first to introduce an embedding probability map generation module into the field of image steganalysis, reducing the potential search space during steganalysis model training.
2. A Pseudo-Siamese (pseudo-twin) network architecture performs multi-scale information fusion between the embedding probability map and the input picture, strengthening the guiding effect of the embedding probability map. Compared with conventional convolution-based image steganalysis models, the method uses the embedding probability map output by the generation module to guide the model to learn content features in a directed way, and uses an attention mechanism to allocate model weights within the network, giving better pertinence and usability. Experiments show shorter training time and higher accuracy than other spatial-domain steganalysis models.
Drawings
FIG. 1 is a flow diagram of a hidden information detection method based on generating an embedding probability map, according to one embodiment;
FIG. 2 is a flow chart of another embodiment hidden information detection method based on generating an embedding probability map;
FIG. 3 is a graph of the output results of the generation module of one embodiment;
fig. 4 is a schematic diagram of a network structure of a feature extraction module according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
Referring to fig. 1, in one aspect, the present application provides a hidden information detection method based on generating an embedding probability map, including the following steps:
s10, generating an embedding probability map to be detected according to an image to be detected; and the to-be-detected embedded probability map records the probability that each pixel point in the to-be-detected image carries hidden secret information.
In this step, an embedding probability map generation module obtained by prior training may be used: the image to be detected is input into the module, which generates an embedding probability map of the same size as the input image, giving for each pixel the probability that it hides secret information.
And S40, respectively convolving the to-be-detected embedding probability map and the to-be-detected image with a plurality of high-pass filter kernels to obtain a first residual map corresponding to the to-be-detected embedding probability map and a second residual map corresponding to the to-be-detected image.
In this step, the embedding probability map and the input image to be detected are each convolved with 30 different high-pass filter kernels to obtain their residual maps. A residual map reflects, to a certain extent, the distribution of information in the image and can extract part of the steganographic noise (the traces, if any, of steganographic embedding).
And S50, fusing the first residual error map and the second residual error map to obtain a fused image to be detected.
In one embodiment, fusing the first residual map and the second residual map to obtain a fused image to be measured includes:
fusing the first residual error map and the second residual error map by adopting a Pseudo-Siamese structure (Pseudo-twin network structure) to obtain a fused image to be detected;
and carrying out weight clipping on the fusion image to be detected after the fusion image is processed by an attention mechanism.
In this embodiment, a Pseudo-Siamese structure fuses the two pictures (the first residual map and the second residual map), and weight clipping is performed after an attention mechanism, which makes the system lightweight while reducing the potential search space of the steganalysis model. In one example, the Pseudo-Siamese structure may also fuse multi-scale residual information of the two pictures, after which they are passed to an SELayer for weight clipping and pruning of the steganalysis model.
S60, learning the fusion image to be detected by adopting a steganography analysis model obtained through pre-training, and outputting the probability of whether the secret information is hidden in the image to be detected so as to judge whether the secret information is hidden in the image to be detected.
In this step, the steganalysis model learns the pictures in the relevant data set and outputs a probability; a cross-entropy loss function is then used to judge whether secret information is hidden in a picture.
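As a rough illustration (not the patent's own code), the softmax-plus-cross-entropy decision described above can be sketched in a few lines of NumPy; all variable names and values here are illustrative assumptions:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, label):
    """Cross-entropy loss for one two-class prediction.
    label: 0 = cover (no hidden data), 1 = stego."""
    p = softmax(logits)
    return -np.log(p[label])

# Hypothetical two-class logits from the steganalysis model.
logits = np.array([0.3, 2.1])        # [cover score, stego score]
p_stego = softmax(logits)[1]         # probability the image hides data
decision = "stego" if p_stego > 0.5 else "cover"
```

During training, `cross_entropy` would be averaged over the labeled fused images; at detection time only the output probability is thresholded.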
According to this hidden information detection method based on a generated embedding probability map, an embedding probability map is generated from the image to be detected; the map and the image are each convolved with a plurality of high-pass filter kernels to obtain a first residual map (for the embedding probability map) and a second residual map (for the image); the two residual maps are fused into a fused image; and a pre-trained steganalysis model learns the fused image and outputs the probability that secret information is hidden in it. Whether secret information is hidden in the image to be detected is thereby determined, with high detection accuracy and efficiency.
In one embodiment, when generating the embedding probability map to be detected from the image to be detected, the method further includes:
s20, carrying out random overturning and rotating operation on the image to be detected and the embedding probability map to be detected so as to enhance data; the step utilizes random overturn and rotation operation to enhance data, so that the robustness of the system to the input picture can be enhanced while the diversity of the training set can be increased.
The steps can be used for carrying out random overturning and rotating operation on the image to be detected and the embedding probability map to be detected.
S30, pruning the embedding probability map generation module to remove interference from high-dimensional semantic information. Guided by knowledge of the information hiding field, unnecessary network layers are removed, which preserves the accuracy of the generated images while reducing the interference of superfluous high-dimensional semantic information in the generated embedding probability map.
In one embodiment, the training process of the steganalysis model includes:
collecting a plurality of sample images and determining sample labels of the sample images; the sample tag includes a probability that the corresponding sample image includes secret information;
and acquiring fusion images of all sample images, taking the fusion images of all sample images as input, and taking sample labels of all sample images as output to train an initial neural network model so as to obtain a steganalysis model. The initial neural network model is an untrained neural network model.
This embodiment adopts U2-Net as the pre-training network to learn the embedding probability maps generated by different spatial-domain steganography algorithms, for example S-UNIWARD, HUGO-BD, WOW and MiPOD.
In one example, the sample images may come from the BOSSBase dataset, which is randomly rotated and flipped to improve the robustness of the steganalysis and its generalization across different steganography algorithms.
Specifically, acquiring a fused image of each sample image includes:
generating each sample embedding probability map according to each sample image; the sample embedding probability map records the probability that each pixel point in the corresponding sample image carries hidden secret information;
convolving the sample embedding probability map and the corresponding sample image with a plurality of high-pass filter kernels respectively to obtain a third residual map corresponding to each sample embedding probability map and a fourth residual map corresponding to each sample image;
and respectively fusing each third residual error map and the corresponding fourth residual error map to obtain fused images of each sample image.
In one embodiment, the hidden information detection method based on the generation of the embedding probability map may also refer to fig. 2, which includes the following contents:
(1) Pre-training embedded probability map generation module
This embodiment takes U2-Net, originally from saliency detection, as the main reference model of the embedding probability map generation module. To make the generated embedding probability maps better conform to knowledge of information hiding, this embodiment prunes the original U2-Net: its deep high-dimensional semantic features contribute little either to the final embedding probability map output or to the overall training of the network, while greatly increasing training time.
(2) Pre-training data sets, enhancing data sets with random flipping and rotation operations
Before entering the network, the BOSSBase data set is enhanced by random direction flipping and rotation. These random operations on the images strengthen the model's learning ability and prevent overfitting. The probability of a random flip is set to p = 0.5 and the probability of a rotation to q = 0.5; to simulate realistic rotation environments, three rotation schemes with angles of 90°, 180° and 270° are designed.
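A minimal sketch of this paired augmentation, assuming NumPy arrays for the image and its embedding probability map (the function and parameter names are illustrative, not from the patent; the key point is that the same transform is applied to both arrays):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image, prob_map, p=0.5, q=0.5, rng=rng):
    """Apply the same random flip/rotation to an image and its
    embedding probability map, as described in step (2).
    p: probability of a random flip; q: probability of rotating
    by one of 90, 180 or 270 degrees."""
    if rng.random() < p:
        axis = rng.integers(0, 2)      # 0: vertical flip, 1: horizontal flip
        image = np.flip(image, axis=axis)
        prob_map = np.flip(prob_map, axis=axis)
    if rng.random() < q:
        k = rng.integers(1, 4)         # 1..3 quarter turns (90/180/270 degrees)
        image = np.rot90(image, k=k)
        prob_map = np.rot90(prob_map, k=k)
    return image, prob_map

img = np.arange(16.0).reshape(4, 4)
pmap = np.random.default_rng(1).random((4, 4))
aug_img, aug_pmap = augment(img, pmap)
```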
(3) Adjusting an embedding probability map generation module to generate an embedding probability map
A total loss L_all, combining the cross-entropy loss l_m with the intersection-over-union (IoU) loss L_IoU, is employed to optimize the embedding probability map generation module. The total loss is computed as:

L_all = Σ_{m=1}^{M} w_m · l_m + L_IoU    (1)
In formula (1), M is the total depth of U2-Net. Considering that different layers contribute differently to the final result, the cross-entropy losses l_m of the different layers are given different weights w_m, so that L_all better captures fine detail. Each l_m is computed as:

l_m = − Σ_{(r,c)}^{(H,W)} [ P_G(r,c) · log P_S(r,c) + (1 − P_G(r,c)) · log(1 − P_S(r,c)) ]    (2)

where (H, W) are the height and width of the input picture, (r, c) are pixel coordinates, and P_G(r,c) and P_S(r,c) are the pixel values at (r, c) of the true embedding probability map and of the map generated by the module, respectively.
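The per-pixel cross-entropy loss l_m between the true embedding probability map P_G and the generated map P_S can be checked with a small NumPy sketch (illustrative only; the clipping epsilon is an assumption for numerical safety, not from the patent):

```python
import numpy as np

def pixel_bce(p_true, p_gen, eps=1e-12):
    """Per-pixel binary cross-entropy between the ground-truth
    embedding probability map P_G and the generated map P_S,
    summed over all (r, c) pixel coordinates."""
    p_gen = np.clip(p_gen, eps, 1.0 - eps)   # avoid log(0)
    return -np.sum(p_true * np.log(p_gen)
                   + (1.0 - p_true) * np.log(1.0 - p_gen))

p_g = np.array([[0.0, 1.0], [0.5, 0.2]])     # "true" map (illustrative)
p_s = np.array([[0.1, 0.9], [0.5, 0.2]])     # generated map, nearly matching
loss = pixel_bce(p_g, p_s)
```

The loss is minimized when the generated map equals the true map, which is what drives the generation module toward realistic embedding probabilities.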
In the formula (1), the IoU loss calculation method is as follows:
L_IoU = 1 − |C_j ∩ Ĉ_j| / |C_j ∪ Ĉ_j|    (3)

where C_j and Ĉ_j denote the true region and the embedding probability map generated by the module, respectively; formula (3) constrains the ratio of coincidence between the shaded area of the generated image and that of the true image.
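The IoU term, which penalizes poor overlap between the true region and the generated map, can be sketched as a soft IoU over probability masks (an illustrative NumPy version, assuming soft-valued masks):

```python
import numpy as np

def iou_loss(mask_true, mask_gen, eps=1e-12):
    """Soft IoU loss: 1 - intersection / union, computed on
    probability maps; eps guards against an empty union."""
    inter = np.sum(mask_true * mask_gen)
    union = np.sum(mask_true) + np.sum(mask_gen) - inter
    return 1.0 - inter / (union + eps)

true_map = np.array([[1.0, 1.0], [0.0, 0.0]])
gen_map  = np.array([[1.0, 0.0], [0.0, 0.0]])
loss = iou_loss(true_map, gen_map)   # IoU = 1/2, so the loss is 0.5
```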
As shown in fig. 3, (a) is an input image to be detected, (b) is the true embedding probability map computed with S-UNIWARD, and (c) is the embedding probability map produced by the generation module of the present invention.
(4) Respectively preprocessing the obtained embedded probability map and the data set
The preprocessing layer selects 30 different 5×5 filter kernels and performs convolution to obtain 30 different feature maps: the output image group O is formed by convolving the input image I with the high-pass filters, where M denotes the number of filter kernels:
O_m = I * K_m,  m = 1, 2, …, M    (4)

In formula (4), I is the original input image, * denotes the convolution operation, K_m are the different filter kernels, O is the group of output images produced by the convolutions, and O_m is the convolution result of the m-th filter kernel.
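A sketch of one such high-pass residual extraction. The 5×5 "KV" kernel shown is a common SRM-style choice in steganalysis and is only an assumed example of the 30 kernels (the patent does not list its kernels):

```python
import numpy as np

# Assumed example kernel: the well-known 5x5 "KV" high-pass filter.
KV = np.array([[-1,  2,  -2,  2, -1],
               [ 2, -6,   8, -6,  2],
               [-2,  8, -12,  8, -2],
               [ 2, -6,   8, -6,  2],
               [-1,  2,  -2,  2, -1]], dtype=float) / 12.0

def conv2d_valid(image, kernel):
    """Plain 'valid' 2-D convolution implementing O_m = I * K_m."""
    kh, kw = kernel.shape
    h, w = image.shape
    flipped = kernel[::-1, ::-1]          # true convolution flips the kernel
    out = np.empty((h - kh + 1, w - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * flipped)
    return out

flat = np.full((8, 8), 7.0)               # constant region: no texture
residual = conv2d_valid(flat, KV)         # high-pass response is ~0 here
```

Because the kernel's coefficients sum to zero, smooth regions map to near-zero residuals while textured (and potentially embedding-heavy) regions stand out.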
(5) Fusing the two pictures with a Pseudo-Siamese structure and performing weight clipping after an attention mechanism
Unlike the Siamese (twin) network architectures familiar in deep learning, the two branches here carry different content: one is the embedding probability map generated by the model, the other is the image to be detected. Since the two have some similarity but also large differences, the branches do not share weights. Instead, after the convolution operations, the information of the two branches is superimposed for multi-scale content fusion; this content-level fusion replaces the usual weight-sharing operation.
The feature maps produced by the two rounds of multi-scale fusion are passed to SELayers, which assign weights so that every channel's feature map receives a matched weight during backpropagation. Subsequent network layers can then concentrate on the most informative channels, further reducing their learning burden and achieving the effect of pruning the whole steganalysis network.
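A minimal NumPy sketch of the squeeze-and-excitation channel weighting that an SELayer performs (shapes, weights, and the reduction ratio are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_layer(feat, w1, w2):
    """Minimal squeeze-and-excitation channel weighting.
    feat: (C, H, W) feature maps; w1: (C//r, C); w2: (C, C//r).
    Squeeze by global average pooling, excite through two small
    dense layers, then rescale every channel by its weight."""
    squeeze = feat.mean(axis=(1, 2))           # (C,) channel descriptor
    hidden = np.maximum(w1 @ squeeze, 0.0)     # ReLU bottleneck
    weights = sigmoid(w2 @ hidden)             # per-channel weight in (0, 1)
    return feat * weights[:, None, None], weights

rng = np.random.default_rng(0)
feat = rng.random((4, 3, 3))                   # 4 channels, 3x3 maps
w1 = rng.standard_normal((2, 4))               # assumed reduction ratio r = 2
w2 = rng.standard_normal((4, 2))
reweighted, weights = se_layer(feat, w1, w2)
```

Channels whose learned weight is small are effectively suppressed, which is how the attention mechanism "clips" weights and lightens later layers.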
(6) Learning pictures in the data set by using the model, outputting probability, and judging whether secret information is hidden in the pictures
As shown in fig. 4, this embodiment finally adopts residual blocks as the base modules of the initial neural network model and feeds the features passing through these base layers into a fully connected layer, which serves as the classification module, as in most networks. Each fully connected layer contains a large number of neurons, and the computation between layers is:
z_j^(l+1) = Σ_i W_ji^(l) · x_i^(l) + b_j^(l)    (5)

where i indexes the input units of layer l, W_ji^(l) is the weight of the fully connected layer l, j indexes the output units of layer l, and b_j^(l) is the bias of output unit j. The processed data are then converted into image label data according to a Boolean mapping. Although stacking several fully connected layers could raise the final detection accuracy of the network, it would also add a large number of parameters, so the invention uses only a single fully connected layer for this operation.
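The single fully connected layer's inter-layer computation can be sketched as a matrix product (the input dimension and the two-logit output are illustrative assumptions; the patent does not give concrete sizes):

```python
import numpy as np

def fc_forward(x, w, b):
    """Single fully connected layer: z_j = sum_i W_ji * x_i + b_j,
    written as one matrix-vector product."""
    return w @ x + b

rng = np.random.default_rng(0)
x = rng.random(8)                  # assumed 8 features from the residual blocks
w = rng.standard_normal((2, 8))    # 2 output units: cover / stego logits
b = np.zeros(2)
logits = fc_forward(x, w, b)       # fed afterwards to softmax + cross-entropy
```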
In summary, the spatial-domain image steganalysis method based on generating an embedding probability map fully combines knowledge of content-adaptive steganography with deep learning networks. The embedding probability map used by adaptive steganography is fused with the image to be detected at multiple depths by the pseudo-twin network, and the steganographic traces on the image (if any) reduce the potential search space of the deep network and shorten training time. Pruning the U2-Net structure removes useless high-dimensional feature information and strengthens effective feature reuse. An attention mechanism clips the weights of the network layers during training, selecting the more effective high-dimensional features and preventing interference from an excess of useless features. Finally, residual blocks serve as the base modules of the classification network, effectively promoting feature reuse and reinforcing the useful information within the high-dimensional features.
Another aspect of the present application provides a hidden information detection system based on generating an embedding probability map, including:
the generating module is used for generating an embedding probability map to be detected according to the image to be detected; the to-be-detected embedding probability map records the probability that each pixel point in the to-be-detected image carries hidden secret information;
the convolution module is used for respectively convolving the to-be-detected embedding probability graph and the to-be-detected image with a plurality of high-pass filter kernels to obtain a first residual graph corresponding to the to-be-detected embedding probability graph and a second residual graph corresponding to the to-be-detected image;
the fusion module is used for fusing the first residual map and the second residual map to obtain a fusion image to be detected;
and the output module is used for learning the fusion image to be detected by adopting a steganalysis model obtained by training in advance and outputting the probability that secret information is hidden in the image to be detected, so as to judge whether secret information is hidden in the image to be detected.
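The convolution module above can be illustrated with the 5×5 "KV" kernel that is common in spatial-domain steganalysis. The patent only speaks of "a plurality of high-pass filter kernels" without listing them, so the choice of this particular kernel is an assumption made for the sketch.

```python
import numpy as np

# The 5x5 "KV" high-pass kernel widely used in spatial-domain
# steganalysis (one plausible choice; the patent does not list
# its filter kernels). Its entries sum to zero, so smooth image
# content is suppressed and only noise-like residuals remain.
KV = np.array([[-1,  2,  -2,  2, -1],
               [ 2, -6,   8, -6,  2],
               [-2,  8, -12,  8, -2],
               [ 2, -6,   8, -6,  2],
               [-1,  2,  -2,  2, -1]]) / 12.0

def residual_map(x, kernel):
    """Valid convolution with a high-pass kernel: suppresses image
    content and keeps the residual where steganographic traces live."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i+kh, j:j+kw] * kernel)
    return out

rng = np.random.default_rng(2)
image = rng.random((32, 32))
prob_map = rng.random((32, 32))
second_residual = residual_map(image, KV)     # from the image to be detected
first_residual = residual_map(prob_map, KV)   # from the embedding probability map
fused = np.stack([first_residual, second_residual])
```

On a perfectly flat region the residual is zero, which is exactly why high-pass preprocessing makes the faint stego signal easier for the downstream classifier to see.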
For specific limitations of the hidden information detection system based on the generation of the embedding probability map, reference may be made to the limitations of the hidden information detection method based on the generation of the embedding probability map described above; the description is not repeated here. The modules in the hidden information detection system based on the generation of the embedding probability map may be implemented in whole or in part by software, hardware, or combinations thereof. The modules may be embedded in, or independent of, a processor of the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can call and execute the operations corresponding to the modules.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this description.
It should be noted that the terms "first", "second" and "third" in the embodiments of the present application merely distinguish similar objects and do not denote a particular order for those objects. Where permitted, objects distinguished by "first", "second" and "third" may be interchanged in a specific order or sequence, so that the embodiments of the present application described herein can be implemented in sequences other than those illustrated or described herein.
The terms "comprising" and "having", and any variations thereof, in the embodiments of the present application are intended to cover non-exclusive inclusion. For example, a process, method, apparatus, product, or device comprising a series of steps or modules is not limited to the listed steps or modules, and may optionally include steps or modules that are not listed or that are inherent to such a process, method, product, or device.
The above examples represent only a few embodiments of the present application; their description is relatively specific and detailed, but it should not therefore be construed as limiting the scope of the invention. It should be noted that those of ordinary skill in the art can make various modifications and improvements without departing from the concept of the present application, and these all fall within the scope of protection of the present application. Accordingly, the scope of protection of the present application shall be determined by the appended claims.

Claims (4)

1. The hidden information detection method based on the generation of the embedded probability map is characterized by comprising the following steps of:
S10, generating an embedding probability map to be detected according to the image to be detected, the embedding probability map to be detected recording the probability that each pixel point in the image to be detected carries hidden secret information, and adjusting an embedding probability map generation module to generate the embedding probability map,
employing a total loss L_all combining the cross-entropy loss l_m and the intersection-over-union (IoU) loss to optimize the embedding probability map generation module; wherein the total loss L_all is calculated as follows:
L_all = Σ_{m=1}^{M} w_m · l_m + L_IoU    (1)
in the formula (1), M is the total depth of U²-Net; taking into account the different contributions of different layers to the final result, the cross-entropy losses l_m of the different layers are given different weights w_m, so that L_all better captures details; wherein l_m is calculated as follows:
l_m = − Σ_{(r,c)}^{(H,W)} [ P_G(r,c) · log P_S(r,c) + (1 − P_G(r,c)) · log(1 − P_S(r,c)) ]    (2)
wherein (H, W) denote the height and width of the input picture, (r, c) denotes the coordinate of each pixel, and P_G(r,c) and P_S(r,c) respectively denote the pixel values at the corresponding coordinates of the true embedding probability map and of the embedding probability map generated by the module;
in the formula (1), the IoU loss calculation method is as follows:
L_IoU = 1 − |C_j ∩ Ĉ_j| / |C_j ∪ Ĉ_j|    (3)
wherein C_j and Ĉ_j respectively denote the true selection box and the selection box of the embedding probability map generated by the module; the formula (3) is used to constrain the coincidence ratio between the shadow region in the generated image and that in the real image;
s20, carrying out random overturning and rotating operation on the image to be detected and the embedding probability map to be detected so as to enhance data;
s30, pruning operation is carried out on the embedded probability map generation module so as to remove high-dimensional semantic information interference;
s40, respectively convolving the to-be-detected embedding probability map and the to-be-detected image with a plurality of high-pass filter kernels to obtain a first residual map corresponding to the to-be-detected embedding probability map and a second residual map corresponding to the to-be-detected image;
s50, fusing the first residual error map and the second residual error map to obtain a fused image to be detected, wherein the method comprises the following steps: fusing the first residual error map and the second residual error map by adopting a Pseudo-Siamese structure to obtain a fused image to be detected; the fusion image to be measured is subjected to weight clipping after being processed by an attention mechanism;
s60, learning the fusion image to be detected by adopting a steganography analysis model obtained through pre-training, and outputting the probability of whether the secret information is hidden in the image to be detected so as to judge whether the secret information is hidden in the image to be detected.
2. The hidden information detection method based on the generation of the embedding probability map according to claim 1, wherein the training process of the steganalysis model includes:
collecting a plurality of sample images and determining sample labels of the sample images; the sample tag includes a probability that the corresponding sample image includes secret information;
and acquiring fusion images of all sample images, taking the fusion images of all sample images as input, and taking sample labels of all sample images as output to train an initial neural network model so as to obtain a steganalysis model.
3. The hidden information detection method based on the generation of the embedding probability map according to claim 2, wherein acquiring the fused image of each sample image includes:
generating each sample embedding probability map according to each sample image; the sample embedding probability map records the probability that each pixel point in the corresponding sample image carries hidden secret information;
convolving the sample embedding probability map and the corresponding sample image with a plurality of high-pass filter kernels respectively to obtain a third residual map corresponding to each sample embedding probability map and a fourth residual map corresponding to each sample image;
and respectively fusing each third residual error map and the corresponding fourth residual error map to obtain fused images of each sample image.
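The assembly of training pairs described in claims 2 and 3 can be sketched as follows. `generate_prob_map` (a gradient-magnitude stand-in for the generation module, since adaptive steganography favors textured regions) and the simplified `highpass` residual are illustrative assumptions, not the patent's actual networks.

```python
import numpy as np

def generate_prob_map(image):
    """Stand-in for the embedding probability generation module: local
    gradient magnitude, normalised to [0, 1], mimics the tendency of
    adaptive steganography to embed in textured regions."""
    gy, gx = np.gradient(image)
    g = np.hypot(gx, gy)
    return g / (g.max() + 1e-12)

def highpass(x):
    """Simplified high-pass residual: subtract the 4-neighbour mean."""
    return x - (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
                np.roll(x, 1, 1) + np.roll(x, -1, 1)) / 4.0

def make_training_pair(image, label):
    """One (fused image, label) pair as in claims 2-3: the third residual
    comes from the sample's probability map, the fourth from the sample
    itself, and the two are fused by stacking."""
    third = highpass(generate_prob_map(image))
    fourth = highpass(image)
    return np.stack([third, fourth]), label

rng = np.random.default_rng(4)
samples = [rng.random((16, 16)) for _ in range(4)]
labels = [0, 1, 0, 1]  # 1 = sample contains hidden secret information
dataset = [make_training_pair(s, y) for s, y in zip(samples, labels)]
```

The resulting fused tensors and sample labels would then serve as the input/output pairs for training the initial neural network model into the steganalysis model.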
4. The detection system based on the hidden information detection method for generating an embedding probability map according to claim 1, comprising:
the generating module is used for generating an embedding probability map to be detected according to the image to be detected; the to-be-detected embedding probability map records the probability that each pixel point in the to-be-detected image carries hidden secret information;
the convolution module is used for respectively convolving the to-be-detected embedding probability map and the to-be-detected image with a plurality of high-pass filter kernels to obtain a first residual map corresponding to the to-be-detected embedding probability map and a second residual map corresponding to the to-be-detected image;
the fusion module is used for fusing the first residual map and the second residual map to obtain a fusion image to be detected;
and the output module is used for learning the fusion image to be detected by adopting a steganography analysis model obtained by training in advance and outputting the probability of whether the secret information is hidden in the image to be detected so as to judge whether the secret information is hidden in the image to be detected.
CN202110052916.1A 2021-01-15 2021-01-15 Hidden information detection method and system based on generation of embedded probability map Active CN112785478B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110052916.1A CN112785478B (en) 2021-01-15 2021-01-15 Hidden information detection method and system based on generation of embedded probability map


Publications (2)

Publication Number Publication Date
CN112785478A CN112785478A (en) 2021-05-11
CN112785478B true CN112785478B (en) 2023-06-23

Family

ID=75756138

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110052916.1A Active CN112785478B (en) 2021-01-15 2021-01-15 Hidden information detection method and system based on generation of embedded probability map

Country Status (1)

Country Link
CN (1) CN112785478B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113537379B (en) * 2021-07-27 2024-04-16 Shenyang University of Technology Three-dimensional matching method based on CGANs

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106791871A (en) * 2015-11-25 2017-05-31 Institute of Acoustics, Chinese Academy of Sciences A motion-vector-modulation-based intelligent steganography detection method
CN108280480A (en) * 2018-01-25 2018-07-13 Wuhan University A steganographic-image vector security evaluation method based on residual co-occurrence probability
CN108346125A (en) * 2018-03-15 2018-07-31 Sun Yat-sen University A spatial-domain image steganography method and system based on generative adversarial networks
US10270790B1 * 2014-12-09 2019-04-23 Anbeco, LLC Network activity monitoring method and apparatus
CN109934761A (en) * 2019-01-31 2019-06-25 Sun Yat-sen University JPEG image steganalysis method based on convolutional neural networks
CN110084734A (en) * 2019-04-25 2019-08-02 Nanjing University of Information Science and Technology A big-data ownership protection method based on object-local generative adversarial networks
CN110399789A (en) * 2019-06-14 2019-11-01 PCI-Suntek Technology Co., Ltd. Pedestrian re-identification method, model building method, apparatus, device and storage medium
CN111696021A (en) * 2020-06-10 2020-09-22 Engineering University of the Chinese People's Armed Police Force Image adaptive steganalysis system and method based on saliency detection
CN111768326A (en) * 2020-04-03 2020-10-13 Nanjing University of Information Science and Technology A high-capacity data protection method based on GAN amplification of image foreground objects


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
U2-Net: Going Deeper with Nested U-Structure for Salient Object Detection; Xuebin Qin et al.; https://arxiv.org/pdf/2005.09007.pdf; pp. 1-15 *
Image steganalysis based on convolutional neural networks; Wei Weihang; China Master's Theses Full-text Database, Information Science and Technology (No. 07); I138-31 *
Research on deep-learning-based image adaptive steganalysis techniques; Shen Qiang; China Master's Theses Full-text Database, Information Science and Technology (No. 03); I138-69 *


Similar Documents

Publication Publication Date Title
Zheng et al. A survey on image tampering and its detection in real-world photos
Li et al. Concealed attack for robust watermarking based on generative model and perceptual loss
CN111507386B (en) Method and system for detecting encryption communication of storage file and network data stream
CN109492627B (en) Scene text erasing method based on depth model of full convolution network
CN113077377B (en) Color image steganography method based on generation countermeasure network
CN112487365B (en) Information steganography method and information detection method and device
CN104636764B (en) A kind of image latent writing analysis method and its device
Wang et al. HidingGAN: High capacity information hiding with generative adversarial network
Fu et al. CCNet: CNN model with channel attention and convolutional pooling mechanism for spatial image steganalysis
Hu et al. Adaptive steganalysis based on selection region and combined convolutional neural networks
CN111696021A (en) Image self-adaptive steganalysis system and method based on significance detection
Yuan et al. GAN-based image steganography for enhancing security via adversarial attack and pixel-wise deep fusion
CN115809953A (en) Attention mechanism-based multi-size image robust watermarking method and system
CN112785478B (en) Hidden information detection method and system based on generation of embedded probability map
Tang et al. Detection of GAN-synthesized image based on discrete wavelet transform
Khoo et al. Deepfake attribution: On the source identification of artificially generated images
Liu et al. Making DeepFakes more spurious: evading deep face forgery detection via trace removal attack
CN116152523A (en) Image detection method, device, electronic equipment and readable storage medium
Liu et al. TBFormer: Two-Branch Transformer for Image Forgery Localization
CN112560034B (en) Malicious code sample synthesis method and device based on feedback type deep countermeasure network
Sultan et al. A new framework for analyzing color models with generative adversarial networks for improved steganography
CN110503157B (en) Image steganalysis method of multitask convolution neural network based on fine-grained image
CN115632843A (en) Target detection-based generation method of backdoor attack defense model
Wang An efficient multiple-bit reversible data hiding scheme without shifting
CN114743148A (en) Multi-scale feature fusion tampering video detection method, system, medium, and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant