CN111563263B - Carrier-free information hiding method for migration of arbitrary image style

Info

Publication number: CN111563263B
Authority: CN (China)
Prior art keywords: image, style, sub-block, feature map
Legal status: Active (granted)
Application number: CN202010301278.8A
Other languages: Chinese (zh)
Other versions: CN111563263A
Inventors: 张善卿, 苏圣琦, 李黎, 陆剑锋, 白瑞
Current assignee: Shaoxing Conglomerate Data Technology Co., Ltd.
Original assignee: Shaoxing Conglomerate Data Technology Co., Ltd.
Application filed by Shaoxing Conglomerate Data Technology Co., Ltd.
Priority to CN202010301278.8A
Publication of CN111563263A, followed by grant and publication of CN111563263B

Classifications

    • G06F 21/60 (Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity; Protecting data)
    • G06F 18/23 (Pattern recognition; Analysing; Clustering techniques)
    • G06N 3/045 (Computing arrangements based on biological models; Neural networks; Architecture, e.g. interconnection topology; Combinations of networks)
    • G06N 3/08 (Neural networks; Learning methods)
    • G06T 3/04 (Geometric image transformations in the plane of the image; Context-preserving transformations, e.g. by using an importance map)
    • Y02D 10/00 (Climate change mitigation technologies in information and communication technologies; Energy efficient computing, e.g. low power processors, power management or thermal management)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Bioethics (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a carrier-free information hiding method for arbitrary image style migration. The method combines the idea of carrier-free information hiding with non-parametric image style migration and designs a secret-information coding and adjustment scheme that adapts to the input images, generating an adaptive information hiding matrix. Under the guidance of the adaptive information hiding matrix, arbitrary image style migration is performed, and a style migration result driven by the secret information is synthesized directly. The invention retains the artistic, natural and high-fidelity visual effect of image style migration results, resists steganalysis as carrier-free information hiding requires, and is superior to other existing carrier-free information hiding methods in embedding capacity.

Description

Carrier-free information hiding method for migration of arbitrary image style
Technical Field
The invention belongs to the field of carrier-free information hiding, and particularly relates to a carrier-free information hiding method for arbitrary image style migration.
Background
Information hiding is a technique for embedding secret information invisibly into data that appears normal. In traditional image information hiding, a suitable image is designated as the carrier and the secret information is then embedded into it to generate a stego image. Although existing methods modify the carrier image only very slightly, steganalysis algorithms can still detect whether secret information is hidden in the image. For this reason, experts have proposed carrier-free information hiding algorithms. Carrier-free information hiding conceals secret information without making any modification to a carrier image, and can therefore resist existing steganalysis algorithms.
With the rise of deep learning, Gatys, Ecker et al. creatively proposed an image style migration method based on convolutional neural networks. However, because a separate network must be trained for each style image, a great amount of training time is required, and a trained network cannot perform arbitrary image style migration.
Disclosure of Invention
After comprehensively considering factors such as the visual effect and the embedding mode of image style migration, a carrier-free information hiding network for arbitrary image style migration is provided. The network combines the idea of carrier-free information hiding with non-parametric image style migration and designs a secret-information coding and adjustment scheme that adapts to the input images to generate an adaptive information hiding matrix. Under the guidance of the adaptive information hiding matrix, arbitrary image style migration is performed, and a style migration result driven by the secret information is synthesized directly.
The technical scheme of the invention comprises the following steps:
A carrier-free information hiding method for arbitrary image style migration comprises the following steps:
S1: obtaining a style image feature map from the target Relu layer of a convolutional neural network for image style migration, extracting all P_S sub-blocks of size m×m from the style image feature map in an overlapping manner, and performing Mean Shift clustering on all sub-blocks to obtain a K-class style image clustering result;
S2: determining a corrected class number K' from the style image clustering result:
[formula for the corrected class number K'; shown as an image in the original]
wherein P is the maximum number of m×m sub-blocks that can be extracted in a non-overlapping manner from the adaptive image in the target Relu layer of the convolutional neural network, and τ is the minimum number of sub-blocks contained in each cluster;
the adaptive image size h×w is determined as:
h = min{h_C, h_S},
w = min{w_C, w_S},
wherein h_C×w_C is the size of the content image input to the convolutional neural network and h_S×w_S is the size of the style image input to the convolutional neural network;
if K' ≠ K, the K style-image clusters are merged with the bottom-up AGNES clustering method until the number of classes equals K'; if K' = K, the K-class clustering result is kept unchanged;
S3: taking one class of the K'-class style image clustering result as the buffer class (its index is given by a rounding expression shown as an image in the original), using the remaining K'-1 classes to hide the secret information, and then calculating the packet length r-1 of the secret information according to the following formula:
[formula for the packet length r-1; shown as an image in the original]
wherein P_C is the maximum number of m×m sub-blocks that can be extracted in a non-overlapping manner from the content image in the target Relu layer of the convolutional neural network;
dividing the secret information to be hidden into K'-1 groups B_1, B_2, ..., B_{K'-1}, wherein B_i = (b_1 b_2 ... b_{r-1}), b_j = 0 or 1, i = 1, 2, ..., K'-1, j = 1, 2, ..., r-1;
adding a marker bit to each B_i to form B'_i:
[construction of B'_i; shown as an image in the original]
wherein b'_j = 1 - b_j;
S4: converting the E_n bits of secret information to be hidden into B'_1, B'_2, ..., B'_{K'-1}, then converting each code from binary to decimal, denoted D_1, D_2, ..., D_{K'-1}; using D_1, D_2, ..., D_{K'-1} as the available data-usage counts of classes 1, 2, ..., K'-1 other than the buffer class; and assembling the required usage counts of all classes, ordered by class number, into the adaptive information hiding matrix H:
[layout of the adaptive information hiding matrix H; shown as an image in the original]
wherein P_b is the number of blocks that must be used by the buffer class; when the total number of blocks required by the network to perform image style migration is P, P_b is calculated as:
P_b = P - (D_1 + D_2 + ... + D_{K'-1});
S5: using the adaptive information hiding matrix H as the usage counts of the sub-block clusters in the target Relu layer of the convolutional neural network, constraining the non-parametric style migration to obtain a secret-carrying style migration result.
On the basis of the above scheme, the steps can further be implemented in the following specific manners.
Preferably, the specific method of Mean Shift clustering in S1 is as follows:
S11: randomly selecting one sub-block from all unclassified feature-map sub-blocks as the center point, finding all sub-blocks whose distance to the center point lies within a set range, and recording them as the set M;
S12: calculating the offset vector between the center point and the set M, and moving the center point along the offset vector;
S13: repeating step S12 until the magnitude of the offset vector meets the set threshold, and recording the value of the center point at that moment;
S14: repeating S11-S13 until all feature-map sub-blocks have been classified; and finally, sorting the coordinates of all cluster centers dimension by dimension and numbering them 1, 2, ..., K.
Preferably, the convolutional neural network for image style migration is a VGG-19 network with hole (dilated) convolution.
Further, the target Relu layer is a Relu 3-1 layer in the VGG-19 network.
Preferably, the sub-blocks extracted from the content image feature map and the style image feature map are both of size 3×3.
Preferably, step S5 specifically comprises:
S51: inputting the content image C and the style image S into the convolutional neural network, and obtaining the content image feature map and the style image feature map at the target Relu layer; extracting all m×m sub-blocks f_i(C), 1 ≤ i ≤ P_C, from the content image feature map in a non-overlapping manner, and extracting all m×m sub-blocks f_j(S), 1 ≤ j ≤ P_S, from the style image feature map in an overlapping manner, where P_C and P_S denote the numbers of sub-blocks extractable from the content image feature map and the style image feature map respectively; dividing all style image feature-map sub-blocks into the K'-class style image clustering result according to the methods of S1-S2;
S52: for each content image feature-map sub-block f_i(C), selecting from the K'-class style image clustering result the style image feature-map sub-block cluster whose center is nearest to f_i(C);
S53: for each content image feature-map sub-block f_i(C), screening out, within the selected style image feature-map sub-block cluster, all P'_S style image feature-map sub-blocks f_j(S), j = 1, 2, ..., P'_S, whose distance to the cluster center does not exceed half of the cluster radius, and then using the cross-correlation function to determine the intra-class optimal block f_i^st(C,S) that best matches f_i(C):
f_i^st(C,S) = argmax over f_j(S), j = 1, ..., P'_S, of <f_i(C), f_j(S)> / (||f_i(C)|| · ||f_j(S)||)
S54: for each content image feature-map sub-block f_i(C), replacing f_i(C) with the intra-class optimal block f_i^st(C,S), and decreasing by 1 the usage count, in the information hiding matrix H, of the cluster to which f_i^st(C,S) belongs;
if, when S54 is to be executed, the usage count in the information hiding matrix H of the cluster to which the intra-class optimal block f_i^st(C,S) belongs is 0, performing S53 and S54 in the clusters adjacent to the cluster to which f_i^st(C,S) belongs;
S55: after all content image feature-map sub-blocks have been replaced, reconstructing the complete content image feature map F_ST(C,S).
Preferably, all sub-blocks extracted from the content image feature map and the style image feature map contain all channels of the feature map.
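As an illustration of this sub-block extraction, the sketch below uses PyTorch's unfold to pull m×m blocks spanning all channels out of a feature map; treating overlapping extraction as stride 1 and non-overlapping extraction as stride m is the assumed correspondence with the description above.

```python
import torch

def extract_subblocks(feature_map, m=3, overlapping=True):
    """Extract m x m sub-blocks spanning all channels from a feature map of
    shape (1, C, H, W).  stride=1 gives the overlapping extraction used for
    the style feature map; stride=m gives the non-overlapping extraction used
    for the content feature map.  Returns flattened blocks of shape (N, C*m*m)."""
    stride = 1 if overlapping else m
    blocks = torch.nn.functional.unfold(feature_map, kernel_size=m, stride=stride)
    return blocks.squeeze(0).t()
```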
Preferably, a carrier-free information hiding network is constructed based on the carrier-free information hiding method, and the carrier-free information hiding network is trained with a mean-square-error loss function; the loss function L_style(C,S) takes the form:
L_style(C,S) = (1/(w_c·h_c·d_c)) · ||F(I) - F_ST(C,S)||_F^2 + ρ·L_TV(I)
wherein ||·||_F is the Frobenius norm; w_c, h_c and d_c are the width, height and number of channels of the content image respectively; F(I) is the feature map of the current intermediate result I during training; ρ is a control parameter; and L_TV(·) is the total variation regularization term:
L_TV(I) = Σ_{i,j,k} [ (I_{i+1,j,k} - I_{i,j,k})^2 + (I_{i,j+1,k} - I_{i,j,k})^2 ]
wherein I_{i,j,k} is the pixel value of the k-th channel at row i, column j of the current intermediate result I.
Compared with the prior art, the invention has the following beneficial effects:
the invention combines the advantages of the network method for transferring the image style by the nonparametric method, and realizes carrier-free information hiding in the image style transferring process. Compared with other image style migration networks, the method can realize random image style migration under the guidance of secret information, so that the method can adaptively adjust the embedded bit number of secret information while guaranteeing the visual effect of an image style migration result, and balances the relationship between the hidden capacity and the robustness of a carrier-free information hiding algorithm. The method can not only generate any image style migration result faster, but also meet the requirements of safety and detectability in carrier-free information hiding. Compared with the existing carrier-free information hiding method based on machine learning, the method not only has larger information hiding capacity, but also can obtain artistic image style migration results.
Drawings
Fig. 1 is a schematic diagram of the carrier-free information hiding network structure for arbitrary image style migration (K = K' = 9 as an example).
Fig. 2 is a schematic diagram of the influence of the number of style image clusters on the image style migration result.
Fig. 3 is a schematic diagram of a hole (dilated) convolution with a receptive field of 9×9.
Fig. 4 is a schematic diagram of multiple style migration results output by the carrier-free information hiding network for arbitrary image style migration.
Detailed Description
The invention is further illustrated and described below with reference to the drawings and detailed description.
In the invention, the idea of carrier-free information hiding is combined with non-parametric image style migration, and a secret-information coding and adjustment scheme that adapts to the input images is designed to generate an adaptive information hiding matrix. Under the guidance of the adaptive information hiding matrix, arbitrary image style migration is performed, and a style migration result driven by the secret information is synthesized directly. In addition, the network performs well in terms of steganographic capacity, resistance to steganalysis and security. The specific implementation steps of the carrier-free information hiding method for arbitrary image style migration are described in detail below.
As shown in fig. 1, which is a schematic diagram of the steps of the carrier-free information hiding method for arbitrary image style migration, the implementation process is as follows:
S1: Mean Shift clustering is applied to the activated patches of a target layer of the convolutional neural network.
First, a convolutional neural network for image style migration is constructed, the style image feature map is obtained from its target Relu layer, and all P_S sub-blocks of size m×m are extracted from the style image feature map in an overlapping manner (i.e., partial overlap between two sub-blocks is allowed); Mean Shift clustering is then performed on all sub-blocks to obtain a K-class style image clustering result. In this embodiment, the convolutional neural network for image style migration is a VGG-19 network with hole (dilated) convolution, and the target Relu layer is the Relu 3-1 layer of the VGG-19 network.
The specific method of the Mean Shift clustering is as follows:
S11: randomly select one sub-block from all unclassified feature-map sub-blocks as the center point, find all sub-blocks whose distance to the center point lies within a set range, and record them as the set M;
S12: calculate the offset vector between the center point and the set M, and move the center point along the offset vector;
S13: repeat step S12 until the magnitude of the offset vector meets the set threshold, and record the value of the center point at that moment;
S14: repeat S11-S13 until all feature-map sub-blocks have been classified; finally, sort the coordinates of all cluster centers dimension by dimension and number them 1, 2, ..., K.
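A minimal sketch of this clustering procedure over flattened feature-map sub-blocks is given below; the flat window kernel, bandwidth, stopping tolerance and centre-merging radius are illustrative choices rather than values fixed by the patent.

```python
import numpy as np

def mean_shift_cluster(patches, bandwidth, tol=1e-3, max_iter=100):
    """Flat-kernel Mean Shift over flattened feature-map sub-blocks (S11-S14)."""
    patches = np.asarray(patches, dtype=float)
    unclassified = np.ones(len(patches), dtype=bool)
    centers = []
    while unclassified.any():
        # S11: pick an unclassified sub-block as the initial centre point.
        center = patches[np.random.choice(np.flatnonzero(unclassified))].copy()
        for _ in range(max_iter):
            in_window = np.linalg.norm(patches - center, axis=1) < bandwidth  # set M
            shift = patches[in_window].mean(axis=0) - center                  # S12: offset vector
            center += shift
            if np.linalg.norm(shift) < tol:                                   # S13: converged
                break
        unclassified &= ~in_window                                            # S14: mark as classified
        if all(np.linalg.norm(center - c) >= bandwidth / 2 for c in centers):
            centers.append(center)
    # Order the centres coordinate-wise and number the clusters 1..K.
    centers = np.array(sorted(centers, key=lambda c: tuple(c)))
    labels = 1 + np.argmin(
        np.linalg.norm(patches[:, None, :] - centers[None, :, :], axis=2), axis=1)
    return centers, labels
```

The returned labels assign every sub-block to its nearest numbered centre, which is the K-class clustering result used in the following steps.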
S2: and determining a correction class number K' by using AGNES clustering on the style image clustering result.
First, the size of the content image input to the convolutional neural network is denoted h_C×w_C and the size of the style image input to the convolutional neural network is denoted h_S×w_S; the element-wise minimum of the two is taken as the adaptive image size h×w, namely:
h = min{h_C, h_S},
w = min{w_C, w_S}.
According to the adaptive image size h×w, let P be the maximum number of m×m (m = 3) sub-blocks that can be extracted in a non-overlapping manner (i.e., no overlap is allowed between any two sub-blocks) in the target Relu layer of the convolutional neural network, and let τ be the minimum number of sub-blocks contained in each cluster; the corrected class number K' is then:
[formula for the corrected class number K'; shown as an image in the original]
K' is the number of clusters that adapts to the characteristics of the images while keeping good robustness.
If K' ≠ K, the K style-image clusters are merged with the bottom-up AGNES clustering method until the number of classes equals K'; if K' = K, the K-class clustering result is kept unchanged. The influence of the number of style image clusters on the style migration result is illustrated in fig. 2. In this embodiment, K' = K = 9, so the clustering result remains unchanged.
Through step S2, the K'-class style image clustering result is thus obtained.
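The sketch below illustrates this step under stated assumptions: the closed-form expression for K' appears only as an image in the patent, so min(K, P // τ) is an assumed reading consistent with the surrounding definitions; the ratio between input and Relu 3-1 resolution is taken as 4; and the AGNES merge uses a simplified nearest-centre linkage.

```python
import numpy as np

def adaptive_block_budget(h_c, w_c, h_s, w_s, m=3, feat_downscale=4):
    """Adaptive image size h x w and the block budget P, i.e. the number of
    non-overlapping m x m sub-blocks in the target feature map.  The ratio
    feat_downscale between input and Relu 3-1 resolution is an assumption."""
    h, w = min(h_c, h_s), min(w_c, w_s)
    return (h // feat_downscale // m) * (w // feat_downscale // m)

def corrected_class_number(K, P, tau):
    """Assumed reading of the K' expression: keep no more classes than can
    each receive at least tau of the P available blocks."""
    return min(K, P // tau)

def agnes_merge(centers, labels, K_prime):
    """Bottom-up AGNES merge of cluster centres until K' classes remain.
    Averaging the two nearest centres is an assumed, simplified linkage."""
    centers = [np.asarray(c, dtype=float) for c in centers]
    groups = [[i + 1] for i in range(len(centers))]      # original 1-based labels
    while len(centers) > K_prime:
        i, j = min(((a, b) for a in range(len(centers))
                    for b in range(a + 1, len(centers))),
                   key=lambda ab: np.linalg.norm(centers[ab[0]] - centers[ab[1]]))
        centers[i] = (centers[i] + centers[j]) / 2
        groups[i].extend(groups[j])
        del centers[j], groups[j]
    remap = {old: new + 1 for new, grp in enumerate(groups) for old in grp}
    return np.array(centers), np.array([remap[int(l)] for l in labels])
```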
S3: and calculating the packet length r-1 according to the correction class number K'.
One class of the K'-class style image clustering result (its index is given by a rounding-up expression shown as an image in the original) is used as the buffer class, and the remaining K'-1 classes are used to hide the secret information. Let P_C be the maximum number of m×m sub-blocks obtained by dividing the content image in a non-overlapping manner in the target Relu layer of the convolutional neural network; the average number of blocks per class available for information hiding then determines the packet length r-1:
[formula for the packet length r-1; shown as an image in the original]
The secret information to be hidden is divided into K'-1 groups, denoted B_1, B_2, ..., B_{K'-1}, where each B_i has r-1 bits, i.e. B_i = (b_1 b_2 ... b_{r-1}), b_j = 0 or 1, i = 1, 2, ..., K'-1, j = 1, 2, ..., r-1. A marker bit is then added to each B_i according to the following rule to form B'_i:
[construction of B'_i; shown as an image in the original]
wherein b'_j = 1 - b_j, j = 1, 2, ..., r-1.
The E_n bits of secret information are taken and the subsequences are converted into B'_1, B'_2, ..., B'_{K'-1} according to the coding scheme; the updated codes are then converted from binary to decimal, denoted D_1, D_2, ..., D_{K'-1}. D_1, D_2, ..., D_{K'-1} serve as the constraints on the number of times the data of the remaining classes other than the buffer class may be used.
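A small sketch of the grouping operation is given below; it only performs the split into K'-1 groups of r-1 bits, and zero-padding a short bit stream is an assumption made for illustration (the marker-bit rule itself is shown only as an image in the patent and is therefore not reproduced here).

```python
def split_into_groups(bits, K_prime, r):
    """Split the E_n secret bits into K'-1 groups of r-1 bits each (step S3).
    Zero-padding a short bit stream is an assumed convention; the marker bit
    is added to each group afterwards."""
    n_groups, group_len = K_prime - 1, r - 1
    bits = list(bits) + [0] * max(0, n_groups * group_len - len(bits))
    return [bits[i * group_len:(i + 1) * group_len] for i in range(n_groups)]
```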
S4: the secret information is encoded into an adaptive information hiding matrix H using a new encoding scheme.
The E_n bits of secret information are taken and the subsequences are converted into B'_1, B'_2, ..., B'_{K'-1} according to the coding scheme; the updated codes are then converted from binary to decimal, denoted D_1, D_2, ..., D_{K'-1}, and D_1, D_2, ..., D_{K'-1} are used as the available data-usage counts of classes 1, 2, ..., K'-1 other than the buffer class. The total data-usage count P_n available to the K'-1 classes used for encoding the secret information is:
P_n = D_1 + D_2 + ... + D_{K'-1}.
With P the total number of blocks required by the network to perform image style migration, the number of blocks P_b that must be used by the buffer class is:
P_b = P - P_n.
The required usage counts of all classes, ordered by class number, are assembled into the information hiding matrix H:
[layout of the information hiding matrix H; shown as an image in the original]
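A sketch of this assembly step follows. The exact layout of H and the position of the buffer class are shown only as images in the patent, so placing the buffer entry at index ⌈K'/2⌉ of a length-K' vector is an assumption made for illustration.

```python
import math

def build_hiding_matrix(marked_groups, P, K_prime):
    """Assemble the adaptive information hiding matrix H (step S4).

    marked_groups are the bit groups B'_1..B'_{K'-1} (lists of 0/1) that
    already carry their marker bit.  H is assumed here to be a length-K'
    vector of per-class usage counts with the buffer class at index
    ceil(K'/2); the patent shows the actual layout only as an image."""
    D = [int("".join(str(b) for b in g), 2) for g in marked_groups]  # binary -> decimal
    P_n = sum(D)                 # blocks consumed by the K'-1 information classes
    P_b = P - P_n                # blocks left over for the buffer class
    if P_b < 0:
        raise ValueError("secret payload exceeds the available block budget")
    buffer_idx = math.ceil(K_prime / 2)        # assumed buffer-class position
    return D[:buffer_idx - 1] + [P_b] + D[buffer_idx - 1:]
```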
S5: the adaptive information hiding matrix H is used as the usage counts of the sub-block clusters in the target Relu layer of the convolutional neural network, constraining the non-parametric style migration to obtain the secret-carrying style migration result. The specific implementation of this step is as follows:
S51: the content image C and the style image S are input into the convolutional neural network (the VGG-19 network with hole convolution), and the content image feature map and the style image feature map are obtained at the target Relu layer. All m×m sub-blocks f_i(C), 1 ≤ i ≤ P_C, are extracted from the content image feature map in a non-overlapping manner, and all m×m sub-blocks f_j(S), 1 ≤ j ≤ P_S, are extracted from the style image feature map in an overlapping manner, where P_C and P_S denote the numbers of sub-blocks extractable from the content image feature map and the style image feature map respectively. The sub-blocks extracted from both feature maps are of size 3×3, i.e. m = 3, and every extracted sub-block contains all channels of the feature map. All style image feature-map sub-blocks are divided into the K'-class style image clustering result according to the methods of S1-S2.
S52: for each content image feature-map sub-block f_i(C), its distance to each cluster center of the K'-class style image clustering result is computed, and the style image feature-map sub-block cluster whose center is nearest to f_i(C) is selected;
S53: for each content image feature-map sub-block f_i(C), all P'_S style image feature-map sub-blocks f_j(S), j = 1, 2, ..., P'_S, whose distance to the cluster center does not exceed half of the cluster radius are screened out within the selected cluster, and the cross-correlation function is then used to determine the intra-class optimal block f_i^st(C,S) that best matches f_i(C):
f_i^st(C,S) = argmax over f_j(S), j = 1, ..., P'_S, of <f_i(C), f_j(S)> / (||f_i(C)|| · ||f_j(S)||)
S54: for each content image feature-map sub-block f_i(C), f_i(C) is replaced with the intra-class optimal block f_i^st(C,S), and the usage count, in the information hiding matrix H, of the cluster to which f_i^st(C,S) belongs is decreased by 1;
if, when S54 is to be executed, the usage count in the information hiding matrix H of the cluster to which the intra-class optimal block f_i^st(C,S) belongs is 0, S53 and S54 are performed in the clusters adjacent to the cluster to which f_i^st(C,S) belongs;
S55: after all content image feature-map sub-blocks have been replaced, the complete content image feature map F_ST(C,S) is reconstructed.
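The following is a minimal sketch of the constrained patch replacement of S52-S54, operating on already-extracted and flattened sub-blocks. The half-radius screening of S53 is omitted and the adjacent-cluster fallback is simplified to walking to the nearest class that still has budget; both are simplifications of the patent's rule, and normalized cross-correlation is the assumed matching criterion. Reconstructing the complete feature map F_ST(C,S) from the replaced blocks (S55) is not shown.

```python
import numpy as np

def constrained_style_swap(content_blocks, style_blocks, style_labels, centers, H):
    """Replace every content sub-block by its best intra-class match (S52-S54)
    while honouring the per-class usage budget H (1-based class labels)."""
    content_blocks = np.asarray(content_blocks, dtype=float)
    style_blocks = np.asarray(style_blocks, dtype=float)
    style_labels = np.asarray(style_labels)
    centers = np.asarray(centers, dtype=float)
    H = list(H)
    out = np.empty_like(content_blocks)
    for i, f_c in enumerate(content_blocks):
        # S52: classes ordered by distance of their centre to the content block.
        order = np.argsort(np.linalg.norm(centers - f_c, axis=1))
        for k in order:
            cand = style_blocks[style_labels == k + 1]
            if H[k] <= 0 or len(cand) == 0:      # no budget or no blocks in this class
                continue
            # S53: normalized cross-correlation against the candidate blocks.
            scores = cand @ f_c / (np.linalg.norm(cand, axis=1)
                                   * np.linalg.norm(f_c) + 1e-8)
            out[i] = cand[np.argmax(scores)]     # S54: substitute the best matching block
            H[k] -= 1                            # one use of this class consumed
            break
        else:
            out[i] = f_c                         # fallback: keep the content block
    return out
```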
A carrier-free information hiding network (CSST-Net) is constructed based on the above carrier-free information hiding method; the network is trained with a mean-square-error loss function, and the loss function L_style(C,S) takes the form:
L_style(C,S) = (1/(w_c·h_c·d_c)) · ||F(I) - F_ST(C,S)||_F^2 + ρ·L_TV(I)
wherein ||·||_F is the Frobenius norm; w_c, h_c and d_c are the width, height and number of channels of the content image respectively; F(I) is the feature map of the current intermediate result I during training; ρ is a control parameter; and L_TV(·) is the total variation regularization term:
L_TV(I) = Σ_{i,j,k} [ (I_{i+1,j,k} - I_{i,j,k})^2 + (I_{i,j+1,k} - I_{i,j,k})^2 ]
wherein I_{i,j,k} is the pixel value of the k-th channel at row i, column j of the current intermediate result I.
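A PyTorch sketch of this training objective is given below; since L_style is shown only as an image in the patent, the exact normalization and the squared-difference form of the TV term are assumptions consistent with the symbols listed above.

```python
import torch

def total_variation(I):
    """Total-variation regularizer over an image batch I of shape (N, C, H, W)."""
    dh = (I[:, :, 1:, :] - I[:, :, :-1, :]).pow(2).sum()
    dw = (I[:, :, :, 1:] - I[:, :, :, :-1]).pow(2).sum()
    return dh + dw

def style_loss(F_I, F_target, I, w_c, h_c, d_c, rho=1e-6):
    """Assumed form of L_style: normalized squared feature error plus rho * TV.

    F_I      -- feature map F(I) of the current reconstruction I
    F_target -- the swapped target feature map F_ST(C, S)"""
    mse = (F_I - F_target).pow(2).sum() / (w_c * h_c * d_c)
    return mse + rho * total_variation(I)
```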
Therefore, secret information can be embedded into the style migration image by using the carrier-free information hiding method, and carrier-free information hiding is realized in the image style migration process.
In order to facilitate understanding, the invention further provides the specific secret-information extraction steps at the receiving end, as follows:
Step 1: extract the feature map of the style migration result with the VGG-19 network with hole convolution of CSST-Net, using the same Relu 3-1 layer of the VGG-19 network as at the hiding end, and divide the resulting feature map into m×m sub-blocks in a non-overlapping manner.
Step 2: cluster the obtained feature-map sub-blocks with the Mean Shift algorithm to obtain C_1, C_2, ..., C_{K'}.
Step 3: count the number of feature-map sub-blocks in each class after ordering: the number of feature-map sub-blocks in class C_1 is n_1, in class C_2 is n_2, ..., and in class C_{K'} is n_{K'}. Assemble the per-class sub-block counts, ordered by class code, into a count matrix:
[layout of the count matrix; shown as an image in the original]
Step 4: using the inverse of the information hiding matrix H described above, solve the sub-block count matrix inversely to obtain the embedded secret-information bit stream.
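A sketch of the counting-and-decoding side is shown below; it recovers the marked groups B'_i from the per-class counts, while the final removal of the marker bits follows a rule the patent gives only as an image and is therefore left to the caller. The 1-based buffer index is an assumed convention.

```python
def extract_marked_groups(block_counts, buffer_idx, r):
    """Recover the marked groups B'_1..B'_{K'-1} from the per-class sub-block
    counts n_1..n_K' of the received feature map (steps 3-4).  buffer_idx is
    the 1-based position of the buffer class, which carries no payload."""
    groups = []
    for pos, n in enumerate(block_counts, start=1):
        if pos == buffer_idx:                 # the buffer class carries no payload
            continue
        groups.append([int(b) for b in format(int(n), "0{}b".format(r))])
    return groups
```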
Therefore, the invention combines the idea of carrier-free information hiding with non-parametric image style migration, designs a secret-information coding and adjustment scheme that adapts to the input images, and generates an adaptive information hiding matrix. Under the guidance of the adaptive information hiding matrix, arbitrary image style migration is performed, and the style migration result driven by the secret information is synthesized directly. The invention inherits the artistic, natural and high-fidelity visual effect of image style migration results, resists steganalysis as carrier-free information hiding requires, and is superior to other existing carrier-free information hiding methods in embedding capacity. In order to demonstrate the effects achieved by the invention, the above method is applied to a specific embodiment; its individual steps are not repeated in detail, and the specific parameters and technical effects are shown below.
Examples
In this embodiment, the image conversion task of image style migration is performed using CSST-Net; the implementation details and visual effects are analyzed below. Furthermore, another important task of CSST-Net is to generate the codebook for carrier-free information hiding, so this embodiment also presents experimental analyses of the steganographic capacity, detectability and security of the carrier-free information hiding operation.
1. Image style migration results
The CSST-Net feed-forward network uses a pretrained VGG-19 with hole convolution; the inverse network follows reference [3] and is trained on the Microsoft COCO (MSCOCO) dataset, with 15000 pictures randomly extracted from the validation set as the training set and trained for three epochs. Because the method uses Mean Shift clustering and AGNES clustering, some extra time is consumed during image style migration. All of the following results were obtained within 18-25 seconds on an NVIDIA GeForce GTX 1060 GPU. Output results of CSST-Net showing both artistry and fidelity are shown in fig. 4.
2. Network speed
Because convolutional neural networks are needed in both the information hiding and the information extraction processes, the training speed and the style flexibility of CSST-Net are aspects to be considered. The comparison results are shown in Table 1.
Table 1. Training comparison of style migration networks
[Table 1 is shown as an image in the original]
The compared methods are as follows:
[1] Gatys L A, Ecker A S, Bethge M. A neural algorithm of artistic style [J]. arXiv preprint arXiv:1508.06576, 2015.
[2] Li Y, Fang C, Yang J, et al. Diversified texture synthesis with feed-forward networks [C]. IEEE Conference on Computer Vision and Pattern Recognition, 2017: 3920-3928.
[3] Chen T Q, Schmidt M. Fast patch-based style transfer of arbitrary style [J]. arXiv preprint arXiv:1612.04337, 2016.
3. information hiding capacity
Because image carrier-free information hiding is still not mature, its capacity is not directly comparable with that of traditional information hiding methods, so only a comparison among several image carrier-free information hiding approaches is performed. The information hiding capacity is determined adaptively by the size characteristics of the content image and the style image. In the experiments, the hiding capacity for an image of size 640×360 with the number of clusters equal to 9 is 48 bits. The specific comparison results are shown in Table 2.
Table 2. Comparison of carrier-free information hiding capacities
[Table 2 is shown as an image in the original]
The compared methods are as follows:
[4] Zhou Zhili, Cao, Sun Xingming. Carrier-free information hiding based on the image bag-of-words model [J]. Journal of Applied Sciences, 2016, 34(5): 527-536.
[5] Zhou Z, Sun H, Harit R, et al. Coverless image steganography without embedding [C]. International Conference on Cloud Computing and Security. Springer, Cham, 2015: 123-132.
[6] Yi C, Zhou Z, Yang C N, et al. Coverless information hiding based on Faster R-CNN [C]. International Conference on Security with Intelligent Computing and Big-data Services. Springer, Cham, 2018: 795-807.
4. Detectability and safety
CSST-Net stores the secret information directly in the style migration result through encoding, and the stego image is not modified during transmission of the secret information. The results above show that the style migration results obtained by CSST-Net have good visual quality, can plausibly masquerade as ordinary transmitted images, and are unlikely to arouse an attacker's suspicion. Detection by statistics-based steganalysis algorithms can therefore be fundamentally resisted.
The above embodiment is only a preferred embodiment of the present invention, but it is not intended to limit the present invention. Various changes and modifications may be made by one of ordinary skill in the pertinent art without departing from the spirit and scope of the present invention. Therefore, all the technical schemes obtained by adopting the equivalent substitution or equivalent transformation are within the protection scope of the invention.

Claims (7)

1. The carrier-free information hiding method for arbitrary image style migration is characterized by comprising the following steps:
S1: obtaining a style image feature map from the target Relu layer of a convolutional neural network for image style migration, extracting all P_S sub-blocks of size m×m from the style image feature map in an overlapping manner, and performing Mean Shift clustering on all sub-blocks to obtain a K-class style image clustering result;
S2: determining a corrected class number K' from the style image clustering result:
[formula for the corrected class number K'; shown as an image in the original]
wherein P is the maximum number of m×m sub-blocks that can be extracted in a non-overlapping manner from the adaptive image in the target Relu layer of the convolutional neural network, and τ is the minimum number of sub-blocks contained in each cluster;
the adaptive image size h×w is determined as:
h = min{h_C, h_S},
w = min{w_C, w_S},
wherein h_C×w_C is the size of the content image input to the convolutional neural network and h_S×w_S is the size of the style image input to the convolutional neural network;
if K' ≠ K, the K style-image clusters are merged with the bottom-up AGNES clustering method until the number of classes equals K'; if K' = K, the K-class clustering result is kept unchanged;
S3: taking one class of the K'-class style image clustering result as the buffer class (its index is given by a rounding expression shown as an image in the original), using the remaining K'-1 classes to hide the secret information, and then calculating the packet length r-1 of the secret information as:
[formula for the packet length r-1; shown as an image in the original]
wherein P_C is the maximum number of m×m sub-blocks that can be extracted in a non-overlapping manner from the content image in the target Relu layer of the convolutional neural network;
dividing the secret information to be hidden into K'-1 groups B_1, B_2, ..., B_{K'-1}, wherein B_i = (b_1 b_2 ... b_{r-1}), b_j = 0 or 1, i = 1, 2, ..., K'-1, j = 1, 2, ..., r-1;
adding a marker bit to each B_i to form B'_i:
[construction of B'_i; shown as an image in the original]
wherein b'_j = 1 - b_j;
S4: converting the E_n bits of secret information to be hidden into B'_1, B'_2, ..., B'_{K'-1}, then converting each code from binary to decimal, denoted D_1, D_2, ..., D_{K'-1}; using D_1, D_2, ..., D_{K'-1} as the available data-usage counts of classes 1, 2, ..., K'-1 other than the buffer class; and assembling the required usage counts of all classes, ordered by class number, into the adaptive information hiding matrix H:
[layout of the adaptive information hiding matrix H; shown as an image in the original]
wherein P_b is the number of blocks that must be used by the buffer class; when the total number of blocks required by the network to perform image style migration is P, P_b is calculated as:
P_b = P - (D_1 + D_2 + ... + D_{K'-1});
S5: using the adaptive information hiding matrix H as the usage counts of the sub-block clusters in the target Relu layer of the convolutional neural network and constraining the non-parametric style migration to obtain a secret-carrying style migration result, with the following specific steps:
S51: inputting the content image C and the style image S into the convolutional neural network, and obtaining the content image feature map and the style image feature map at the target Relu layer; extracting all m×m sub-blocks f_i(C), 1 ≤ i ≤ P_C, from the content image feature map in a non-overlapping manner, and extracting all m×m sub-blocks f_j(S), 1 ≤ j ≤ P_S, from the style image feature map in an overlapping manner, where P_C and P_S denote the numbers of sub-blocks extractable from the content image feature map and the style image feature map respectively; dividing all style image feature-map sub-blocks into the K'-class style image clustering result according to the methods of S1-S2;
S52: for each content image feature-map sub-block f_i(C), selecting from the K'-class style image clustering result the style image feature-map sub-block cluster whose center is nearest to f_i(C);
S53: for each content image feature-map sub-block f_i(C), screening out, within the selected style image feature-map sub-block cluster, all P'_S style image feature-map sub-blocks f_j(S), j = 1, 2, ..., P'_S, whose distance to the cluster center does not exceed half of the cluster radius, and then using the cross-correlation function to determine the intra-class optimal block f_i^st(C,S) that best matches f_i(C):
f_i^st(C,S) = argmax over f_j(S), j = 1, ..., P'_S, of <f_i(C), f_j(S)> / (||f_i(C)|| · ||f_j(S)||)
S54: for each content image feature-map sub-block f_i(C), replacing f_i(C) with the intra-class optimal block f_i^st(C,S), and decreasing by 1 the usage count, in the information hiding matrix H, of the cluster to which f_i^st(C,S) belongs;
if, when S54 is to be executed, the usage count in the information hiding matrix H of the cluster to which the intra-class optimal block f_i^st(C,S) belongs is 0, performing S53 and S54 in the clusters adjacent to the cluster to which f_i^st(C,S) belongs;
S55: after all content image feature-map sub-blocks have been replaced, reconstructing the complete content image feature map F_ST(C,S).
2. The carrier-free information hiding method for arbitrary image style migration according to claim 1, wherein the specific method of Mean Shift clustering in S1 is as follows:
S11: randomly selecting one sub-block from all unclassified feature-map sub-blocks as the center point, finding all sub-blocks whose distance to the center point lies within a set range, and recording them as the set M;
S12: calculating the offset vector between the center point and the set M, and moving the center point along the offset vector;
S13: repeating step S12 until the magnitude of the offset vector meets the set threshold, and recording the value of the center point at that moment;
S14: repeating S11-S13 until all feature-map sub-blocks have been classified; and finally, sorting the coordinates of all cluster centers dimension by dimension and numbering them 1, 2, ..., K.
3. The carrier-free information hiding method of any image style migration of claim 1, wherein said convolutional neural network for image style migration is a VGG-19 network with hole convolution.
4. The carrier-free information hiding method for arbitrary image style migration as claimed in claim 3, wherein said target Relu layer is a Relu 3-1 layer in VGG-19 network.
5. The carrier-free information hiding method of any image style migration of claim 1, wherein the sub-blocks extracted from said content image feature map and said style image feature map are each of size 3×3.
6. The carrier-free information hiding method of any image style migration of claim 1, wherein all sub-blocks extracted from said content image feature map and said style image feature map contain all channels of the feature map.
7. The carrier-free information hiding method for arbitrary image style migration as claimed in claim 1, wherein a carrier-free information hiding network is constructed based on said carrier-free information hiding method and is trained with a mean-square-error loss function, the loss function L_style(C,S) taking the form:
L_style(C,S) = (1/(w_c·h_c·d_c)) · ||F(I) - F_ST(C,S)||_F^2 + ρ·L_TV(I)
wherein ||·||_F is the Frobenius norm; w_c, h_c and d_c are the width, height and number of channels of the content image respectively; F(I) is the feature map of the current intermediate result I during training; ρ is a control parameter; and L_TV(·) is the total variation regularization term:
L_TV(I) = Σ_{i,j,k} [ (I_{i+1,j,k} - I_{i,j,k})^2 + (I_{i,j+1,k} - I_{i,j,k})^2 ]
wherein I_{i,j,k} is the pixel value of the k-th channel at row i, column j of the current intermediate result I.
CN202010301278.8A 2020-04-16 2020-04-16 Carrier-free information hiding method for migration of arbitrary image style Active CN111563263B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010301278.8A CN111563263B (en) 2020-04-16 2020-04-16 Carrier-free information hiding method for migration of arbitrary image style

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010301278.8A CN111563263B (en) 2020-04-16 2020-04-16 Carrier-free information hiding method for migration of arbitrary image style

Publications (2)

Publication Number Publication Date
CN111563263A CN111563263A (en) 2020-08-21
CN111563263B true CN111563263B (en) 2023-04-25

Family

ID=72073153

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010301278.8A Active CN111563263B (en) 2020-04-16 2020-04-16 Carrier-free information hiding method for migration of arbitrary image style

Country Status (1)

Country Link
CN (1) CN111563263B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112288622B (en) * 2020-10-29 2022-11-08 中山大学 Multi-scale generation countermeasure network-based camouflaged image generation method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110533570A (en) * 2019-08-27 2019-12-03 南京工程学院 A kind of general steganography method based on deep learning
CN110677552A (en) * 2019-08-30 2020-01-10 绍兴聚量数据技术有限公司 Carrier-free information hiding method based on complete packet bases

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109426858B (en) * 2017-08-29 2021-04-06 京东方科技集团股份有限公司 Neural network, training method, image processing method, and image processing apparatus

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110533570A (en) * 2019-08-27 2019-12-03 南京工程学院 A kind of general steganography method based on deep learning
CN110677552A (en) * 2019-08-30 2020-01-10 绍兴聚量数据技术有限公司 Carrier-free information hiding method based on complete packet bases

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Dorin Comaniciu et al. Mean Shift: A Robust Approach Toward Feature Space Analysis. IEEE, 2002: 603-619. *
Fisher Yu. Multi-Scale Context Aggregation by Dilated Convolutions. arXiv, 2016: 1-13. *
Tian Qi Chen et al. Fast Patch-based Style Transfer of Arbitrary Style. arXiv, 2016: 1-10. *

Also Published As

Publication number Publication date
CN111563263A (en) 2020-08-21

Similar Documents

Publication Publication Date Title
Yang et al. An embedding cost learning framework using GAN
US9390373B2 (en) Neural network and method of neural network training
US20160283842A1 (en) Neural network and method of neural network training
CN107330515A (en) A kind of apparatus and method for performing artificial neural network forward operation
Feng et al. Evolutionary fuzzy particle swarm optimization vector quantization learning scheme in image compression
JP2021502650A5 (en)
CN107077734A (en) Determining method and program
CN112464003A (en) Image classification method and related device
CN106339719A (en) Image identification method and image identification device
CN109635946A (en) A kind of combined depth neural network and the clustering method constrained in pairs
CN112580502B (en) SICNN-based low-quality video face recognition method
CN108564166A (en) Based on the semi-supervised feature learning method of the convolutional neural networks with symmetrical parallel link
CN109272061B (en) Construction method of deep learning model containing two CNNs
CN111563263B (en) Carrier-free information hiding method for migration of arbitrary image style
CN107153837A (en) Depth combination K means and PSO clustering method
CN115941112B (en) Portable hidden communication method, computer equipment and storage medium
CN104103042A (en) Nonconvex compressed sensing image reconstruction method based on local similarity and local selection
CN107578365B (en) Wavelet digital watermark embedding and extracting method based on quantum weed optimizing mechanism
CN111726472B (en) Image anti-interference method based on encryption algorithm
CN111224905A (en) Multi-user detection method based on convolution residual error network in large-scale Internet of things
CN109388959A (en) The production information concealing method of combination difference cluster and minimal error textures synthesis
CN115134474A (en) Pixel prediction-based parameter binary tree reversible data hiding method
CN118155251A (en) Palm vein recognition method based on semantic communication type federal learning
CN110991462A (en) Privacy protection CNN-based secret image identification method and system
CN114169499A (en) Packet cipher system identification method based on LeNet5-RF algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant