CN117132671B - Multi-task steganography method, system and medium based on depth self-adaptive steganography network - Google Patents


Info

Publication number
CN117132671B
CN117132671B
Authority
CN
China
Prior art keywords
secret
image
frequency
depth
information
Prior art date
Legal status
Active
Application number
CN202311402700.9A
Other languages
Chinese (zh)
Other versions
CN117132671A (en)
Inventor
卢瑶
张乐
李彤
卢光明
Current Assignee
Shenzhen Graduate School Harbin Institute of Technology
Original Assignee
Shenzhen Graduate School Harbin Institute of Technology
Priority date
Filing date
Publication date
Application filed by Shenzhen Graduate School Harbin Institute of Technology filed Critical Shenzhen Graduate School Harbin Institute of Technology
Priority to CN202311402700.9A priority Critical patent/CN117132671B/en
Publication of CN117132671A publication Critical patent/CN117132671A/en
Application granted granted Critical
Publication of CN117132671B publication Critical patent/CN117132671B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 9/00 Image coding
    • G06T 9/002 Image coding using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60 Protecting data
    • G06F 21/602 Providing cryptographic facilities or services
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60 Protecting data
    • G06F 21/62 Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F 21/6209 Protecting access to data via a platform, e.g. using keys or access control rules to a single file or object, e.g. in a secure envelope, encrypted and accessed using a key, or with access control rules appended to the object itself
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 General purpose image data processing
    • G06T 1/0021 Image watermarking
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Bioethics (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a multi-task steganography method, system and medium based on a depth adaptive steganography network, wherein the method comprises the following steps: in the hiding stage, based on the depth adaptive steganography network, a sender adopts a frequency-by-frequency, depth-by-depth extraction mechanism and an adaptive space-frequency extraction module to adaptively and gradually extract the effective spatial and frequency information of the carrier and secret images, fuses the secret information with the effective part of the carrier information to obtain a secret-carried image, and sends the secret-carried image to a receiver; in the recovery stage, the receiver recovers the secret image from the secret-carried image through a recovery network. Because the secret information is adaptively embedded into the carrier image frequency by frequency and depth by depth in the hiding stage, the invention significantly improves the secret-carried image quality and the steganographic concealment of multiple steganography tasks; meanwhile, because the important secret information is effectively embedded, the quality of the recovered secret image and the steganographic validity are also significantly improved in the recovery stage.

Description

Multi-task steganography method, system and medium based on depth self-adaptive steganography network
Technical Field
The invention relates to the technical fields of information security, image processing and artificial intelligence, in particular to a multi-task steganography method, a system and a medium based on a depth self-adaptive steganography network.
Background
Image steganography realizes covert communication between a sender and a receiver by hiding the secret information to be transferred in a publicly available carrier image file. The sender conceals the secret image in the carrier image in an imperceptible manner and transmits the generated secret-carried image to the receiver. The receiver extracts the recovered secret image from the received secret-carried image, thereby completing the secret communication. To ensure the concealment of communication, the secret-carried image needs to be as similar as possible to the carrier image. Meanwhile, to ensure the validity of communication, the recovered secret image needs to be as similar as possible to the original secret image. Conventional image steganography hides limited binary information in a carrier image, and its steganographic capacity is typically less than 0.4 bpp, which can hardly meet today's high-capacity requirements. The current mainstream is image-in-image steganography, which can hide an entire image (the steganographic capacity of a single image is 24 bpp). The watermarking task and photographic steganography are very similar to the image steganography task and can be handled by a unified paradigm, but image hiding focuses more on the concealment and security of communication to ensure effective covert communication, whereas watermarking focuses more on robustness in the field of copyright protection to prove ownership of an image. Photographic steganography means that the receiver obtains the secret-carried image by photographing a screen with a mobile phone or camera and then performs steganographic recovery; it must consider the information clipping, distortion and chromatic aberration of different display devices that may occur in actual photographing.
The three tasks of image-in-image steganography, watermarking and photographic steganography are very similar, but each task has a different emphasis; how to design an effective, general-purpose model to handle all of them is an important problem to be solved urgently.
The universal deep hiding method (Universal Deep Hiding, UDH) proposed, for the first time, a universal paradigm to handle the three problems of image steganography, watermarking and photographic steganography. In the information hiding stage, UDH first encodes the secret image with a hiding network, and then directly adds the carrier image to the encoded secret image to generate the final secret-carried image. The information hiding stage may be represented by the following formula:

C' = C + H(S)

wherein H represents the hiding network of UDH, and S, C and C' represent the secret image, the carrier image and the secret-carried image, respectively. In the information recovery stage, UDH restores the secret image information from the secret-carried image through a recovery network. The system structure of UDH is shown in FIG. 1, which is a frame diagram of a prior-art multi-task UDH system; the symbols in FIG. 1 represent the original secret image, the recovered secret image, the encoded secret image, the carrier image and the secret-carried image, respectively. As shown in FIG. 2, which compares the visual effect of the present invention with that of UDH when hiding a secret image in a carrier image, UDH directly fuses the carrier image with the encoded secret image at the end of the hiding stage, so that a large amount of obvious secret-image information is stored in the secret-carried image. This not only degrades the quality of the secret-carried image but also increases the risk of information leakage: when the carrier image is available to a third party, a large amount of secret-image information can be obtained by subtracting the carrier image from the intercepted secret-carried image, causing the secret communication to fail or even the steganographic system to be compromised. Meanwhile, this approach performs no careful fusion of the effective information of the carrier and secret images, so only limited secret-recovery information is embedded in the secret-carried image, which impairs the recovery of the secret image in the recovery stage. Thus, the UDH method limits, to some extent, the concealment and validity of secret communication in multi-task steganography.
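The UDH paradigm above (the secret-carried image is the carrier plus a cover-independent encoded secret) and the residual-leakage problem it creates can be sketched in a few lines. The encoder here is a trivial stand-in for UDH's hiding network, used only to make the additive structure concrete:

```python
import numpy as np

def encode_secret(secret):
    """Stand-in for UDH's hiding network H (a real system uses a CNN)."""
    return 0.05 * (secret - secret.mean())

def udh_hide(cover, secret):
    """UDH paradigm: stego = cover + H(secret).

    H(secret) does not depend on the cover, which is exactly why a third
    party holding the cover can recover it from the residual.
    """
    return cover + encode_secret(secret)

rng = np.random.default_rng(0)
cover = rng.random((3, 32, 32))
secret = rng.random((3, 32, 32))

stego = udh_hide(cover, secret)
# an interceptor who also obtains the carrier recovers H(secret) exactly:
residual = stego - cover
```

This makes the leakage criticism in the text concrete: the residual equals the encoded secret bit for bit, so availability of the carrier image breaks concealment.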
Disclosure of Invention
The invention mainly aims to provide a multi-task steganography method, system and medium based on a depth adaptive steganography network, aiming to adaptively extract the secret information that directly influences secret-image recovery and effectively fuse it with the extracted necessary carrier information, so as to obtain a secret-carried image very similar to the carrier image and ensure the effectiveness and concealment of secret communication in various steganography tasks.
In order to achieve the above object, the present invention proposes a multi-task steganography method based on a depth adaptive steganography network, the method comprising the following steps:
in the hiding stage, a sender adopts a frequency-by-frequency and depth-by-depth extraction mechanism and an adaptive space-frequency extraction module to adaptively extract the effective space and frequency information of a carrier and a secret image step by step based on a depth adaptive steganography network, fuses the secret information and the effective part of the carrier information to obtain a secret-carried image, and sends the secret-carried image to a receiver;
in the recovery stage, the receiver recovers the secret image from the secret image through a recovery network.
A further technical scheme of the invention is that the step in which the sender adopts the frequency-by-frequency, depth-by-depth extraction mechanism and the adaptive space-frequency extraction module to adaptively and gradually extract the effective spatial and frequency information of the carrier and secret images and fuses the secret information with the effective part of the carrier information to obtain the secret-carried image is expressed by the following formula:

C' = α_s · H_s(S) + α_c · H_c(C)

wherein H_s and H_c represent the secret and carrier hiding networks respectively, α_s and α_c represent the secret and carrier weights used for fusion respectively, and S, C and C' represent the secret image, the carrier image and the secret-carried image, respectively.
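A minimal numerical sketch of this weighted fusion follows. The hiding networks H_s and H_c are assumed to have already produced feature tensors, and the weight maps are fixed arrays for illustration (in the invention they are produced adaptively by the network):

```python
import numpy as np

def dah_fuse(h_secret, h_cover, alpha_s, alpha_c):
    """Adaptive fusion C' = alpha_s * H_s(S) + alpha_c * H_c(C).

    alpha_s and alpha_c are element-wise weight maps; here they are
    fixed illustrative values rather than learned outputs.
    """
    return alpha_s * h_secret + alpha_c * h_cover

rng = np.random.default_rng(1)
h_s = rng.random((3, 8, 8))        # H_s(S): encoded secret features
h_c = rng.random((3, 8, 8))        # H_c(C): encoded carrier features
alpha_s = np.full((3, 8, 8), 0.1)  # small weight on the secret branch
alpha_c = 1.0 - alpha_s            # complementary weight on the carrier

stego = dah_fuse(h_s, h_c, alpha_s, alpha_c)
```

Unlike the additive UDH paradigm, the carrier branch here is also weighted, so the fusion can suppress carrier regions where embedding would be conspicuous.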
A further technical scheme of the invention is that the depth adaptive steganography network comprises two sub-networks of U-Net structure, which are used to extract the secret information and the carrier information respectively, each sub-network comprising an encoding part and a decoding part. The step in which, in the hiding stage, the sender adopts the frequency-by-frequency, depth-by-depth extraction mechanism and the adaptive space-frequency extraction module to adaptively and gradually extract the effective spatial and frequency information of the carrier and secret images based on the depth adaptive steganography network and fuses the secret information with the effective part of the carrier information to obtain the secret-carried image comprises the following steps: encoding the secret image and the carrier image respectively through the two sub-networks in the hiding stage, and extracting and fusing the effective secret and carrier image information from different frequencies in convolution layers of different depths during decoding; after the last extraction and fusion of the secret information and the carrier information, further fusing them with several convolution layers to obtain the secret-carried image.
A further technical scheme of the invention is that the step in which the depth adaptive steganography network adopts the frequency-by-frequency, depth-by-depth extraction mechanism and the adaptive space-frequency extraction module to adaptively and gradually extract the effective spatial and frequency information of the carrier and secret images and fuses the secret information with the effective part of the carrier information to obtain the secret-carried image further comprises the following step:

cross-sharing and fusing the extracted frequency, spatial and channel weights by a cross-sharing attention mechanism.
A further technical scheme of the invention is that the overall structure of the depth adaptive steganography network is as follows: let the input be X ∈ R^(C×H×W), where the three dimensions are the channel C, height H and width W; viewed from the channel dimension, X consists of C feature maps x_i of size H×W, with i ranging from 0 to C−1. A discrete cosine convolution layer with kernels k_j^dct (j = 0, ..., N−1, where N is the number of DCT convolution kernels) is introduced to extract the frequency information of the carrier and secret images. The discrete cosine convolution process is described by the following formula:

y_(i,j) = x_i * k_j^dct,  i = 0, ..., C−1,  j = 0, ..., N−1

wherein the convolution-layer output is Y ∈ R^((C·N)×H'×W'), and k_j^dct represents the j-th DCT convolution filter. The output of the DCT convolution kernels may be written as Y = {y_(0,0), ..., y_(C−1,N−1)}. This indicates that the different frequency output features y_(i,0), ..., y_(i,N−1) are obtained from the same input x_i and the N DCT convolution kernels, that is, they share x_i. After the output of the DCT convolution layer is divided into C groups and mixed and rearranged by the channel-shuffle operation, Y can be divided into N groups, where the maps in the j-th group are all obtained by the same DCT convolution kernel k_j^dct, so the channels within a group share the same frequency information; the outputs y_(i,0), ..., y_(i,N−1) of the convolution layer obtained from the same input x_i share the same channel and plane information.
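A sketch of such a fixed DCT convolution layer, under the assumption that each kernel is a 2-D DCT-II basis function of size k×k (so N = k² kernels) and that every input channel is convolved with every kernel, giving the C·N per-frequency maps described above; the kernel size and normalization are illustrative assumptions:

```python
import numpy as np

def dct_kernels(k=4):
    """Build k*k fixed 2-D DCT-II basis kernels, one per frequency pair."""
    n = np.arange(k)
    # 1-D DCT-II basis: basis[u, x] = cos((2x + 1) * u * pi / (2k))
    basis = np.cos((2 * n[None, :] + 1) * n[:, None] * np.pi / (2 * k))
    # outer products give the 2-D basis; kernel 0 is the DC (all-ones) filter
    return np.einsum('ux,vy->uvxy', basis, basis).reshape(k * k, k, k)

def dct_conv(x, kernels):
    """Cross-correlate each channel x_i with each DCT kernel k_j.

    Output channel i*N + j holds the j-th frequency response of channel i,
    matching the y_(i,j) layout described in the text.
    """
    C, H, W = x.shape
    N, k, _ = kernels.shape
    out = np.empty((C * N, H - k + 1, W - k + 1))
    for i in range(C):
        for j in range(N):
            for r in range(H - k + 1):
                for c in range(W - k + 1):
                    out[i * N + j, r, c] = np.sum(x[i, r:r + k, c:c + k] * kernels[j])
    return out

x = np.ones((2, 6, 6))              # constant image: only the DC kernel responds
y = dct_conv(x, dct_kernels(k=4))
```

On a constant input only the DC kernel (j = 0) produces a nonzero response, which illustrates how the layer separates the input into per-frequency feature maps.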
the step of cross sharing fusion of the extracted frequency, space and channel weights by adopting a cross sharing attention mechanism comprises the following steps:
generating tensors using a cross-sharing attention mechanismChannel-plane weights ∈>And tensor->Frequency weight of->And the channel-plane weights and the frequency weights are respectively equal to +.>Between and tensor->Cross sharing is carried out between the two to obtain the final attention weight +.>
The channel-plane weights W_cp in the cross-sharing attention mechanism are generated as follows:

to obtain the channel-plane weights, each feature map is first encoded by pooling operations; the sizes of the pooling kernels are (1, W), (H, 1) and (H, W). When the input is X ∈ R^(C×H×W), the pooling operations are represented by the following formulas:

z_h = XAvgPool(X),  z_w = YAvgPool(X),  z_c = XYAvgPool(X)

wherein z_h, z_w and z_c are the outputs of the pooling operations with kernels (1, W), (H, 1) and (H, W) respectively. To save computation, the above pooling outputs are concatenated along the spatial dimension and used as the input of a shared 1×1 convolution layer F_1; the process is represented by the following formula:

z = δ(F_1([z_h; z_w; z_c]))

wherein z has C/r channels, r represents the channel-reduction ratio, [·;·] represents the concatenation operation, and δ represents the nonlinear ReLU activation function.

Then, z is split along the spatial dimension into z_h', z_w' and z_c'; three 1×1 convolution layers F_h, F_w and F_c learn from z_h', z_w' and z_c' the H-dimension weight W_h, the W-dimension weight W_w and the channel weight W_c; the process is represented by the following formulas:

W_h = σ(F_h(z_h')),  W_w = σ(F_w(z_w')),  W_c = σ(F_c(z_c'))

wherein W_c represents the weight in the channel dimension, W_h the weight in the H dimension, and W_w the weight in the W dimension; W_h and W_w jointly represent the plane weights, and σ represents the Sigmoid activation function. Finally, the channel-plane weight W_cp is calculated by the following formula:

W_cp = W_c × W_h × W_w
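A simplified sketch of the channel-plane weight generation: average pooling with kernels (1, W), (H, 1) and (H, W), each squashed by a sigmoid and combined by broadcast multiplication. The shared and per-branch 1×1 convolutions are omitted for brevity (they are learned in the real module), so the pooled descriptors stand in for their outputs:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_plane_weights(x):
    """x: (C, H, W) feature tensor -> (C, H, W) channel-plane weight W_cp.

    Each pooled descriptor is mapped to (0, 1) by a sigmoid; the three
    factors are then combined over the C, H and W axes by broadcasting.
    """
    w_h = sigmoid(x.mean(axis=2))        # (C, H): pool over width,  kernel (1, W)
    w_w = sigmoid(x.mean(axis=1))        # (C, W): pool over height, kernel (H, 1)
    w_c = sigmoid(x.mean(axis=(1, 2)))   # (C,):   global pool,      kernel (H, W)
    return w_c[:, None, None] * w_h[:, :, None] * w_w[:, None, :]

rng = np.random.default_rng(2)
feat = rng.random((4, 8, 8))
w_cp = channel_plane_weights(feat)
```

Because every factor lies in (0, 1), the product is a soft gate that can attenuate any channel, row or column of the feature tensor independently.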
tensors in a cross-sharing attention mechanismFrequency weight of->The generation process of (2) is as follows:
the cross-sharing attention mechanism initializes the frequency weights to() Optimizing update frequency weights by continuous iteration in network training>
The cross-sharing process of the channel-plane weights W_cp and the frequency weights W_f in the cross-sharing attention mechanism may be represented by the following formula:

A = Expand(W_cp) + Expand(W_f)

wherein the weight A obtained by the cross-sharing attention mechanism consists of two parts, the shared channel-plane weight Expand(W_cp) and the shared frequency-domain weight Expand(W_f), and Expand(·) represents the expansion operation that keeps the sizes of the two tensors consistent.

The sharing process of the shared channel-plane weights may be represented by the following formula:

Expand(W_cp) = Permute(Repeat_N(W_cp))

wherein W_cp ∈ R^(C×H×W) is the channel-plane attention weight generated from the convolution-layer input, Repeat_N represents the copy operation that replicates N times in the channel dimension, and Permute represents the permutation operation in the first and second dimensions of the tensor.

The sharing process of the shared frequency-domain weights may be represented by the following formula:

Expand(W_f) = Repeat_C(W_f)

wherein Repeat_C represents the copy operation that replicates C times in the channel dimension.
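The two expansion steps reduce to plain array operations: the channel-plane weight of one tensor is replicated N times along the channel axis, the N frequency weights of the other are replicated C times, and the two expanded tensors are added to give the final attention weight. The sketch below assumes the C·N-channel DCT feature layout y_(i,j) at channel index i·N + j described earlier:

```python
import numpy as np

def cross_share(w_cp, w_f):
    """Combine channel-plane weights w_cp (C, H, W) with frequency
    weights w_f (N,) into one attention map over a (C*N, H, W) DCT
    feature tensor: A = Expand(w_cp) + Expand(w_f).
    """
    C, H, W = w_cp.shape
    N = w_f.shape[0]
    # channel i is repeated N times, so index i*N + j carries w_cp[i]
    cp_exp = np.repeat(w_cp, N, axis=0)                                # (C*N, H, W)
    # the N frequency weights tile C times, so index i*N + j carries w_f[j]
    f_exp = np.broadcast_to(np.tile(w_f, C)[:, None, None], (C * N, H, W))
    return cp_exp + f_exp

rng = np.random.default_rng(3)
w_cp = rng.random((3, 5, 5))
w_f = rng.random(4)              # one learnable scalar per DCT frequency
attn = cross_share(w_cp, w_f)
```

Each output channel thus receives both the spatial/channel evidence from one tensor and the frequency evidence from the other, which is the "cross sharing" the text describes.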
A further technical scheme of the invention is that, in the hiding stage, the step in which the sender adopts the frequency-by-frequency, depth-by-depth extraction mechanism and the adaptive space-frequency extraction module to adaptively and gradually extract the effective spatial and frequency information of the carrier and secret images based on the depth adaptive steganography network and fuses the secret information with the effective part of the carrier information to obtain the secret-carried image further comprises:

gradually extracting and fusing the effective secret information and carrier information in the decoder stage by the depth-by-depth, frequency-by-frequency mechanism, which may be expressed as:

e_l = DCT(f_l),  l = 1, ..., L

wherein L is set to 5, DCT(·) is the discrete cosine convolution, and e_l represents the information extracted at the l-th layer of the hiding network. Meanwhile, after each transposed convolution layer of the hiding network, the extracted secret information is gradually fused with the carrier information; the output sizes of the transposed convolution layers are 8×8, 16×16, 32×32, 64×64 and 128×128 respectively. The feature maps differ in size at different fusion stages; therefore, the depth-by-depth, frequency-by-frequency mechanism also extracts and fuses secret and carrier information of different scales in a pyramid structure. This multi-level, fine-granularity information extraction and fusion ensures that the secret-carried image and the recovered secret image both have higher image quality. In the l-th fusion layer, the different frequency information of the secret image is added to the corresponding frequency information of the carrier image; the process is expressed as:

f_(l,j) = c_(l,j) + s_(l,j),  j = 0, ..., N−1

wherein c_(l,j) and s_(l,j) represent the j-th frequency components of the carrier and secret features at layer l, and N is the number of DCT convolution kernels.
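The depth-by-depth, frequency-by-frequency fusion itself reduces to an element-wise addition of matching frequency maps at each decoder scale. A sketch over the five transposed-convolution output sizes named above (the number of frequency maps per scale is an illustrative choice):

```python
import numpy as np

SCALES = [8, 16, 32, 64, 128]   # transposed-conv output sizes, shallow to deep
N_FREQ = 4                      # frequency maps per scale (illustrative value)

def gde_fuse(cover_pyr, secret_pyr):
    """f_(l,j) = c_(l,j) + s_(l,j): add the secret's j-th frequency map to
    the carrier's matching map at every depth l of the pyramid."""
    return [c + s for c, s in zip(cover_pyr, secret_pyr)]

rng = np.random.default_rng(4)
cover_pyr = [rng.random((N_FREQ, s, s)) for s in SCALES]
secret_pyr = [rng.random((N_FREQ, s, s)) for s in SCALES]
fused = gde_fuse(cover_pyr, secret_pyr)
```

Because the addition happens per frequency and per depth rather than once at the network output, each scale of the decoder can embed only the frequency bands that matter at that resolution.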
A further technical scheme of the invention is that, in the recovery stage, the step in which the receiver recovers the secret image from the secret-carried image through the recovery network is represented by the following formula:

S_rec = R(T(C'))

wherein S_rec represents the recovered secret image, R(·) represents the recovery process, and T(·) represents the different image distortions acting on the secret-carried image in the watermarking task and the photographic steganography task.
A further technical scheme of the invention is that, in the recovery stage, the step in which the receiver recovers the secret image from the secret-carried image through the recovery network comprises:

in the watermarking task, three different image distortions, Dropout, Gaussian noise and JPEG compression, are used to evaluate the robustness of the steganography method;

in the photographic steganography task, a random homography matrix and uniform noise are adopted to respectively simulate the information clipping and warping caused by photographing the displayed secret-carried image and the color difference among different devices;

in the image hiding task, the secret-carried image is undistorted during transmission, and the recovery process can also be expressed as:

S_rec = R(C')

wherein the optimization objective is to minimize a loss function that jointly penalizes the difference between the secret-carried image and the carrier image and the difference between the recovered secret image and the original secret image.
to achieve the above object, the present invention also proposes a depth adaptive steganography network based multitasking system comprising a memory, a processor and a depth adaptive steganography network based multitasking program stored on said processor, said depth adaptive steganography network based multitasking program being executed by said processor to perform the steps of the method as described above.
To achieve the above object, the present invention also proposes a computer readable storage medium storing a depth adaptive steganography network based multitasking program which, when run by a processor, performs the steps of the method as described above.
The multi-task steganography method, system and medium based on the depth adaptive steganography network have the following beneficial effects:

the invention adopts the depth-by-depth, frequency-by-frequency mechanism to extract the effective frequency information of the secret image and the carrier image at different depths of the hiding network, and combines the adaptive space-frequency extraction module with this mechanism, so that the necessary secret information and carrier information can be extracted and fused at different frequencies and depth levels to ensure the higher quality of the secret-carried image and the recovered secret image. Compared with traditional deep steganography methods, the extraction and fusion of the carrier and secret information are finer and more adaptive, which improves the quality of the secret-carried image and the recovered secret image in different steganography tasks and significantly improves the concealment and effectiveness of multi-task steganography.
Drawings
FIG. 1 is a prior art architecture diagram of a multitasking steganography UDH system;
FIG. 2 is a schematic diagram showing the comparison of the present invention with the UDH visualization effect in the case of hiding a secret image in a carrier image;
FIG. 3 is a flow chart of a preferred embodiment of the depth adaptive steganography network-based multitasking method of the present invention;
FIG. 4 is a schematic diagram of a depth adaptive steganography network architecture according to a preferred embodiment of the depth adaptive steganography network-based multitasking method of the present invention;
FIG. 5 is a schematic diagram of a steganographic phase secret image steganographic framework of a preferred embodiment of the depth adaptive steganographic network-based multitasking method of the present invention;
fig. 6 is a schematic diagram of an adaptive space-frequency extraction module according to a preferred embodiment of the depth adaptive steganography network-based multitasking method of the present invention.
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
In order to solve the defects in the prior art, the invention provides a multi-task steganography method based on a depth self-adaptive steganography network. Unlike the conventional multitask steganography method, which only performs coarse fusion on all carrier information and secret information at the end of a steganography network, the method adopts a frequency-by-frequency and depth-by-depth extraction mechanism and an adaptive space-frequency extraction module to extract effective space and frequency information of a carrier and a secret image and perform depth-by-depth and frequency-by-frequency fine granularity fusion, so that secret information which directly influences secret image recovery is effectively hidden in the carrier image to ensure the effectiveness and concealment of secret communication.
Referring to fig. 3, the present invention provides a depth adaptive steganography network-based multitasking steganography method, which includes the following steps:
step S10, in the hiding stage, a sender adopts a frequency-by-frequency and depth-by-depth extraction mechanism (Gradual Depth Extraction, GDE) and an adaptive space-frequency extraction module (Attentive Frequency Extraction, AFE) to adaptively and gradually extract the effective space and frequency information of a carrier and a secret image based on a depth adaptive steganography network (Deep Adaptive Hiding Networks, DAH-Net), fuses the effective parts of the secret information and the carrier information to obtain a secret-carrying image, and sends the secret-carrying image to a receiver;
and step S20, in the recovery stage, the receiver recovers the secret image from the secret image through a recovery network.
In this embodiment, the step in which the sender adopts the frequency-by-frequency, depth-by-depth extraction mechanism and the adaptive space-frequency extraction module to adaptively and gradually extract the effective spatial and frequency information of the carrier and secret images and fuses the secret information with the effective part of the carrier information to obtain the secret-carried image is represented by the following formula:

C' = α_s · H_s(S) + α_c · H_c(C)

wherein H_s and H_c represent the secret and carrier hiding networks respectively, α_s and α_c represent the secret and carrier weights used for fusion respectively, and S, C and C' represent the secret image, the carrier image and the secret-carried image, respectively.
In this embodiment, the depth adaptive steganography network includes two sub-networks of U-Net structure, where the two sub-networks are respectively used to extract secret information and carrier information, and the two sub-networks include two parts of encoding and decoding.
In the hiding stage, a sender adopts a frequency-by-frequency and depth-by-depth extraction mechanism and an adaptive space-frequency extraction module to adaptively and gradually extract the effective space and frequency information of a carrier and a secret image based on a depth adaptive steganography network, and the steps of fusing the secret information and the effective part of the carrier information to obtain a secret-carrying image comprise the following steps:
and respectively encoding the secret image and the carrier image through the two sub-networks in the hiding stage, and simultaneously extracting and fusing the effective secret and carrier image information from different frequencies in different depth convolution layers in the decoding process.
After the secret information and the carrier information are extracted and fused for the last time, they are further fused by several convolution layers to obtain the secret-carried image.
Specifically, the depth adaptive steganography network architecture of this embodiment is shown in fig. 4; this embodiment consists of two parts, a hiding stage and a recovery stage. In fig. 4, α_s and α_c represent the secret and carrier weights used for fusion respectively, and S, C and C' represent the secret image, the carrier image and the secret-carried image, respectively.

There are two sub-networks in the hiding stage for extracting the secret and carrier information respectively. Each sub-network is a U-Net structure comprising an encoding part and a decoding part. Through the two sub-networks in the hiding stage, this embodiment encodes the secret image and the carrier image respectively, and extracts and fuses the effective secret and carrier image information from different frequencies in convolution layers of different depths during decoding. After the last extraction and fusion of the secret information and the carrier information, several convolution layers further fuse them to obtain the secret-carried image. Details of the hiding network of the present invention are shown in fig. 5, where Conv layer represents a convolution layer, ConvTranspose layer represents a transposed convolution layer, DCT layer represents the discrete cosine convolution layer, fusion layer represents the frequency-wise fusion layer, and AFE block represents the adaptive space-frequency extraction module.
In this embodiment, in the recovery stage, the step in which the receiver recovers the secret image from the secret-carried image through the recovery network is expressed by the following formula:

S_rec = R(T(C'))

wherein S_rec represents the recovered secret image, R(·) represents the recovery process, and T(·) represents the different image distortions acting on the secret-carried image in the watermarking task and the photographic steganography task.
In the watermarking task, three different image distortions, Dropout, Gaussian noise and JPEG compression, are used to evaluate the robustness of the steganography method.
In the photographic steganography task, this embodiment adopts a random unit matrix and uniform noise to simulate, respectively, the information cropping, flipping, and inter-device color differences caused by capturing a secret-carrying image displayed on a screen.
In the image-hiding task, the secret-carrying image is undistorted during transmission, so the recovery process can also be expressed as S′ = R(C′), with R the recovery network and C′ the secret-carrying image.
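The Dropout and Gaussian distortions above can be sketched in a few lines of numpy (JPEG compression needs a differentiable approximation and is omitted); the function names and noise parameters here are illustrative assumptions, not values taken from the patent:

```python
import numpy as np

def dropout_distortion(stego, cover, p=0.3, rng=None):
    """Randomly replace a fraction p of stego pixels with cover pixels."""
    rng = rng or np.random.default_rng(0)
    mask = rng.random(stego.shape) < p
    return np.where(mask, cover, stego)

def gaussian_distortion(stego, sigma=0.02, rng=None):
    """Additive Gaussian noise, clipped back to the valid [0, 1] range."""
    rng = rng or np.random.default_rng(0)
    return np.clip(stego + rng.normal(0.0, sigma, stego.shape), 0.0, 1.0)

# Example: distort a toy 4x4 "secret-carrying image"
stego = np.full((4, 4), 0.5)
cover = np.zeros((4, 4))
noisy = gaussian_distortion(dropout_distortion(stego, cover))
print(noisy.shape)  # (4, 4)
```

During training such a noise layer sits between the hiding network and the recovery network, so the recovery network learns to invert the distortion as well.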
The optimization objective of this embodiment is to minimize a loss function combining the hiding-stage distortion between the carrier image and the secret-carrying image with the recovery-stage distortion between the original and recovered secret images:
Further, in order to obtain a secret-carrying image and a restored secret image of higher quality, this embodiment proposes an adaptive space-frequency extraction module that adaptively extracts and fuses the relatively important parts of the information in the secret and carrier images. Meanwhile, the invention proposes a cross-sharing attention mechanism within the adaptive space-frequency extraction module to share and fuse the extracted frequency, space and channel weights. The structure of the adaptive space-frequency extraction module is shown in FIG. 6: the pooling kernels of X avg pool, Y avg pool and X,Y avg pool are (H, 1), (1, W) and (H, W), respectively; Conv layer denotes a convolution layer, Sigmoid layer denotes a Sigmoid activation layer, BN+Act layer denotes a batch normalization and ReLU activation layer, Transformation denotes a permutation operation, and Repeat denotes a copy operation.
Specifically, in this embodiment, the step in which the depth adaptive steganography network adopts the frequency-by-frequency, depth-by-depth extraction mechanism and the adaptive space-frequency extraction module to adaptively and gradually extract the effective space and frequency information of the carrier and secret images, and fuses the secret information with the effective part of the carrier information to obtain the secret-carrying image, further includes:
cross-sharing and fusing the extracted frequency, space and channel weights by means of the cross-sharing attention mechanism.
The overall structure of the depth adaptive steganography network is as follows. Let the input be X, where X is three-dimensional with dimensions channel C, height H and width W; viewed from the channel dimension, X consists of the slices X_i, with i ranging from 0 to C−1. This embodiment introduces a convolution kernel of nominal spatial size k (for an ordinary two-dimensional convolution, k is both the length and the width of the square k×k kernel; the discrete cosine convolution used here differs from a normal two-dimensional convolution in that, although the nominal kernel size of the layer is k, the actual DCT convolution kernel size is not k×k). The frequency information of the carrier and secret images is extracted by a discrete cosine convolution layer (DCT convolution layer), and the discrete cosine convolution process is described by the following formula:
where the convolution layer output is Y; F_j denotes the j-th DCT convolution filter of size k×k; n is the number of DCT convolution kernels. The output of the DCT convolution kernels can be written in groups, where the different frequency output features are obtained from the same input slice X_i and the DCT convolution kernels F_j; that is, they share the channel and plane information coming from X_i. At the same time, the outputs produced across all channels by the same DCT convolution kernel F_j share the same frequency information. After the output of the DCT convolution kernels is divided into C groups and mixed and rearranged by a channel permutation operation, the output can be divided into n groups. The outputs of the convolution layer obtained from the same input X_i thus share the same channel and plane information. Therefore, this embodiment proposes the cross-sharing attention mechanism, which predicts channel, spatial and frequency weights in the adaptive space-frequency extraction module and cross-shares these weights between the frequency domain and the spatial domain to obtain the final attention.
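Because the filter formula itself survives only as an image, here is a minimal numpy sketch of one common way to build such a bank of k×k DCT-II basis filters and apply them to a single channel; the stride-k "valid" application and the absence of normalization are assumptions, not details given in the patent:

```python
import numpy as np

def dct_filters(k):
    """Build the k*k two-dimensional DCT-II basis filters, shape (k*k, k, k)."""
    x = np.arange(k)
    # 1-D cosine basis: basis[u, x] = cos(pi * (2x + 1) * u / (2k))
    basis = np.cos(np.pi * (2 * x[None, :] + 1) * np.arange(k)[:, None] / (2 * k))
    # Outer products of the row/column bases give the k^2 2-D filters
    return np.einsum('ux,vy->uvxy', basis, basis).reshape(k * k, k, k)

def dct_conv(channel, k):
    """Apply every DCT filter to one H*W channel (valid convolution, stride k)."""
    filters = dct_filters(k)
    h, w = channel.shape
    out = np.empty((k * k, h // k, w // k))
    for f_idx, f in enumerate(filters):
        for i in range(h // k):
            for j in range(w // k):
                out[f_idx, i, j] = np.sum(channel[i*k:(i+1)*k, j*k:(j+1)*k] * f)
    return out

feat = dct_conv(np.ones((8, 8)), k=4)
print(feat.shape)  # (16, 2, 2)
```

Each of the k² output maps then carries one frequency band of the input channel, which is what the frequency weight of the attention module later operates on.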
The step of cross-sharing and fusing the extracted frequency, space and channel weights using the cross-sharing attention mechanism comprises the following steps:
specifically, this embodiment employs the cross-sharing attention mechanism to generate the channel-plane weight of the spatial-domain tensor and the frequency weight of the frequency-domain tensor, and the channel-plane weight and the frequency weight are cross-shared between the two tensors to obtain the final attention weight.
The channel-plane weight in the cross-sharing attention mechanism is generated as follows:
to obtain the channel-plane weight, each feature map is first encoded by pooling operations; the sizes of the pooling kernels are (H, 1), (1, W) and (H, W), respectively. For an input X, the pooling operations are represented by the following formula:
where the three outputs are produced by the pooling operations with kernels (H, 1), (1, W) and (H, W), respectively. To save computation, the above pooling outputs are concatenated along the spatial dimension and used as the input of a shared convolution layer; the process is represented by the following formula:
where r denotes the channel reduction ratio, the square brackets denote the concatenation operation, and δ denotes the nonlinear ReLU activation function.
Then, the shared output is split back along the spatial dimension, according to the pooling shapes, into three parts, and three separate convolution layers learn from these parts to obtain the channel weight and the H-dimension and W-dimension weights; the process is represented by the following formula:
because three different pooling modes were used beforehand, the shared output is divided into three parts, each passed through its own convolution kernel and activation: one part yields the channel weight, one the H-dimension weight and one the W-dimension weight.
The H-dimension and W-dimension weights jointly represent the plane weight, and σ denotes the Sigmoid activation function; finally, the channel-plane weight is calculated by the following formula:
tensors in a cross-sharing attention mechanismFrequency weight of->The generation process of (2) is as follows:
the cross-sharing attention mechanism initializes the frequency weights to(/>) Optimizing update frequency weights by continuous iteration in network training>
The cross-sharing process between the channel-plane weight and the frequency weight in the cross-sharing attention mechanism can be represented by the following formula:
where the weight obtained by the cross-sharing attention mechanism consists of two parts, the shared channel-plane weight and the shared frequency-domain weight, and the expansion operation keeps the sizes of the left and right tensors consistent.
The sharing process of the shared channel-plane weight can be represented by the following formula:
where the channel-plane attention weight is generated from the input of the convolution layer; the copy operation replicates it n times in the channel dimension, and the permutation operation exchanges the first and second dimensions of the tensor.
The sharing generation process of the shared frequency-domain weight can be represented by the following formula:
where the copy operation replicates the frequency weight C times in the channel dimension.
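A minimal numpy sketch of this repeat-and-combine step: a channel-plane weight of shape (C, H, W) and a frequency weight with one entry per DCT kernel are expanded to a common (n·C, H, W) shape; the elementwise product used to combine them is an assumption, since the exact combination rule is only given in the image-based formulas:

```python
import numpy as np

def cross_share(w_cp, w_f):
    """Expand a channel-plane weight (C, H, W) and a per-frequency
    weight (n,) to a shared (n*C, H, W) attention map.

    The channel-plane weight is repeated once per frequency and each
    frequency weight is broadcast across channels and pixels, mirroring
    the Repeat/expansion operations of the patent."""
    c, h, w = w_cp.shape
    n = w_f.shape[0]
    cp = np.repeat(w_cp[None, :, :, :], n, axis=0)               # (n, C, H, W)
    f = np.broadcast_to(w_f[:, None, None, None], (n, c, h, w))  # (n, C, H, W)
    return (cp * f).reshape(n * c, h, w)

att = cross_share(np.full((3, 4, 4), 0.5), np.array([1.0, 0.5]))
print(att.shape)  # (6, 4, 4)
```

After this step every frequency group of the DCT output carries both the spatial preference of the channel-plane branch and its own learned frequency importance.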
Because convolutional neural networks are hierarchical, different levels extract different features to different extents. The shallow convolution layers of a CNN generally extract the detailed features of an image, while the deep convolution layers extract its structural features. Both kinds of features contribute to steganography of the secret image. Therefore, this embodiment proposes a frequency-by-frequency, depth-by-depth extraction mechanism to gradually extract and fuse secret and carrier information from convolution layers of different depths in the network. Owing to the adaptive space-frequency extraction module, only the valid parts of the secret and carrier information are extracted as the depth of the hiding network increases. The depth-by-depth frequency-by-frequency mechanism extracts and fuses the valid secret and carrier information step by step in the decoder stage.
Specifically, in this embodiment, in the hiding stage, the step in which the sender, based on the depth adaptive steganography network, adopts the frequency-by-frequency, depth-by-depth extraction mechanism and the adaptive space-frequency extraction module to adaptively and gradually extract the effective space and frequency information of the carrier and secret images, and fuses the secret information with the effective part of the carrier information to obtain the secret-carrying image, further includes:
gradually extracting and fusing the effective secret information and carrier information in the decoder stage using the depth-by-depth frequency-by-frequency mechanism; this process can be expressed as:
where l denotes the index of the hiding-network layer and the decoding depth is set to 5; F denotes a DCT convolution kernel; E_l represents the information extracted by the l-th layer of the hiding network. Meanwhile, after each transposed convolution layer of the hiding network, the extracted secret information is gradually fused with the carrier information. The output sizes of the transposed convolution layers are 8×8, 16×16, 32×32, 64×64 and 128×128, respectively, so the feature maps differ in size at the different fusion stages; the depth-by-depth frequency-by-frequency mechanism therefore also extracts and fuses secret and carrier information of different scales in a pyramid structure. This multi-level, fine-grained information extraction and fusion ensures that both the secret-carrying image and the recovered secret image have high image quality. In the l-th fusion layer, the different frequency information of the secret image is added to the corresponding frequency information of the carrier image; the process is expressed as:
where n is the number of DCT convolution kernels.
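The level-by-level fusion across the pyramid of decoder outputs can be sketched as follows; the fixed fusion weights 0.75/0.25 stand in for the learned per-frequency fusion weights and are purely illustrative:

```python
import numpy as np

def fuse_pyramid(secret_feats, cover_feats, alpha=0.75, beta=0.25):
    """Fuse secret and cover features level by level.

    secret_feats / cover_feats: lists of (C, s, s) arrays with
    s = 8, 16, 32, 64, 128, one per transposed-convolution stage.
    The weighted sum with alpha/beta stands in for the learned
    fusion weights of the patent."""
    return [alpha * c + beta * s for s, c in zip(secret_feats, cover_feats)]

sizes = [8, 16, 32, 64, 128]
secret = [np.ones((4, s, s)) for s in sizes]
cover = [np.zeros((4, s, s)) for s in sizes]
fused = fuse_pyramid(secret, cover)
print([f.shape[-1] for f in fused])  # [8, 16, 32, 64, 128]
```

Fusing at every scale rather than only at the final resolution is what makes the mechanism "pyramidal": coarse structural information is merged at 8×8 while fine detail is merged at 128×128.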
The working principle of the depth adaptive steganography network-based multitasking method of the present invention is further described below.
1. Initializing parameters:
the number of secret images and the number of carrier images;
the spatial kernel size of the discrete cosine transform convolution layer (DCT layer) and the number of DCT kernels;
the decoding depth of the carrier and secret sub-networks in the steganography phase;
the steganography network and recovery network parameters;
the adaptive space-frequency weights of the carrier and secret images;
the reduction ratio in the adaptive space-frequency extraction module;
the noise type of the noise layer;
the recovery-stage weight in the loss function.
2. The following steps (the steps within one epoch) are repeated until the network converges:
the input of the steganography phase is the carrier image and the secret image;
the frequency weights of the carrier and the secret image are extracted according to the following formula:
the space-channel weights of the carrier and the secret image are extracted according to the following formula:
the frequency weights and the space-channel weights are shared according to the following formula:
the extracted carrier and secret information are fused frequency by frequency and depth by depth according to the following formula to obtain the secret-carrying image:
noise is added to the secret-carrying image to simulate a real transmission scene according to the following formula:
the input of the recovery stage is the received secret-carrying image, and the secret image is recovered from it according to the following formula:
the loss function is calculated and the parameters of the steganography network and the recovery network are updated according to the following formula.
The main advantages of the technical scheme adopted by the invention are as follows:
(1) A depth adaptive steganography network (Deep Adaptive Hiding Networks, DAH-Net) is provided to extract and fuse the effective information in the carrier image and the secret image in a multi-stage, multi-frequency and adaptive manner, significantly improving the effectiveness and concealment of secret communication in the image-hiding, watermarking and photographic steganography tasks;
(2) An adaptive space-frequency extraction module (Attentive Frequency Extraction, AFE) is proposed to extract the effective frequency, space and channel information in the carrier and secret images, together with a cross-sharing attention mechanism to share and fuse the extracted frequency, space and channel weights. This adaptive extraction and fusion of effective information ensures that the secret-carrying image and the recovered secret image have high image quality across the different steganography tasks;
(3) A depth-by-depth, frequency-by-frequency mechanism is provided to extract and fuse the effective information of the carrier and secret images in multiple stages, from different depths of the network and different frequencies of the information, further improving the quality of the secret-carrying image and the recovered secret image and thereby the effectiveness and concealment of multi-task steganography.
In summary, the present invention adopts the depth-by-depth frequency-by-frequency mechanism to extract the effective frequency information of the secret image and the carrier image from different depths of the hiding network, and combines the proposed adaptive space-frequency extraction module with this mechanism, so that the necessary secret and carrier information can be extracted and fused at different frequencies and depth levels to ensure the high quality of both the secret-carrying image and the recovered secret image. Compared with traditional deep steganography methods, the extraction and fusion of carrier and secret information is finer and more adaptive, the quality of the secret-carrying image and the recovered secret image is improved across different steganography tasks, and the concealment and effectiveness of multi-task steganography are significantly improved.
To achieve the above object, the present invention further proposes a depth adaptive steganography network-based multitasking system, the system comprising a memory, a processor and a depth adaptive steganography network-based multitasking program stored on the processor, the depth adaptive steganography network-based multitasking program being executed by the processor to perform the steps of the method as described above, which are not repeated here.
To achieve the above object, the present invention also proposes a computer readable storage medium storing a depth adaptive steganography network based multitasking program which, when run by a processor, performs the steps of the method as described above.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the invention, and all equivalent structures or modifications in the structures or processes described in the specification and drawings, or the direct or indirect application of the present invention to other related technical fields, are included in the scope of the present invention.

Claims (7)

1. A depth-adaptive steganography network-based multitasking method, the method comprising the steps of:
in the hiding stage, a sender adopts a frequency-by-frequency and depth-by-depth extraction mechanism and an adaptive space-frequency extraction module to adaptively and gradually extract the effective space and frequency information of a carrier and a secret image based on a depth adaptive steganography network, fuses the secret information with the effective part of the carrier information to obtain a secret-carrying image, and sends the secret-carrying image to a receiver;
in a recovery stage, the receiver recovers the secret image from the secret-carrying image through a recovery network;
the depth self-adaptive steganography network comprises two sub-networks with U-Net structures, wherein the two sub-networks are respectively used for extracting secret information and carrier information, and the two sub-networks comprise an encoding part and a decoding part;
in the hiding stage, a sender adopts a frequency-by-frequency and depth-by-depth extraction mechanism and an adaptive space-frequency extraction module to adaptively and gradually extract the effective space and frequency information of a carrier and a secret image based on the depth adaptive steganography network, and the step of fusing the secret information and the effective part of the carrier information to obtain a secret-carrying image comprises the following steps: encoding the secret image and the carrier image respectively through the two sub-networks in the hiding stage, and extracting and fusing the effective secret and carrier image information from different frequencies in convolution layers of different depths in the decoding process; after the secret information and the carrier information are extracted and fused for the last time, further fusing them by using a plurality of convolution layers to obtain the secret-carrying image;
the depth-based self-adaptive steganography network adopts a frequency-by-frequency and depth-by-depth extraction mechanism and a self-adaptive space-frequency extraction module to carry out self-adaptive gradual extraction on effective space and frequency information of a carrier and a secret image, and the step of fusing the secret information and the effective part of the carrier information to obtain the secret-carrying image further comprises the following steps:
cross sharing and fusing the extracted frequency, space and channel weights by adopting a cross sharing attention mechanism;
the overall structure of the depth adaptive steganography network is as follows: the input X is three-dimensional, the three dimensions being channel C, height H and width W; viewed from the channel dimension, X consists of the slices X_i, with i ranging from 0 to C−1; a convolution kernel of nominal size k is introduced, and the frequency information of the carrier and secret images is extracted by a discrete cosine convolution layer, the discrete cosine convolution process being described by the following formula:
where the convolution layer output is Y; F_j denotes the j-th DCT convolution filter of size k×k; n is the number of DCT convolution kernels; the output of the DCT convolution kernels can be written in groups, where the different frequency output features are obtained from the same input slice X_i and the DCT convolution kernels F_j, that is, they share the channel and plane information from X_i; after the output of the DCT convolution kernels is divided into C groups and mixed and rearranged by a channel permutation operation, the output can be divided into n groups; at the same time, the outputs produced across all channels by the same DCT convolution kernel share the same frequency information; the outputs of the convolution layer obtained from the same input X_i share the same channel and plane information;
the step of cross sharing fusion of the extracted frequency, space and channel weights by adopting a cross sharing attention mechanism comprises the following steps:
generating, by the cross-sharing attention mechanism, the channel-plane weight of the spatial-domain tensor and the frequency weight of the frequency-domain tensor, and cross-sharing the channel-plane weight and the frequency weight between the two tensors to obtain the final attention weight;
the channel-plane weight in the cross-sharing attention mechanism is generated as follows:
to obtain the channel-plane weight, each feature map is first encoded by pooling operations, the sizes of the pooling kernels being (H, 1), (1, W) and (H, W), respectively; for an input X, the pooling operations are represented by the following formula:
where the three outputs are produced by the pooling operations with kernels (H, 1), (1, W) and (H, W), respectively; to save computation, the above pooling outputs are concatenated along the spatial dimension and used as the input of a shared convolution layer; the process is represented by the following formula:
where r denotes the channel reduction ratio, the square brackets denote the concatenation operation, and δ denotes the nonlinear ReLU activation function;
then, the shared output is split back along the spatial dimension, according to the pooling shapes, into three parts, and three separate convolution layers learn from these parts to obtain the channel weight and the H-dimension and W-dimension weights; the process is represented by the following formula:
one part yields the channel weight, one the H-dimension weight and one the W-dimension weight;
the H-dimension and W-dimension weights jointly represent the plane weight, and σ denotes the Sigmoid activation function; finally, the channel-plane weight is calculated by the following formula:
tensors in a cross-sharing attention mechanismFrequency weight of->The generation process of (2) is as follows:
the cross-sharing attention mechanism initializes the frequency weights toOptimizing update frequency weights by continuous iteration in network training>
the cross-sharing process between the channel-plane weight and the frequency weight in the cross-sharing attention mechanism can be represented by the following formula:
where the weight obtained by the cross-sharing attention mechanism consists of two parts, the shared channel-plane weight and the shared frequency-domain weight; the expansion operation keeps the sizes of the left and right tensors consistent;
the sharing process of the shared channel-plane weight can be represented by the following formula:
where the channel-plane attention weight is generated from the input of the convolution layer; the copy operation replicates it n times in the channel dimension, and the permutation operation exchanges the first and second dimensions of the tensor;
the sharing generation process of the shared frequency-domain weight can be represented by the following formula:
where the copy operation replicates the frequency weight C times in the channel dimension.
2. The depth adaptive steganography network-based multitasking method according to claim 1, wherein the step in which the sender adopts the frequency-by-frequency and depth-by-depth extraction mechanism and the adaptive space-frequency extraction module to adaptively and gradually extract the effective space and frequency information of the carrier and secret images, and fuses the secret information with the effective part of the carrier information to obtain the secret-carrying image, is represented by the following formula:
where the first two symbols denote the secret and carrier hiding networks, the next two denote the secret and carrier weights used for fusion, and the remaining symbols denote the secret image, the carrier image and the secret-carrying image, respectively.
3. The depth adaptive steganography network-based multitasking method of claim 2, wherein, in the hiding phase, the step in which the sender, based on the depth adaptive steganography network, uses the frequency-by-frequency and depth-by-depth extraction mechanism and the adaptive space-frequency extraction module to adaptively and gradually extract the effective space and frequency information of the carrier and secret images, and fuses the secret information with the effective part of the carrier information to obtain the secret-carrying image, further comprises:
gradually extracting and fusing the effective secret information and carrier information in the decoder stage using the depth-by-depth frequency-by-frequency mechanism, the process being expressed as:
where l denotes the index of the hiding-network layer and the decoding depth is set to 5; F denotes a DCT convolution kernel; E_l represents the information extracted by the l-th layer of the hiding network; meanwhile, after each transposed convolution layer of the hiding network, the extracted secret information is gradually fused with the carrier information; the output sizes of the transposed convolution layers are 8×8, 16×16, 32×32, 64×64 and 128×128, respectively; the feature maps differ in size at the different fusion stages, so the depth-by-depth frequency-by-frequency mechanism also extracts and fuses secret and carrier information of different scales in a pyramid structure; this multi-level, fine-grained information extraction and fusion ensures that both the secret-carrying image and the recovered secret image have high image quality; in the l-th fusion layer, the different frequency information of the secret image is added to the corresponding frequency information of the carrier image; the process is expressed as:
where n is the number of DCT convolution kernels.
4. The depth adaptive steganography network-based multitasking method of claim 1, wherein the step in which the receiver, in the recovery stage, recovers the secret image from the secret-carrying image through the recovery network is represented by the following formula:
S′ = R(D(C′))
where S′ denotes the recovered secret image, R denotes the recovery process, D denotes the different image distortions acting on the secret-carrying image in the watermarking task and the photographic steganography task, and C′ denotes the generated secret-carrying image.
5. The depth adaptive steganography network-based multitasking method of claim 4, wherein, in the step in which the receiver recovers the secret image from the secret-carrying image through the recovery network in the recovery stage:
in the watermarking task, three different image distortions, Dropout, Gaussian noise and JPEG compression, are used to evaluate the robustness of the steganography method;
in the photographic steganography task, a random unit matrix and uniform noise are adopted to simulate, respectively, the information cropping, flipping and inter-device color differences caused by capturing a secret-carrying image displayed on a screen;
in the image-hiding task, the secret-carrying image is undistorted during transmission, and the recovery process is expressed as:
S′ = R(C′)
where the optimization objective is to minimize the following loss function:
in which S denotes the original secret image, S′ denotes the restored secret image, C denotes the original carrier image, C′ denotes the generated secret-carrying image, and the weight parameter of the hiding stage in the loss function is set to 0.75.
6. A depth adaptive steganography network based multitasking system comprising a memory, a processor and a depth adaptive steganography network based multitasking program stored on the processor which when run by the processor performs the steps of the method of any of claims 1 to 5.
7. A computer readable storage medium, characterized in that the computer readable storage medium stores a depth adaptive steganography network based multitasking program which when run by a processor performs the steps of the method according to any of claims 1 to 5.
CN202311402700.9A 2023-10-27 2023-10-27 Multi-task steganography method, system and medium based on depth self-adaptive steganography network Active CN117132671B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311402700.9A CN117132671B (en) 2023-10-27 2023-10-27 Multi-task steganography method, system and medium based on depth self-adaptive steganography network


Publications (2)

Publication Number Publication Date
CN117132671A CN117132671A (en) 2023-11-28
CN117132671B true CN117132671B (en) 2024-02-23

Family

ID=88863226

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311402700.9A Active CN117132671B (en) 2023-10-27 2023-10-27 Multi-task steganography method, system and medium based on depth self-adaptive steganography network

Country Status (1)

Country Link
CN (1) CN117132671B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109587372A (en) * 2018-12-11 2019-04-05 北京邮电大学 A kind of invisible image latent writing art based on generation confrontation network
CN113284033A (en) * 2021-05-21 2021-08-20 湖南大学 Large-capacity image information hiding technology based on confrontation training
CN113298689A (en) * 2021-06-22 2021-08-24 河南师范大学 Large-capacity image steganography method
CN113965659A (en) * 2021-10-18 2022-01-21 上海交通大学 HEVC (high efficiency video coding) video steganalysis training method and system based on network-to-network
CN114157773A (en) * 2021-12-01 2022-03-08 杭州电子科技大学 Image steganography method based on convolutional neural network and frequency domain attention
CN114220443A (en) * 2021-11-04 2022-03-22 合肥工业大学 BN optimization SNGAN-based training method and system for adaptive audio steganography model
CN114900586A (en) * 2022-04-28 2022-08-12 中国人民武装警察部队工程大学 Information steganography method and device based on DCGAN
WO2022241307A1 (en) * 2021-05-14 2022-11-17 Cornell University Image steganography utilizing adversarial perturbations
CN116055648A (en) * 2022-10-27 2023-05-02 河南师范大学 Self-adaptive image steganography sending and receiving method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Digital image steganography: A literature survey; P. C. Mandal et al.; Information Sciences; pp. 1-5 *
Research on lightweight image steganalysis based on convolutional neural networks; Chen Junfu; China Master's Theses Full-text Database, Information Science and Technology; pp. 1-62 *

Also Published As

Publication number Publication date
CN117132671A (en) 2023-11-28

Similar Documents

Publication Publication Date Title
CN112529758B (en) Color image steganography method based on convolutional neural network
Ernawan et al. An improved watermarking technique for copyright protection based on tchebichef moments
Huang et al. A novel double-image encryption algorithm based on Rossler hyperchaotic system and compressive sensing
Behnia et al. Watermarking based on discrete wavelet transform and q-deformed chaotic map
CN112597509B (en) Information hiding method and system integrating wavelet and self-encoder
Mhala et al. Contrast enhancement of progressive visual secret sharing (PVSS) scheme for gray-scale and color images using super-resolution
CN115908095A (en) Hierarchical attention feature fusion-based robust image watermarking method and system
Yang et al. Adaptive real-time reversible data hiding for JPEG images
Vaidya et al. Imperceptible watermark for a game-theoretic watermarking system
Zhu et al. Destroying robust steganography in online social networks
CN117132671B (en) Multi-task steganography method, system and medium based on depth self-adaptive steganography network
Sun et al. An image watermarking scheme using Arnold transform and fuzzy smooth support vector machine
Datta et al. Two-layers robust data hiding scheme for highly compressed image exploiting AMBTC with difference expansion
CN115272131B (en) Image moiré pattern removal system and method based on adaptive multispectral coding
Zhu et al. Image sanitization in online social networks: A general framework for breaking robust information hiding
Liu Comparative evaluations of image encryption algorithms
Wahed et al. A simplified parabolic interpolation based reversible data hiding scheme
Soualmi et al. A blind watermarking approach based on hybrid Imperialistic Competitive Algorithm and SURF points for color Images’ authentication
Abdullah et al. Wavelet based image steganographic system using chaotic signals
Roy et al. A robust reversible image watermarking scheme in DCT domain using Arnold scrambling and histogram modification
Mohamed et al. A Survey on Image Data Hiding Techniques
Cai et al. A multiple watermarks algorithm for image content authentication
Yatnalli et al. Review of inpainting algorithms for wireless communication application
Ghaderi et al. A new digital image watermarking approach based on DWT-SVD and CPPN-NEAT
CN117255232B (en) DWT domain robust video watermarking method and system based on self-attention mechanism

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant