WO2021047471A1 - Image steganography and extraction method, apparatus, and electronic device - Google Patents

Image steganography and extraction method, apparatus, and electronic device

Info

Publication number
WO2021047471A1
WO2021047471A1 (PCT/CN2020/113735)
Authority
WO
WIPO (PCT)
Prior art keywords
image, neural network, convolutional neural network model, measurement value
Prior art date
Application number
PCT/CN2020/113735
Other languages
English (en)
French (fr)
Inventor
崔文学
刘永亮
刘绍辉
张迪
董瑞
Original Assignee
Alibaba Group Holding Limited (阿里巴巴集团控股有限公司)
Application filed by Alibaba Group Holding Limited (阿里巴巴集团控股有限公司)
Publication of WO2021047471A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 General purpose image data processing
    • G06T 9/00 Image coding
    • G06T 9/002 Image coding using neural networks
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/10 Protecting distributed programs or content, e.g. vending or licensing of copyrighted material; Digital rights management [DRM]
    • G06F 21/60 Protecting data
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G06N 3/08 Learning methods
    • G06N 3/096 Transfer learning
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; details thereof
    • H04N 1/32 Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N 1/32101 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N 1/32144 Display, printing, storage or transmission of additional information embedded in the image data, i.e. enclosed or integrated in the image, e.g. watermark, super-imposed logo or stamp
    • H04N 1/32149 Methods relating to embedding, encoding, decoding, detection or retrieval operations
    • H04N 1/32267 Methods relating to embedding, encoding, decoding, detection or retrieval operations combined with processing of the image
    • H04N 1/32277 Compression

Definitions

  • This application relates to the field of computer technology, and in particular to an image steganography method, apparatus, and electronic device. This application also relates to an image extraction method, apparatus, and electronic device.
  • Concealed transmission of information through image steganography has received increasing attention.
  • Image steganography typically disguises the information to be transmitted, or digital copyright information, and writes it into a carrier image, so as to obtain a steganographic image that contains the hidden information.
  • Current image steganography methods mainly study how to hide a binary sequence in the carrier image. That is, information to be hidden in any format (image, video, text, etc.) must first be serialized into a binary sequence, and the binary sequence is then written into the carrier image to obtain the corresponding steganographic image.
  • Binary-sequence-based image steganography mainly has the following shortcomings: 1. The steganographic object, i.e., the information to be hidden, must first be converted into a binary sequence, and this conversion is usually only a simple numerical conversion that removes no redundancy. 2. Because steganography is performed on the binary sequence, the amount of data that can be hidden is limited; as the amount of data increases, the visual quality of the resulting steganographic image deteriorates, and the probability that the hidden information will be destroyed also increases, which is not conducive to its secure transmission.
  • To address the problems of large computational cost, low efficiency, and easily destroyed hidden information in existing image steganography methods, the present application provides an image steganography method.
  • This application provides an image steganography method, including:
  • the compressing the information to be embedded to obtain the target measurement value corresponding to the information to be embedded includes:
  • the target sampling convolutional neural network model is used to process the to-be-embedded information to obtain the target measurement value.
  • the acquiring the target sampling convolutional neural network model includes:
  • the original sampling convolutional neural network model is trained until it converges, and the converged model is used as the target sampling convolutional neural network model.
  • the training the original sampled convolutional neural network model to cause the original sampled convolutional neural network model to converge includes:
  • Joint training is performed on the guidance convolutional neural network model and the original sampling convolutional neural network model, so that the original sampling convolutional neural network model converges.
  • the joint training of the guidance convolutional neural network model and the original sampling convolutional neural network model to converge the original sampling convolutional neural network model includes:
  • the parameters of the guiding convolutional neural network model and the parameters of the original sampling convolutional neural network model are adjusted by the loss function of the guiding convolutional neural network model, so that the original sampling convolutional neural network model converges.
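As a rough illustration of this joint training, the sketch below uses simple linear maps as stand-ins for the two models (the dimensions, learning rate, and linear "models" are all illustrative assumptions, not the patent's actual networks): one reconstruction loss drives gradient updates into both the guidance parameters and the sampling parameters.

```python
import numpy as np

# Illustrative linear stand-ins: W_s plays the original sampling CNN,
# W_g plays the guidance CNN used only during training.
rng = np.random.default_rng(0)
N, M = 16, 4                        # signal dimension N, measurement dimension M < N
W_s = rng.normal(0.0, 0.1, (M, N))  # sampling model: x -> measurement y
W_g = rng.normal(0.0, 0.1, (N, M))  # guidance model: y -> reconstruction x_hat

x = rng.normal(size=(N, 1))
x /= np.linalg.norm(x)              # one (normalized) training sample
lr = 0.1
losses = []
for _ in range(300):
    y = W_s @ x                     # forward through the sampling model
    x_hat = W_g @ y                 # forward through the guidance model
    err = x_hat - x
    losses.append(float(np.sum(err ** 2)))
    # The guidance model's loss adjusts the parameters of BOTH models,
    # which is the essence of the joint training described above.
    W_g -= lr * 2 * err @ y.T
    W_s -= lr * 2 * W_g.T @ err @ x.T
```

When training converges, the sampler has learned measurements from which the guide can reconstruct the input, which is the role the converged sampling model then plays on its own.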
  • the inputting the original training sample information into the original sampling convolutional neural network model to obtain the original training measurement value corresponding to the original training sample information includes:
  • the at least one original training sample sub-information is input into the original sampling convolutional neural network model to obtain at least one original training sub-measurement value.
  • the inputting the original training measurement value into the guidance convolutional neural network model to obtain guidance sample information corresponding to the original training sample information includes:
  • the at least one original training sub-measurement value is input into the guidance convolutional neural network model, and guidance sample information corresponding to the original training sample information is obtained.
  • the guided convolutional neural network model includes a fully connected layer and at least one residual block
  • the inputting the at least one original training sub-measurement value into the guidance convolutional neural network model to obtain guidance sample information corresponding to the original training sample information includes:
  • the at least one residual block is used to process the pre-processing sample information to obtain guide sample information corresponding to the original training sample information.
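A minimal numpy sketch of this "fully connected layer plus residual block" shape; the sizes, random untrained weights, and the ReLU inside the block are illustrative assumptions, not the patent's architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

def fully_connected(v, out_size):
    """Hypothetical fully connected layer: maps a measurement vector to
    `out_size` values (random, untrained weights for illustration)."""
    W = rng.normal(0.0, 0.1, (out_size, v.size))
    return W @ v.ravel()

def residual_block(x):
    """Minimal residual block: output = input + f(input). Here f is a ReLU
    of a random linear map, standing in for the block's convolutions."""
    W = rng.normal(0.0, 0.1, (x.size, x.size))
    return x + np.maximum(W @ x, 0.0)   # skip connection + non-linearity

measurement = rng.normal(size=8)         # an original training sub-measurement value
pre = fully_connected(measurement, 64)   # preprocess to the sample size (8x8 here)
guide = residual_block(pre)              # refine with one residual block
guide_image = guide.reshape(8, 8)        # guidance sample info, same size as the sample
```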
  • the preprocessing of the at least one original training sub-measurement value using the fully connected layer to obtain preprocessing sample information of the same size as the original training sample information includes:
  • the steganography of the target measurement value into the carrier image to obtain the target steganography image includes:
  • the target measurement value and the carrier image are input into the target steganography convolutional neural network model to obtain the target steganography image.
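One common way to feed a carrier image and a measurement value into the same convolutional model is channel-wise concatenation; the sketch below assumes that interpretation of the stitching step (the channels-first layout and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
carrier = rng.random((3, 8, 8))       # toy RGB carrier image, channels-first

# Reshape the target measurement value into an image plane and concatenate
# it with the carrier along the channel axis (hypothetical stitching).
measurement = rng.normal(size=64)
meas_plane = measurement.reshape(1, 8, 8)
stitched = np.concatenate([carrier, meas_plane], axis=0)  # stitched image to be processed
```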
  • the target steganographic convolutional neural network model includes a fully connected layer and at least one residual block
  • the inputting the carrier image and the target measurement value into the target steganography convolutional neural network model to obtain the target steganography image includes:
  • the performing stitching processing on the carrier image and the target measurement value to obtain the stitched image to be processed includes:
  • the feature information image includes texture distribution information of the carrier image
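A toy sketch of re-layout guided by texture distribution, assuming a simple gradient-magnitude map as the feature information image and a ranking-based placement (both are illustrative assumptions, not the patent's actual mechanism): larger measurement magnitudes are routed to the most textured pixels, where embedding changes are least visible.

```python
import numpy as np

rng = np.random.default_rng(2)
carrier = rng.random((8, 8))               # toy carrier image

# Hypothetical feature information image: local gradient magnitude as a
# crude proxy for texture distribution (last row/column edge-padded).
gx = np.abs(np.diff(carrier, axis=1, append=carrier[:, -1:]))
gy = np.abs(np.diff(carrier, axis=0, append=carrier[-1:, :]))
texture = gx + gy

# Re-layout: route the largest measurement magnitudes to the most textured
# pixels, where embedding changes are hardest to notice.
measurement = rng.normal(size=carrier.size)
order = np.argsort(texture.ravel())[::-1]              # most textured first
relayout = np.empty_like(measurement)
relayout[order] = np.sort(np.abs(measurement))[::-1]   # big values -> busy pixels
relayout = relayout.reshape(carrier.shape)
```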
  • the re-layout of the target measurement value according to the characteristic information image to obtain the re-layout measurement value corresponding to the characteristic information in the carrier image includes:
  • the steganography of the target measurement value into the carrier image to obtain the target steganography image includes:
  • the obtaining the target steganographic convolutional neural network model includes:
  • the training the original steganographic convolutional neural network model to converge the original steganographic convolutional neural network model includes:
  • a distillation convolutional neural network model is used to recover, from the output data of the original steganographic convolutional neural network model, the measurement value corresponding to the input data of that model;
  • Joint training is performed on the distillation convolutional neural network model and the original steganographic convolutional neural network model, so that the original steganographic convolutional neural network model and the distillation convolutional neural network model converge.
  • the joint training of the distillation convolutional neural network model and the original steganographic convolutional neural network model, so that both models converge, includes:
  • the loss functions corresponding to the two convolutional neural network models are used to adjust the parameters of the two convolutional neural network models, so that the original steganographic convolutional neural network model and the distillation convolutional neural network model converge.
  • the distillation convolutional neural network model includes a fully connected layer and at least one residual block
  • the inputting the original training steganography image into the distillation convolutional neural network model to obtain the distillation measurement value corresponding to the original measurement value to be steganographic includes:
  • the fully connected layer is used to process the distillation measurement value to be processed, so that the dimension and size of the distillation measurement value to be processed are consistent with the dimension and size of the original measurement value to be steganographic.
  • the compressing the information to be embedded to obtain the target measurement value corresponding to the information to be embedded includes:
  • the compressed sensing technology is used to process the information to be embedded, and the measurement value corresponding to the information to be embedded is obtained.
  • This application also provides an image steganography device, including:
  • the information acquisition unit is used to acquire the carrier image and acquire the information to be embedded;
  • the measurement value obtaining unit is configured to perform compression processing on the information to be embedded, and obtain a target measurement value corresponding to the information to be embedded;
  • the steganographic image acquisition unit is used for steganographically writing the target measurement value into the carrier image to acquire the target steganographic image.
  • This application also provides an electronic device, including:
  • the memory is used to store the program of the image steganography method. After the device is powered on and runs the program of the image steganography method through the processor, the following steps are executed:
  • This application also provides a storage device
  • a program for the image steganography method is stored, and the program is run by the processor to perform the following steps:
  • This application also provides an image extraction method, including:
  • the measurement value is input into a reconstructed convolutional neural network model for reconstructing an image, and the original steganographic information corresponding to the measurement value is obtained.
  • the acquiring the embedded measurement value from the image to be detected includes:
  • the image to be detected is input into the target distillation convolutional neural network model, and the measured value steganographic in the image to be detected is obtained.
  • This application also provides an image extraction device, including:
  • the image acquisition unit is used to acquire the image to be detected
  • the measurement value obtaining unit is configured to obtain the steganographic measurement value from the image to be detected, wherein the measurement value is obtained after compressing the original steganographic information;
  • the original steganographic information acquisition unit is configured to input the measured value into a reconstructed convolutional neural network model for reconstructing an image, and obtain the original steganographic information corresponding to the measured value.
  • This application also provides an electronic device, including:
  • the memory is used to store the image extraction method program. After the device is powered on and runs the image extraction method program through the processor, the following steps are executed:
  • the measurement value is input into a reconstructed convolutional neural network model for reconstructing an image, and the original steganographic information corresponding to the measurement value is obtained.
  • This application also provides a storage device
  • the program of the image extraction method is stored, and the program is run by the processor to perform the following steps:
  • the measurement value is input into a reconstructed convolutional neural network model for reconstructing an image, and the original steganographic information corresponding to the measurement value is obtained.
  • An image steganography method provided by the present application includes: obtaining a carrier image and obtaining information to be embedded; compressing the information to be embedded to obtain a target measurement value corresponding to it; and steganographically writing the target measurement value into the carrier image to obtain the target steganographic image.
  • The image steganography method compresses the information to be embedded to obtain the corresponding target measurement value, and then steganographically writes the target measurement value into the carrier image to obtain the target steganographic image.
  • By compressing the information to be embedded, the method removes its redundant information, which both reduces the computational cost of steganography and improves its efficiency. In addition, because it is the target measurement value, rather than the raw information, that is ultimately hidden in the carrier image, the measurement value effectively acts as a key for the information to be embedded: the method first encrypts the information and then hides the encrypted form, so the embedded information in the final steganographic image is not easily destroyed, which greatly improves its security.
  • FIG. 1 is a schematic diagram of an application scenario of an image steganography method provided by the first embodiment of the present application
  • FIG. 2 is a schematic diagram of the first existing image steganography method provided by the first embodiment of the present application.
  • FIG. 3 is a schematic diagram of a second existing image steganography method provided by the first embodiment of the present application.
  • FIG. 4 is a schematic diagram of a third existing image steganography method provided by the first embodiment of the present application.
  • FIG. 5 is a schematic diagram of a fourth existing image steganography method provided by the first embodiment of the present application.
  • Fig. 6 is a flowchart of the image steganography method of the present application provided by the first embodiment of the present application;
  • FIG. 7 is a schematic diagram of the framework of the image steganography method of the present application provided by the first embodiment of the present application;
  • FIG. 8 is a schematic diagram of the comparison of the steganographic effects of various image steganographic methods provided by the first embodiment of the present application;
  • FIG. 9 is a schematic diagram of the peak signal-to-noise ratio comparison of various image steganography methods provided in the first embodiment of the present application.
  • FIG. 10 is a schematic diagram of the structural similarity comparison of various image steganography methods on different data sets provided by the first embodiment of the present application;
  • FIG. 11 is a schematic diagram of a comparison of the steganography effect of the image steganography method of the present application provided by the first embodiment of the present application before and after the attention mechanism is introduced;
  • FIG. 12 is a schematic diagram of comparison of steganographic regions of various image steganographic methods provided by the first embodiment of the present application;
  • FIG. 13 is a schematic diagram of the results of the image steganography method of the present application provided by the first embodiment of the present application at different sampling rates;
  • FIG. 14 is a flowchart of an image extraction method provided by the second embodiment of the present application.
  • FIG. 15 is a schematic diagram of the image steganography device provided by the third embodiment of the present application.
  • FIG. 16 is a schematic diagram of an electronic device provided by a fourth embodiment of the present application.
  • FIG. 17 is a schematic diagram of an image extraction device provided by a sixth embodiment of the present application.
  • FIG. 18 is a schematic diagram of another electronic device provided by the seventh embodiment of the present application.
  • FIG. 1 is a schematic diagram of an application scenario of an image steganography method provided in the first embodiment of this application.
  • The client first establishes a connection with the server; once connected, the client sends the carrier image and the information to be embedded to the server.
  • The server obtains the carrier image and the information to be embedded, and then compresses the information to be embedded to obtain the corresponding target measurement value; afterwards, the target measurement value is steganographically written into the carrier image to obtain the target steganographic image, which is sent to the client; the client then receives the target steganographic image.
  • the client can be a mobile terminal device, such as a mobile phone, a tablet computer, etc., or a commonly used computer device.
  • The image steganography method described in this application can also be applied solely on the client or server device. For example, after obtaining the carrier image and the information to be embedded, the client can process them directly through a corresponding installed application to obtain the target steganographic image; likewise, after obtaining the carrier image and the information to be embedded, the server can store the resulting target steganographic image in its own or in remote storage without sending it to the client.
  • The above application scenario is only one specific embodiment of the image steganography method described in this application; it is provided to facilitate understanding of the method and is not intended to limit it.
  • the first embodiment of the present application provides an image steganography method, which is described below with reference to FIGS. 2-13.
  • The image steganography method is a steganography method based on generative adversarial networks (ISGAN, Invisible Steganography via Generative Adversarial Networks).
  • The method mainly embeds a grayscale image, i.e., an image in which each pixel carries only a single sample value, into a color image. Since the method is prior art, its details are not described here.
  • The method mainly has the following problem: it does not consider the compressibility of the information to be embedded, i.e., of the grayscale image, and thus ignores the large amount of redundant information the grayscale image may contain.
  • GAN: Generative Adversarial Network
  • PSNR: Peak Signal-to-Noise Ratio
  • The image steganography method (Atique's, End-to-end Trained CNN Encoder-Decoder Networks for Image Steganography) mainly includes two parts: an encoder and a decoder. The main task of the encoder is to steganographically write the information to be embedded into the carrier image; the main task of the decoder is to extract the embedded information from the steganographic image. Since the method is prior art, its details are not described here. The method mainly has the following problems: 1. The encoder used in the method has been widely used in other fields.
  • 2. The method likewise does not consider the compressibility of the information to be embedded, so redundant information is also hidden; this inevitably increases the computational cost of steganography and reduces its efficiency. 3. The method does not adapt steganography to the characteristics of the information to be embedded; for example, when hiding image information, it does not use differences in the image's texture distribution to select embedding positions but simply performs global steganography, so the hidden information in the final steganographic image is easily destroyed.
  • FIG. 4 is a schematic diagram of a third existing image steganography method provided by the first embodiment of this application.
  • The image steganography method (StegNet, Image-into-Image Steganography Using Deep Convolutional Network) mainly hides the information to be embedded, chiefly color image information, in the carrier image.
  • The method also includes two parts, an encoder and a decoder. Since the method is prior art, its details are not described here. It mainly has the following problems: 1. The network structure of the convolutional neural network (CNN, Convolutional Neural Networks) model it uses is relatively simple, and the steganographic quality of the final image is not good enough. 2. The method does not consider the compressibility of the information to be embedded. 3. It does not adapt steganography to the characteristics of the information to be embedded; for example, when hiding image information, it does not use the image's texture distribution to select embedding positions but simply performs global steganography, so the hidden information in the resulting steganographic image is easily destroyed.
  • the image steganography method (Hiding Images in Plain Sight: Deep Steganography) mainly uses a convolutional neural network model to steganographically write information to be embedded into a carrier image.
  • the method mainly includes three convolutional neural network models: a preprocessed convolutional neural network model, a steganographic convolutional neural network model, and an extraction convolutional neural network model. Since the method is a prior art, its details will not be described in detail here.
  • The method mainly has the following problems: 1. It does not consider the compressibility of the information to be embedded. 2. It does not adapt steganography to the characteristics of the information to be embedded, for example when hiding image information; it performs steganography on the pre-processed to-be-embedded information output by the pre-processing convolutional neural network model, which destroys the characteristic information of the information to be embedded. Therefore, the steganographic quality of the final steganographic image is not good enough, and the hidden information in it is also easily destroyed.
  • The first embodiment of the present application provides an image steganography method. FIG. 6 is a flowchart of the method, and FIG. 7 is a schematic diagram of its framework; the method is described in detail below.
  • Step S601: Obtain the carrier image, and obtain the information to be embedded.
  • the carrier image may specifically be an image or a certain video frame in the video resource.
  • The information to be embedded refers to information that needs to be transmitted covertly or that provides digital copyright protection; it can be a piece of text or an image, such as a company logo or a scanned contract document.
  • In the first embodiment of the present application, the information to be embedded is described by taking information in an image format as an example; the image steganography method provided herein can also process information to be embedded in other formats.
  • Step S602: Perform compression processing on the information to be embedded, and obtain a target measurement value corresponding to the information to be embedded.
  • Considering the unique attributes of the information to be embedded, such as the local similarity, non-local self-similarity, and compressibility of image information, and in order to remove its redundant information, the first embodiment of the present application first compresses the information to be embedded and obtains a target measurement value corresponding to it.
  • Compressing the information to be embedded and obtaining the corresponding target measurement value refers to compressing it using compressed sensing technology. In addition to compressed sensing, techniques such as the discrete cosine transform (DCT) and the wavelet transform (WT) can also be used to process information to be embedded in an image format; information to be embedded in other formats can likewise be processed using compression techniques appropriate to that format.
  • DCT: discrete cosine transform
  • WT: wavelet transform
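As an illustration of the DCT alternative mentioned above, the sketch below builds an orthonormal DCT-II matrix, transforms an 8x8 block, and keeps only the largest coefficients; the block size and the keep-16 threshold are arbitrary choices for illustration, not values from this application.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix C, so that coeffs = C @ X @ C.T."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C *= np.sqrt(2.0 / n)
    C[0, :] = np.sqrt(1.0 / n)      # DC row has its own normalization
    return C

rng = np.random.default_rng(4)
block = rng.random((8, 8))          # toy 8x8 image block to be embedded
C = dct_matrix(8)
coeffs = C @ block @ C.T            # forward 2-D DCT

# Crude compression: keep only the 16 largest-magnitude coefficients.
thresh = np.sort(np.abs(coeffs).ravel())[-16]
compressed = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)
recon = C.T @ compressed @ C        # approximate inverse 2-D DCT

kept = int(np.count_nonzero(compressed))
```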
  • Traditional signal acquisition first samples the signal and then compresses it. Compressed sensing, by contrast, performs sampling and compression together: the signal is compressed directly as it is sampled, which not only increases the sampling speed but also removes redundant information in the signal to a certain extent.
  • Compressed sensing technology mainly includes three parts: 1. Sparse representation of the signal, i.e., expressing the original signal in the form of a sparse matrix. 2. Design of the sampling matrix (also called the measurement matrix), i.e., using the sampling matrix to compress the original signal and reduce its dimensionality to obtain the corresponding measurement value, while ensuring that the information loss of the original signal is minimized. The measurement matrix is a numerical matrix obtained by calculation; it is mainly used to sample the original signal while preserving its effective information, and the measurement value it produces has a smaller dimension than the original signal. 3. Design of a signal recovery algorithm, i.e., recovering the original signal from its measurement value through a corresponding algorithm.
  • The target measurement value refers to the measurement value obtained after the information to be embedded undergoes dimensionality reduction and compression processing using compressed sensing technology; its dimension is smaller than the dimension of the information to be embedded.
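  • The dimension reduction above can be illustrated with a minimal numerical sketch (the matrix values, sizes, and random signal below are illustrative assumptions, not values from this application): the measurement value is simply the product of an M×N sampling matrix with the N-dimensional signal.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 64, 16                                # signal dimension N, measurement dimension M (M < N)
phi = rng.normal(size=(M, N)) / np.sqrt(M)   # sampling (measurement) matrix
x = rng.normal(size=N)                       # original signal (information to be embedded)

y = phi @ x                                  # measurement value: dimension reduced from N to M
print(y.shape)                               # (16,)
```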
  • In the first embodiment, compressing the information to be embedded to obtain the target measurement value corresponding to the information to be embedded includes: obtaining a target sampling convolutional neural network model, wherein the target sampling convolutional neural network model is used to compress the information to be embedded; and using the target sampling convolutional neural network model to process the information to be embedded to obtain the target measurement value.
  • In this embodiment, the information to be embedded is image information; compressing the information to be embedded refers to obtaining a sampling matrix with good performance to compress the information to be embedded.
  • Assume that the sampling matrix Φ is an M×N matrix (M>0, N>0, and M<N), where M/N is called the sampling rate, and that the information to be embedded x is N-dimensional.
  • Then the process of compressing the information to be embedded with the sampling matrix to obtain the corresponding target measurement value can be regarded as a convolution operation: each row of the sampling matrix corresponds to one convolution kernel, so the size of each convolution kernel is N and the number of kernels is M. Therefore, in the first embodiment of the present application, the information to be embedded is compressed by obtaining a target sampling convolutional neural network model corresponding to the sampling matrix.
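  • The equivalence between row-wise matrix sampling and convolution can be checked with a short sketch (the block size B and random values are illustrative assumptions): each of the M rows of Φ, reshaped to B*B, acts as one convolution kernel applied to a B*B image block.

```python
import numpy as np

B, M = 4, 6
N = B * B                                # each row of the sampling matrix has length N
rng = np.random.default_rng(0)
phi = rng.normal(size=(M, N))            # sampling matrix with M rows
kernels = phi.reshape(M, B, B)           # each row reshaped into one B*B kernel

block = rng.normal(size=(B, B))          # one B*B image block (sub-information)

y_matrix = phi @ block.reshape(N)                         # matrix-sampling view
y_conv = np.array([np.sum(k * block) for k in kernels])   # convolution view

print(np.allclose(y_matrix, y_conv))     # True: the two views agree
```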
  • In the first embodiment, obtaining the target sampling convolutional neural network model includes: obtaining an original sampling convolutional neural network model; and training the original sampling convolutional neural network model until it converges, taking the converged original sampling convolutional neural network model as the target sampling convolutional neural network model.
  • Specifically, the first embodiment of the present application introduces a guided convolutional neural network model when training the original sampling convolutional neural network model, so as to perform joint training with the original sampling convolutional neural network model.
  • That is, training the original sampling convolutional neural network model until it converges includes: obtaining a guided convolutional neural network model, wherein the guided convolutional neural network model is used to restore the output data of the original sampling convolutional neural network model to the input data of the original sampling convolutional neural network model; and jointly training the guided convolutional neural network model and the original sampling convolutional neural network model so that the original sampling convolutional neural network model converges.
  • When evaluating whether the original sampling convolutional neural network model converges, one method is to inversely restore the obtained original training measurement values, compare the restored data with the original training sample information, evaluate the performance of the original sampling convolutional neural network model through the comparison result, and adjust the model's parameters accordingly until it converges. In the first embodiment, this inverse restoration of the original training measurement values is performed by obtaining a guided convolutional neural network model.
  • Specifically, jointly training the guided convolutional neural network model and the original sampling convolutional neural network model so that the latter converges includes: obtaining original training sample information; inputting the original training sample information into the original sampling convolutional neural network model to obtain the corresponding original training measurement value; inputting the original training measurement value into the guided convolutional neural network model to obtain the guide sample information corresponding to the original training sample information; and adjusting the parameters of the guided convolutional neural network model and of the original sampling convolutional neural network model through the loss function of the guided convolutional neural network model, so that the original sampling convolutional neural network model converges.
  • In the following, Φ is used to represent the original sampling convolutional neural network model, g the guided convolutional neural network model, x_i the original training sample information, Φx_i the measurement value corresponding to x_i, and z_i the guide sample information corresponding to x_i.
  • The loss function corresponding to the original sampling convolutional neural network model and the guided convolutional neural network model can then be expressed in terms of the error between z_i and x_i, where θ_g denotes the parameters of the guided convolutional neural network model.
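  • The equation body is elided in this text; a form consistent with the surrounding notation (with θ_Φ assumed here to denote the parameters of the sampling model) would be the reconstruction error between the guide sample information and the original samples:

```latex
L(\theta_{\Phi}, \theta_{g}) \;=\; \frac{1}{n} \sum_{i=1}^{n} \bigl\| \, g(\Phi x_{i};\, \theta_{g}) - x_{i} \, \bigr\|_{2}^{2}
```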
  • In addition, to reduce the size of the sampling model, the first embodiment adopts block-based compressed sensing (BCS, Block-based Compressed Sensing). That is, inputting the original training sample information into the original sampling convolutional neural network model to obtain the corresponding original training measurement value includes: dividing the original training sample information into at least one piece of original training sample sub-information; and inputting the at least one piece of sub-information into the original sampling convolutional neural network model to obtain at least one original training sub-measurement value.
  • In other words, the original training sample information is divided into several pieces of sub-information, each of size B*B, and the measurement value corresponding to each piece is obtained separately; accordingly, the dimension N of each piece of sub-information is B*B.
  • Correspondingly, inputting the original training measurement value into the guided convolutional neural network model to obtain the guide sample information corresponding to the original training sample information includes: inputting the at least one original training sub-measurement value into the guided convolutional neural network model to obtain the guide sample information corresponding to the original training sample information.
  • In the first embodiment, the guided convolutional neural network model includes a fully connected layer and at least one residual block. Inputting the at least one original training sub-measurement value into the guided convolutional neural network model to obtain the guide sample information corresponding to the original training sample information includes: using the fully connected layer to preprocess the at least one original training sub-measurement value to obtain preprocessed sample information of the same size as the original training sample information; and using the at least one residual block to process the preprocessed sample information to obtain the guide sample information corresponding to the original training sample information.
  • A residual block usually refers to a combination of several convolutional layers that includes a shortcut connection. The shortcut (skip) connection, also called a short-circuit connection, evolved from the skip connections in recurrent neural networks (RNN) and various gating algorithms, and is a technique used to alleviate the vanishing-gradient problem in deep architectures.
  • Unless otherwise stated, the residual blocks in the convolutional neural network models appearing above and in subsequent processing are each composed of at least one convolutional layer, one activation layer, and at least one batch normalization layer; the size of the convolution kernel is 3*3, the stride is 1, and the padding is 1.
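  • A minimal single-channel sketch of such a residual block follows (batch normalization is omitted for brevity, and the ReLU activation and random weights are illustrative assumptions):

```python
import numpy as np

def conv3x3(x, w):
    # 3*3 convolution with stride 1 and padding 1, preserving the input size.
    p = np.pad(x, 1)
    H, W = x.shape
    out = np.empty_like(x, dtype=float)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(p[i:i + 3, j:j + 3] * w)
    return out

def residual_block(x, w1, w2):
    h = np.maximum(conv3x3(x, w1), 0.0)   # convolution layer + activation layer
    h = conv3x3(h, w2)                    # second convolution layer
    return np.maximum(h + x, 0.0)         # shortcut connection, then activation

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8))
y = residual_block(x, rng.normal(size=(3, 3)), rng.normal(size=(3, 3)))
print(y.shape)                            # (8, 8): spatial size is preserved
```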
  • After the corresponding at least one M-dimensional measurement value is obtained, in order to use the guided convolutional neural network model to restore the original training sample information, a fully connected layer is first used to preprocess the at least one M-dimensional measurement value to obtain preprocessed sample information of the same size as the original training sample information; at least one residual block is then used to process the preprocessed sample information to obtain the guide sample information corresponding to the original training sample information.
  • Specifically, using the fully connected layer to preprocess the at least one original training sub-measurement value to obtain preprocessed sample information of the same size as the original training sample information includes: performing dimensionality adjustment processing on the at least one original training sub-measurement value so that its dimension is consistent with the dimension of the corresponding original training sample sub-information; performing shape reshaping processing on the adjusted sub-measurement value so that its size is consistent with the size of the original training sample sub-information; and splicing the at least one adjusted and reshaped original training sub-measurement value to obtain preprocessed sample information of the same size as the original training sample information.
  • The dimensionality adjustment processing, shape reshaping processing, and splicing processing performed on the at least one original training sub-measurement value mainly correspond to the "upsampling" processing in FIG. Each M-dimensional measurement value first undergoes dimensionality adjustment processing, that is, an upsampling operation that raises it from M dimensions to N dimensions so that it is consistent with the dimension of the corresponding original training sample sub-information; it is then reshaped so that it is consistent with the size of the corresponding original training sample sub-information.
  • For example, if the size of each piece of original training sample sub-information is B*B, then each adjusted measurement value is reshaped into a measurement value of size B*B.
  • The reshaping may specifically use an edge-filling method to make the two sizes consistent; of course, other methods can also be used, which are not repeated here.
  • Finally, the at least one original training sub-measurement value that has undergone the dimensionality adjustment processing and the shape reshaping processing is spliced to obtain preprocessed sample information of the same size as the original training sample information. The preprocessed sample information is then fed into at least one residual block of the guided convolutional neural network model; the purpose of this processing is to make the finally obtained guide sample information better.
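  • The preprocessing described above (dimensionality adjustment, reshaping to B*B, splicing) can be sketched as follows (the linear map W standing in for the fully connected layer, and all sizes, are illustrative assumptions):

```python
import numpy as np

B, M = 4, 6                  # block size B*B = 16 (the dimension N), M-dimensional measurements
N = B * B
rng = np.random.default_rng(0)
W = rng.normal(size=(N, M))  # stand-in for the fully connected "upsampling" weights

def restore_blocks(measurements, grid):
    """measurements: (num_blocks, M); grid: (rows, cols) layout of blocks."""
    rows, cols = grid
    blocks = []
    for y in measurements:
        v = W @ y                       # dimensionality adjustment: M -> N
        blocks.append(v.reshape(B, B))  # shape reshaping to B*B
    # splicing: tile the blocks back into a (rows*B, cols*B) image
    return np.block([[blocks[r * cols + c] for c in range(cols)]
                     for r in range(rows)])

ys = rng.normal(size=(4, M))            # four sub-measurement values -> 2x2 grid
img = restore_blocks(ys, (2, 2))
print(img.shape)                        # (8, 8): same size as the original sample
```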
  • In summary, the target sampling convolutional neural network model obtained in the first embodiment of the present application results from jointly training its original sampling convolutional neural network model with a corresponding guided convolutional neural network model, which improves the performance of the finally obtained target sampling convolutional neural network model. In addition, when the original training sample information is sampled, block-based sampling also reduces the size of the target sampling convolutional neural network model.
  • step S603 is performed to steganographically write the target measurement value into the carrier image to obtain a target steganographic image.
  • the target measurement value corresponding to the information to be embedded is obtained, and then the target measurement value can be steganographically written into the carrier image.
  • In the first embodiment, steganographically writing the target measurement value into the carrier image to obtain the target steganographic image includes: obtaining a target steganographic convolutional neural network model; and inputting the target measurement value and the carrier image into the target steganographic convolutional neural network model to acquire the target steganographic image.
  • Acquiring the target steganographic convolutional neural network model includes: acquiring an original steganographic convolutional neural network model for generating a steganographic image; training the original steganographic convolutional neural network model until it converges; and using the converged original steganographic convolutional neural network model as the target steganographic convolutional neural network model.
  • Specifically, the first embodiment of this application introduces a distillation convolutional neural network model when training the original steganographic convolutional neural network model, so as to perform joint training with the original steganographic convolutional neural network model.
  • That is, training the original steganographic convolutional neural network model until it converges includes: obtaining a distillation convolutional neural network model, wherein the distillation convolutional neural network model is used to obtain, from the output data of the original steganographic convolutional neural network model, the measurement value corresponding to its input data; and jointly training the distillation convolutional neural network model and the original steganographic convolutional neural network model so that both the original steganographic convolutional neural network model and the distillation convolutional neural network model converge.
  • The processing is specifically as follows: first, obtain an original training carrier image and an original measurement value to be steganographically embedded; next, input the original training carrier image and the original measurement value into the original steganographic convolutional neural network model to obtain an original training steganographic image; then, input the original training steganographic image into the distillation convolutional neural network model to obtain the distillation measurement value corresponding to the original measurement value; finally, use the loss functions corresponding to the two convolutional neural network models to adjust the parameters of both models, so that the original steganographic convolutional neural network model and the distillation convolutional neural network model converge.
  • That is, jointly training the distillation convolutional neural network model and the original steganographic convolutional neural network model so that both converge includes: obtaining an original training carrier image and an original measurement value to be steganographically embedded; inputting the original training carrier image and the original measurement value into the original steganographic convolutional neural network model to obtain an original training steganographic image; inputting the original training steganographic image into the distillation convolutional neural network model to obtain the distillation measurement value corresponding to the original measurement value; and using the loss functions corresponding to the two convolutional neural network models to adjust the parameters of the two models, so that the original steganographic convolutional neural network model and the distillation convolutional neural network model converge.
  • In the following, H represents the original steganographic convolutional neural network model, D represents the distillation convolutional neural network model, θ_H represents the parameters of the original steganographic convolutional neural network model, θ_D represents the parameters of the distillation convolutional neural network model, c_i and c′_i represent an original training carrier image and the corresponding original training steganographic image, respectively, and Φx_i represents the original measurement value to be steganographically embedded. The loss function of the original steganographic convolutional neural network model and the loss function of the distillation convolutional neural network model can then be expressed in terms of the error between c′_i and c_i, and between the distillation measurement value and Φx_i, respectively.
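  • Both equation bodies are elided in this text; forms consistent with the surrounding notation would be a steganographic loss penalizing distortion of the carrier image and a distillation loss penalizing the error in the recovered measurement value:

```latex
\mathcal{L}_{H}(\theta_{H}) \;=\; \frac{1}{n} \sum_{i=1}^{n} \bigl\| \, c'_{i} - c_{i} \, \bigr\|_{2}^{2},
\qquad c'_{i} = H(c_{i},\, \Phi x_{i};\, \theta_{H})

\mathcal{L}_{D}(\theta_{D}) \;=\; \frac{1}{n} \sum_{i=1}^{n} \bigl\| \, D(c'_{i};\, \theta_{D}) - \Phi x_{i} \, \bigr\|_{2}^{2}
```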
  • In the first embodiment, the distillation convolutional neural network model includes a fully connected layer and at least one residual block. Inputting the original training steganographic image into the distillation convolutional neural network model to obtain the distillation measurement value corresponding to the original measurement value to be steganographically embedded includes: using the at least one residual block to process the original training steganographic image to obtain the distillation measurement value to be processed; and using the fully connected layer to process the distillation measurement value to be processed so that its size is consistent with the size of the original measurement value.
  • This processing corresponds to the down-sampling processing in FIG. 7, specifically: performing dimensionality adjustment processing on the distillation measurement value to be processed so that its dimension is consistent with the dimension of the original measurement value to be steganographically embedded; performing shape reshaping processing on the adjusted distillation measurement value so that its size is consistent with the size of the original measurement value; and splicing the adjusted and reshaped distillation measurement value so that its size matches the size of the original measurement value. The detailed processing is essentially similar to the up-sampling processing described above and is not repeated here.
  • Through the above joint training, a target steganographic convolutional neural network model with good performance is finally obtained. This target steganographic convolutional neural network model is used to perform the steganographic processing of the image, that is, to steganographically write the target measurement value into the carrier image.
  • In the first embodiment, the target steganographic convolutional neural network model includes a fully connected layer and at least one residual block. Inputting the carrier image and the target measurement value into the target steganographic convolutional neural network model to acquire the target steganographic image includes: using the fully connected layer to splice the carrier image and the target measurement value to obtain a spliced image to be processed; and inputting the spliced image to be processed into the at least one residual block to obtain the target steganographic image.
  • Performing splicing processing on the carrier image and the target measurement value to obtain the spliced image to be processed includes: acquiring a characteristic information image corresponding to the carrier image, wherein the characteristic information image includes the texture distribution information of the carrier image; re-laying out the target measurement value according to the characteristic information image to obtain the re-layout measurement value corresponding to the characteristic information in the carrier image; and splicing the carrier image with the re-layout measurement value to obtain the spliced image to be processed.
  • Specifically, re-laying out the target measurement value according to the characteristic information image to obtain the re-layout measurement value corresponding to the characteristic information in the carrier image includes: performing dimensionality adjustment processing on the target measurement value so that its dimension is consistent with the dimension of the characteristic information image; performing shape reshaping processing on the adjusted target measurement value so that its size is consistent with the size of the characteristic information image; and performing a pixel-wise product operation between the adjusted and reshaped target measurement value and the characteristic information image to obtain the re-layout measurement value corresponding to the characteristic information in the carrier image.
  • In the first embodiment, when the target measurement value is steganographically written into the carrier image using the target steganographic convolutional neural network model, the unique attributes of the carrier image are taken into account, namely that different carrier images have different texture distributions, and writing the target measurement value into regions where the texture distribution of the carrier image is relatively complex yields a better image steganography effect. Therefore, an attention mechanism is introduced into the image steganography processing.
  • The attention mechanism here means obtaining, based on the unique attributes of the carrier image, the information that needs attention during steganographic processing, such as the texture distribution information and brightness information of the image. Specifically, in this embodiment it means that before the specific steganographic processing, a characteristic information image corresponding to the carrier image is obtained and then used to guide the steganographic processing.
  • the characteristic information image is an image including texture distribution information of the carrier image.
  • The characteristic information image may specifically be obtained by processing the carrier image with different feature extraction operators, for example edge extraction operators such as the Sobel operator, Laplacian operator, Canny operator, or Roberts operator. Alternatively, a convolutional neural network model can be used to obtain the characteristic information image; since this is prior art, it is not repeated here.
  • Specifically, after the characteristic information image is obtained, the target measurement value undergoes dimensionality adjustment processing and shape reshaping processing so that its size is consistent with the size of the characteristic information image. A pixel-wise product operation is then performed between the processed target measurement value and the characteristic information image to obtain the re-layout measurement value corresponding to the characteristic information in the carrier image. Next, the re-layout measurement value is spliced with the carrier image, specifically by adding the values of corresponding pixel points in the re-layout measurement value and the carrier image; the resulting spliced image to be processed is then input into the at least one residual block of the target steganographic convolutional neural network model to obtain the target steganographic image.
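  • The attention-guided splicing can be sketched as follows (the Sobel feature map, the image sizes, and the use of np.resize in place of the learned dimensionality adjustment are illustrative assumptions):

```python
import numpy as np

def sobel_magnitude(img):
    # Sobel edge magnitude as a simple "characteristic information image".
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    p = np.pad(img, 1, mode="edge")
    H, W = img.shape
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    for i in range(H):
        for j in range(W):
            win = p[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(win * kx)
            gy[i, j] = np.sum(win * ky)
    mag = np.hypot(gx, gy)
    return mag / (mag.max() + 1e-8)          # normalize to [0, 1]

rng = np.random.default_rng(1)
carrier = rng.uniform(0, 1, size=(8, 8))     # carrier image
y = rng.normal(size=16)                      # target measurement value (M = 16)

feat = sobel_magnitude(carrier)              # texture distribution information
m = np.resize(y, carrier.size).reshape(carrier.shape)  # dim adjust + reshape
relayout = m * feat                          # pixel product: weight toward textured regions
spliced = carrier + relayout                 # splice by pixel-wise addition
print(spliced.shape)                         # (8, 8)
```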
  • In a specific implementation, the size of the original training sample sub-information is set to 32, that is, one piece of original training sample information is split into sub-blocks of size 32*32. In addition, the training sample information comes mainly from four data sets: Set14, LIVE1, the test set of VOC2012, and the test set of ImageNet.
  • FIG. 8 is a schematic comparison of the steganographic effects of the various image steganography methods provided in the first embodiment of the application; FIG. 9 compares the peak signal-to-noise ratio (PSNR) of the various image steganography methods on different data sets; and FIG. 10 compares their structural similarity on different data sets, where structural similarity (SSIM, structural similarity index) is an index measuring the similarity of two images. The methods participating in the comparison are mainly the four existing image steganography methods briefly described above.
  • In summary, the image steganography method described in the first embodiment of the present application compresses the information to be embedded to remove its redundant information, obtains the target measurement value corresponding to the information after redundancy removal, and then steganographically writes the target measurement value into the carrier image. This not only reduces the computational load of the steganographic processing and improves steganographic efficiency, but also improves the steganographic effect to a certain extent.
  • FIG. 11 is a schematic comparison, provided by the first embodiment of the application, of the steganographic effect of the image steganography method of the present application before and after the attention mechanism is introduced; FIG. 12 is likewise provided by the first embodiment of the application.
  • The image steganography method provided by the first embodiment introduces an attention mechanism when obtaining the target measurement value corresponding to the information to be embedded and when performing steganographic processing on that target measurement value: by acquiring a characteristic information image containing the texture distribution information of the carrier image and using it to guide the steganographic processing, the target measurement value can be written into regions of the carrier image with complex texture, thereby improving the steganographic effect and reducing the probability that the steganographic information in the finally obtained target steganographic image is destroyed.
  • FIG. 13 it is a schematic diagram of the results of the image steganography method of the present application provided by the first embodiment of the present application at different sampling rates.
  • Furthermore, the image steganography method provided by the first embodiment uses the target sampling convolutional neural network model to sample the information to be embedded and obtain its corresponding target measurement value, which is equivalent to encrypting the information to be embedded; the target measurement value is equivalent to the key of the information to be embedded. At the same time, the sampling destroys the visualization characteristics of the information to be embedded and, to a certain extent, the distribution of its original data, which can greatly improve the steganographic effect and reduce the probability that the steganographic information in the obtained target steganographic image is destroyed.
  • the target measurement value obtained after compression processing of the information to be embedded is embedded into the carrier image.
  • In other embodiments, the carrier image can also be compressed to obtain the carrier measurement value corresponding to the carrier image, and the target measurement value corresponding to the information to be embedded is then steganographically written into the corresponding carrier measurement value.
  • the image steganography method described in the first embodiment of the present application can be classified according to different security levels, and the image steganography methods corresponding to the different security levels can be authorized to Different levels of users.
  • the image steganography method is divided into levels according to the convolutional neural network model with different functions used in the image steganography method.
  • For example: an image steganography method that includes only the target sampling convolutional neural network model, the target steganographic convolutional neural network model, and the reconstruction convolutional neural network model is assigned level 1; a method that includes the target sampling convolutional neural network model, the guided convolutional neural network model, the target steganographic convolutional neural network model, and the reconstruction convolutional neural network model is assigned level 2; and a method that includes the target sampling convolutional neural network model, the guided convolutional neural network model, the target steganographic convolutional neural network model, the target distillation convolutional neural network model, and the reconstruction convolutional neural network model is assigned level 3.
  • Then, the image steganography method whose included convolutional neural network models correspond to the highest level is authorized to advanced users, for example level 3; the method corresponding to the next level is authorized to sub-advanced users, for example level 2; and the method corresponding to the lowest level is authorized to ordinary users, for example level 1.
  • The image steganography method described in the first embodiment can also be divided into levels from a second perspective, according to processing complexity.
  • A method that only compresses the information to be embedded into a target measurement value and does not introduce the attention mechanism is assigned level 1; a method that compresses both the carrier image and the information to be embedded, writing the target measurement value corresponding to the information to be embedded into the carrier measurement value corresponding to the carrier image, without introducing the attention mechanism, is assigned level 2; a method that only steganographically processes the compressed target measurement value but introduces the attention mechanism is assigned level 3; and a method that locally compresses the region of the carrier image into which the target measurement value is to be written, compresses the information to be embedded, and writes the target measurement value into the resulting local carrier measurement value is assigned level 4.
  • The method of the highest complexity level is authorized to advanced users (for example, level 4 is authorized to advanced users); the methods of the intermediate levels are authorized to sub-advanced users (for example, levels 3 and 2 are authorized to sub-advanced users); and the method of the lowest level is authorized to ordinary users (for example, level 1 is authorized to ordinary users).
  • The security level classifications from the above two perspectives can be subdivided further according to actual needs; alternatively, the security level of the image steganography method described in this application can be divided from other perspectives, or the two perspectives can be combined to obtain a more fine-grained security level, which will not be described one by one here.
  • In summary, the image steganography method described in the first embodiment of the present application includes: obtaining a carrier image and obtaining information to be embedded; compressing the information to be embedded to obtain a target measurement value corresponding to the information to be embedded; and steganographically writing the target measurement value into the carrier image to obtain a target steganographic image.
  • When the image steganography method described in the first embodiment performs image steganography, what is written into the carrier image is the target measurement value obtained by compressing the information to be embedded, rather than the information to be embedded itself. On the one hand, this processing greatly reduces the redundant information in the information to be embedded, reduces the computation of the subsequent steganographic processing, and improves steganography efficiency. On the other hand, because what is hidden is the target measurement value corresponding to the information to be embedded, the target measurement value is equivalent to a key for that information: the method in effect first encrypts the information to be embedded and then hides the encrypted information, so that the embedded information in the finally obtained target steganographic image is not easily destroyed, greatly improving its security.
  • Moreover, when performing the specific steganographic processing, an attention mechanism is introduced, which allows the target measurement value to be better written into the complex-texture regions of the carrier image and further improves the steganography effect, so that the embedded information in the finally obtained target steganographic image is not easily destroyed, greatly improving its security.
  • The first embodiment above provides an image steganography method; correspondingly, this application also provides an image extraction method.
  • Since some of the steps of the image extraction method have been described in detail in the first embodiment above, the description here is relatively brief; for relevant details, refer to the corresponding description of the image steganography method in the first embodiment. The processing described below is only illustrative.
  • FIG. 14 is a flowchart of the image extraction method provided by the second embodiment of this application, which is described below in conjunction with FIG. 7 and FIG. 14.
  • Step S1401: acquire an image to be detected;
  • Step S1402: obtain the steganographic measurement value from the image to be detected, where the measurement value is obtained by compressing the original steganographic information;
  • Step S1403: input the measurement value into the reconstruction convolutional neural network model used to reconstruct images, and obtain the original steganographic information corresponding to the measurement value.
  • Obtaining the steganographic measurement value from the image to be detected includes:
  • obtaining a target distillation convolutional neural network model, where the target distillation convolutional neural network model is used to extract the steganographic measurement value from an image to be detected into which a measurement value has been steganographically written; and inputting the image to be detected into the target distillation convolutional neural network model to obtain the steganographic measurement value in the image to be detected.
  • Obtaining the target distillation convolutional neural network model includes: obtaining a distillation convolutional neural network model; training the distillation convolutional neural network model until it converges; and using the converged distillation convolutional neural network model as the target distillation convolutional neural network model.
  • Specifically, the target distillation convolutional neural network model is obtained by introducing a distillation convolutional neural network model when the target steganographic convolutional neural network model described in the first embodiment is obtained, and jointly training the distillation convolutional neural network model with the original steganographic convolutional neural network model described in the first embodiment, so that both models converge; the converged distillation convolutional neural network model is then used as the target distillation convolutional neural network model. Since the detailed processing procedure has been described in the first embodiment above, it is not repeated here; for details, refer to the description in the first embodiment.
  • The reconstruction convolutional neural network model is obtained through the following steps: obtain an original reconstructed convolutional neural network model; train the original reconstructed convolutional neural network model until it converges; and use the converged model as the reconstruction convolutional neural network model.
  • Training the original reconstructed convolutional neural network model until it converges includes: obtaining original training detection measurement values; inputting the original training detection measurement values into the original reconstructed convolutional neural network model to obtain an original reconstructed image; and using the loss function corresponding to the original reconstructed convolutional neural network model to adjust its parameters until the model converges.
  • Obtaining the original training detection measurement values refers to obtaining the original training sample information after it has been sampled by the target sampling convolutional neural network model of the first embodiment.
  • The original training sample information mainly comes from four data sets: Set14, LIVE1, the test set of VOC2012, and the test set of ImageNet.
  • The loss function corresponding to the original reconstructed convolutional neural network model can be written as

$$\mathcal{L}(\theta_R)=\frac{1}{2N}\sum_{i=1}^{N}\left\|R\!\left(y_i'';\theta_R\right)-x_i\right\|_2^2$$

  • where θ_R represents the parameters of the original reconstructed convolutional neural network model, R denotes that model, y'' represents the original training detection measurement value, and x_i represents the original training sample information.
  • The original reconstructed convolutional neural network model includes a fully connected layer and at least one residual block. Inputting the original training detection measurement value into the original reconstructed convolutional neural network model to obtain the original reconstructed image includes: using the fully connected layer to process the original training detection measurement value so that its dimension and size are consistent with the dimension and size of the image to be detected; and using the at least one residual block to process the dimension- and size-matched original training detection measurement value to obtain the original reconstructed image corresponding to it.
  • the structure of the residual block is the same as the structure of the residual block used in the convolutional neural network model described in the first embodiment, and will not be repeated here.
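The fully-connected-then-residual reconstruction just described can be sketched as follows. This is a minimal NumPy sketch rather than the patent's trained model: the weight matrix `W_fc`, the toy residual refinement, and all shapes are illustrative assumptions.

```python
import numpy as np

def reconstruct(measurement, W_fc, res_weights, out_shape):
    """Sketch of the reconstruction network: a fully connected layer lifts the
    measurement to the dimension and size of the image to be detected, then
    residual-style refinements (with shortcut additions) polish the result."""
    x = (W_fc @ measurement).reshape(out_shape)  # match dimension and size
    for w in res_weights:                        # toy stand-in for residual blocks
        x = x + np.tanh(x) * w                   # shortcut: input added back in
    return x

M, H, W = 16, 8, 8
rng = np.random.default_rng(0)
y = rng.standard_normal(M)                  # a steganographic measurement value
W_fc = rng.standard_normal((H * W, M))      # assumed fully connected weights
img = reconstruct(y, W_fc, res_weights=[0.1, 0.1], out_shape=(H, W))
```

The essential point mirrored here is that the fully connected layer alone fixes the shape mismatch between the M-dimensional measurement and the image, while the residual stages only refine content.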
  • Corresponding to the image steganography method, this application also provides an image steganography device. Please refer to FIG. 15, which is a schematic diagram of the image steganography device provided by the third embodiment of this application.
  • An image steganography device provided by the third embodiment of the present application includes the following parts:
  • the information obtaining unit 1501 is used to obtain the carrier image and obtain the information to be embedded.
  • the measurement value obtaining unit 1502 is configured to compress the information to be embedded, and obtain the target measurement value corresponding to the information to be embedded.
  • the steganographic image acquisition unit 1503 is used for steganographically writing the target measurement value into the carrier image to acquire the target steganographic image.
  • FIG. 16 is a schematic diagram of an electronic device provided by the fourth embodiment of the application.
  • the device embodiment is basically similar to the method embodiment, so the description is relatively simple. For related parts, please refer to the part of the description of the method embodiment.
  • the electronic device embodiment described below is only illustrative.
  • An electronic device provided by the fourth embodiment of the present application includes:
  • the memory 1602 is used to store a program for the image steganography method; after the device is powered on and runs the program through the processor, the following steps are executed: obtain a carrier image and obtain information to be embedded; compress the information to be embedded to obtain the target measurement value corresponding to it; and steganographically write the target measurement value into the carrier image to obtain a target steganographic image.
  • Corresponding to the above method, this application also provides a storage device. Since the storage device embodiment is basically similar to the method embodiment, the description is relatively brief; for related details, refer to the description of the method embodiment. The storage device embodiment described below is only illustrative.
  • A storage device provided by the fifth embodiment of the present application stores a program of the image steganography method; when run by a processor, the program performs the following steps: obtain a carrier image and obtain information to be embedded; compress the information to be embedded to obtain the target measurement value corresponding to it; and steganographically write the target measurement value into the carrier image to obtain a target steganographic image.
  • Corresponding to the image extraction method, this application also provides an image extraction device. Please refer to FIG. 17, which is a schematic diagram of the image extraction device provided by the sixth embodiment of this application.
  • Since the device embodiment is basically similar to the method embodiment, the description is relatively brief; the device embodiment described below is only illustrative.
  • An image extraction device provided by the sixth embodiment of the present application includes the following parts:
  • the image acquisition unit 1701 is used to acquire the image to be detected
  • the measurement value obtaining unit 1702 is configured to obtain the steganographic measurement value from the image to be detected, where the measurement value is obtained after compressing the original steganographic information;
  • the original steganographic information acquisition unit 1703 is configured to input the measured value into a reconstructed convolutional neural network model for reconstructing an image, and acquire the original steganographic information corresponding to the measured value.
  • FIG. 18 is a schematic diagram of the electronic device provided in the seventh embodiment of the application.
  • the device embodiment is basically similar to the method embodiment, so the description is relatively simple. For related parts, please refer to the part of the description of the method embodiment.
  • the electronic device embodiment described below is only illustrative.
  • An electronic device provided by the seventh embodiment of the present application includes:
  • the memory 1802 is used to store a program for the image extraction method; after the device is powered on and runs the program through the processor, the following steps are executed: acquire an image to be detected; obtain the steganographic measurement value from the image to be detected, where the measurement value is obtained by compressing the original steganographic information; and input the measurement value into a reconstruction convolutional neural network model for reconstructing images to obtain the original steganographic information corresponding to the measurement value.
  • Corresponding to the above method, this application also provides a storage device. Since the storage device embodiment is basically similar to the method embodiment, the description is relatively brief; for related details, refer to the description of the method embodiment. The storage device embodiment described below is only illustrative.
  • A storage device provided by the eighth embodiment of the present application stores a program of the image extraction method; when run by a processor, the program performs the following steps: acquire an image to be detected; obtain the steganographic measurement value from the image to be detected, where the measurement value is obtained by compressing the original steganographic information; and input the measurement value into a reconstruction convolutional neural network model for reconstructing images to obtain the original steganographic information corresponding to the measurement value.
  • the computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • The memory may include non-permanent memory, random access memory (RAM), and/or non-volatile memory in a computer-readable medium, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
  • Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage can be realized by any method or technology.
  • the information can be computer-readable instructions, data structures, program modules, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by computing devices.
  • As defined herein, computer-readable media does not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.
  • this application can be provided as a method, a system, or a computer program product. Therefore, this application may adopt the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware. Moreover, this application may adopt the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program codes.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Computer Security & Cryptography (AREA)
  • Biomedical Technology (AREA)
  • Computer Hardware Design (AREA)
  • Technology Law (AREA)
  • Bioethics (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

An image steganography and extraction method, apparatus, and electronic device. The image steganography method compresses the information to be embedded to obtain a target measurement value corresponding to it (S602), and then steganographically writes the target measurement value into a carrier image to obtain a target steganographic image (S603). By compressing the information to be embedded, the method removes the redundant information in it, which not only reduces the computation of the steganographic processing but also improves the efficiency of image steganography. In addition, because what is finally written into the carrier image is the target measurement value corresponding to the information to be embedded rather than the information itself, the method in effect first encrypts the information to be embedded and then hides the encrypted information, so that the embedded information in the finally obtained target steganographic image is not easily destroyed, greatly improving its security.

Description

Image steganography and extraction method, apparatus, and electronic device
This application claims priority to Chinese patent application No. 201910850963.3, filed on September 10, 2019 and entitled "Image steganography and extraction method, apparatus, and electronic device", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of computer technology, and in particular to an image steganography method, apparatus, and electronic device. This application also relates to an image extraction method, apparatus, and electronic device.
Background
With the continuous development of digital media technology and computer network technology, hiding information through image steganography has attracted increasing attention, both for transmitting information and for providing digital rights protection for resources such as images and videos. An image steganography method typically writes the information to be hidden, such as information to be transmitted or digital rights information, into a carrier image in a disguised and covert manner, so as to obtain a steganographic image containing the information to be hidden.
Current image steganography methods mainly study how to hide a binary sequence in a carrier image; that is, the information to be hidden, in any format such as image, video, or text, must be serialized into a binary sequence, which is then written into the carrier image to obtain the corresponding steganographic image.
Current image steganography methods that operate on binary sequences mainly have the following drawbacks. 1. The steganographic object, i.e., the information to be hidden, must be converted into a binary sequence; however, this conversion is usually only a simple numerical conversion that does not consider the inherent properties of the original information. For example, image data usually contains a large amount of redundant information; after an image is serialized into a binary sequence, the sequence still contains that redundancy, and steganographically processing a binary sequence containing redundant information inevitably increases the computation of the steganographic processing and reduces the efficiency of image steganography. 2. Because such methods are based on binary sequences, the amount of data they can hide is generally limited; as the data amount increases, the steganographic effect of the obtained steganographic image becomes worse and worse, and the probability that the hidden information in the steganographic image is destroyed also increases, which is not conducive to the secure transmission of the information to be hidden.
Summary of the Invention
This application provides an image steganography method to solve the problems of existing image steganography methods: heavy computation, low efficiency, and hidden information in the obtained steganographic image being easily destroyed.
This application provides an image steganography method, including:
obtaining a carrier image, and obtaining information to be embedded;
compressing the information to be embedded to obtain a target measurement value corresponding to the information to be embedded;
steganographically writing the target measurement value into the carrier image to obtain a target steganographic image.
Optionally, compressing the information to be embedded to obtain the target measurement value corresponding to it includes: obtaining a target sampling convolutional neural network model, where the target sampling convolutional neural network model is used to compress the information to be embedded; and using the target sampling convolutional neural network model to process the information to be embedded to obtain the target measurement value.
Optionally, obtaining the target sampling convolutional neural network model includes: obtaining an original sampling convolutional neural network model; training the original sampling convolutional neural network model until it converges; and using the converged original sampling convolutional neural network model as the target sampling convolutional neural network model.
Optionally, training the original sampling convolutional neural network model until it converges includes: obtaining a guidance convolutional neural network model, where the guidance convolutional neural network model is used to restore the output data of the original sampling convolutional neural network model to its input data; and jointly training the guidance convolutional neural network model and the original sampling convolutional neural network model until the original sampling convolutional neural network model converges.
Optionally, jointly training the guidance convolutional neural network model and the original sampling convolutional neural network model until the latter converges includes: obtaining original training sample information; inputting the original training sample information into the original sampling convolutional neural network model to obtain the original training measurement value corresponding to the original training sample information; inputting the original training measurement value into the guidance convolutional neural network model to obtain the guidance sample information corresponding to the original training sample information; and adjusting the parameters of the guidance convolutional neural network model and of the original sampling convolutional neural network model through the loss function of the guidance convolutional neural network model until the original sampling convolutional neural network model converges.
Optionally, inputting the original training sample information into the original sampling convolutional neural network model to obtain the original training measurement value includes: dividing the original training sample information into at least one piece of original training sample sub-information; and inputting the at least one piece of original training sample sub-information into the original sampling convolutional neural network model to obtain at least one original training sub-measurement value.
Optionally, inputting the original training measurement value into the guidance convolutional neural network model to obtain the guidance sample information includes: inputting the at least one original training sub-measurement value into the guidance convolutional neural network model to obtain the guidance sample information corresponding to the original training sample information.
Optionally, the guidance convolutional neural network model includes a fully connected layer and at least one residual block; inputting the at least one original training sub-measurement value into the guidance convolutional neural network model to obtain the guidance sample information includes: using the fully connected layer to preprocess the at least one original training sub-measurement value to obtain preprocessed sample information of the same size as the original training sample information; and using the at least one residual block to process the preprocessed sample information to obtain the guidance sample information corresponding to the original training sample information.
Optionally, using the fully connected layer to preprocess the at least one original training sub-measurement value includes: performing dimension adjustment on the at least one original training sub-measurement value so that its dimension is consistent with that of the original training sample information; performing shape reshaping on the dimension-adjusted sub-measurement values so that their size is consistent with that of the original training sample sub-information; and splicing the sub-measurement values after dimension adjustment and shape reshaping to obtain preprocessed sample information of the same size as the original training sample information.
Optionally, steganographically writing the target measurement value into the carrier image to obtain the target steganographic image includes: obtaining a target steganographic convolutional neural network model; and inputting the target measurement value and the carrier image into the target steganographic convolutional neural network model to obtain the target steganographic image.
Optionally, the target steganographic convolutional neural network model includes a fully connected layer and at least one residual block; inputting the carrier image and the target measurement value into the target steganographic convolutional neural network model to obtain the target steganographic image includes: using the fully connected layer to splice the carrier image and the measurement value to obtain a spliced image to be processed; and inputting the spliced image to be processed into the at least one residual block to obtain the target steganographic image.
Optionally, splicing the carrier image and the target measurement value to obtain the spliced image to be processed includes: obtaining a feature information image corresponding to the carrier image, where the feature information image includes the texture distribution information of the carrier image; re-laying out the target measurement value according to the feature information image to obtain a re-laid-out measurement value corresponding to the feature information in the carrier image; and splicing the carrier image and the re-laid-out measurement value to obtain the spliced image to be processed.
Optionally, re-laying out the target measurement value according to the feature information image to obtain the re-laid-out measurement value includes: performing dimension adjustment on the target measurement value so that its dimension is consistent with that of the feature information image; performing shape reshaping on the dimension-adjusted target measurement value so that its size is consistent with that of the feature information image; and performing a pixel-wise product of the dimension-adjusted and reshaped target measurement value with the feature information image to obtain the re-laid-out measurement value corresponding to the feature information in the carrier image.
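The dimension adjustment, reshaping, and pixel-wise product described in this clause can be sketched as follows. This is a minimal NumPy sketch under assumed shapes; in practice the feature information image would come from analyzing the carrier image's texture distribution, whereas here it is a random stand-in.

```python
import numpy as np

def relayout(measurement, feature_img):
    """Re-lay out a flat target measurement value so it matches the feature
    information image, then weight it pixel-wise by that image (the attention
    step that concentrates the payload in complex-texture regions)."""
    m = np.resize(measurement, feature_img.shape)  # dimension adjust + reshape
    return m * feature_img                          # pixel-wise product

rng = np.random.default_rng(0)
y = rng.standard_normal(16)                   # target measurement value
feat = np.abs(rng.standard_normal((8, 8)))    # texture-distribution map (assumed)
relaid = relayout(y, feat)
```

Where the feature map is zero (flat texture), the re-laid-out measurement is suppressed; where it is large (complex texture), the measurement is emphasized.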
Optionally, steganographically writing the target measurement value into the carrier image to obtain the target steganographic image includes: obtaining a target steganographic convolutional neural network model; and steganographically writing the target measurement value into the carrier image through the target steganographic convolutional neural network model to obtain the target steganographic image.
Optionally, obtaining the target steganographic convolutional neural network model includes: obtaining an original steganographic convolutional neural network model for generating steganographic images; training the original steganographic convolutional neural network model until it converges; and using the converged original steganographic convolutional neural network model as the target steganographic convolutional neural network model.
Optionally, training the original steganographic convolutional neural network model until it converges includes: obtaining a distillation convolutional neural network model, where the distillation convolutional neural network model is used to obtain, from the output data of the original steganographic convolutional neural network model, the measurement value corresponding to its input data; and jointly training the distillation convolutional neural network model and the original steganographic convolutional neural network model until both converge.
Optionally, jointly training the distillation convolutional neural network model and the original steganographic convolutional neural network model until both converge includes: obtaining an original training carrier image and an original measurement value to be hidden; inputting the original training carrier image and the original measurement value to be hidden into the original steganographic convolutional neural network model to obtain an original training steganographic image; inputting the original training steganographic image into the distillation convolutional neural network model to obtain a distillation measurement value corresponding to the original measurement value to be hidden; and adjusting the parameters of the two convolutional neural network models using their corresponding loss functions until both models converge.
Optionally, the distillation convolutional neural network model includes a fully connected layer and at least one residual block; inputting the original training steganographic image into the distillation convolutional neural network model to obtain the distillation measurement value includes: using the at least one residual block to process the original training steganographic image to obtain a distillation measurement value to be processed corresponding to the original measurement value to be hidden; and using the fully connected layer to process the distillation measurement value to be processed so that its dimension and size are consistent with those of the original measurement value to be hidden.
Optionally, compressing the information to be embedded to obtain the target measurement value includes: using compressed sensing technology to process the information to be embedded to obtain the measurement value corresponding to it.
This application also provides an image steganography device, including: an information obtaining unit, used to obtain a carrier image and obtain information to be embedded; a measurement value obtaining unit, used to compress the information to be embedded to obtain the target measurement value corresponding to it; and a steganographic image obtaining unit, used to steganographically write the target measurement value into the carrier image to obtain a target steganographic image.
This application also provides an electronic device, including: a processor; and a memory for storing a program of the image steganography method; after the device is powered on and runs the program through the processor, the following steps are executed: obtaining a carrier image and obtaining information to be embedded; compressing the information to be embedded to obtain the target measurement value corresponding to it; and steganographically writing the target measurement value into the carrier image to obtain a target steganographic image.
This application also provides a storage device storing a program of the image steganography method; when run by a processor, the program performs the following steps: obtaining a carrier image and obtaining information to be embedded; compressing the information to be embedded to obtain the target measurement value corresponding to it; and steganographically writing the target measurement value into the carrier image to obtain a target steganographic image.
This application also provides an image extraction method, including: acquiring an image to be detected; obtaining the steganographic measurement value from the image to be detected, where the measurement value is obtained by compressing the original steganographic information; and inputting the measurement value into a reconstruction convolutional neural network model for reconstructing images to obtain the original steganographic information corresponding to the measurement value.
Optionally, obtaining the embedded measurement value from the image to be detected includes: obtaining a target distillation convolutional neural network model, where the target distillation convolutional neural network model is used to extract the steganographic measurement value from an image to be detected into which a measurement value has been steganographically written; and inputting the image to be detected into the target distillation convolutional neural network model to obtain the steganographic measurement value in the image to be detected.
This application also provides an image extraction device, including: an image acquisition unit, used to acquire an image to be detected; a measurement value obtaining unit, used to obtain the steganographic measurement value from the image to be detected, where the measurement value is obtained by compressing the original steganographic information; and an original steganographic information obtaining unit, used to input the measurement value into a reconstruction convolutional neural network model for reconstructing images to obtain the original steganographic information corresponding to the measurement value.
This application also provides an electronic device, including: a processor; and a memory for storing a program of the image extraction method; after the device is powered on and runs the program through the processor, the following steps are executed: acquiring an image to be detected; obtaining the steganographic measurement value from the image to be detected, where the measurement value is obtained by compressing the original steganographic information; and inputting the measurement value into a reconstruction convolutional neural network model for reconstructing images to obtain the original steganographic information corresponding to the measurement value.
This application also provides a storage device storing a program of the image extraction method; when run by a processor, the program performs the following steps: acquiring an image to be detected; obtaining the steganographic measurement value from the image to be detected, where the measurement value is obtained by compressing the original steganographic information; and inputting the measurement value into a reconstruction convolutional neural network model for reconstructing images to obtain the original steganographic information corresponding to the measurement value.
Compared with the prior art, this application has the following advantages.
The image steganography method provided by this application includes: obtaining a carrier image and obtaining information to be embedded; compressing the information to be embedded to obtain the target measurement value corresponding to it; and steganographically writing the target measurement value into the carrier image to obtain a target steganographic image. The method compresses the information to be embedded to obtain the corresponding target measurement value, and then obtains the target steganographic image by steganographically writing the target measurement value into the carrier image.
By compressing the information to be embedded, the method removes the redundant information in it, which not only reduces the computation of the steganographic processing but also improves the efficiency of image steganography. In addition, because what is finally written into the carrier image is the target measurement value corresponding to the information to be embedded rather than the information itself, the target measurement value is equivalent to a key for that information; that is, the method of this application in effect first encrypts the information to be embedded and then hides the encrypted information, so that the embedded information in the finally obtained target steganographic image is not easily destroyed, greatly improving its security.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of an application scenario of an image steganography method provided by the first embodiment of this application;
FIG. 2 is a schematic diagram of a first existing image steganography method, provided by the first embodiment of this application;
FIG. 3 is a schematic diagram of a second existing image steganography method, provided by the first embodiment of this application;
FIG. 4 is a schematic diagram of a third existing image steganography method, provided by the first embodiment of this application;
FIG. 5 is a schematic diagram of a fourth existing image steganography method, provided by the first embodiment of this application;
FIG. 6 is a flowchart of the image steganography method of this application, provided by the first embodiment;
FIG. 7 is a schematic framework diagram of the image steganography method of this application, provided by the first embodiment;
FIG. 8 is a schematic comparison of the steganographic effects of various image steganography methods, provided by the first embodiment;
FIG. 9 is a schematic comparison of the peak signal-to-noise ratios of various image steganography methods, provided by the first embodiment;
FIG. 10 is a schematic comparison of the structural similarity of various image steganography methods on different data sets, provided by the first embodiment;
FIG. 11 is a schematic comparison of the steganographic effect of the image steganography method of this application before and after introducing the attention mechanism, provided by the first embodiment;
FIG. 12 is a schematic comparison of the steganographic regions of various image steganography methods, provided by the first embodiment;
FIG. 13 is a schematic diagram of the results of the image steganography method of this application at different sampling rates, provided by the first embodiment;
FIG. 14 is a flowchart of an image extraction method provided by the second embodiment of this application;
FIG. 15 is a schematic diagram of the image steganography device provided by the third embodiment of this application;
FIG. 16 is a schematic diagram of an electronic device provided by the fourth embodiment of this application;
FIG. 17 is a schematic diagram of an image extraction device provided by the sixth embodiment of this application;
FIG. 18 is a schematic diagram of another electronic device provided by the seventh embodiment of this application.
Detailed Description of Embodiments
Many specific details are set forth in the following description to facilitate a full understanding of this application. However, this application can be implemented in many ways other than those described here, and those skilled in the art can make similar generalizations without departing from the substance of this application; therefore, this application is not limited by the specific implementations disclosed below.
To enable those skilled in the art to better understand the solution of this application, a specific application scenario of an embodiment of the image steganography method provided by this application is described in detail below. The image steganography method of this application can be applied in a scenario in which a client interacts with a server, as shown in FIG. 1, which is a schematic diagram of an application scenario of the image steganography method provided by the first embodiment of this application.
The client first establishes a connection with the server. After the connection, the client sends a carrier image and information to be embedded to the server. After obtaining the carrier image and the information to be embedded, the server first compresses the information to be embedded to obtain the corresponding target measurement value; it then steganographically writes the target measurement value into the carrier image to obtain a target steganographic image and sends the target steganographic image to the client; the client then receives the target steganographic image.
It should be noted that the client may be a mobile terminal device such as a mobile phone or a tablet computer, or a commonly used computer device. In a specific implementation, the image steganography method of this application may also be applied separately on the client or on the server: for example, after obtaining the carrier image and the information to be embedded, the client may process them directly through a corresponding application installed on the client and obtain the target steganographic image; likewise, the server may, after obtaining the carrier image and the information to be embedded, directly store the obtained target steganographic image in its own storage or in remote storage without sending it to the client. The above application scenario is only one specific embodiment of the image steganography method of this application; it is provided to facilitate understanding of the method and is not intended to limit it.
The first embodiment of this application provides an image steganography method, described below in conjunction with FIG. 2 to FIG. 13.
Before introducing the image steganography method of this application, some existing image steganography methods are briefly introduced.
As shown in FIG. 2, which is a schematic diagram of the first existing image steganography method, this method is a steganography method based on generative adversarial networks (ISGAN, Invisible Steganography via Generative Adversarial Networks); it mainly embeds a grayscale image, i.e., an image in which each pixel has only one sampled color, into a color image. Since this method is prior art, its details are not described here. The method mainly has the following problems: 1. It does not consider the compressibility of the information to be embedded, i.e., the grayscale image, which may contain a large amount of redundant information; this redundancy inevitably increases the computation of the steganographic processing and reduces its efficiency. 2. Although using a generative adversarial network (GAN, Generative Adversarial Networks) for steganographic processing can improve the visual effect of the final steganographic image, it often reduces the peak signal-to-noise ratio (PSNR, Peak Signal to Noise Ratio) of the final hiding result, where PSNR is an objective standard for evaluating images; as it is prior art, it is not described further here.
As shown in FIG. 3, which is a schematic diagram of the second existing image steganography method (Atique's, End-to-end Trained CNN Encoder-Decoder Networks for Image Steganography), the method mainly includes two parts: an encoder and a decoder. The main task of the encoder is to steganographically write the information to be embedded into the carrier image; the main task of the decoder is to extract the embedded information from the steganographic image. Since this method is prior art, its details are not described here. The method mainly has the following problems: 1. The encoder used has already been widely applied in other fields, so the hidden information in the steganographic images it produces is easily destroyed. 2. The method likewise does not consider the compressibility of the information to be embedded and also hides its redundant information, which inevitably increases the computation and reduces the efficiency of the steganographic processing. 3. The method does not perform steganography according to the characteristics of the information to be embedded; for example, when hiding image information it does not select steganographic positions according to the image's texture distribution but simply performs global steganography, so the hidden information in the final steganographic image is easily destroyed.
As shown in FIG. 4, which is a schematic diagram of the third existing image steganography method (StegNet, Image-into-Image Steganography Using Deep Convolutional Network), the method mainly steganographically writes the information to be embedded, mainly color image information, into the carrier image; it also includes two parts: an encoder and a decoder. Since this method is prior art, its details are not described here. The method mainly has the following problems: 1. The network structure of the convolutional neural network model (CNN, Convolutional Neural Networks) it uses is relatively simple, so the steganographic effect of the final steganographic image is not good enough. 2. The method likewise does not consider the compressibility of the information to be embedded. 3. It likewise does not perform steganography according to the characteristics of the information to be embedded, but simply performs global steganography, so the hidden information in the final steganographic image is easily destroyed.
As shown in FIG. 5, which is a schematic diagram of the fourth existing image steganography method (Deep-Steg, Hiding Images in Plain Sight: Deep Steganography), the method mainly uses convolutional neural network models to steganographically write the information to be embedded into the carrier image. It mainly includes three convolutional neural network models: a preprocessing model, a steganographic model, and an extraction model. Since this method is prior art, its details are not described here. The method mainly has the following problems: 1. It likewise does not consider the compressibility of the information to be embedded. 2. It likewise does not perform steganography according to the characteristics of the information to be embedded, but simply performs global steganography, so the hidden information in the final steganographic image is easily destroyed. 3. The method steganographically writes the preprocessed information output by the preprocessing model without destroying the characteristic information of the information to be embedded; therefore, the steganographic effect of its final image is also not good enough, and the hidden information in that image is likewise easily destroyed.
To solve the problems of existing image steganography methods, the first embodiment of this application provides an image steganography method, shown in FIG. 6 (a flowchart of the method) and FIG. 7 (a schematic framework diagram of the method). It is described in detail below.
Step S601: obtain a carrier image, and obtain information to be embedded.
The carrier image may specifically be an image, or a video frame in a video resource.
The information to be embedded refers to information that needs to be transmitted covertly or information used to provide digital rights protection; it may specifically be a piece of text information or image information, such as a company logo or a scanned contract document. In the first embodiment of this application, the information to be hidden is in image format; of course, the image steganography method provided by the first embodiment can equally handle information in other formats, such as text or PDF. In the following description, information in image format is used as an example.
Step S602: compress the information to be embedded to obtain the target measurement value corresponding to the information to be embedded.
Considering the inherent properties of the information to be embedded, such as the local similarity, non-local self-similarity, and compressibility of image information, and in order to remove the redundant information in it, the first embodiment compresses the information to be embedded before performing steganographic processing, obtaining the corresponding target measurement value.
Here, compressing the information to be embedded to obtain the corresponding target measurement value refers to using compressed sensing technology. Of course, besides compressed sensing, image-format information to be embedded can also be processed with techniques such as the discrete cosine transform (DCT, Discrete Cosine Transform) or the wavelet transform (WT, wavelet transform); information to be embedded in other formats can also be processed with compression techniques corresponding to those formats.
Compared with the traditional signal acquisition process of first sampling and then compressing, compressed sensing is a technique that processes signal sampling and signal compression together, i.e., the signal is compressed directly while being sampled; this not only increases the sampling speed but also removes redundant information from the signal to some extent. Compressed sensing mainly includes three parts: 1. sparse representation of the signal, i.e., representing the original signal in the form of a sparse matrix; 2. design of the sampling matrix, i.e., using a sampling matrix to compress the original signal by reducing its dimension, obtaining the corresponding measurement value while minimizing the information loss of the original signal, where the sampling matrix, also called the measurement matrix, is a numerical matrix obtained by calculation, mainly used to sample the original signal while preserving the effective information in it, and the measurement value is a measured quantity obtained after compressing the original signal, whose dimension is smaller than that of the original signal; 3. design of the signal recovery algorithm, i.e., recovering the original signal from its measurement value through a corresponding algorithm.
In this embodiment, the target measurement value is a measured quantity obtained by dimension-reducing compression of the information to be embedded using compressed sensing; its dimension is smaller than that of the information to be embedded.
Compressing the information to be embedded to obtain the corresponding target measurement value includes: obtaining a target sampling convolutional neural network model, where the target sampling convolutional neural network model is used to compress the information to be embedded; and using the target sampling convolutional neural network model to process the information to be embedded to obtain the target measurement value.
In the first embodiment of this application, the information to be embedded is image information, and compressing it means obtaining a sampling matrix with good performance. Usually, sampling the information to be embedded with a sampling matrix can be regarded as a linear mapping, i.e., y = Φx, where x is the information to be embedded, Φ is the sampling matrix, and y is the measurement value obtained by sampling x with Φ. Suppose the sampling matrix Φ is an M×N matrix (M > 0, N > 0, and M << N); then M/N is called the sampling rate, the information to be embedded x is an N-dimensional signal, and sampling x with Φ yields an M-dimensional measurement value y.
From the above description, using a sampling matrix to compress the information to be embedded and obtain its target measurement value can be regarded as a convolution operation: each row of the sampling matrix corresponds to one convolution kernel, so the number of convolution kernels is M. Therefore, in the first embodiment, a target sampling convolutional neural network model corresponding to the sampling matrix is obtained to compress the information to be embedded; in this model, each kernel covers N input values and the number of kernels is M.
For example, after obtaining a target sampling convolutional neural network model corresponding to the sampling matrix, the information to be embedded, such as the image S_Img01, is input into the model to obtain the target measurement value D_Data01 corresponding to S_Img01.
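The linear sampling y = Φx and its block-based interpretation can be sketched as follows. This is a minimal NumPy sketch: the random Gaussian sampling matrix and the block size are illustrative assumptions, whereas the patent learns Φ as a convolutional neural network model.

```python
import numpy as np

def sample_blocks(image, phi, B):
    """Block-based compressed sensing: split the image into BxB blocks,
    flatten each to an N = B*B vector, and apply the MxN sampling matrix,
    yielding one M-dimensional measurement value per block (y = phi @ x)."""
    H, W = image.shape
    measurements = []
    for i in range(0, H, B):
        for j in range(0, W, B):
            x = image[i:i + B, j:j + B].reshape(-1)  # N-dim block signal
            measurements.append(phi @ x)             # M-dim measurement value
    return np.stack(measurements)

B = 8
N, M = B * B, 16                       # sampling rate M/N = 0.25
rng = np.random.default_rng(0)
phi = rng.standard_normal((M, N)) / np.sqrt(M)  # assumed sampling matrix
img = rng.standard_normal((32, 32))
y = sample_blocks(img, phi, B)         # 16 blocks, each reduced 64 -> 16 dims
```

Each row of `phi` plays the role of one convolution kernel applied with stride B, which is exactly the convolutional reading of the sampling matrix given above.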
Obtaining the target sampling convolutional neural network model includes: obtaining an original sampling convolutional neural network model; training the original sampling convolutional neural network model until it converges; and using the converged original sampling convolutional neural network model as the target sampling convolutional neural network model.
To obtain a target sampling convolutional neural network model with good performance, before image steganography is performed, an original sampling convolutional neural network model must first be obtained and trained with a large amount of training sample information until it converges. After convergence, the converged original sampling convolutional neural network model can be used as the target sampling convolutional neural network model, which is then used during steganographic processing to obtain the target measurement value corresponding to the information to be embedded.
Meanwhile, to obtain such a model more efficiently, when training the original sampling convolutional neural network model, the first embodiment of this application introduces a guidance convolutional neural network model for joint training with the original sampling convolutional neural network model.
That is, training the original sampling convolutional neural network model until it converges includes: obtaining a guidance convolutional neural network model, where the guidance convolutional neural network model is used to restore the output data of the original sampling convolutional neural network model to its input data; and jointly training the guidance convolutional neural network model and the original sampling convolutional neural network model until the original sampling convolutional neural network model converges.
It should be noted that when the original sampling convolutional neural network model is trained alone on a large amount of training sample information, a large number of original training measurement values corresponding to that information are obtained, but the quality of these measurement values cannot be judged intuitively. One way to judge them intuitively is to reverse-restore the obtained original training measurement values, compare the original training sample information with the data obtained by reverse restoration, evaluate the performance of the original sampling convolutional neural network model from the comparison result, and use the comparison result to adjust its parameters until it converges. In the first embodiment of this application, a guidance convolutional neural network model is obtained to perform this reverse restoration of the original training measurement values.
Jointly training the guidance convolutional neural network model and the original sampling convolutional neural network model until the latter converges includes: obtaining original training sample information; inputting the original training sample information into the original sampling convolutional neural network model to obtain the corresponding original training measurement values; inputting the original training measurement values into the guidance convolutional neural network model to obtain the corresponding guidance sample information; and adjusting the parameters of the guidance convolutional neural network model and of the original sampling convolutional neural network model through the loss function of the guidance convolutional neural network model until the original sampling convolutional neural network model converges.
For example, let Φ denote the original sampling convolutional neural network model, g the guidance convolutional neural network model, x_i the original training sample information, Φx_i the measurement value corresponding to x_i, and z_i the guidance sample information corresponding to x_i. After x_i is input into the original sampling convolutional neural network model Φ, the measurement value Φx_i is obtained; Φx_i is then input into the guidance convolutional neural network model g to obtain z_i corresponding to x_i. Comparing the difference between x_i and z_i evaluates the performance of the original sampling convolutional neural network model.
Of course, a loss function is usually used to optimize the parameters of a convolutional neural network model. Here, the loss function corresponding to the original sampling convolutional neural network model and the guidance convolutional neural network model can be expressed as:

$$\mathcal{L}(\Phi,\theta_g)=\frac{1}{2N}\sum_{i=1}^{N}\left\|g\!\left(\Phi x_i;\theta_g\right)-x_i\right\|_2^2$$

where θ_g denotes the parameters of the guidance convolutional neural network model, and L(Φ, θ_g) is the loss function corresponding to the original sampling convolutional neural network model and the guidance convolutional neural network model.
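The joint loss above can be computed directly. This is a minimal NumPy sketch; the identity "guidance network" used below is a stand-in for the trained model g, chosen only so the expected value is easy to reason about.

```python
import numpy as np

def joint_loss(phi, g, xs):
    """L(Phi, theta_g) = 1/(2N) * sum_i ||g(Phi x_i) - x_i||^2, where g
    reverse-restores a measurement value back to sample space."""
    n = len(xs)
    return sum(np.sum((g(phi @ x) - x) ** 2) for x in xs) / (2 * n)

# With an exactly invertible "sampling" and a perfect guide, the loss is zero.
phi = np.eye(4)
g = lambda y: y
xs = [np.arange(4.0), np.ones(4)]
loss = joint_loss(phi, g, xs)  # → 0.0
```

Gradient descent on this quantity with respect to both Φ and the parameters of g is what the joint training of the two models amounts to.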
In addition, in order to effectively reduce the sampling matrix, i.e., to reduce the size of the final target sampling convolutional neural network model, block-based compressed sensing (BCS, Block-based Compressed Sensing) is adopted when training the original sampling convolutional neural network model. That is, inputting the original training sample information into the original sampling convolutional neural network model to obtain the original training measurement values includes: dividing the original training sample information into at least one piece of original training sample sub-information; and inputting the at least one piece of original training sample sub-information into the original sampling convolutional neural network model to obtain at least one original training sub-measurement value.
Specifically, after the original training sample information is obtained, it is divided into several pieces of sub-information, and the measurement value of each piece is obtained separately. For example, for image-format original training sample information, the image data is divided into at least one image block, where the N pixels in each block represent one N-dimensional piece of original training sample sub-information; assuming the block size is B*B, i.e., N = B*B, sampling each block yields at least one M-dimensional measurement value.
In addition, inputting the original training measurement values into the guidance convolutional neural network model to obtain the guidance sample information includes: inputting the at least one original training sub-measurement value into the guidance convolutional neural network model to obtain the guidance sample information corresponding to the original training sample information. The guidance convolutional neural network model includes a fully connected layer and at least one residual block; the processing includes: using the fully connected layer to preprocess the at least one original training sub-measurement value to obtain preprocessed sample information of the same size as the original training sample information; and using the at least one residual block to process the preprocessed sample information to obtain the guidance sample information corresponding to the original training sample information.
A residual block usually refers to a combination of multiple convolutional layers with a shortcut connection. The shortcut connection, also called a skip connection, evolved from the skip connections in recurrent neural networks (RNN, Recurrent Neural Network) and various gating algorithms, and is a technique used to alleviate the vanishing-gradient problem in deep architectures. In this application, unless otherwise specified, the residual blocks in the convolutional neural network models mentioned above and in subsequent processing are each composed of at least one convolutional layer, one activation layer, and at least one batch normalization layer; the convolution kernel size is 3*3, the stride is 1, and the padding is 1. Of course, these can be adjusted as needed in a specific implementation, which is not described further here.
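The residual block described here (3*3 kernels, stride 1, padding 1, with a shortcut connection) can be sketched for a single channel as follows. This is a minimal NumPy sketch: batch normalization is omitted and the weights are untrained assumptions.

```python
import numpy as np

def conv3x3(x, w):
    """'Same' 3x3 convolution: stride 1, padding 1, single channel, no bias."""
    H, W = x.shape
    xp = np.pad(x, 1)
    out = np.zeros_like(x, dtype=float)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i + 3, j:j + 3] * w)
    return out

def residual_block(x, w1, w2):
    """conv -> ReLU activation -> conv, then the shortcut adds the input back."""
    h = np.maximum(conv3x3(x, w1), 0.0)
    return x + conv3x3(h, w2)

x = np.arange(16.0).reshape(4, 4)
identity_out = residual_block(x, np.zeros((3, 3)), np.zeros((3, 3)))
```

The shortcut means the block learns only a correction on top of its input; with zero weights it is exactly the identity, which is what makes such blocks easy to train in deep stacks.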
例如,在上述步骤中,针对所述至少一个N维的原始训练样本子信息,获得到其对应的至少一个M维的测量值,为了使用所述指导卷积神经网络模型还原得到所述原始训练样本子信息,我们在所述指导卷积神经网络模型中,先使用一个全连接层对所述至少一个M维的测量值进行预处理,以获得一个与所述原始训练样本信息的大小相同的预处理样本信息,之后,再使用至少一个残差块对所述预处理样本信息进行处理,以获取与所述原始训练样本信息对应的指导样本信息。
其中,所述使用所述全连接层对所述至少一个原始训练子测量值进行预处理,获取与所述原始训练样本信息的大小相同的预处理样本信息,包括:对所述至少一个原始训练子测量值进行维数调整处理,使所述至少一个原始训练子测量值的维数与所述原始训练样本信息的维数一致;对经过维数调整处理后的所述至少一个原始训练子测量值进行形状重塑处理,使所述至少一个原始训练子测量值的大小与所述原始训练样本子信息的大小一致;对经过所述维数调整处理和所述形状重塑处理后的所述至少一个原始训练子测量值进行拼接处理,获取与所述原始训练样本信息的大小相同的预处理样本信息。
如图7所示，所述对所述至少一个原始训练子测量值进行维数调整处理、形状重塑处理以及拼接处理，主要对应图7中的"上采样"处理。即，先对所述至少一个M维的测量值进行维数调整处理，即升维(Upsampling)操作，将其从M维升到N维，使其与其对应的原始训练样本子信息的维数一致；之后，再对其进行形状重塑处理，使其与其对应的原始训练样本子信息的大小一致。例如，假设所述至少一个原始训练样本子信息的大小为B*B，则将其重塑为B*B大小的测量值，所述重塑具体可以使用边缘填充的方法使两者大小一致，当然也可以使用其它方法，此处不再赘述。再之后，将经过所述维数调整处理和所述形状重塑处理后的所述至少一个原始训练子测量值进行拼接处理，获取与所述原始训练样本信息的大小相同的预处理样本信息。
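图7中的"上采样"处理（升维、形状重塑、拼接三步）可用如下示意来说明，其中全连接层以随机权重矩阵近似，块数与维度均为演示用假设：

```python
import numpy as np

B, M = 4, 6
N = B * B
rng = np.random.default_rng(1)
subs = [rng.standard_normal(M) for _ in range(4)]  # 2*2个块的M维子测量值
W_fc = rng.standard_normal((N, M)) * 0.1           # 全连接层权重（假设）

# 升维：M维 -> N维；形状重塑：N维 -> B*B
patches = [(W_fc @ y).reshape(B, B) for y in subs]
# 拼接：按块的位置拼回，得到与原始训练样本信息大小相同的预处理样本信息
pre = np.vstack([np.hstack(patches[:2]), np.hstack(patches[2:])])
```

拼接后的预处理样本信息与原图同大小，因而可以直接送入后续的残差块继续细化。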
需要说明的是，在上述处理中，之所以在获得所述预处理样本信息之后，还要将所述预处理样本信息放到所述指导卷积神经网络模型中的至少一个残差块中去进行处理，是为了使最终获得的所述指导样本信息的效果更好。
综上所述,本申请第一实施例获取的所述目标采样卷积神经网络模型,通过将其对应的原始采样卷积神经网络模型和一个对应的指导卷积神经网络模型进行联合训练,大大的提高了最终获得的所述目标采样卷积神经网络模型的性能;并且在针对所述原始训练样本信息进行采样处理时,通过使用基于块的采样还能够减少所述目标采样卷积神经网络模型的大小。
请继续参看图6、图7,在步骤S602之后,执行步骤S603,将所述目标测量值隐写到所述载体图像中,获取目标隐写图像。
在上述步骤中,通过对所述待嵌入信息进行压缩处理,获取到了与所述待嵌入信息对应的目标测量值,之后,即可将所述目标测量值隐写到所述载体图像中。
其中,所述将所述目标测量值隐写到所述载体图像中,获取目标隐写图像,包括:获取目标隐写卷积神经网络模型;将所述目标测量值和所述载体图像输入到所述目标隐写卷积神经网络模型中,获取所述目标隐写图像。
即,在进行隐写处理之前,需要先获取用于进行图像隐写处理的目标隐写卷积神经网络模型。
在本申请第一实施例中,所述获取目标隐写卷积神经网络模型,包括:获取用于生成隐写图像的原始隐写卷积神经网络模型;对所述原始隐写卷积神经网络模型进行训练,使所述原始隐写卷积神经网络模型收敛,并将收敛后的所述原始隐写卷积神经网络模型作为所述目标隐写卷积神经网络模型。
为了获取一个性能良好的目标隐写卷积神经网络模型,需要先获取原始隐写卷积神经网络模型,并使用大量的原始待隐写测量值对所述原始隐写卷积神经网络模型进行训练,以使所述原始隐写卷积神经网络模型收敛,在所述原始隐写卷积神经网络模型收敛之后,即可将收敛后的所述原始隐写卷积神经网络模型作为目标隐写卷积神经网络模型。
为了能够更高效的获取一个性能良好的目标隐写卷积神经网络模型,本申请第一实施例在针对所述原始隐写卷积神经网络模型进行训练时,通过引入一个蒸馏卷积神经网络模型来和所述原始隐写卷积神经网络模型进行联合训练。
即，所述对所述原始隐写卷积神经网络模型进行训练，使所述原始隐写卷积神经网络模型收敛，包括：获取蒸馏卷积神经网络模型，其中，所述蒸馏卷积神经网络模型用于从所述原始隐写卷积神经网络模型的输出数据中获取与所述原始隐写卷积神经网络模型的输入数据对应的测量值；将所述蒸馏卷积神经网络模型和所述原始隐写卷积神经网络模型进行联合训练，使所述原始隐写卷积神经网络模型和所述蒸馏卷积神经网络模型收敛。
其处理过程具体为,首先,获取原始训练载体图像,并获取原始待隐写测量值;之后,将所述原始训练载体图像和所述原始待隐写测量值输入到所述原始隐写卷积神经网络模型中,获取原始训练隐写图像;再之后,将所述原始训练隐写图像输入到所述蒸馏卷积神经网络模型中,获取与所述原始待隐写测量值对应的蒸馏测量值;再之后,使用所述两个卷积神经网络模型对应的损失函数调整所述两个卷积神经网络模型的参数,使所述原始隐写卷积神经网络模型和所述蒸馏卷积神经网络模型收敛。
所述将所述蒸馏卷积神经网络模型和所述原始隐写卷积神经网络模型进行联合训练,使所述原始隐写卷积神经网络模型和所述蒸馏卷积神经网络模型收敛,包括:获取原始训练载体图像,并获取原始待隐写测量值;将所述原始训练载体图像和所述原始待隐写测量值输入到所述原始隐写卷积神经网络模型中,获取原始训练隐写图像;将所述原始训练隐写图像输入到所述蒸馏卷积神经网络模型中,获取与所述原始待隐写测量值对应的蒸馏测量值;使用所述两个卷积神经网络模型对应的损失函数调整所述两个卷积神经网络模型的参数,使所述原始隐写卷积神经网络模型和所述蒸馏卷积神经网络模型收敛。
例如，以H表示所述原始隐写卷积神经网络模型，以D表示所述蒸馏卷积神经网络模型，以θ_H表示所述原始隐写卷积神经网络模型的参数，以θ_D表示所述蒸馏卷积神经网络模型的参数，以c_i表示任一原始训练载体图像，c'_i表示与c_i对应的原始训练隐写图像，Φx_i表示所述原始待隐写测量值，则所述原始隐写卷积神经网络模型的损失函数$\mathcal{L}_H(\theta_H)$可以表示为：

$$\mathcal{L}_H(\theta_H)=\frac{1}{n}\sum_{i=1}^{n}\left\|c'_i-c_i\right\|_2^2$$

所述蒸馏卷积神经网络模型的损失函数$\mathcal{L}_D(\theta_D)$可以表示为：

$$\mathcal{L}_D(\theta_D)=\frac{1}{n}\sum_{i=1}^{n}\left\|D(c'_i;\theta_D)-\Phi x_i\right\|_2^2$$
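上述两个损失函数的含义（隐写图像尽量接近载体图像、蒸馏出的测量值尽量接近原始待隐写测量值）可用如下线性近似的简化示意来计算；模型H、D均以随机权重矩阵代替，维度与数值均为演示用假设，并非本申请限定的实现：

```python
import numpy as np

rng = np.random.default_rng(0)
n, N, M = 32, 16, 4
C = rng.standard_normal((n, N))             # 原始训练载体图像（展平，假设）
Y = rng.standard_normal((n, M))             # 原始待隐写测量值 Phi x_i
E_H = rng.standard_normal((N, M)) * 0.05    # 隐写模型H的嵌入权重（线性近似）
W_D = rng.standard_normal((M, N)) * 0.05    # 蒸馏模型D的提取权重（线性近似）

C_stego = C + Y @ E_H.T                     # c'_i = H(c_i, Phi x_i)
# 隐写损失：衡量隐写图像与载体图像的差异
loss_H = float(np.mean(np.sum((C_stego - C) ** 2, axis=1)))
# 蒸馏损失：衡量蒸馏测量值与原始待隐写测量值的差异
loss_D = float(np.mean(np.sum((C_stego @ W_D.T - Y) ** 2, axis=1)))
```

联合训练即同时减小这两个损失：前者保证隐写的不可感知性，后者保证测量值可以被蒸馏模型提取出来。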
其中,所述蒸馏卷积神经网络模型包括一个全连接层和至少一个残差块;所述将所述原始训练隐写图像输入到所述蒸馏卷积神经网络模型中,获取与所述原始待隐写测量值对应的蒸馏测量值,包括:使用所述至少一个残差块对所述原始训练隐写图像进行处理,获取与所述原始待隐写测量值对应的待处理蒸馏测量值;使用所述全连接层对所述待处理蒸馏测量值进行处理,使所述待处理蒸馏测量值的大小与所述原始待隐写测量值的大小一致。
所述使用所述蒸馏卷积神经网络模型中的全连接层对所述待处理蒸馏测量值进行处理,使所述待处理蒸馏测量值的大小与所述原始待隐写测量值的大小一致,所述处理过程对应于图7中的下采样处理,具体为:对所述待处理蒸馏测量值进行维数调整处理,使所述待处理蒸馏测量值的维数与所述原始待隐写测量值的维数一致;对经过维数调整处理后的所述待处理蒸馏测量值进行形状重塑处理,使所述待处理蒸馏测量值的大小与所述原始待隐写测量值的大小一致;对经过所述维数调整处理和所述形状重塑处理后的所述待处理蒸馏测量值进行拼接处理,使所述待处理蒸馏测量值的大小与所述原始待隐写测量值的大小一致,其详细处理过程基本相似于上述的上采样处理,此处不再赘述。
经过上述处理之后,通过将所述原始隐写卷积神经网络模型和所述蒸馏卷积神经网络模型进行联合训练,最终可获得一个性能良好的目标隐写卷积神经网络模型,之后,即可使用所述目标隐写卷积神经网络模型进行图像的隐写处理,即将所述目标测量值隐写到所述载体图像中。
需要说明的是，所述目标隐写卷积神经网络模型包括一个全连接层和至少一个残差块；所述将所述载体图像和所述目标测量值输入到所述目标隐写卷积神经网络模型中，获取目标隐写图像，包括：使用所述全连接层对所述载体图像和所述目标测量值进行拼接处理，获取待处理拼接图像；将所述待处理拼接图像输入到所述至少一个残差块中，获取目标隐写图像。
其中,所述对所述载体图像和所述目标测量值进行拼接处理,获取待处理拼接图像,包括:获取与所述载体图像对应的特征信息图像,其中,所述特征信息图像包括所述载体图像的纹理分布信息;根据所述特征信息图像,对所述目标测量值进行重新布局,获取与所述载体图像中的特征信息对应的重布局测量值;对所述载体图像和所述重布局测量值进行拼接处理,获取所述待处理拼接图像。
所述根据所述特征信息图像，对所述目标测量值进行重新布局，获取与所述载体图像中的特征信息对应的重布局测量值，包括：对所述目标测量值进行维数调整处理，使所述目标测量值的维数与所述特征信息图像的维数一致；对经过维数调整处理后的所述目标测量值进行形状重塑处理，使所述目标测量值的大小与所述特征信息图像的大小一致；对经过所述维数调整处理和所述形状重塑处理后的所述目标测量值和所述特征信息图像进行像素点的乘积运算，获取与所述载体图像中的特征信息对应的重布局测量值。
即，在使用所述目标隐写卷积神经网络模型将所述目标测量值隐写到所述载体图像中时，需要考虑载体图像的特有属性，即不同的载体图像的纹理分布信息是不相同的，而将所述目标测量值隐写到所述载体图像的纹理分布比较复杂的区域，势必能够带来更好的图像隐写效果。
因此,在本申请第一实施例中,引入了注意力机制来进行图像隐写处理。所述注意力机制,是指针对载体图像的特有属性,获取在进行隐写处理时需要注意的一些信息,如图像的纹理分布信息、亮度信息等信息。具体到本实施例中,是指在进行具体的隐写处理之前,需要先获取与所述载体图像对应的特征信息图像,然后再利用所述特征信息图像对所述隐写处理进行指导。
所述特征信息图像,是包括所述载体图像的纹理分布信息的图像。所述特征信息图像的获取方法,具体可以利用不同的特征提取算子对所述载体图像进行处理,以获取所述特征信息图像。
例如，可以使用边缘提取算子，如Sobel算子、Laplacian算子、Canny算子、Roberts算子等来提取所述载体图像的纹理边缘，以获取包含所述载体图像的纹理分布信息的边缘图像；当然，也可以使用一个卷积神经网络模型来获取所述特征信息图像，由于其为现有技术，此处不再赘述。
如图7所示，在获取到所述载体图像对应的特征信息图像之后，对所述目标测量值进行维数调整处理以及形状重塑处理，使所述目标测量值的大小与所述特征信息图像的大小一致；之后，对经过所述维数调整处理和所述形状重塑处理后的所述目标测量值和所述特征信息图像进行像素点的乘积运算，获取与所述载体图像中的特征信息对应的重布局测量值；再之后，将所述重布局测量值和所述载体图像进行拼接处理，获取待处理拼接图像，具体为将所述重布局测量值中的像素点和所述载体图像中的像素点的值相加；再之后，将所述待处理拼接图像输入到所述目标隐写卷积神经网络模型的至少一个残差块中，即可获取目标隐写图像。
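上述引入注意力机制的隐写流程（提取特征信息图像、对测量值重新布局、与载体图像拼接）可用如下示意来说明；其中以Sobel算子提取纹理边缘作为特征信息图像，图像大小与测量值均为演示用假设：

```python
import numpy as np

def sobel_feature(img):
    """用Sobel算子提取纹理边缘，归一化后作为特征信息图像（简化示意）。"""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    p = np.pad(img, 1)
    out = np.zeros_like(img)
    for r in range(h):
        for c in range(w):
            win = p[r:r + 3, c:c + 3]
            out[r, c] = abs(float((win * kx).sum())) + abs(float((win * ky).sum()))
    return out / (out.max() + 1e-8)        # 归一化到[0, 1]

rng = np.random.default_rng(0)
carrier = rng.integers(0, 256, size=(8, 8)).astype(float)  # 载体图像（假设）
measure = rng.standard_normal(64)       # 已升维、待重塑的目标测量值（假设）

feat = sobel_feature(carrier)           # 特征信息图像：纹理复杂处取值大
relayout = measure.reshape(8, 8) * feat # 像素点乘积运算，得到重布局测量值
stego_input = carrier + relayout        # 拼接处理：像素值相加
```

由于特征图在纹理平坦区域接近0，乘积运算自然把测量值的能量集中到纹理复杂区域，符合上文所述的注意力指导效果。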
需要说明的是，在本实施例中，所述原始训练样本子信息的大小被设置为32，也就是说一个原始训练样本信息被拆分为32*32大小的子信息；此外，所述原始训练样本信息的来源主要为四个数据集：Set14，LIVE1，VOC2012中的测试集，ImageNet中的测试集。
如图8所示,其为本申请第一实施例提供的各种图像隐写方法的隐写效果对比示意图;如图9所示,其为本申请第一实施例提供的各种图像隐写方法的峰值信噪比对比示意图,在图中,以PSNR表示峰值信噪比;如图10所示,其为本申请第一实施例提供的各种图像隐写方法在不同数据集上的结构相似性对比示意图,其中,结构相似性(SSIM,structural similarity index)是一种衡量两幅图像相似度的指标;另外,参与比较的方法主要是上述进行简单描述的四个现有的图像隐写方法。根据图8、图9、图10可知,本申请第一实施例所述的图像隐写方法,通过对所述待嵌入信息进行压缩处理,以去除所述待嵌入信息中的冗余信息,并获取去除冗余信息后的、与所述待嵌入信息对应的目标测量值,再将所述目标测量值隐写到所述载体图像中,不仅可以减小隐写处理的计算量,提高隐写效率,还可以在一定程度上提升隐写效果。
另外,如图11所示,其为本申请第一实施例提供的本申请的图像隐写方法在引入注意力机制前后的隐写效果对比示意图;如图12所示,其为本申请第一实施例提供的各种图像隐写方法的隐写区域的对比示意图。本申请第一实施例提供的所述图像隐写方法在获得与所述待嵌入信息对应的目标测量值,并在针对所述目标测量值进行隐写处理时,引入了注意力机制,即通过获取包含所述载体图像的纹理分布信息的特征信息图像,并使用所述特征信息图像来对所述隐写处理进行指导,以使所述目标测量值可以被隐写到所述载体图像的纹理复杂区域,进而提升隐写效果,减小最终获得的目标隐写图像中的隐写信息被破坏的几率。
如图13所示，其为本申请第一实施例提供的本申请的图像隐写方法在不同采样率下的结果示意图。为了保证信息的安全，在对待嵌入信息进行隐写的同时，如果再对信息进行加密，将可以最大限度地保障信息的安全。本申请第一实施例提供的所述图像隐写方法，通过使用所述目标采样卷积神经网络模型来对所述待嵌入信息进行采样处理，并获得其对应的目标测量值，相当于变相地对所述待嵌入信息进行了一个加密处理，所述目标测量值即相当于是所述待嵌入信息的密钥。另外，经过对所述待嵌入信息进行采样处理，还可以破坏所述待嵌入信息的可视化特性，以及在一定程度上破坏其原始数据的分布，可以极大地提升隐写效果，降低获取到的目标隐写图像中的隐写信息被破坏的几率。
需要说明的是，为了保证信息的安全，在本申请第一实施例中，将对待嵌入信息进行压缩处理后获得的目标测量值嵌入到载体图像中。在具体实施时，为了进一步提高信息的安全性，还可以针对载体图像进行压缩处理，获取与载体图像对应的载体测量值，之后将与待嵌入信息对应的目标测量值隐写到与载体图像对应的载体测量值中；另外，为了再进一步地提高信息的安全性，还可以先确定载体图像中待隐写入待嵌入信息的目标区域，之后，针对所述目标区域进行局部压缩处理，获取与载体图像对应的局部载体测量值，之后，将与待嵌入信息对应的目标测量值隐写入与载体图像对应的局部载体测量值中。
另外,在具体实施本申请第一实施例所述图像隐写方法时,可对所述图像隐写方法按照不同安全级别进行级别划分,并将不同安全级别对应的图像隐写方法对应的授权给不同级别的用户。
例如,针对图像隐写方法中所使用到的具有不同功能的卷积神经网络模型对图像隐写方法进行级别划分。在具体实施时,将仅包括目标采样卷积神经网络模型、目标隐写卷积神经网络模型及重建卷积神经网络模型的图像隐写方法划分为级别1;将包括目标采样卷积神经网络模型、指导卷积神经网络模型、目标隐写卷积神经网络模型及重建卷积神经网络模型的图像隐写方法划分为级别2;将包括目标采样卷积神经网络模型、指导卷积神经网络模型、目标隐写卷积神经网络模型、目标蒸馏卷积神经网络模型及重建卷积神经网络模型的图像隐写方法划分为级别3。之后,将包括的卷积神经网络模型最多的级别所对应的图像隐写方法授权给高级用户,如将级别3授权给高级用户;将次一级别对应的图像隐写方法授权给次高级用户,如将级别2授权给次高级用户;将最低级别对应的图像隐写方法授权给普通用户,如将级别1授权给普通用户。
又例如，按照图像隐写处理的复杂度的不同，对本申请第一实施例所述图像隐写方法进行级别划分。在具体实施时，将仅对待嵌入信息进行压缩处理后的目标测量值进行隐写处理，同时并未引入注意力机制的图像隐写方法划分为级别1；将对载体图像和待嵌入信息均进行了压缩处理，并将与待嵌入信息对应的目标测量值隐写到与载体图像对应的载体测量值中，同时并未引入注意力机制的图像隐写方法划分为级别2；将仅对待嵌入信息进行压缩处理后的目标测量值进行隐写处理，同时引入注意力机制的图像隐写方法划分为级别3；将针对引入注意力机制后的、针对载体图像中待隐写入待嵌入信息的目标区域进行局部压缩处理，以及对待嵌入信息进行压缩处理，并将与待嵌入信息对应的目标测量值隐写入与载体图像对应的局部载体测量值中的图像隐写方法划分为级别4。之后，将复杂度最高的级别所对应的图像隐写方法授权给高级用户，如将级别4授权给高级用户；将次一级别对应的图像隐写方法授权给次高级用户，如将级别3授权给次高级用户；将再次一级别对应的图像隐写方法授权给再次一级别的高级用户，如将级别2授权给次次高级用户；将最低级别对应的图像隐写方法授权给普通用户，如将级别1授权给普通用户。
需要说明的是,上述从两种不同角度描述了对本申请第一实施例所述图像隐写方法进行安全级别划分的方法。在具体实施时,针对上述两种角度的安全级别划分方法,可按实际需要进行更具体的划分;或者,可按实际的需要,从其它角度对本申请所述图像隐写方法进行安全级别划分,如将上述两种角度的划分方法进行组合以获得更细粒度的安全级别,此处不再一一描述。
综上所述,本申请第一实施例所述图像隐写方法,包括:获取载体图像,并获取待嵌入信息;对所述待嵌入信息进行压缩处理,获取与所述待嵌入信息对应的目标测量值;将所述目标测量值隐写到所述载体图像中,获取目标隐写图像。
本申请第一实施例所述图像隐写方法在进行图像隐写处理时,在所述载体图像中隐写的是针对待嵌入信息进行压缩处理后获得的目标测量值,而非是待嵌入信息本身;这样处理一方面大大的减少了所述待嵌入信息中的冗余信息,减少了之后的隐写处理的计算量,可以提高隐写效率;另一方面因为隐写的是与所述待嵌入信息对应的目标测量值,而非是所述待嵌入信息本身,所述目标测量值相当于是所述待嵌入信息的密钥,即所述方法相当于是先对所述待嵌入信息做了一个加密,然后再对加密后的所述待嵌入信息进行隐写,进而使最终获得的目标隐写图像中的所述待嵌入信息不容易被破坏,大大的提高了所述待嵌入信息的安全性;并且,在进行具体的隐写处理时,引入了注意力机制,通过获取与所述载体图像对应的特征信息图像来指导隐写处理,可以将所述目标测量值更好的隐写到所述载体图像的纹理复杂区域,也可以一定程度的提高隐写效果,进而使最终获得的目标隐写图像中的所述待嵌入信息不容易被破坏,大大的提高了所述待嵌入信息的安全性。
在以上描述中,提供了一种图像隐写方法,与上述图像隐写相对应,本申请还提供一种图像提取方法,请参看图14所示,其为本申请第二实施例提供的一种图像提取方法的流程图,其中部分步骤在上述第一实施例中已经详细描述,所以此处描述的比较简单,相关之处参见本申请第一实施例提供的一种图像隐写方法中的部分说明即可,下述描述的处理过程仅是示意性的。
如图14所示,其为本申请第二实施例提供的一种图像提取方法的流程图,以下结合图7以及图14予以说明。
步骤S1401,获取待检测图像;
步骤S1402,从所述待检测图像中获取被隐写的测量值,其中,所述测量值是对原始隐写信息进行压缩处理后获得的;
步骤S1403,将所述测量值输入到用于重建图像的重建卷积神经网络模型中,获取与所述测量值对应的所述原始隐写信息。
其中,所述从所述待检测图像中获取被隐写的测量值,包括:
获取目标蒸馏卷积神经网络模型,其中,所述目标蒸馏卷积神经网络模型用于从隐写入测量值的待检测图像中,提取所述被隐写的测量值;将所述待检测图像输入到所述目标蒸馏卷积神经网络模型中,获取所述待检测图像中被隐写的测量值。
所述获取目标蒸馏卷积神经网络模型,包括:获取蒸馏卷积神经网络模型;对所述蒸馏卷积神经网络模型进行训练,使所述蒸馏卷积神经网络模型收敛,并将收敛后的所述蒸馏卷积神经网络模型作为所述目标蒸馏卷积神经网络模型。
所述目标蒸馏卷积神经网络模型,是在获取上述第一实施例中所述的目标隐写卷积神经网络模型时,通过引入所述蒸馏卷积神经网络模型,并将所述蒸馏卷积神经网络模型和上述第一实施例中所述原始隐写卷积神经网络模型进行联合训练,使所述原始隐写卷积神经网络模型和所述蒸馏卷积神经网络模型收敛,并将收敛后的所述蒸馏卷积神经网络模型作为所述目标蒸馏卷积神经网络模型。由于其详细处理过程在上述第一实施例中已经详细描述,此处不再赘述,具体参考上述第一实施例中描述即可。
所述重建卷积神经网络模型,通过以下步骤获取:
获取原始重建卷积神经网络模型;对所述原始重建卷积神经网络模型进行训练,使所述原始重建卷积神经网络模型收敛,并将收敛后的所述原始重建卷积神经网络模型作为所述目标重建卷积神经网络模型。
其中,所述对所述原始重建卷积神经网络模型进行训练,使所述原始重建卷积神经网络模型收敛,包括:获取原始训练检测测量值;将所述原始训练检测测量值输入到所述原始重建卷积神经网络模型中,获取原始重建图像;使用所述原始重建卷积神经网络模型对应的损失函数调整所述原始重建卷积神经网络模型的参数,使所述原始重建卷积神经网络模型收敛。
所述获取原始训练检测测量值，是指通过上述第一实施例中所述目标采样卷积神经网络模型针对所述原始训练样本信息进行采样处理后获取到的。其中，在本实施例中，所述原始训练样本信息的来源主要为四个数据集：Set14，LIVE1，VOC2012中的测试集，ImageNet中的测试集。
例如，以θ_R表示所述原始重建卷积神经网络模型的参数，以y″_i表示所述原始训练检测测量值，以x_i表示所述原始训练样本信息，以$R(\cdot;\theta_R)$表示所述原始重建卷积神经网络模型，则所述原始重建卷积神经网络模型的损失函数$\mathcal{L}_R(\theta_R)$可以表示为：

$$\mathcal{L}_R(\theta_R)=\frac{1}{n}\sum_{i=1}^{n}\left\|R(y''_i;\theta_R)-x_i\right\|_2^2$$
所述原始重建卷积神经网络模型包括一个全连接层和至少一个残差块；所述将所述原始训练检测测量值输入到所述原始重建卷积神经网络模型中，获取原始重建图像，包括：使用所述全连接层对所述原始训练检测测量值进行处理，使所述原始训练检测测量值的维数和大小与所述待检测图像的维数和大小一致；使用所述至少一个残差块对与所述待检测图像的维数和大小一致的所述原始训练检测测量值进行处理，获取与所述原始训练检测测量值对应的原始重建图像。其中，所述残差块的结构与上述第一实施例中所述的卷积神经网络模型中所使用的残差块的结构相同，此处不再赘述。
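重建过程中全连接层的作用（先使测量值的维数和大小与图像一致，再交由残差块细化）可用如下示意来说明，其中权重与维度均为演示用假设：

```python
import numpy as np

rng = np.random.default_rng(2)
M, H, W = 6, 8, 8
y = rng.standard_normal(M)                     # 从待检测图像中提取的测量值
W_fc = rng.standard_normal((H * W, M)) * 0.1   # 全连接层权重（假设）

# 全连接层：M维测量值 -> H*W维，再重塑为与待检测图像一致的大小
x0 = (W_fc @ y).reshape(H, W)
# 后续再输入至少一个残差块进行细化，即可得到与测量值对应的原始隐写信息
```

该初步重建与图像同大小，因此后续残差块可以在不改变大小的前提下逐步细化重建质量。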
与上述第一实施例提供的一种图像隐写方法相对应,本申请还提供一种图像隐写装置,请参看图15,其为本申请第三实施例提供的一种图像隐写装置的实施例的示意图,由于装置实施例基本相似于方法实施例,所以描述的比较简单,相关之处参见方法实施例的部分说明即可,下述描述的装置实施例仅仅是示意性的。本申请第三实施例提供的一种图像隐写装置包括如下部分:
信息获取单元1501,用于获取载体图像,并获取待嵌入信息。
测量值获取单元1502,用于对所述待嵌入信息进行压缩处理,获取与所述待嵌入信息对应的目标测量值。
隐写图像获取单元1503,用于将所述目标测量值隐写到所述载体图像中,获取目标隐写图像。
与上述第一实施例提供的一种图像隐写方法相对应，本申请还提供一种电子设备，请参看图16，其为本申请第四实施例提供的一种电子设备的示意图，由于电子设备实施例基本相似于方法实施例，所以描述的比较简单，相关之处参见方法实施例的部分说明即可，下述描述的电子设备实施例仅仅是示意性的。本申请第四实施例提供的一种电子设备包括：
处理器1601;
存储器1602,用于存储图像隐写方法的程序,该设备通电并通过所述处理器运行所述图像隐写方法的程序后,执行下述步骤:
获取载体图像,并获取待嵌入信息;
对所述待嵌入信息进行压缩处理,获取与所述待嵌入信息对应的目标测量值;
将所述目标测量值隐写到所述载体图像中,获取目标隐写图像。
与上述第一实施例提供的一种图像隐写方法相对应,本申请还提供一种存储设备,由于存储设备实施例基本相似于方法实施例,所以描述的比较简单,相关之处参见方法实施例的部分说明即可,下述描述的存储设备实施例仅仅是示意性的。本申请第五实施例提供的一种存储设备,存储有图像隐写方法的程序,该程序被处理器运行,执行下述步骤:
获取载体图像,并获取待嵌入信息;
对所述待嵌入信息进行压缩处理,获取与所述待嵌入信息对应的目标测量值;
将所述目标测量值隐写到所述载体图像中,获取目标隐写图像。
与上述第二实施例提供的一种图像提取方法相对应,本申请还提供一种图像提取装置,请参看图17,其为本申请第六实施例提供的一种图像提取装置的实施例的示意图,由于装置实施例基本相似于方法实施例,所以描述的比较简单,相关之处参见方法实施例的部分说明即可,下述描述的装置实施例仅仅是示意性的。本申请第六实施例提供的一种图像提取装置包括如下部分:
图像获取单元1701,用于获取待检测图像;
测量值获取单元1702,用于从所述待检测图像中获取被隐写的测量值,其中,所述测量值是对原始隐写信息进行压缩处理后获得的;
原始隐写信息获取单元1703,用于将所述测量值输入到用于重建图像的重建卷积神经网络模型中,获取与所述测量值对应的所述原始隐写信息。
与上述第二实施例提供的一种图像提取方法相对应，本申请还提供另一种电子设备，请参看图18，其为本申请第七实施例提供的一种电子设备的示意图，由于电子设备实施例基本相似于方法实施例，所以描述的比较简单，相关之处参见方法实施例的部分说明即可，下述描述的电子设备实施例仅仅是示意性的。本申请第七实施例提供的一种电子设备包括：
处理器1801;
存储器1802,用于存储图像提取方法的程序,该设备通电并通过所述处理器运行所述图像提取方法的程序后,执行下述步骤:
获取待检测图像;
从所述待检测图像中获取被隐写的测量值,其中,所述测量值是对原始隐写信息进行压缩处理后获得的;
将所述测量值输入到用于重建图像的重建卷积神经网络模型中,获取与所述测量值对应的所述原始隐写信息。
与上述第二实施例提供的一种图像隐写方法相对应,本申请还提供一种存储设备,由于存储设备实施例基本相似于方法实施例,所以描述的比较简单,相关之处参见方法实施例的部分说明即可,下述描述的存储设备实施例仅仅是示意性的。本申请第八实施例提供的一种存储设备,存储有图像提取方法的程序,该程序被处理器运行,执行下述步骤:
获取待检测图像;
从所述待检测图像中获取被隐写的测量值,其中,所述测量值是对原始隐写信息进行压缩处理后获得的;
将所述测量值输入到用于重建图像的重建卷积神经网络模型中,获取与所述测量值对应的所述原始隐写信息。
本申请虽然以较佳实施例公开如上,但其并不是用来限定本申请,任何本领域技术人员在不脱离本申请的精神和范围内,都可以做出可能的变动和修改,因此本申请的保护范围应当以本申请权利要求所界定的范围为准。
在一个典型的配置中,计算设备包括一个或多个处理器(CPU)、输入/输出接口、网络接口和内存。
内存可能包括计算机可读介质中的非永久性存储器,随机存取存储器(RAM)和/或非易失性内存等形式,如只读存储器(ROM)或闪存(flash RAM)。内存是计算 机可读介质的示例。
计算机可读介质包括永久性和非永久性、可移动和非可移动媒体可以由任何方法或技术来实现信息存储。信息可以是计算机可读指令、数据结构、程序的模块或其他数据。计算机的存储介质的例子包括,但不限于相变内存(PRAM)、静态随机存取存储器(SRAM)、动态随机存取存储器(DRAM)、其他类型的随机存取存储器(RAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、快闪记忆体或其他内存技术、只读光盘只读存储器(CD-ROM)、数字多功能光盘(DVD)或其他光学存储、磁盒式磁带,磁带磁磁盘存储或其他磁性存储设备或任何其他非传输介质,可用于存储可以被计算设备访问的信息。按照本文中的界定,计算机可读介质不包括非暂存电脑可读媒体(transitory media),如调制的数据信号和载波。
本领域技术人员应明白,本申请的实施例可提供为方法、系统或计算机程序产品。因此,本申请可采用完全硬件实施例、完全软件实施例或结合软件和硬件方面的实施例的形式。而且,本申请可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。

Claims (26)

  1. 一种图像隐写方法,其特征在于,包括:
    获取载体图像,并获取待嵌入信息;
    对所述待嵌入信息进行压缩处理,获取与所述待嵌入信息对应的目标测量值;
    将所述目标测量值隐写到所述载体图像中,获取目标隐写图像。
  2. 根据权利要求1所述的图像隐写方法,其特征在于,所述对所述待嵌入信息进行压缩处理,获取与所述待嵌入信息对应的目标测量值,包括:
    获取目标采样卷积神经网络模型,其中,所述目标采样卷积神经网络模型用于对所述待嵌入信息进行压缩处理;
    使用所述目标采样卷积神经网络模型对所述待嵌入信息进行处理,获取所述目标测量值。
  3. 根据权利要求2所述的图像隐写方法,其特征在于,所述获取目标采样卷积神经网络模型,包括:
    获取原始采样卷积神经网络模型;
    对所述原始采样卷积神经网络模型进行训练,使所述原始采样卷积神经网络模型收敛,并将收敛后的所述原始采样卷积神经网络模型作为所述目标采样卷积神经网络模型。
  4. 根据权利要求3所述的图像隐写方法,其特征在于,所述对所述原始采样卷积神经网络模型进行训练,使所述原始采样卷积神经网络模型收敛,包括:
    获取指导卷积神经网络模型,其中,所述指导卷积神经网络模型用于将所述原始采样卷积神经网络模型的输出数据恢复为所述原始采样卷积神经网络模型的输入数据;
    将所述指导卷积神经网络模型和所述原始采样卷积神经网络模型进行联合训练,使所述原始采样卷积神经网络模型收敛。
  5. 根据权利要求4所述的图像隐写方法,其特征在于,所述将所述指导卷积神经网络模型和所述原始采样卷积神经网络模型进行联合训练,使所述原始采样卷积神经网络模型收敛,包括:
    获取原始训练样本信息;
    将所述原始训练样本信息输入到所述原始采样卷积神经网络模型中,获取与所述原始训练样本信息对应的原始训练测量值;
    将所述原始训练测量值输入到所述指导卷积神经网络模型中，获取与所述原始训练样本信息对应的指导样本信息；
    通过所述指导卷积神经网络模型的损失函数调整所述指导卷积神经网络模型的参数和所述原始采样卷积神经网络模型的参数,使所述原始采样卷积神经网络模型收敛。
  6. 根据权利要求5所述的图像隐写方法,其特征在于,所述将所述原始训练样本信息输入到所述原始采样卷积神经网络模型中,获取与所述原始训练样本信息对应的原始训练测量值,包括:
    将所述原始训练样本信息划分为至少一个原始训练样本子信息;
    将所述至少一个原始训练样本子信息输入到所述原始采样卷积神经网络模型中,获取至少一个原始训练子测量值。
  7. 根据权利要求6所述的图像隐写方法,其特征在于,所述将所述原始训练测量值输入到所述指导卷积神经网络模型中,获取与所述原始训练样本信息对应的指导样本信息,包括:
    将所述至少一个原始训练子测量值输入到所述指导卷积神经网络模型中,获取与所述原始训练样本信息对应的指导样本信息。
  8. 根据权利要求7所述的图像隐写方法,其特征在于,所述指导卷积神经网络模型包括一个全连接层和至少一个残差块;
    所述将所述至少一个原始训练子测量值输入到所述指导卷积神经网络模型中,获取与所述原始训练样本信息对应的指导样本信息,包括:
    使用所述全连接层对所述至少一个原始训练子测量值进行预处理,获取与所述原始训练样本信息的大小相同的预处理样本信息;
    使用所述至少一个残差块对所述预处理样本信息进行处理,获取与所述原始训练样本信息对应的指导样本信息。
  9. 根据权利要求8所述的图像隐写方法,其特征在于,所述使用所述全连接层对所述至少一个原始训练子测量值进行预处理,获取与所述原始训练样本信息的大小相同的预处理样本信息,包括:
    对所述至少一个原始训练子测量值进行维数调整处理,使所述至少一个原始训练子测量值的维数与所述原始训练样本信息的维数一致;
    对经过维数调整处理后的所述至少一个原始训练子测量值进行形状重塑处理,使所述至少一个原始训练子测量值的大小与所述原始训练样本子信息的大小一致;
    对经过所述维数调整处理和所述形状重塑处理后的所述至少一个原始训练子测量值进行拼接处理，获取与所述原始训练样本信息的大小相同的预处理样本信息。
  10. 根据权利要求1所述的图像隐写方法,其特征在于,所述将所述目标测量值隐写到所述载体图像中,获取目标隐写图像,包括:
    获取目标隐写卷积神经网络模型;
    将所述目标测量值和所述载体图像输入到所述目标隐写卷积神经网络模型中,获取所述目标隐写图像。
  11. 根据权利要求10所述的图像隐写方法,其特征在于,所述目标隐写卷积神经网络模型包括一个全连接层和至少一个残差块;
    所述将所述载体图像和所述目标测量值输入到所述目标隐写卷积神经网络模型中,获取所述目标隐写图像,包括:
    使用所述全连接层对所述载体图像和所述目标测量值进行拼接处理，获取待处理拼接图像；
    将所述待处理拼接图像输入到所述至少一个残差块中，获取目标隐写图像。
  12. 根据权利要求11所述的图像隐写方法,其特征在于,所述对所述载体图像和所述目标测量值进行拼接处理,获取待处理拼接图像,包括:
    获取与所述载体图像对应的特征信息图像,其中,所述特征信息图像包括所述载体图像的纹理分布信息;
    根据所述特征信息图像,对所述目标测量值进行重新布局,获取与所述载体图像中的特征信息对应的重布局测量值;
    对所述载体图像和所述重布局测量值进行拼接处理,获取所述待处理拼接图像。
  13. 根据权利要求12所述的图像隐写方法,其特征在于,所述根据所述特征信息图像,对所述目标测量值进行重新布局,获取与所述载体图像中的特征信息对应的重布局测量值,包括:
    对所述目标测量值进行维数调整处理,使所述目标测量值的维数与所述特征信息图像的维数一致;
    对经过维数调整处理后的所述目标测量值进行形状重塑处理,使所述目标测量值的大小与所述特征信息图像的大小一致;
    对经过所述维数调整处理和所述形状重塑处理后的所述目标测量值和所述特征信息图像进行像素点的乘积运算,获取与所述载体图像中的特征信息对应的重布局测量值。
  14. 根据权利要求10所述的图像隐写方法,其特征在于,所述获取目标隐写卷积神经网络模型,包括:
    获取用于生成隐写图像的原始隐写卷积神经网络模型;
    对所述原始隐写卷积神经网络模型进行训练,使所述原始隐写卷积神经网络模型收敛,并将收敛后的所述原始隐写卷积神经网络模型作为所述目标隐写卷积神经网络模型。
  15. 根据权利要求14所述的图像隐写方法,其特征在于,所述对所述原始隐写卷积神经网络模型进行训练,使所述原始隐写卷积神经网络模型收敛,包括:
    获取蒸馏卷积神经网络模型,其中,所述蒸馏卷积神经网络模型用于从所述原始隐写卷积神经网络模型的输出数据中获取与所述原始隐写卷积神经网络模型的输入数据对应的测量值;
    将所述蒸馏卷积神经网络模型和所述原始隐写卷积神经网络模型进行联合训练,使所述原始隐写卷积神经网络模型和所述蒸馏卷积神经网络模型收敛。
  16. 根据权利要求15所述的图像隐写方法,其特征在于,所述将所述蒸馏卷积神经网络模型和所述原始隐写卷积神经网络模型进行联合训练,使所述原始隐写卷积神经网络模型和所述蒸馏卷积神经网络模型收敛,包括:
    获取原始训练载体图像,并获取原始待隐写测量值;
    将所述原始训练载体图像和所述原始待隐写测量值输入到所述原始隐写卷积神经网络模型中,获取原始训练隐写图像;
    将所述原始训练隐写图像输入到所述蒸馏卷积神经网络模型中,获取与所述原始待隐写测量值对应的蒸馏测量值;
    使用两个卷积神经网络模型对应的损失函数调整所述两个卷积神经网络模型的参数,使所述原始隐写卷积神经网络模型和所述蒸馏卷积神经网络模型收敛。
  17. 根据权利要求16所述的图像隐写方法,其特征在于,所述蒸馏卷积神经网络模型包括一个全连接层和至少一个残差块;
    所述将所述原始训练隐写图像输入到所述蒸馏卷积神经网络模型中,获取与所述原始待隐写测量值对应的蒸馏测量值,包括:
    使用所述至少一个残差块对所述原始训练隐写图像进行处理,获取与所述原始待隐写测量值对应的待处理蒸馏测量值;
    使用所述全连接层对所述待处理蒸馏测量值进行处理，使所述待处理蒸馏测量值的维数和大小与所述原始待隐写测量值的维数和大小一致。
  18. 根据权利要求1所述的图像隐写方法,其特征在于,所述对所述待嵌入信息进行压缩处理,获取与所述待嵌入信息对应的目标测量值,包括:
    使用压缩感知技术对所述待嵌入信息进行处理,获取与所述待嵌入信息对应的测量值。
  19. 一种图像提取方法,其特征在于,包括:
    获取待检测图像;
    从所述待检测图像中获取被隐写的测量值,其中,所述测量值是对原始隐写信息进行压缩处理后获得的;
    将所述测量值输入到用于重建图像的重建卷积神经网络模型中,获取与所述测量值对应的所述原始隐写信息。
  20. 根据权利要求19所述的图像提取方法,其特征在于,所述从所述待检测图像中获取被隐写的测量值,包括:
    获取目标蒸馏卷积神经网络模型,其中,所述目标蒸馏卷积神经网络模型用于从隐写入测量值的待检测图像中,提取所述被隐写的测量值;
    将所述待检测图像输入到所述目标蒸馏卷积神经网络模型中,获取所述待检测图像中被隐写入的测量值。
  21. 一种图像隐写装置,其特征在于,包括:
    信息获取单元,用于获取载体图像,并获取待嵌入信息;
    测量值获取单元,用于对所述待嵌入信息进行压缩处理,获取与所述待嵌入信息对应的目标测量值;
    隐写图像获取单元,用于将所述目标测量值隐写到所述载体图像中,获取目标隐写图像。
  22. 一种电子设备,其特征在于,包括:
    处理器;
    存储器,用于存储图像隐写方法的程序,该设备通电并通过所述处理器运行所述图像隐写方法的程序后,执行下述步骤:
    获取载体图像,并获取待嵌入信息;
    对所述待嵌入信息进行压缩处理,获取与所述待嵌入信息对应的目标测量值;
    将所述目标测量值隐写到所述载体图像中,获取目标隐写图像。
  23. 一种存储设备,其特征在于,
    存储有图像隐写方法的程序,该程序被处理器运行,执行下述步骤:
    获取载体图像,并获取待嵌入信息;
    对所述待嵌入信息进行压缩处理,获取与所述待嵌入信息对应的目标测量值;
    将所述目标测量值隐写到所述载体图像中,获取目标隐写图像。
  24. 一种图像提取装置,其特征在于,包括:
    图像获取单元,用于获取待检测图像;
    测量值获取单元,用于从所述待检测图像中获取被隐写的测量值,其中,所述测量值是对原始隐写信息进行压缩处理后获得的;
    原始隐写信息获取单元,用于将所述测量值输入到用于重建图像的重建卷积神经网络模型中,获取与所述测量值对应的所述原始隐写信息。
  25. 一种电子设备,其特征在于,包括:
    处理器;
    存储器,用于存储图像提取方法的程序,该设备通电并通过所述处理器运行所述图像提取方法的程序后,执行下述步骤:
    获取待检测图像;
    从所述待检测图像中获取被隐写的测量值,其中,所述测量值是对原始隐写信息进行压缩处理后获得的;
    将所述测量值输入到用于重建图像的重建卷积神经网络模型中,获取与所述测量值对应的所述原始隐写信息。
  26. 一种存储设备,其特征在于,
    存储有图像提取方法的程序,该程序被处理器运行,执行下述步骤:
    获取待检测图像;
    从所述待检测图像中获取被隐写的测量值,其中,所述测量值是对原始隐写信息进行压缩处理后获得的;
    将所述测量值输入到用于重建图像的重建卷积神经网络模型中,获取与所述测量值对应的所述原始隐写信息。
PCT/CN2020/113735 2019-09-10 2020-09-07 图像隐写及提取方法、装置及电子设备 WO2021047471A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910850963.3 2019-09-10
CN201910850963.3A CN112561766B (zh) 2019-09-10 2019-09-10 图像隐写及提取方法、装置及电子设备

Publications (1)

Publication Number Publication Date
WO2021047471A1 true WO2021047471A1 (zh) 2021-03-18

Family

ID=74866895

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/113735 WO2021047471A1 (zh) 2019-09-10 2020-09-07 图像隐写及提取方法、装置及电子设备

Country Status (3)

Country Link
CN (2) CN118158329A (zh)
TW (1) TW202111604A (zh)
WO (1) WO2021047471A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114157773A (zh) * 2021-12-01 2022-03-08 杭州电子科技大学 基于卷积神经网络和频域注意力的图像隐写方法
CN114926706A (zh) * 2022-05-23 2022-08-19 支付宝(杭州)信息技术有限公司 数据处理方法、装置及设备
CN116095339A (zh) * 2023-01-16 2023-05-09 北京智芯微电子科技有限公司 图像传输方法、训练方法、电子设备及可读存储介质
CN116156072A (zh) * 2023-02-08 2023-05-23 马上消费金融股份有限公司 隐写图像生成方法、隐写信息提取方法及相关装置
CN117876273A (zh) * 2024-03-11 2024-04-12 南京信息工程大学 一种基于可逆生成对抗网络的鲁棒图像处理方法

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113076549B (zh) * 2021-04-08 2023-05-12 上海电力大学 一种基于新型U-Net结构生成器的对抗网络图像隐写方法
CN112926607B (zh) * 2021-04-28 2023-02-17 河南大学 基于卷积神经网络的双支网络图像隐写框架及方法
CN113326531B (zh) * 2021-06-29 2022-07-26 湖南汇视威智能科技有限公司 一种鲁棒的高效分布式人脸图像隐写方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016165082A1 (zh) * 2015-04-15 2016-10-20 中国科学院自动化研究所 基于深度学习的图像隐写检测方法
CN106339978A (zh) * 2016-08-24 2017-01-18 湖南工业大学 一种基于压缩感知的彩色数字图像水印嵌入及提取方法
CN106791872A (zh) * 2016-11-18 2017-05-31 南京邮电大学 基于svd的信息隐藏方法
CN108961137A (zh) * 2018-07-12 2018-12-07 中山大学 一种基于卷积神经网络的图像隐写分析方法及系统
CN110110535A (zh) * 2019-04-24 2019-08-09 湖北工业大学 一种基于像素矩阵的低失真隐写方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016165082A1 (zh) * 2015-04-15 2016-10-20 中国科学院自动化研究所 基于深度学习的图像隐写检测方法
CN106339978A (zh) * 2016-08-24 2017-01-18 湖南工业大学 一种基于压缩感知的彩色数字图像水印嵌入及提取方法
CN106791872A (zh) * 2016-11-18 2017-05-31 南京邮电大学 基于svd的信息隐藏方法
CN108961137A (zh) * 2018-07-12 2018-12-07 中山大学 一种基于卷积神经网络的图像隐写分析方法及系统
CN110110535A (zh) * 2019-04-24 2019-08-09 湖北工业大学 一种基于像素矩阵的低失真隐写方法

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114157773A (zh) * 2021-12-01 2022-03-08 杭州电子科技大学 基于卷积神经网络和频域注意力的图像隐写方法
CN114157773B (zh) * 2021-12-01 2024-02-09 杭州电子科技大学 基于卷积神经网络和频域注意力的图像隐写方法
CN114926706A (zh) * 2022-05-23 2022-08-19 支付宝(杭州)信息技术有限公司 数据处理方法、装置及设备
CN116095339A (zh) * 2023-01-16 2023-05-09 北京智芯微电子科技有限公司 图像传输方法、训练方法、电子设备及可读存储介质
CN116156072A (zh) * 2023-02-08 2023-05-23 马上消费金融股份有限公司 隐写图像生成方法、隐写信息提取方法及相关装置
CN117876273A (zh) * 2024-03-11 2024-04-12 南京信息工程大学 一种基于可逆生成对抗网络的鲁棒图像处理方法
CN117876273B (zh) * 2024-03-11 2024-06-07 南京信息工程大学 一种基于可逆生成对抗网络的鲁棒图像处理方法

Also Published As

Publication number Publication date
CN112561766A (zh) 2021-03-26
CN112561766B (zh) 2024-03-05
CN118158329A (zh) 2024-06-07
TW202111604A (zh) 2021-03-16

Similar Documents

Publication Publication Date Title
WO2021047471A1 (zh) 图像隐写及提取方法、装置及电子设备
CN110197229B (zh) 图像处理模型的训练方法、装置及存储介质
CN110084734B (zh) 一种基于物体局部生成对抗网络的大数据权属保护方法
CN112598579B (zh) 面向监控场景的图像超分辨率方法、装置及存储介质
Wei et al. Generative steganography network
US20230051960A1 (en) Coding scheme for video data using down-sampling/up-sampling and non-linear filter for depth map
CN117237197B (zh) 基于交叉注意力机制的图像超分辨率方法及装置
CN112487365A (zh) 信息隐写方法及信息检测方法及装置
Wu et al. An image authentication and recovery system based on discrete wavelet transform and convolutional neural networks
US11854164B2 (en) Method for denoising omnidirectional videos and rectified videos
Liu et al. Facial image inpainting using multi-level generative network
CN113660386B (zh) 一种彩色图像加密压缩与超分重构系统和方法
US20220335560A1 (en) Watermark-Based Image Reconstruction
Li et al. Robust image steganography framework based on generative adversarial network
US20230326086A1 (en) Systems and methods for image and video compression
Xintao et al. Hide the image in fc-densenets to another image
Tsai et al. A generalized image interpolation-based reversible data hiding scheme with high embedding capacity and image quality
CN111065000B (zh) 视频水印处理方法、装置及存储介质
CN115375539A (zh) 图像分辨率增强、多帧图像超分辨率系统和方法
CN112966230A (zh) 信息隐写及提取方法、装置及设备
Hashemi et al. Color Image steganography using Deep convolutional Autoencoders based on ResNet architecture
Xu et al. Image Super-Resolution Based on Variational Autoencoder and Channel Attention
CN115643348B (zh) 基于可逆图像处理网络的可证安全自然隐写方法及装置
Liu et al. Soft-introVAE for continuous latent space image super-resolution
US20240331204A1 (en) Method to generate for global displacement transformation in mesh compression

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20863771

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20863771

Country of ref document: EP

Kind code of ref document: A1