CN117376484A - Electronic license anti-counterfeiting oriented generation type steganography method - Google Patents

Electronic license anti-counterfeiting oriented generation type steganography method

Info

Publication number
CN117376484A
CN117376484A (application CN202311651548.8A)
Authority
CN
China
Prior art keywords
model
noise
image
diffusion
denoising
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311651548.8A
Other languages
Chinese (zh)
Other versions
CN117376484B (en)
Inventor
熊翱
严文昊
王伟
刘雨潇
张楠
张秀永
钱旭盛
朱萌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Co ltd Customer Service Center
State Grid Jiangsu Electric Power Co ltd Marketing Service Center
Beijing University of Posts and Telecommunications
Original Assignee
State Grid Co ltd Customer Service Center
State Grid Jiangsu Electric Power Co ltd Marketing Service Center
Beijing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Co ltd Customer Service Center, State Grid Jiangsu Electric Power Co ltd Marketing Service Center, and Beijing University of Posts and Telecommunications
Priority to CN202311651548.8A priority Critical patent/CN117376484B/en
Publication of CN117376484A publication Critical patent/CN117376484A/en
Application granted granted Critical
Publication of CN117376484B publication Critical patent/CN117376484B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/32Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N1/32101Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N1/32144Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title embedded in the image data, i.e. enclosed or integrated in the image, e.g. watermark, super-imposed logo or stamp
    • H04N1/32149Methods relating to embedding, encoding, decoding, detection or retrieval operations
    • H04N1/32347Reversible embedding, i.e. lossless, invertible, erasable, removable or distorsion-free embedding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • G06N3/0455Auto-encoder networks; Encoder-decoder networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems


Abstract

The invention provides a generative steganography method oriented to electronic license anti-counterfeiting, which specifically comprises the following steps: when sending a message, encoding the secret message and inputting it into a pre-trained encoder to obtain latent noise, then inputting the latent noise into a pre-trained denoising diffusion implicit model to obtain a secret-containing image; and/or, when receiving a message, restoring the secret-containing image to latent noise using the inverse sampling process of the pre-trained denoising diffusion implicit model, inputting the restored latent noise into a pre-trained decoder, and restoring it to the secret message according to the encoding scheme. Compared with traditional steganography methods and existing generative steganography methods, the method provided by the invention not only offers higher security and is harder to detect and analyze, but also achieves higher image quality, improving both the message capacity of secret-containing images and the accuracy of secret-message extraction.

Description

Electronic license anti-counterfeiting oriented generation type steganography method
Technical Field
The invention belongs to the technical field of information hiding, and relates to a generation type steganography method for electronic license anti-counterfeiting.
Background
Steganography, also referred to as information hiding, is a technique for generating an information-hiding carrier directly from secret information, so that no one other than the intended recipient is aware that a message is being transferred, or of its content. To support anti-counterfeiting verification of electronic certificates in offline scenarios, generative steganography is introduced to produce a unique anti-counterfeiting mark for each electronic license. By reading the anti-counterfeiting mark with dedicated companion software or equipment, the authenticity of the certificate can be easily verified and trusted license information can be obtained directly without a network connection, improving the level of electronic license management and application. When license information is used as the secret data and an image as the cover, the secret-containing image produced by generative steganography can serve as the anti-counterfeiting mark of an electronic license. The mark can be attached to an electronic license image, a paper license, or an electronic-format license file, and can also be used on its own. The secret-containing image can play the role of a chip in a physical certificate, securely storing a small amount of key license information for equipment to read and identify automatically in an offline state.
The denoising diffusion implicit model (Denoising Diffusion Implicit Model, DDIM) is a type of diffusion model widely used in image generation. By gradually adding Gaussian noise to the images in a training set and learning the reverse denoising process, it can ultimately generate high-quality images from pure Gaussian noise. DDIM is deterministic: for each given input its output is unique. Although the restored noise contains some error, the noise corresponding to an image can always be recovered through an inverse sampling process.
In the prior art, patent document CN116456037B provides a generative image steganography method based on a diffusion model, which uses DDIM and generates images from binary messages, forming a mapping from secret messages to latent noise by mapping different binary sequences to random numbers in different intervals. However, since DDIM itself requires the input latent variable to be Gaussian noise, directly feeding a secret message into the model as latent noise is undesirable and risks degrading image quality. In addition, that method must separately train an extraction network with the same architecture as the diffusion model to extract the secret message, and thus does not fully exploit the inherent reversibility of DDIM.
Disclosure of Invention
In view of this, in order to solve the problems in the prior art, a first aspect of the present invention provides a generative steganography method oriented to electronic license anti-counterfeiting, which addresses the technical problems that image quality degrades after model processing and that the reversibility of the model is not fully utilized.
To achieve the above effects, the invention provides a generative steganography method oriented to electronic license anti-counterfeiting, comprising the following steps: when sending a message, encoding the secret message, inputting it into a pre-trained encoder model to obtain latent noise, and inputting the latent noise into a pre-trained denoising diffusion implicit model to obtain a secret-containing image; and/or, when receiving a message, restoring the secret-containing image to latent noise using the inverse sampling process of the pre-trained denoising diffusion implicit model, inputting the restored latent noise into the pre-trained decoder model, and restoring it to the secret message according to the encoding scheme.
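The send/receive flow described here can be wired up end-to-end. In the sketch below every component is a toy stand-in chosen for illustration only: the DDIM generator is replaced by a fixed orthogonal linear map (invertible, mimicking DDIM's deterministic sampling), and the encoder/decoder by a simple sign mapping. Only the data flow matches the method; none of these are the invention's trained models.

```python
import numpy as np

# Toy end-to-end sketch of the claimed pipeline.  Assumptions: the
# "generator" G is a fixed orthogonal map (invertible like DDIM's
# deterministic sampling), the encoder maps bits to +/-1 latents,
# and the decoder thresholds at zero.

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((16, 16)))   # orthogonal "generator"

def encoder(bits):                 # secret message -> latent noise z
    return np.where(np.array(bits) == 1, 1.0, -1.0)

def generate(z):                   # latent noise -> secret-containing image
    return Q @ z

def invert(image):                 # inverse sampling: image -> latent noise
    return Q.T @ image             # Q is orthogonal, so Q.T is its inverse

def decoder(z):                    # latent noise -> secret message
    return (z > 0).astype(int).tolist()

def send(bits):
    return generate(encoder(bits))

def receive(image):
    return decoder(invert(image))

msg = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0]
stego = send(msg)
recovered = receive(stego)
```

Here recovery is exact because the toy generator is perfectly invertible; with a real DDIM the restored latent carries a small error, which is why the method trains the decoder to tolerate it.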
Optionally, pre-training the denoising diffusion implicit model includes training it on an existing dataset, which specifically includes: preparing a sufficient amount of image data to form a dataset; predefining a denoising diffusion implicit model and a noise sequence; and, based on original images selected from the dataset and the predefined denoising diffusion implicit model and noise sequence, training a denoising diffusion implicit model that can generate an image from input latent noise and/or restore an image to latent noise and output it.
Optionally, predefining the denoising diffusion implicit model and the noise sequence specifically includes: predefining a denoising diffusion implicit model ε_θ(x_t, t), where x_t denotes the image at time step t and θ denotes the neural network parameters; predefining a noise sequence β_1, β_2, …, β_T, where β_t is the variance of the noise added at time step t when gradually adding noise to the training-set images and T is the maximum time step, satisfying 0 < β_1 < β_2 < … < β_T < 1; and, for each time step t, defining the coefficients α_t = 1 − β_t and ᾱ_t = ∏_{s=1}^{t} α_s.
Optionally, training, based on original images selected from the dataset and the predefined denoising diffusion implicit model and noise sequence, a denoising diffusion implicit model that can generate an image from input latent noise and/or restore an image to latent noise specifically includes: S131: selecting an original image x_0 from the dataset, randomly generating Gaussian noise ε ~ N(0, I), and randomly selecting a time step t; S132: computing the noised image at time step t, x_t = √(ᾱ_t)·x_0 + √(1 − ᾱ_t)·ε, then inputting x_t and t into the model to obtain the model's estimate of the Gaussian noise ε, namely ε_θ(x_t, t); S133: for the estimate ε_θ(x_t, t) obtained in S132, computing the loss L = ‖ε − ε_θ(x_t, t)‖² and updating the neural network parameters θ by gradient descent, θ ← θ − η·∇_θ L, where η is the learning rate, L is the loss function, and ∇_θ L is the gradient of L with respect to θ; S134: repeating steps S131–S133 until the loss L is sufficiently small or the set number of iterations is reached, yielding the trained denoising diffusion implicit model ε_θ.
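The training steps S131–S134 can be sketched numerically. In this illustrative sketch the network ε_θ is shrunk to a single linear map W acting on flattened 16-dimensional "images" (a real DDIM uses a U-Net), the dataset is random, and the gradient of the squared-error loss is written out by hand; all names and sizes are assumptions.

```python
import numpy as np

# Minimal numeric sketch of training steps S131-S134.  Assumption:
# eps_theta is a single linear map W on flattened toy "images";
# everything else follows the steps described in the text.

rng = np.random.default_rng(2)
T = 100
betas = np.linspace(1e-4, 0.02, T + 1)      # beta_t (index 0 unused pad)
alpha_bar = np.cumprod(1.0 - betas)         # cumulative products alpha_bar_t

d = 16                                      # flattened image dimension
W = np.zeros((d, d))                        # network parameters theta
data = rng.standard_normal((32, d))         # toy training set
lr = 0.01                                   # learning rate eta

def train_step(W):
    x0 = data[rng.integers(len(data))]      # S131: original image,
    eps = rng.standard_normal(d)            #       Gaussian noise,
    t = rng.integers(1, T + 1)              #       random time step
    x_t = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1 - alpha_bar[t]) * eps  # S132
    resid = eps - W @ x_t                   # eps minus the model estimate
    loss = float(resid @ resid)             # S133: squared-error loss
    grad = -2.0 * np.outer(resid, x_t)      # dL/dW for the linear model
    return W - lr * grad, loss              # gradient-descent update

losses = []
for _ in range(2000):                       # S134: iterate until converged
    W, loss = train_step(W)
    losses.append(loss)
```

Over the iterations the loss drifts down as W learns to predict the injected noise from the noised image.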
Optionally, inputting the latent noise into the pre-trained denoising diffusion implicit model to obtain the secret-containing image specifically includes: sampling according to the trained denoising diffusion implicit model ε_θ to generate the secret-containing image, with the following process:
for the latent noise tensor x_T, gradually compute, for t = T, T−1, …, 1,
x_{t−1} = √(ᾱ_{t−1}) · (x_t − √(1 − ᾱ_t)·ε_θ(x_t, t)) / √(ᾱ_t) + √(1 − ᾱ_{t−1})·ε_θ(x_t, t).
The final x_0 is the output secret-containing image. The above procedure is denoted x_0 = G(x_T), meaning the secret-containing image generated from the latent noise tensor x_T by the model ε_θ.
Optionally, restoring the secret-containing image to latent noise through the inverse sampling process of the pre-trained denoising diffusion implicit model specifically includes: using the trained denoising diffusion implicit model ε_θ, restoring the secret-containing image as follows: for the secret-containing image x_0, gradually compute, for t = 0, 1, …, T−1,
x_{t+1} = √(ᾱ_{t+1}) · (x_t − √(1 − ᾱ_t)·ε_θ(x_t, t)) / √(ᾱ_t) + √(1 − ᾱ_{t+1})·ε_θ(x_t, t).
The final x_T is the latent noise tensor. The above procedure is denoted x_T = G⁻¹(x_0), meaning the latent noise restored from the secret-containing image x_0 by the model ε_θ.
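The generation and inverse sampling procedures can be sketched together. In the sketch below the trained network ε_θ is replaced by a frozen toy predictor (an assumption made so the example is self-contained); with this stand-in the round trip latent to image and back is exact, while with a real trained network the recovered latent only approximates the original, as the description notes.

```python
import numpy as np

# Sketch of DDIM deterministic sampling (G) and inverse sampling
# (G^-1).  Assumption: eps_model is a frozen constant predictor
# standing in for the trained network eps_theta(x_t, t).

T = 50
betas = np.linspace(1e-4, 0.02, T + 1)      # beta_t (index 0 unused pad)
alpha_bar = np.cumprod(1.0 - betas)

def eps_model(x, t):
    return np.full_like(x, 0.3)             # stand-in for eps_theta

def ddim_generate(x_T):
    """G: latent noise x_T -> image x_0 (deterministic updates)."""
    x = x_T
    for t in range(T, 0, -1):
        eps = eps_model(x, t)
        x0_pred = (x - np.sqrt(1 - alpha_bar[t]) * eps) / np.sqrt(alpha_bar[t])
        x = np.sqrt(alpha_bar[t - 1]) * x0_pred + np.sqrt(1 - alpha_bar[t - 1]) * eps
    return x

def ddim_invert(x_0):
    """G^-1: image x_0 -> latent noise x_T (same update run forward in t)."""
    x = x_0
    for t in range(0, T):
        eps = eps_model(x, t)
        x0_pred = (x - np.sqrt(1 - alpha_bar[t]) * eps) / np.sqrt(alpha_bar[t])
        x = np.sqrt(alpha_bar[t + 1]) * x0_pred + np.sqrt(1 - alpha_bar[t + 1]) * eps
    return x

rng = np.random.default_rng(0)
z = rng.standard_normal((8, 8))             # latent noise tensor x_T
img = ddim_generate(z)                      # secret-containing "image"
z_back = ddim_invert(img)                   # restored latent noise
```

With the frozen predictor each inverse step algebraically undoes the corresponding forward step, which is exactly the reversibility property the method relies on.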
Optionally, pre-training the encoder model and the decoder model specifically includes: predefining an encoder model and a decoder model and training them; and defining a loss function and updating the encoder and decoder model parameters according to the calculated loss.
Optionally, predefining the encoder model and the decoder model and training them specifically includes: defining the encoder as z = E(m) and the decoder as m̂ = D(z), and determining their respective structures, where m denotes the binary secret message and z denotes the latent noise; the encoder and decoder employ neural network architectures suitable for image processing. Defining the binary capacity of a single image as L_max, a binary sequence of length L_max is generated, or one of length smaller than L_max with the insufficient part padded with 0s, and this binary sequence is defined as the secret message m. The defined binary secret message m is input into the encoder to obtain the latent noise z = E(m); the latent noise is then sequentially passed through the image generation process x_0 = G(z) and the noise restoration process ẑ = G⁻¹(x_0), and the extracted secret message m̂ = D(ẑ) is obtained. Optionally, defining a loss function and updating the encoder and decoder model parameters according to the calculated loss specifically includes: defining the loss function L in terms of an encoding error L_enc and a decoding error L_dec, where the encoding error L_enc measures the similarity between the latent noise z and Gaussian white noise, and the decoding error L_dec measures the similarity between the extracted secret message m̂ and m; the encoder and decoder parameters are then updated synchronously by gradient descent according to the loss calculation.
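One joint training iteration for the codec can be sketched as a pure data flow, without a real learning step. Everything below is an assumed stand-in: E adds a small dither to a ±1 bit mapping, an invertible tanh pair stands in for G and G⁻¹, D outputs per-bit probabilities, the decoding error is taken as binary cross-entropy and the encoding error as the first-order autocorrelation term only. These are illustrative choices, not fixed by the text; only the loss assembly mirrors the description.

```python
import numpy as np

# Data-flow sketch of one codec training iteration (no gradient step).
# All components are assumed toy stand-ins.

rng = np.random.default_rng(4)
L_MAX = 32                                   # assumed bits-per-image capacity

def sample_message():
    k = rng.integers(1, L_MAX + 1)           # random true length
    m = np.zeros(L_MAX)
    m[:k] = rng.integers(0, 2, k)            # remainder stays zero-padded
    return m

def enc(m):                                  # toy E: bits -> latent
    return 2.0 * m - 1.0 + 0.1 * rng.standard_normal(L_MAX)

def gen(z):                                  # toy G (in place of the DDIM)
    return np.tanh(z)

def inv(x):                                  # toy G^-1 (inverse sampling)
    return np.arctanh(np.clip(x, -0.999, 0.999))

def dec(z):                                  # toy D: latent -> bit probabilities
    return 1.0 / (1.0 + np.exp(-4.0 * z))

def iteration_loss(m):
    z = enc(m)                               # message -> latent noise
    z_rec = inv(gen(z))                      # generate image, then invert it
    p = dec(z_rec)                           # decode the restored latent
    l_dec = -np.mean(m * np.log(p) + (1 - m) * np.log(1 - p))
    zc = z - z.mean()                        # encoding error: latent should
    l_enc = abs(np.sum(zc[:-1] * zc[1:]) / np.sum(zc * zc))  # look noise-like
    return l_enc + l_dec

loss = iteration_loss(sample_message())
```

In the real method this scalar would be backpropagated through E and D simultaneously, with G and G⁻¹ provided by the frozen DDIM.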
Optionally, the encoding error L_enc and the decoding error L_dec are defined such that the first part of L_enc is the Shapiro-Wilk test statistic W = (Σ_i a_i·z_(i))² / Σ_i (z_i − z̄)² and the second part of L_enc is the first-order autocorrelation coefficient r_1 = Σ_{i=1}^{n−1} (z_i − z̄)(z_{i+1} − z̄) / Σ_{i=1}^{n} (z_i − z̄)², where z_i is the component at each position after z is expanded into a vector, z_(i) are those components in sorted order, n is the dimension of z after expansion into a vector, z̄ is the mean of the vector components, and the a_i are the constants of the Shapiro-Wilk test.
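The two ingredients of L_enc named here, a Shapiro-Wilk normality statistic and a first-order autocorrelation coefficient, can be computed as follows. In this sketch the Shapiro-Wilk weights a_i are approximated with normalized Blom scores via the standard normal inverse CDF (a real implementation would use the exact tabulated coefficients), and the two terms are combined by simple addition, which the text does not specify; both are assumptions.

```python
import numpy as np
from statistics import NormalDist

# Sketch of the encoding-error ingredients.  Assumptions: Blom-score
# approximation of the Shapiro-Wilk weights a_i, additive combination.

def shapiro_w(z):
    """Approximate Shapiro-Wilk statistic W (close to 1 for Gaussian data)."""
    z = np.sort(np.ravel(z))
    n = z.size
    m = np.array([NormalDist().inv_cdf((i - 0.375) / (n + 0.25))
                  for i in range(1, n + 1)])            # Blom scores
    a = m / np.sqrt(np.sum(m * m))                      # approximate weights a_i
    return float((a @ z) ** 2 / np.sum((z - z.mean()) ** 2))

def autocorr1(z):
    """First-order autocorrelation coefficient r_1 (near 0 for white noise)."""
    z = np.ravel(z)
    zc = z - z.mean()
    return float(np.sum(zc[:-1] * zc[1:]) / np.sum(zc * zc))

def encoding_error(z):
    # latent should be near-Gaussian (W ~ 1) and near-white (r_1 ~ 0)
    return (1.0 - shapiro_w(z)) + abs(autocorr1(z))

rng = np.random.default_rng(3)
gauss = rng.standard_normal(512)            # noise-like latent
skewed = rng.exponential(size=512)          # clearly non-Gaussian latent
```

A latent that passes both checks is statistically close to the white Gaussian noise the DDIM expects as input, which is exactly what L_enc rewards.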
A second aspect of the present invention provides a virtual device comprising a message sending module and a message receiving module. The message sending module is configured, when sending a message, to encode the secret message and input it into the pre-trained encoder model to obtain latent noise, and to input the latent noise into the pre-trained denoising diffusion implicit model to obtain a secret-containing image; and/or the message receiving module is configured, when receiving a message, to restore the secret-containing image to latent noise using the inverse sampling process of the pre-trained denoising diffusion implicit model, to input the restored latent noise into the pre-trained decoder model, and to restore it to the secret message according to the encoding scheme.
Optionally, the device further comprises a training module configured to train the denoising diffusion implicit model on an existing dataset, which specifically includes: preparing a sufficient amount of image data to form a dataset; predefining a denoising diffusion implicit model and a noise sequence; and, based on original images selected from the dataset and the predefined denoising diffusion implicit model and noise sequence, training a denoising diffusion implicit model that can generate an image from input latent noise and/or restore an image to latent noise and output it.
Optionally, the training module is specifically configured to: predefine a denoising diffusion implicit model ε_θ(x_t, t), where x_t denotes the image at time step t and θ denotes the neural network parameters; predefine a noise sequence β_1, β_2, …, β_T, where β_t is the variance of the noise added at time step t when gradually adding noise to the training-set images and T is the maximum time step, satisfying 0 < β_1 < β_2 < … < β_T < 1; and, for each time step t, define the coefficients α_t = 1 − β_t and ᾱ_t = ∏_{s=1}^{t} α_s.
Optionally, the training module is specifically configured to: select an original image x_0 from the dataset, randomly generate Gaussian noise ε ~ N(0, I), and randomly select a time step t; compute the noised image at time step t, x_t = √(ᾱ_t)·x_0 + √(1 − ᾱ_t)·ε, then input x_t and t into the model to obtain the model's estimate of the Gaussian noise ε, namely ε_θ(x_t, t); for this estimate, compute the loss L = ‖ε − ε_θ(x_t, t)‖² and update the neural network parameters θ by gradient descent, θ ← θ − η·∇_θ L, where η is the learning rate, L is the loss function, and ∇_θ L is the gradient of L with respect to θ; repeat the above process until the loss L is sufficiently small or the set number of iterations is reached, obtaining the trained denoising diffusion implicit model ε_θ.
Optionally, the message sending module is specifically configured to:
The secret-containing image is generated by sampling according to the trained denoising diffusion implicit model ε_θ, with the following process: for the latent noise tensor x_T, gradually compute, for t = T, T−1, …, 1,
x_{t−1} = √(ᾱ_{t−1}) · (x_t − √(1 − ᾱ_t)·ε_θ(x_t, t)) / √(ᾱ_t) + √(1 − ᾱ_{t−1})·ε_θ(x_t, t).
The final x_0 is the output secret-containing image. The above procedure is denoted x_0 = G(x_T), meaning the secret-containing image generated from the latent noise tensor x_T by the model ε_θ.
Optionally, the message receiving module is specifically configured to:
Using the trained denoising diffusion implicit model ε_θ, the secret-containing image is restored as follows: for the secret-containing image x_0, gradually compute, for t = 0, 1, …, T−1,
x_{t+1} = √(ᾱ_{t+1}) · (x_t − √(1 − ᾱ_t)·ε_θ(x_t, t)) / √(ᾱ_t) + √(1 − ᾱ_{t+1})·ε_θ(x_t, t).
The final x_T is the latent noise tensor. The above procedure is denoted x_T = G⁻¹(x_0), meaning the latent noise restored from the secret-containing image x_0 by the model ε_θ.
Optionally, the training module is further configured to: predefining an encoder model and a decoder model, and training the predefined encoder model and decoder model; a loss function is defined and the encoder model and decoder model parameters are updated based on the calculated loss.
Optionally, the training module is specifically configured to:
The encoder is defined as z = E(m) and the decoder as m̂ = D(z), and their respective structures are determined, where m denotes the binary secret message and z denotes the latent noise; the encoder and decoder employ neural network architectures suitable for image processing.
The binary capacity of a single image is defined as L_max; a binary sequence of length L_max is generated, or one of length smaller than L_max with the insufficient part padded with 0s, and this binary sequence is defined as the secret message m.
The defined binary secret message m is input into the encoder to obtain the latent noise z = E(m); the latent noise is then sequentially passed through the image generation process x_0 = G(z) and the noise restoration process ẑ = G⁻¹(x_0), and the extracted secret message m̂ = D(ẑ) is obtained.
Optionally, the training module is specifically configured to:
The loss function L is defined in terms of an encoding error L_enc and a decoding error L_dec, where the encoding error L_enc measures the similarity between the latent noise z and Gaussian white noise, and the decoding error L_dec measures the similarity between the extracted secret message m̂ and m;
and synchronously updating the parameters of the encoder model and the decoder model by using a gradient descent method according to the loss function calculation result.
Optionally, the training module is specifically configured to:
The encoding error L_enc and the decoding error L_dec are defined such that the first part of L_enc is the Shapiro-Wilk test statistic W = (Σ_i a_i·z_(i))² / Σ_i (z_i − z̄)² and the second part of L_enc is the first-order autocorrelation coefficient r_1 = Σ_{i=1}^{n−1} (z_i − z̄)(z_{i+1} − z̄) / Σ_{i=1}^{n} (z_i − z̄)², where z_i is the component at each position after z is expanded into a vector, z_(i) are those components in sorted order, n is the dimension of z after expansion into a vector, z̄ is the mean of the vector components, and the a_i are the constants of the Shapiro-Wilk test.
A third aspect of the present invention provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the computer program, when executed by the processor, implements the above generative steganography method oriented to electronic license anti-counterfeiting.
A fourth aspect of the present invention provides a computer storage medium storing a computer program which, when executed by a processor, implements the above generative steganography method oriented to electronic license anti-counterfeiting.
The beneficial effects of the invention are as follows: a DDIM model capable of generating images from Gaussian noise and deep-neural-network encoder and decoder models are trained separately, and the reversibility of the DDIM model (inverse sampling) is exploited, so that accurate extraction of the secret message is achieved while the models to be trained are simplified.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the structure and method steps particularly pointed out in the written description and claims hereof as well as the appended drawings.
It will be appreciated by those skilled in the art that the objects and advantages that can be achieved with the present invention are not limited to the above-described specific ones, and that the above and other objects that can be achieved with the present invention will be more clearly understood from the following detailed description.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate and together with the description serve to explain the invention.
FIG. 1 is a schematic diagram of a flow chart of implementation of a generating steganography method based on a denoising diffusion implicit model in an embodiment of the present invention;
FIG. 2 is a step diagram of a method for generating steganography based on a denoising diffusion implicit model in an embodiment of the present invention;
FIG. 3 is a partial step diagram of step S1 in a generating steganography method based on a denoising diffusion implicit model in an embodiment of the present invention;
fig. 4 is a partial step diagram of step S2 in the method for generating steganography based on the denoising diffusion implicit model according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the following embodiments and the accompanying drawings, for the purpose of making the objects, technical solutions and advantages of the present invention more apparent, and the illustrative embodiments of the present invention and the descriptions thereof are used for explaining the present invention, but not limiting the present invention. It should be noted here that, in order to avoid obscuring the present invention due to unnecessary details, only structures and/or processing steps closely related to the solution according to the present invention are shown in the drawings, while other details not greatly related to the present invention are omitted.
It should be emphasized that the term "comprises/comprising" when used herein is taken to specify the presence of stated features, elements, steps or components, but does not preclude the presence or addition of one or more other features, elements, steps or components.
The specific steps of implementing the method for generating the hidden writing based on the denoising diffusion hidden model according to the embodiment of the present invention will be described below with reference to the schematic flow diagram of implementation of the method for generating the hidden writing based on the denoising diffusion hidden model in the embodiment of the present invention shown in fig. 1.
As shown in fig. 2, an embodiment of the present invention provides a generative steganography method oriented to electronic license anti-counterfeiting, including: S1: training a denoising diffusion implicit model on an existing dataset; the purpose of this step is to obtain a DDIM model that can generate an image from input Gaussian noise.
Specifically, the model can be obtained in two ways: using a pre-trained model published on the Internet, or training a DDIM model from scratch. Self-training a DDIM model first requires constructing and maintaining a sufficiently large training set; the style and content of the generated images ultimately depend on this training set. Because of the determinism of DDIM, once the model is obtained, images can be generated from Gaussian noise, and the corresponding noise (with some error) can also be recovered from an image.
S2: training the encoder and decoder. The codec consists of two deep neural networks trained simultaneously after the image generation model has been trained. The encoder's input is the binary secret message and its output is latent noise for the DDIM model to sample into an image. The decoder's input is the latent noise recovered by the DDIM inverse sampling process. The input and output sizes of the two networks are determined by the secret-message capacity of a single image and by the image size produced by the image generation model.
During training, binary sequences of maximum length, together with a certain proportion of sequences shorter than the maximum length, are randomly generated within the per-image secret-message bit range to serve as training samples. Each sample is input into the encoder to obtain latent noise, which is then input into the DDIM model to obtain a secret-containing image; the DDIM inverse sampling process is then applied to obtain the restored latent noise, which contains some error; finally, the restored latent noise is input into the decoder to obtain the restored binary secret message. Losses and gradients are then computed, and stochastic gradient descent or another optimization method is used to update the neural network parameters. The loss function used in training contains two terms: how similar the latent noise produced by the encoder is to Gaussian noise, and the difference between the message extracted by the decoder and the original message.
S3: when sending a message, encoding the secret message, inputting it into the pre-trained encoder model to obtain latent noise, and inputting the latent noise into the pre-trained denoising diffusion implicit model to obtain a secret-containing image; and/or, when receiving a message, restoring the secret-containing image to latent noise using the inverse sampling process of the pre-trained denoising diffusion implicit model, inputting the restored latent noise into the pre-trained decoder model, and restoring it to the secret message according to the encoding scheme.
In practice, the DDIM model and the codec can be put into use once model training is complete. As shown in fig. 1, in actual use both the electronic license issuer and the verifier need to hold the DDIM model in advance; the issuer must additionally hold the encoder, and the verifier the decoder.
Before issuing a license, the issuer needs to decide which key pieces of electronic license information to embed into the anti-counterfeiting mark and convert that information into a binary secret message; redundancy coding or error-correction coding can be applied to the secret message as needed to cope with extraction errors. When the size of the binary secret message exceeds the capacity of a single image, the sender also needs to split the binary message and generate several anti-counterfeiting marks.
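The issuer-side preparation described here, converting key fields to bits, padding to the per-image capacity, and splitting long messages across several marks, can be sketched as follows. The capacity value, the field text, and the UTF-8 bit layout are illustrative assumptions; the actual coding mode and segmentation agreement are whatever the issuer and verifier share.

```python
# Sketch of issuer-side message preparation (assumed capacity and
# field layout; the real coding/segmentation agreement is up to the
# communicating parties).

L_MAX = 64                                   # assumed bits per stego image

def to_bits(text: str) -> str:
    """UTF-8 bytes of the key license info, as a '0'/'1' string."""
    return "".join(f"{b:08b}" for b in text.encode("utf-8"))

def segment(bits: str, cap: int = L_MAX) -> list:
    """Split into cap-sized chunks; zero-pad the last chunk (as in the text)."""
    chunks = [bits[i:i + cap] for i in range(0, len(bits), cap)]
    chunks[-1] = chunks[-1].ljust(cap, "0")
    return chunks

key_info = "ID:12345"                        # hypothetical license field
parts = segment(to_bits(key_info))           # 64 bits -> exactly one mark
longer = segment(to_bits("ID:1234567"))      # 80 bits -> two marks, last padded
```

The verifier must undo the zero padding, for example via a known field length or a length header, which is part of the agreed coding mode.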
When issuing the electronic license, the issuer inputs the binary secret message into the encoder to obtain latent noise, then inputs the latent noise into the DDIM model to obtain a secret-containing image, which is attached to the electronic license picture or file as the anti-counterfeiting mark. During verification, the verifier restores the anti-counterfeiting mark to latent noise using the DDIM inverse sampling process, inputs the restored latent noise into the decoder to obtain the extracted binary secret message, and restores it to the original electronic license information according to the encoding scheme and segmentation agreement.
Optionally, in S1, training the denoising diffusion implicit model on an existing dataset specifically includes: S11: preparing a sufficient amount of image data to form a dataset, whose content depends on the type and semantics of the images the user wishes to generate; for example, to generate animal images, a sufficient number of animal images should be prepared; S12: predefining a denoising diffusion implicit model and a noise sequence; the DDIM model is a neural network, which may adopt a U-Net structure, and its performance may be enhanced with an attention mechanism; S13: based on original images selected from the dataset and the predefined denoising diffusion implicit model and noise sequence, training a denoising diffusion implicit model that can generate an image from input latent noise and/or restore an image to latent noise and output it.
As shown in fig. 3, optionally, predefining the denoising diffusion implicit model and the noise sequence in S12 specifically includes: predefining a denoising diffusion implicit model $\epsilon_\theta(x_t, t)$, wherein $x_t$ represents the image at time step $t$ and $\theta$ is the neural network parameter; predefining a noise sequence $\{\beta_t\}_{t=1}^{T}$, wherein $\beta_t$ is the variance of the noise added at time step $t$ in the process of gradually adding noise to the training-set images and $T$ is the maximum time step, satisfying $0 < \beta_1 < \beta_2 < \cdots < \beta_T < 1$; for time step $t$, defining the coefficients $\alpha_t = 1 - \beta_t$ and $\bar{\alpha}_t = \prod_{s=1}^{t} \alpha_s$.
Optionally, in S13, training on original images selected from the data set to obtain a denoising diffusion implicit model that can generate an image from input latent noise and/or restore an image to latent noise specifically includes: S131: selecting an original image $x_0$ from the data set, randomly generating Gaussian noise $\epsilon \sim \mathcal{N}(0, I)$, and randomly selecting a time step $t$; S132: calculating the noise-added image at time step $t$, $x_t = \sqrt{\bar{\alpha}_t}\,x_0 + \sqrt{1-\bar{\alpha}_t}\,\epsilon$, and then inputting $x_t$ and $t$ into the model to obtain the estimate of the Gaussian noise, $\epsilon_\theta(x_t, t)$; S133: for the estimate $\epsilon_\theta(x_t, t)$ in S132, calculating the loss $L = \lVert \epsilon - \epsilon_\theta(x_t, t) \rVert^2$ and updating the neural network parameters by gradient descent, $\theta \leftarrow \theta - \eta \nabla_\theta L$, wherein $\eta$ is the learning rate and $\nabla_\theta L$ is the gradient of the loss with respect to $\theta$; S134: repeating steps S131–S133 until the loss $L$ is sufficiently small or a set number of iterations is reached, obtaining the trained denoising diffusion implicit model $\epsilon_\theta$.
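The training loop S131–S134 follows the standard DDPM/DDIM noise-prediction objective and can be sketched as below; the tiny scalar "network", the linear noise schedule, and all constants are placeholder assumptions standing in for the patent's U-Net and its actual schedule.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
beta = np.linspace(1e-4, 0.02, T)     # noise sequence beta_t
alpha_bar = np.cumprod(1.0 - beta)    # abar_t = prod_{s<=t} (1 - beta_s)

def model(x_t, t, theta):
    # stand-in for the U-Net noise predictor eps_theta(x_t, t)
    return theta * x_t

def train_step(x0, theta, lr=1e-3):
    t = rng.integers(T)                           # S131: random time step
    eps = rng.standard_normal(x0.shape)           # S131: Gaussian noise
    # S132: noise-added image x_t = sqrt(abar_t) x0 + sqrt(1 - abar_t) eps
    x_t = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    eps_hat = model(x_t, t, theta)
    loss = np.mean((eps_hat - eps) ** 2)          # S133: ||eps - eps_hat||^2
    grad = np.mean(2.0 * (eps_hat - eps) * x_t)   # d loss / d theta, toy model
    return theta - lr * grad, loss                # gradient-descent update

theta = 0.0
x0 = rng.standard_normal(8)
for _ in range(200):                              # S134: iterate until converged
    theta, loss = train_step(x0, theta)
```

In the real method the scalar `theta` would be the full set of network weights and the gradient would come from backpropagation.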
Optionally, inputting the latent noise into the pre-trained denoising diffusion implicit model to obtain the secret-containing image specifically includes: sampling according to the trained denoising diffusion implicit model $\epsilon_\theta$ to generate the secret-containing image, the process being as follows:
for a latent noise tensor $x_T$, the secret-containing image is generated step by step, for $t = T, T-1, \ldots, 1$, using $x_{t-1} = \sqrt{\bar{\alpha}_{t-1}}\left(\dfrac{x_t - \sqrt{1-\bar{\alpha}_t}\,\epsilon_\theta(x_t, t)}{\sqrt{\bar{\alpha}_t}}\right) + \sqrt{1-\bar{\alpha}_{t-1}}\,\epsilon_\theta(x_t, t)$.
The final $x_0$ is the output secret-containing image; the above process is denoted $x_0 = G(x_T)$, meaning that the secret-containing image is generated from the latent noise tensor $x_T$ by the model.
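One deterministic sampling step of the generation process above can be sketched as follows, writing $\bar{\alpha}$ as `abar`; the function name and the test values are illustrative assumptions.

```python
import numpy as np

def ddim_step(x_t, eps_hat, abar_t, abar_prev):
    # predicted clean image from the current noisy image and noise estimate
    x0_pred = (x_t - np.sqrt(1.0 - abar_t) * eps_hat) / np.sqrt(abar_t)
    # deterministic DDIM update toward the previous (less noisy) time step
    return np.sqrt(abar_prev) * x0_pred + np.sqrt(1.0 - abar_prev) * eps_hat
```

Iterating this step from $t = T$ down to $t = 1$, with `eps_hat` produced by the trained network at each step, yields the secret-containing image.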
Optionally, restoring the secret-containing image to latent noise using the inverse sampling process of the pre-trained denoising diffusion implicit model specifically includes: with the trained denoising diffusion implicit model $\epsilon_\theta$, the restoration process for the secret-containing image is as follows: for a secret-containing image $x_0$, the latent noise is restored step by step, for $t = 0, 1, \ldots, T-1$, using $x_{t+1} = \sqrt{\bar{\alpha}_{t+1}}\left(\dfrac{x_t - \sqrt{1-\bar{\alpha}_t}\,\epsilon_\theta(x_t, t)}{\sqrt{\bar{\alpha}_t}}\right) + \sqrt{1-\bar{\alpha}_{t+1}}\,\epsilon_\theta(x_t, t)$.
The final $x_T$ is the latent noise tensor; the above process is denoted $x_T = R(x_0)$, meaning the latent noise tensor restored from the secret-containing image $x_0$ by the model. It should be noted that, owing to finite computational precision and integer/floating-point conversion, this restoration process is error-prone and not a truly reciprocal operation.
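The approximate reciprocity noted above can be illustrated with a toy round trip: with a frozen noise estimate, the forward and inverse updates cancel exactly in floating point, while 8-bit quantization of the image (as saving a picture would impose) already perturbs the recovered noise. The frozen `eps_hat` is a simplification — in the real model $\epsilon_\theta(x_t, t)$ changes at each step, which is a further error source.

```python
import numpy as np

def ddim_move(x, eps_hat, abar_src, abar_dst):
    # shared algebraic update used by both DDIM sampling and its inverse
    x0_pred = (x - np.sqrt(1.0 - abar_src) * eps_hat) / np.sqrt(abar_src)
    return np.sqrt(abar_dst) * x0_pred + np.sqrt(1.0 - abar_dst) * eps_hat

abar_1, abar_0 = 0.5, 0.9            # abar decreases as t grows
x1 = np.array([0.3, -0.7])           # "latent noise" at t = 1
eps_hat = np.array([0.1, 0.2])       # frozen noise estimate (simplification)

x0 = ddim_move(x1, eps_hat, abar_1, abar_0)       # generate: t=1 -> t=0
x1_rec = ddim_move(x0, eps_hat, abar_0, abar_1)   # invert:   t=0 -> t=1

# quantizing x0 to 8-bit levels, as saving an image would, breaks exactness
x0_q = np.round((x0 + 1.0) * 127.5) / 127.5 - 1.0
x1_q = ddim_move(x0_q, eps_hat, abar_0, abar_1)
```

The exact cancellation of `x1_rec` against `x1`, and the small residual in `x1_q`, are precisely why the text recommends redundancy or error-correction coding.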
As shown in fig. 4, optionally, S2: training the encoder and the decoder specifically includes: S21: predefining encoder and decoder models and training the predefined models; S22: defining a loss function and updating the encoder and decoder model parameters according to the calculated loss.
Optionally, predefining the encoder and decoder models in S21 and training them specifically includes: S211: defining the encoder as $E$ and the decoder as $D$ and respectively determining their structures, wherein $m$ represents the binary secret message and $z$ represents the latent noise; the encoder and decoder employ neural network architectures suitable for image processing and may, for example, consist of several convolutional layers with residual connections; S212: defining the binary capacity of a single image as $k$ and generating a binary sequence of length $k$, or of length less than $k$ with the insufficient part filled with 0s, and defining the binary sequence as the secret message $m$; S213: inputting the defined binary secret message $m$ into the encoder to obtain the latent noise $z = E(m)$, then passing the latent noise sequentially through the image generation process $G$ and the noise restoration process $R$, and obtaining the extracted secret message $\hat{m} = D(R(G(E(m))))$.
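Not the patent's learned encoder, but a hand-built illustration of the same goal — hiding bits in a tensor whose components remain marginally standard normal: the sign carries the bit and the magnitude is drawn from $|\mathcal{N}(0,1)|$. All names here are illustrative.

```python
import random

def encode_bits(bits: str, rng: random.Random) -> list[float]:
    # sign carries the bit; magnitude |N(0,1)| keeps each component's
    # marginal distribution standard normal
    return [(1.0 if b == "1" else -1.0) * abs(rng.gauss(0.0, 1.0)) for b in bits]

def decode_bits(z: list[float]) -> str:
    # recover each bit from the sign of the (possibly perturbed) component
    return "".join("1" if v >= 0.0 else "0" for v in z)
```

A learned encoder/decoder pair, as in the patent, can additionally trade off this Gaussianity against robustness to the DDIM round-trip error via the loss in S22.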
Optionally, defining a loss function in S22 and updating the encoder and decoder model parameters according to the calculated loss specifically includes: S221: defining the loss function as the combination of a coding error $L_e$, which measures the similarity of the latent noise $z$ to Gaussian white noise, and a decoding error $L_d$, which measures the similarity of the extracted secret message $\hat{m}$ to $m$; S222: synchronously updating the encoder and decoder model parameters by gradient descent according to the calculated loss. These encoder and decoder losses make the latent noise more similar to Gaussian noise, avoiding the degradation of generated-image quality caused by existing secret-message processing means and improving the quality of the generated images.
Optionally, the coding error $L_e$ and the decoding error $L_d$ are defined as follows: the first part of $L_e$ is the Shapiro–Wilk test statistic $W = \dfrac{\left(\sum_{i=1}^{n} a_i z_{(i)}\right)^2}{\sum_{i=1}^{n} (z_i - \bar{z})^2}$, wherein $z_i$ is the component at each position after $z$ is expanded into a vector, $z_{(i)}$ is the corresponding order statistic, $n$ is the dimension after expansion into a vector, $\bar{z}$ is the mean of the vector components, and the $a_i$ are the constants of the Shapiro–Wilk test; the second part of $L_e$ is the first-order autocorrelation coefficient of the expanded vector. The decoding error $L_d$ measures the discrepancy between the extracted secret message $\hat{m}$ and the original secret message $m$.
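The second part of the coding error, the first-order autocorrelation coefficient, can be computed directly as sketched below; the full Shapiro–Wilk statistic additionally needs the tabulated constants $a_i$ (available, e.g., via `scipy.stats.shapiro`), so only the autocorrelation term is shown here.

```python
def lag1_autocorr(z):
    # first-order (lag-1) autocorrelation coefficient of a flattened vector:
    # sum of products of successive deviations over the sum of squared deviations
    n = len(z)
    mean = sum(z) / n
    num = sum((z[i] - mean) * (z[i + 1] - mean) for i in range(n - 1))
    den = sum((v - mean) ** 2 for v in z)
    return num / den
```

White Gaussian noise has lag-1 autocorrelation near zero, so penalizing this term pushes the encoder output toward noise-like latents.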
Optionally, the specific steps of message hiding and message extraction are:
before use, both the issuer and the verifier need to have the same model and parametersThe issuer must also have an encoder +.>The verifier needs to have a decoder +.>These neural network models must be shared and delivered in a secure and reliable manner. In the issuing process, the issuer encodes the electronic license information into binary sequence, and can take error correction coding or redundancy coding measures, and then divide the electronic license information into pieces with the length of +>Is->Last length is less than->Is filled with 0. For each subsequence->The issuer generates a secret-containing image using the encoder and the image generation model>And then the image containing the secret is used as an anti-counterfeiting mark to be attached to the electronic license picture or the file.
For each secret-containing image serving as an anti-counterfeiting mark to be verified, the verifier extracts the corresponding secret message $\hat{m}_i$ using the decoder and the inverse sampling process of the image generation model.
The extracted binary secret message $\hat{m}$ may contain errors; using redundancy codes or error-correction codes helps to guard against them. The receiver then splices the binary subsequences together in order and restores the electronic license information according to the agreed binary coding mode.
In practical application, the license issuer encodes the electronic license information into a binary secret message, divides it into subsequences according to the specified capacity limit, and converts each subsequence, as input to the encoder, into latent noise that hides the secret information while resembling Gaussian noise as closely as possible; the latent noise is input into the DDIM model to obtain a secret-containing image that can serve as an anti-counterfeiting mark of the electronic license. After receiving the electronic license image or file carrying the anti-counterfeiting mark, the verifier uses the same DDIM model as the sender and exploits its reversibility to extract the latent noise; the (possibly error-bearing) latent noise is then fed into the decoder to obtain the extracted license information.
Based on the same inventive concept, a second embodiment of the present invention provides a virtual device including a message sending module and a message receiving module. The message sending module is configured, when sending a message, to encode the secret message, input it into the pre-trained encoder model to obtain latent noise, and input the latent noise into the pre-trained denoising diffusion implicit model to obtain a secret-containing image; and/or the message receiving module is configured, when receiving a message, to restore the secret-containing image to latent noise using the inverse sampling process of the pre-trained denoising diffusion implicit model, input the restored latent noise into the pre-trained decoder model, and restore the secret message according to the coding mode.
Optionally, the device further comprises a training module for training the denoising diffusion implicit model based on an existing data set, which specifically includes: preparing a sufficient number of images to form a data set; predefining a denoising diffusion implicit model and a noise sequence; and training, based on original images selected from the data set and the predefined denoising diffusion implicit model and noise sequence, a denoising diffusion implicit model that can generate an image from input latent noise and/or restore an image to latent noise and output it.
Optionally, the training module is specifically configured to: predefine a denoising diffusion implicit model $\epsilon_\theta(x_t, t)$, wherein $x_t$ represents the image at time step $t$ and $\theta$ is the neural network parameter; predefine a noise sequence $\{\beta_t\}_{t=1}^{T}$, wherein $\beta_t$ is the variance of the noise added at time step $t$ in the process of gradually adding noise to the training-set images and $T$ is the maximum time step, satisfying $0 < \beta_1 < \beta_2 < \cdots < \beta_T < 1$; and, for time step $t$, define the coefficients $\alpha_t = 1 - \beta_t$ and $\bar{\alpha}_t = \prod_{s=1}^{t} \alpha_s$.
Optionally, the training module is specifically configured to: select an original image $x_0$ from the data set, randomly generate Gaussian noise $\epsilon \sim \mathcal{N}(0, I)$, and randomly select a time step $t$; calculate the noise-added image at time step $t$, $x_t = \sqrt{\bar{\alpha}_t}\,x_0 + \sqrt{1-\bar{\alpha}_t}\,\epsilon$, and then input $x_t$ and $t$ into the model to obtain the estimate of the Gaussian noise, $\epsilon_\theta(x_t, t)$; for this estimate, calculate the loss $L = \lVert \epsilon - \epsilon_\theta(x_t, t) \rVert^2$ and update the neural network parameters by gradient descent, $\theta \leftarrow \theta - \eta \nabla_\theta L$, wherein $\eta$ is the learning rate and $\nabla_\theta L$ is the gradient of the loss with respect to $\theta$; and repeat the above process until the loss $L$ is sufficiently small or a set number of iterations is reached, obtaining the trained denoising diffusion implicit model $\epsilon_\theta$.
Optionally, the message sending module is specifically configured to:
sample according to the trained denoising diffusion implicit model $\epsilon_\theta$ to generate the secret-containing image, the process being as follows:
for a latent noise tensor $x_T$, generate the secret-containing image step by step, for $t = T, T-1, \ldots, 1$, using $x_{t-1} = \sqrt{\bar{\alpha}_{t-1}}\left(\dfrac{x_t - \sqrt{1-\bar{\alpha}_t}\,\epsilon_\theta(x_t, t)}{\sqrt{\bar{\alpha}_t}}\right) + \sqrt{1-\bar{\alpha}_{t-1}}\,\epsilon_\theta(x_t, t)$;
the final $x_0$ is the output secret-containing image, and the above process is denoted $x_0 = G(x_T)$, meaning that the secret-containing image is generated from the latent noise tensor $x_T$ by the model.
Optionally, the message receiving module is specifically configured to:
with the trained denoising diffusion implicit model $\epsilon_\theta$, restore the secret-containing image as follows:
for a secret-containing image $x_0$, restore the latent noise step by step, for $t = 0, 1, \ldots, T-1$, using $x_{t+1} = \sqrt{\bar{\alpha}_{t+1}}\left(\dfrac{x_t - \sqrt{1-\bar{\alpha}_t}\,\epsilon_\theta(x_t, t)}{\sqrt{\bar{\alpha}_t}}\right) + \sqrt{1-\bar{\alpha}_{t+1}}\,\epsilon_\theta(x_t, t)$;
the final $x_T$ is the latent noise tensor, and the above process is denoted $x_T = R(x_0)$, meaning the latent noise tensor restored from the secret-containing image $x_0$ by the model.
Optionally, the training module is further configured to: predefining an encoder model and a decoder model, and training the predefined encoder model and decoder model; a loss function is defined and the encoder model and decoder model parameters are updated based on the calculated loss.
Optionally, the training module is specifically configured to:
define the encoder as $E$ and the decoder as $D$ and respectively determine their structures, wherein $m$ represents the binary secret message and $z$ represents the latent noise, the encoder and decoder employing neural network architectures suitable for image processing;
define the binary capacity of a single image as $k$ and generate a binary sequence of length $k$, or of length less than $k$ with the insufficient part filled with 0s, defining the binary sequence as the secret message $m$;
input the defined binary secret message $m$ into the encoder to obtain the latent noise $z = E(m)$, then pass the latent noise sequentially through the image generation process $G$ and the noise restoration process $R$, and obtain the extracted secret message $\hat{m} = D(R(G(E(m))))$.
Optionally, the training module is specifically configured to:
define the loss function as the combination of a coding error $L_e$, which measures the similarity of the latent noise $z$ to Gaussian white noise, and a decoding error $L_d$, which measures the similarity of the extracted secret message $\hat{m}$ to $m$;
and synchronously updating the parameters of the encoder model and the decoder model by using a gradient descent method according to the loss function calculation result.
Optionally, the training module is specifically configured to:
the coding error $L_e$ and the decoding error $L_d$ are defined as follows:
the first part of $L_e$ is the Shapiro–Wilk test statistic $W = \dfrac{\left(\sum_{i=1}^{n} a_i z_{(i)}\right)^2}{\sum_{i=1}^{n} (z_i - \bar{z})^2}$, wherein $z_i$ is the component at each position after $z$ is expanded into a vector, $z_{(i)}$ is the corresponding order statistic, $n$ is the dimension after expansion into a vector, $\bar{z}$ is the mean of the vector components, and the $a_i$ are constants of the Shapiro–Wilk test; the second part of $L_e$ is the first-order autocorrelation coefficient of the expanded vector; the decoding error $L_d$ measures the discrepancy between the extracted secret message $\hat{m}$ and the original secret message $m$.
Based on the same inventive concept, a third aspect of the invention provides an electronic device, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the computer program realizes the electronic license anti-counterfeiting generation type steganography method when being executed by the processor.
Based on the same inventive concept, a fourth aspect of the present invention provides a computer readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the above electronic license anti-counterfeiting generation type steganography method is implemented.
It should be noted that in the description of the present invention, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and further implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
It is to be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, they may be implemented using any one or a combination of the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, includes one or a combination of the steps of the method embodiments.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.
The computer readable storage medium mentioned above may be a tangible storage medium such as Random Access Memory (RAM), memory, read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, floppy disk, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
While the foregoing is directed to embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.

Claims (10)

1. The generation type steganography method for electronic license anti-counterfeiting is characterized by comprising the following steps of:
when a message is sent, encoding the secret message, inputting it into a pre-trained encoder model to obtain latent noise, and inputting the latent noise into a pre-trained denoising diffusion implicit model to obtain a secret-containing image;
and/or, when a message is received, restoring the secret-containing image to latent noise using the inverse sampling process of the pre-trained denoising diffusion implicit model, inputting the restored latent noise into the pre-trained decoder model, and restoring the secret message according to the coding mode.
2. The electronic license-oriented anti-counterfeit generation method of claim 1, further comprising: training to obtain a denoising diffusion implicit model based on the existing data set; the method specifically comprises the following steps:
preparing a sufficient number of image data to form a dataset;
pre-defining a denoising diffusion implicit model and a noise sequence;
training, based on an original image selected from the data set and the predefined denoising diffusion implicit model and noise sequence, a denoising diffusion implicit model capable of generating an image from input latent noise and/or restoring an image to latent noise and outputting it.
3. The electronic license anti-counterfeiting generation type steganography method according to claim 2, wherein the pre-defined denoising diffusion hidden model and the noise sequence specifically comprise:
predefining a denoising diffusion implicit model $\epsilon_\theta(x_t, t)$, wherein $x_t$ represents the image at time step $t$ and $\theta$ is the neural network parameter;
predefining a noise sequence $\{\beta_t\}_{t=1}^{T}$, wherein $\beta_t$ is the variance of the noise added at time step $t$ in the process of gradually adding noise to the training-set images and $T$ is the maximum time step, satisfying $0 < \beta_1 < \beta_2 < \cdots < \beta_T < 1$; for time step $t$, defining the coefficients $\alpha_t = 1 - \beta_t$ and $\bar{\alpha}_t = \prod_{s=1}^{t} \alpha_s$.
4. The electronic license anti-counterfeiting generation type steganography method according to claim 3, wherein training is based on an original image selected from the dataset, the pre-defined denoising diffusion hidden model and a noise sequence to obtain a denoising diffusion hidden model capable of generating an image from input potential noise and/or capable of restoring the image to potential noise and outputting the image, and specifically comprises the following steps:
selecting an original image $x_0$ from the data set, randomly generating Gaussian noise $\epsilon \sim \mathcal{N}(0, I)$, and randomly selecting a time step $t$;
calculating the noise-added image at time step $t$, $x_t = \sqrt{\bar{\alpha}_t}\,x_0 + \sqrt{1-\bar{\alpha}_t}\,\epsilon$, and then inputting $x_t$ and $t$ into the model to obtain the estimate of the Gaussian noise, $\epsilon_\theta(x_t, t)$;
for the estimate $\epsilon_\theta(x_t, t)$, calculating the loss $L = \lVert \epsilon - \epsilon_\theta(x_t, t) \rVert^2$ and updating the neural network parameters by gradient descent, $\theta \leftarrow \theta - \eta \nabla_\theta L$, wherein $\eta$ is the learning rate and $\nabla_\theta L$ is the gradient of the loss with respect to $\theta$;
repeating the above steps until the loss $L$ is sufficiently small or a set number of iterations is reached, obtaining the trained denoising diffusion implicit model $\epsilon_\theta$.
5. The electronic license anti-counterfeiting oriented generation type steganography method according to claim 1, wherein inputting the latent noise into the pre-trained denoising diffusion implicit model to obtain the secret-containing image specifically comprises:
sampling according to the trained denoising diffusion implicit model $\epsilon_\theta$ to generate the secret-containing image, the process being as follows:
for a latent noise tensor $x_T$, generating the secret-containing image step by step, for $t = T, T-1, \ldots, 1$, using $x_{t-1} = \sqrt{\bar{\alpha}_{t-1}}\left(\dfrac{x_t - \sqrt{1-\bar{\alpha}_t}\,\epsilon_\theta(x_t, t)}{\sqrt{\bar{\alpha}_t}}\right) + \sqrt{1-\bar{\alpha}_{t-1}}\,\epsilon_\theta(x_t, t)$;
the final $x_0$ is the output secret-containing image, and the above process is denoted $x_0 = G(x_T)$, meaning that the secret-containing image is generated from the latent noise tensor $x_T$ by the model.
6. The electronic license anti-counterfeiting oriented generation type steganography method according to claim 1, wherein restoring the secret-containing image to latent noise using the inverse sampling process of the pre-trained denoising diffusion implicit model specifically comprises:
with the trained denoising diffusion implicit model $\epsilon_\theta$, restoring the secret-containing image as follows:
for a secret-containing image $x_0$, restoring the latent noise step by step, for $t = 0, 1, \ldots, T-1$, using $x_{t+1} = \sqrt{\bar{\alpha}_{t+1}}\left(\dfrac{x_t - \sqrt{1-\bar{\alpha}_t}\,\epsilon_\theta(x_t, t)}{\sqrt{\bar{\alpha}_t}}\right) + \sqrt{1-\bar{\alpha}_{t+1}}\,\epsilon_\theta(x_t, t)$;
the final $x_T$ is the latent noise tensor, and the above process is denoted $x_T = R(x_0)$, meaning the latent noise tensor restored from the secret-containing image $x_0$ by the model.
7. The electronic license-oriented anti-counterfeit generation method of claim 1, further comprising:
predefining an encoder model and a decoder model, and training the predefined encoder model and decoder model;
a loss function is defined and the encoder model and decoder model parameters are updated based on the calculated loss.
8. The electronic license-oriented anti-counterfeit generation method of claim 7, wherein the predefining the encoder model and the decoder model and training the predefined encoder model and decoder model specifically comprises:
defining the encoder as $E$ and the decoder as $D$ and respectively determining their structures, wherein $m$ represents the binary secret message and $z$ represents the latent noise, the encoder and decoder employing neural network architectures suitable for image processing;
defining the binary capacity of a single image as $k$ and generating a binary sequence of length $k$, or of length less than $k$ with the insufficient part filled with 0s, and defining the binary sequence as the secret message $m$;
inputting the defined binary secret message $m$ into the encoder to obtain the latent noise $z = E(m)$, then passing the latent noise sequentially through the image generation process $G$ and the noise restoration process $R$, and obtaining the extracted secret message $\hat{m} = D(R(G(E(m))))$.
9. The electronic license-oriented anti-counterfeit generation method of claim 8, wherein defining the loss function and updating the encoder model and decoder model parameters according to the calculated loss comprises:
defining the loss function as the combination of a coding error $L_e$, which measures the similarity of the latent noise $z$ to Gaussian white noise, and a decoding error $L_d$, which measures the similarity of the extracted secret message $\hat{m}$ to $m$;
and synchronously updating the parameters of the encoder model and the decoder model by using a gradient descent method according to the loss function calculation result.
10. The electronic license anti-counterfeiting oriented generation type steganography method according to claim 9, wherein the coding error $L_e$ and the decoding error $L_d$ are defined as follows:
the first part of $L_e$ is the Shapiro–Wilk test statistic $W = \dfrac{\left(\sum_{i=1}^{n} a_i z_{(i)}\right)^2}{\sum_{i=1}^{n} (z_i - \bar{z})^2}$, wherein $z_i$ is the component at each position after $z$ is expanded into a vector, $z_{(i)}$ is the corresponding order statistic, $n$ is the dimension after expansion into a vector, $\bar{z}$ is the mean of the vector components, and the $a_i$ are constants of the Shapiro–Wilk test; the second part of $L_e$ is the first-order autocorrelation coefficient of the expanded vector; the decoding error $L_d$ measures the discrepancy between the extracted secret message $\hat{m}$ and the original secret message $m$.
CN202311651548.8A 2023-12-05 2023-12-05 Electronic license anti-counterfeiting oriented generation type steganography method Active CN117376484B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311651548.8A CN117376484B (en) 2023-12-05 2023-12-05 Electronic license anti-counterfeiting oriented generation type steganography method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311651548.8A CN117376484B (en) 2023-12-05 2023-12-05 Electronic license anti-counterfeiting oriented generation type steganography method

Publications (2)

Publication Number Publication Date
CN117376484A true CN117376484A (en) 2024-01-09
CN117376484B CN117376484B (en) 2024-08-20

Family

ID=89396883

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311651548.8A Active CN117376484B (en) 2023-12-05 2023-12-05 Electronic license anti-counterfeiting oriented generation type steganography method

Country Status (1)

Country Link
CN (1) CN117376484B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130181435A1 (en) * 2012-01-17 2013-07-18 Ecole Polytechnique Federale De Lausanne (Epfl) Synthesis of authenticable halftone images with non-luminescent halftones illuminated by a luminescent emissive layer
US20210144274A1 (en) * 2019-11-07 2021-05-13 Dotphoton Ag Method and device for steganographic processing and compression of image data
CN115314222A (en) * 2022-08-08 2022-11-08 公安部交通管理科学研究所 Authentication method of electronic certificate
CN116091288A (en) * 2022-12-08 2023-05-09 中国人民武装警察部队工程大学 Diffusion model-based image steganography method
CN116456037A (en) * 2023-06-16 2023-07-18 南京信息工程大学 Diffusion model-based generated image steganography method
CN116645260A (en) * 2023-07-27 2023-08-25 中国海洋大学 Digital watermark attack method based on conditional diffusion model
CN117078517A (en) * 2023-08-25 2023-11-17 福建省青易信息科技有限公司 Image super-resolution steganography method based on reversible neural network


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
孙文权 (Sun Wenquan) et al., "A lightweight image steganography scheme based on invertible networks", Application Research of Computers (《计算机应用研究》), vol. 41, no. 1, pages 1-7 *
董炜娜 (Dong Weina) et al., "A large-capacity robust image steganography scheme based on encoder-decoder networks", Journal of Computer Applications (《计算机应用》), pages 1-12 *

Also Published As

Publication number Publication date
CN117376484B (en) 2024-08-20

Similar Documents

Publication Publication Date Title
CN109587372B (en) Invisible image steganography based on generation of countermeasure network
CN109377532B (en) Image processing method and device based on neural network
JP2022549031A (en) Transformation of data samples into normal data
TWI803243B (en) Method for expanding images, computer device and storage medium
CN113990330A (en) Method and device for embedding and identifying audio watermark based on deep network
Li et al. Face inpainting via nested generative adversarial networks
CN116402719A (en) Human blind face image recovery system and method based on potential diffusion model
CN114494387A (en) Data set network generation model and fog map generation method
CN115136183A (en) Image watermarking
CN108648135B (en) Hidden model training and using method, device and computer readable storage medium
TW202240531A (en) Methods, apparatuses, electronic devices and storage media for image generation and for 3d face model generation
CN117376484B (en) Electronic license anti-counterfeiting oriented generation type steganography method
CN117710660A (en) Watermark processing method, watermark processing device, electronic equipment and computer readable storage medium
Abdollahi et al. Image steganography based on smooth cycle-consistent adversarial learning
CN115861401B (en) Binocular and point cloud fusion depth recovery method, device and medium
CN114493971B (en) Media data conversion model training and digital watermark embedding method and device
Lavoue et al. Subdivision surface watermarking
KR20120055070A (en) System and method for lossless digital watermarking for image integrity
US11423506B2 (en) Video frame to frame difference watermarking with drm metadata
Xue et al. SARANIQA: self-attention restorative adversarial network for No-reference image quality assessment
CN116363263B (en) Image editing method, system, electronic device and storage medium
CN117609962B (en) Image hyperlink generation method based on feature point generation
Pavlović et al. DNN-based speech watermarking resistant to desynchronization attacks
Luo et al. Shape watermarking based on minimizing the quadric error metric
CN113537484B (en) Network training, encoding and decoding method, device and medium for digital watermarking

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant