CN114936962A - One-to-one full text watermark encryption adding technology based on document - Google Patents

One-to-one full text watermark encryption adding technology based on document

Info

Publication number
CN114936962A
CN114936962A, CN202210714146.7A, CN202210714146A
Authority
CN
China
Prior art keywords
text
watermark
layer
module
characteristic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210714146.7A
Other languages
Chinese (zh)
Other versions
CN114936962B (en)
Inventor
李宁
李静
宋丹丹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Forest Fantasy Taiyuan Digital Technology Co ltd
Original Assignee
Jincheng Darui Jinma Engineering Design Consulting Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jincheng Darui Jinma Engineering Design Consulting Co ltd filed Critical Jincheng Darui Jinma Engineering Design Consulting Co ltd
Priority to CN202210714146.7A
Publication of CN114936962A
Application granted
Publication of CN114936962B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 - General purpose image data processing
    • G06T1/0021 - Image watermarking
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/044 - Recurrent networks, e.g. Hopfield networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Editing Of Facsimile Originals (AREA)

Abstract

The invention discloses a document-based one-to-one full-text watermark encryption adding technology, which comprises a watermark text to be loaded, a current page retrieval module, a cloud watermark loading module and a client updating module; the cloud watermark loading module comprises a mirror image text, a style text, a text preprocessing module, a ByeNa watermark loading network module, an FMN audio watermark loading module, a loss calculation module and a fusion module. The invention belongs to the technical field of image identification. The technology effectively solves the following problems: the watermark of an oversized text cannot be loaded in real time; the information integrity of the oversized-text watermark is difficult to verify; traditional methods struggle to balance the robustness and the fragility of the loaded watermark; and traditional image-based text watermarks make it difficult to verify whether an image has been modified.

Description

One-to-one full-text watermark encryption adding technology based on document
Technical Field
The invention belongs to the technical field of image identification, and particularly relates to a one-to-one full-text watermark encryption adding technology based on documents.
Background
Digital image watermarking embeds watermark information into a host image while ensuring that the embedded watermark remains imperceptible, secure and reasonably robust. According to the characteristics of the watermark, digital watermarks can be classified into robust digital watermarks and fragile digital watermarks. A robust digital watermark is mainly used to identify copyright information in digital works, such as the author and the serial number of the work, and the embedded watermark must withstand various common editing operations. A fragile digital watermark is mainly used for integrity protection; in contrast to the requirements of a robust watermark, a fragile watermark must be very sensitive to changes in the signal, so that one can judge from the state of the fragile watermark whether the data has been tampered with.
An audio digital watermark is embedded into an audio file by a watermark embedding algorithm without noticeably degrading the original sound quality or being perceptible to the human ear; conversely, the watermark can be completely extracted from the host audio file by a watermark extraction algorithm.
In the traditional watermark loading mode, loading is completed in a single pass: the watermark is embedded first, and only then can the file be used and transmitted. Once a watermark has been embedded, embedding another watermark requires restarting the watermark embedding program. In addition, the pixel-level loading mode of traditional identification watermarks involves a large amount of computation, so real-time watermark embedding over the whole text cannot be achieved, and real-time watermark loading of an oversized text is impossible.
It is also difficult for a traditional text watermark to be both robust and fragile, that is, to withstand common image processing operations (compression, filtering, denoising) and malicious attacks while still conveniently supporting integrity protection.
Disclosure of Invention
Technical problem to be solved
In order to solve the above problems in the prior art, the present invention provides a document-based one-to-one full-text watermark encryption adding technology, which can effectively address the following issues:
(1) the problem that the watermark of an oversized text cannot be loaded in real time;
(2) the problem that the information integrity of the oversized-text watermark is difficult to verify;
(3) the problem that traditional methods struggle to balance the robustness and the fragility of the loaded watermark;
(4) the problem that traditional image-based text watermarks make it difficult to verify whether an image has been modified;
(5) in addition, audio data is used for watermark loading of text images, which to some extent overcomes the technical prejudice that audio and video data cannot be used in image watermarking.
(II) Technical scheme
In order to achieve the above purpose, the invention adopts the following technical scheme: the document-based one-to-one full-text watermark encryption adding technology comprises a watermark text to be loaded, a current page retrieval module, a cloud watermark loading module and a client updating module; the cloud watermark loading module comprises a mirror image text, a style text, a text preprocessing module, a ByeNa watermark loading network module, an FMN audio watermark loading module, a loss calculation module and a fusion module; the ByeNa watermark loading network module comprises a ByeNa feature-specific model, a loss calculation model and a gradient updating model;
the watermark text to be loaded refers to a target text when browsing on a webpage;
the current page retrieval module returns the current page where the user browses on the webpage or the client;
the mirror image text is a backup, at the cloud end, of the target file the user is browsing on the webpage or client; the watermark text to be loaded may be in any text format, but the mirror image text is a corresponding image in JPG or PNG format;
the style text is used for adding a watermark to the watermark text to be loaded; it is an oversized text that is added, as a style, to the watermark text to be loaded;
the text preprocessing module performs slicing, image sharpening and image contrast enhancement; its function is to cut a whole page of text into small rectangular areas, each 320 × 320 pixels in size, which are then watermarked one by one by the ByeNa watermark loading network module. The small rectangular area image obtained after the mirror image text is processed by the text preprocessing module is denoted PO, and the small rectangular area image obtained after the style text is processed by the preprocessing module is denoted PS.
Further, the ByeNa watermark loading network module is a fully convolutional model with a small number of parameters, whose purpose is to obtain a series of feature images. The ByeNa watermark loading network module comprises a first feature layer, a second feature layer, a third feature layer, a fourth feature layer and a fifth feature layer, each formed from one or more CBL operations; a CBL is an operation block comprising a convolution operation, a normalization operation and an activation function operation:
(1) the first feature layer comprises one CBL operation, and the output feature size after the first-feature-layer operation is 320 × 8;
(2) the second feature layer comprises two CBL operations, and the output feature size after the second-feature-layer operation is 160 × 32;
(3) the third feature layer comprises three CBL operations, and the output feature size after the third-feature-layer operation is 80 × 256;
(4) the fourth feature layer comprises three CBL operations, and the output feature size after the fourth-feature-layer operation is 40 × 64;
(5) the fifth feature layer comprises three CBL operations, and the output feature size after the fifth-feature-layer operation is 20 × 8.
Preferably, the loss calculation module fuses the image PO and the image PS, and the degree of fusion is measured by a loss value Cost computed on the feature images.
Further, the gradient updating model computes the gradient of the total loss value Cost with respect to the small rectangular area image obtained after the mirror image text is processed by the text preprocessing module, and performs iterative updating with a gradient descent algorithm to obtain the final image RB_out.
Further, the FMN audio watermark loading module comprises a self-encoder and a quantizer; the self-encoder is a multi-input multi-output network model with a four-layer recurrent neural network structure, its input dimension and output dimension are the same, and it comprises an encoder and a decoder; the encoder comprises a first encoding layer and a second encoding layer; the decoder comprises a first decoding layer and a second decoding layer; the quantizer quantizes a vector whose components lie between -1 and 1 into a vector whose components lie between 0 and 255. Let X denote the audio sample data input to the FMN audio watermark loading module, Z the output value of the FMN audio watermark loading module, and Y the intermediate output of the audio sample data after the encoder operates. The specific flow of the FMN audio watermark loading module is as follows:
s0, training the self-encoder, inputting the audio sample data X into the self-encoder to obtain an output value Z, and enabling the input and the output of the self-encoder to be equal as far as possible:
\min \frac{1}{m} \sum_{i=1}^{m} \left( x_i - z_i \right)^2
the above equation represents minimizing a mean square error loss function between the audio sample data X and the output value Z;
s1, processing the audio sampling data by a first coding layer and a second coding layer to obtain intermediate data Y, wherein the dimensionality of the intermediate data is the same as that of the audio sampling data;
and S2, processing the intermediate data Y by using a quantizer to obtain a quantized vector with the component between 0 and 255, and marking the quantized vector as N.
Furthermore, the input data of the fusion module consists of two parts: the image RB_out obtained from the ByeNa watermark loading network module and the quantized vector N obtained from the FMN audio watermark loading module. The calculation steps of the fusion module are as follows:
S1, a reshape operation is performed on the quantized vector to obtain an 8 × 32 matrix M';
S2, a zero-padding operation is performed on the matrix M'; the padding rule is to insert 24 all-zero rows between the 4th row and the 5th row of M', yielding a 32 × 32 matrix M;
S3, a weighted summation operation: the matrix corresponding to the image RB_out and the matrix M are weighted and summed to obtain the final small rectangular area image, which has both robustness and fragility.
(III) Advantageous effects
The invention provides a document-based one-to-one full-text watermark encryption adding technology with the following beneficial effects:
(1) by exploiting the computing power and parallel computing capability of the cloud watermark loading module, the watermark loading of an oversized text can be divided into rectangular areas and computed block by block, so real-time watermark loading of the oversized text is achieved;
(2) with the decoder of the FMN audio watermark loading module, the information integrity of the oversized-text watermark can be conveniently verified;
(3) the watermark obtained with the ByeNa watermark loading network module is applied at the pixel level over the full text, so the scheme balances the robustness and the fragility of the loaded watermark;
(4) audio data is used for watermark loading of the text image, which to some extent overcomes the technical prejudice that audio and video data cannot be used in image watermarking.
Drawings
FIG. 1 is a flowchart of the calculation of a document-based one-to-one full-text watermarking technique proposed by the present invention;
fig. 2 is a schematic structural diagram of a byna watermark loading network module proposed by the present invention;
FIG. 3 is a gradient update model;
FIG. 4 is a flow chart of the FMN audio watermark loading module;
FIG. 5 is a block diagram of the self-encoder in the FMN audio watermark loading module;
the accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments; all other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present disclosure without any creative effort belong to the protection scope of the present disclosure.
The document-based one-to-one full-text watermark encryption adding technology comprises a watermark text to be loaded, a current page retrieval module, a cloud watermark loading module and a client updating module; the cloud watermark loading module comprises a mirror image text, a style text, a text preprocessing module, a ByeNa watermark loading network module, an FMN audio watermark loading module, a loss calculation module and a fusion module; the ByeNa watermark loading network module comprises a ByeNa feature-specific model, a loss calculation model and a gradient updating model;
the watermark text to be loaded refers to a target text when browsing on a webpage;
the current page retrieval module returns the current page where the user browses on the webpage or the client;
the mirror image text is a backup of a target file at the cloud end when a user browses on a webpage or a client, the text to be loaded with the watermark can be in any text format, but the mirror image text is a corresponding image in a JPG format or a PNG format;
the style text is used for adding a watermark to the watermark text to be loaded; it is an oversized text that is added, as a style, to the watermark text to be loaded.
The text preprocessing module performs slicing, image sharpening and image contrast enhancement; its function is to cut a whole page of text into small rectangular areas, each 320 × 320 pixels in size, which are then watermarked one by one by the ByeNa watermark loading network module. The small rectangular area image obtained after the mirror image text is processed by the text preprocessing module is denoted PO, and the small rectangular area image obtained after the style text is processed by the preprocessing module is denoted PS.
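For illustration, a minimal sketch of the slicing step is given below (Python with NumPy). The function name, the zero-padding of incomplete edge tiles and the return format are assumptions; the text only specifies that the page is cut into 320 × 320 pixel areas.

```python
import numpy as np

def slice_into_tiles(page: np.ndarray, tile: int = 320):
    """Cut a whole-page text image (H x W or H x W x C) into tile x tile blocks.

    Edge blocks are zero-padded to the full tile size; this padding rule is an
    assumption, since the text only states that the page is cut into
    320 x 320 rectangular areas.
    """
    h, w = page.shape[:2]
    tiles = []
    for top in range(0, h, tile):
        for left in range(0, w, tile):
            block = page[top:top + tile, left:left + tile]
            padded = np.zeros((tile, tile) + page.shape[2:], dtype=page.dtype)
            padded[:block.shape[0], :block.shape[1]] = block
            tiles.append(((top, left), padded))   # keep the tile position for re-assembly
    return tiles
```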
The ByeNa watermark loading network module is a fully convolutional model with a small number of parameters, whose purpose is to obtain a series of feature images. The ByeNa watermark loading network module comprises a first feature layer, a second feature layer, a third feature layer, a fourth feature layer and a fifth feature layer, each formed from one or more CBL operations; a CBL is an operation block comprising a convolution operation, a normalization operation and an activation function operation. The convolution is calculated as follows:
Q(x, y) = \sum_{z=1}^{n} \sum_{x'} \sum_{y'} P(x + x', y + y', z) \cdot K(x', y', z)
The first feature layer comprises only one CBL operation, and the output feature size after the first-feature-layer operation is 320 × 8;
the second feature layer comprises two CBL operations, and the output feature size after the second-feature-layer operation is 160 × 32;
the third feature layer comprises three CBL operations, and the output feature size after the third-feature-layer operation is 80 × 256;
the fourth feature layer comprises three CBL operations, and the output feature size after the fourth-feature-layer operation is 40 × 64;
the fifth feature layer comprises three CBL operations, and the output feature size after the fifth-feature-layer operation is 20 × 8;
wherein the activation function is the LeakyReLU activation function:
\mathrm{LeakyReLU}(x) = \max(\alpha x, \, x), \quad 0 < \alpha < 1
In the convolution formula above, the number of channels of the convolution kernel is equal to the number of channels of the original image; Q(x, y) denotes the pixel value of the new image at coordinates (x, y) after convolution, P(x, y, z) denotes the value of the original image at coordinate (x, y) in channel z, K(x, y, z) denotes the value of the convolution kernel at coordinate (x, y) in channel z, and n denotes the total number of channels of the original image.
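As an illustration of the structure just described, a minimal sketch in Python/PyTorch follows. The 3 × 3 kernel, the strides used to halve the spatial resolution from 320 to 160, 80, 40 and 20, the LeakyReLU slope of 0.1 and the 3-channel input are not specified in the text and are assumptions; ByeNaFeatures and cbl are illustrative names.

```python
import torch.nn as nn

def cbl(in_ch: int, out_ch: int, stride: int = 1) -> nn.Sequential:
    """One CBL block: Convolution -> BatchNorm -> LeakyReLU (3x3 kernel assumed)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.1, inplace=True),
    )

class ByeNaFeatures(nn.Module):
    """Sketch of the five feature layers; channel counts follow the sizes listed above."""
    def __init__(self):
        super().__init__()
        self.layer1 = cbl(3, 8)                                                       # 320, 8 channels
        self.layer2 = nn.Sequential(cbl(8, 32, 2), cbl(32, 32))                       # 160, 32 channels
        self.layer3 = nn.Sequential(cbl(32, 256, 2), cbl(256, 256), cbl(256, 256))    # 80, 256 channels
        self.layer4 = nn.Sequential(cbl(256, 64, 2), cbl(64, 64), cbl(64, 64))        # 40, 64 channels
        self.layer5 = nn.Sequential(cbl(64, 8, 2), cbl(8, 8), cbl(8, 8))              # 20, 8 channels

    def forward(self, x):
        f1 = self.layer1(x)
        f2 = self.layer2(f1)
        f3 = self.layer3(f2)
        f4 = self.layer4(f3)
        f5 = self.layer5(f4)
        return [f1, f2, f3, f4, f5]   # the series of feature images
```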
The loss calculation module is used for fusing the image PO and the image PS, the fusion degree is measured by using a loss value on the characteristic image, and a specific calculation formula is as follows:
Cost = \sum_{l} \sum_{i,j} \left( F_l(i, j) - P_l(i, j) \right)^2
In the above formula, F_l denotes the l-th layer feature image obtained by inputting into the ByeNa feature-specific model the small rectangular area obtained after the style text is processed by the preprocessing module, P_l denotes the l-th layer feature image obtained by inputting into the ByeNa feature-specific model the small rectangular area obtained after the mirror image text is processed by the text preprocessing module, and i and j denote the row and column in which a pixel is located.
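A short sketch of this loss computation, under the assumption that the reconstructed formula above (a sum of squared differences over layers and pixels, with no further normalization) is what is intended; fusion_cost and the list-of-feature-maps interface are illustrative.

```python
def fusion_cost(style_feats, mirror_feats):
    """Cost = sum over layers l and pixels (i, j) of (F_l(i, j) - P_l(i, j))^2.

    style_feats / mirror_feats: lists of feature maps, e.g. as returned by the
    ByeNaFeatures sketch for the style tile PS and the mirror tile PO.
    """
    return sum(((f_l - p_l) ** 2).sum() for f_l, p_l in zip(style_feats, mirror_feats))
```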
The gradient updating model computes the gradient of the total loss value Cost with respect to the small rectangular area image obtained after the mirror image text is processed by the text preprocessing module, and performs iterative updating with a gradient descent algorithm to obtain the final image RB_out; the specific calculation formula is as follows:
P_{i,j,z} \leftarrow P_{i,j,z} - \lambda \cdot \frac{\partial Cost}{\partial P_{i,j,z}}
In the above formula, P_{i,j,z} denotes the pixel value, at row i, column j and channel z, of the image corresponding to the small rectangular area obtained after the mirror image text is processed by the text preprocessing module, and λ is the learning rate.
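A sketch of this iterative update using PyTorch autograd, reusing the illustrative ByeNaFeatures and fusion_cost sketches above; the learning rate, the number of iterations and the use of autograd (rather than a hand-derived gradient) are assumptions.

```python
import torch

def gradient_update(po: torch.Tensor, ps: torch.Tensor, model, lr: float = 0.01, steps: int = 100):
    """Iteratively adjust the mirror-text tile PO so that its feature images move
    towards those of the style tile PS, yielding the watermarked tile RB_out."""
    model.eval()                                    # network parameters are treated as fixed
    for p in model.parameters():
        p.requires_grad_(False)
    rb_out = po.clone().requires_grad_(True)        # start from the mirror-text tile PO
    with torch.no_grad():
        style_feats = model(ps)                     # target feature images of the style tile PS
    for _ in range(steps):
        cost = fusion_cost(style_feats, model(rb_out))
        cost.backward()
        with torch.no_grad():
            rb_out -= lr * rb_out.grad              # P <- P - lambda * dCost/dP
            rb_out.grad.zero_()
    return rb_out.detach()
```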
The FMN audio watermark loading module comprises a self-encoder and a quantizer; the self-encoder is a multi-input multi-output network model with a four-layer recurrent neural network structure, its input dimension and output dimension are the same, and it comprises an encoder and a decoder; the encoder comprises a first encoding layer and a second encoding layer; the decoder comprises a first decoding layer and a second decoding layer; the quantizer quantizes a vector whose components lie between -1 and 1 into a vector whose components lie between 0 and 255. The input data of the FMN audio watermark loading module is the audio sample data, denoted X:
X = (x_1, x_2, \ldots, x_m)
and the output value of the FMN audio watermark loading module is expressed by Z as:
Z = (z_1, z_2, \ldots, z_m)
the intermediate output of the audio sample data after the encoder operation is represented by Y:
Y = (y_1, y_2, \ldots, y_m)
the specific process of the FMN audio watermark loading module is as follows:
s0, training the self-encoder, inputting the audio sample data X into the self-encoder to obtain an output value Z, and enabling the input and the output of the self-encoder to be equal as far as possible:
\min \frac{1}{m} \sum_{i=1}^{m} \left( x_i - z_i \right)^2
the above equation represents minimizing a mean square error loss function between the audio sample data X and the output value Z;
s1, processing the audio sampling data by a first coding layer and a second coding layer to obtain intermediate data Y, wherein the dimensionality of the intermediate data is the same as that of the audio sampling data;
s2, performing quantizer processing on the intermediate data Y to obtain a quantized vector with each component between 0 and 255, and recording as N:
N = (n_1, n_2, \ldots, n_m), \quad 0 \le n_i \le 255
In the above formula, each component n_i is an integer value.
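A sketch of the self-encoder and quantizer follows. GRU cells for the four-layer recurrent structure, the hidden size, the tanh bounding of the intermediate output Y, the linear mapping from [-1, 1] to [0, 255], and the training loop details are assumptions; the text only fixes the two encoding layers, two decoding layers, equal input/output dimensions and the MSE training objective.

```python
import torch
import torch.nn as nn

class FMNAutoEncoder(nn.Module):
    """Four recurrent layers: two encoding layers and two decoding layers,
    with identical input and output dimension (GRU cells assumed)."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.enc1 = nn.GRU(1, hidden, batch_first=True)   # first encoding layer
        self.enc2 = nn.GRU(hidden, 1, batch_first=True)   # second encoding layer -> intermediate Y
        self.dec1 = nn.GRU(1, hidden, batch_first=True)   # first decoding layer
        self.dec2 = nn.GRU(hidden, 1, batch_first=True)   # second decoding layer -> output Z

    def encode(self, x):                  # x: (batch, m, 1) audio samples
        y, _ = self.enc1(x)
        y, _ = self.enc2(y)
        return torch.tanh(y)              # keep the components of Y in [-1, 1] (assumption)

    def forward(self, x):
        y = self.encode(x)
        z, _ = self.dec1(y)
        z, _ = self.dec2(z)
        return z

def quantize(y: torch.Tensor) -> torch.Tensor:
    """Map components in [-1, 1] to integer values in [0, 255]."""
    return torch.round((y + 1.0) * 0.5 * 255.0).clamp(0, 255).to(torch.uint8)

def train_step(model, x, optimizer):
    """S0: train the self-encoder so that output Z reproduces input X (MSE loss)."""
    z = model(x)
    loss = torch.mean((x - z) ** 2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Illustrative usage: model = FMNAutoEncoder()
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# loss = train_step(model, x, optimizer)   # x: (batch, m, 1) samples in [-1, 1]
```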
The input data of the fusion module consists of two parts: the image RB_out obtained from the ByeNa watermark loading network module and the quantized vector N obtained from the FMN audio watermark loading module. The calculation steps of the fusion module are as follows:
S1, a reshape operation is performed on the quantized vector N to obtain an 8 × 32 matrix M':
M' = \begin{pmatrix} n_1 & n_2 & \cdots & n_{32} \\ n_{33} & n_{34} & \cdots & n_{64} \\ \vdots & \vdots & & \vdots \\ n_{225} & n_{226} & \cdots & n_{256} \end{pmatrix}
S2, a zero-padding operation is performed on the matrix M'; the padding rule is to insert 24 all-zero rows between the 4th row and the 5th row of M', yielding a 32 × 32 matrix M;
S3, a weighted summation operation: the matrix corresponding to the image RB_out and the matrix M are weighted and summed; the specific calculation formula is as follows:
C = \mu \cdot RB\_out + \xi \cdot M
In the above formula, µ and ξ are the weighting coefficients, and C represents the final small rectangular area watermark text image.
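A NumPy sketch of steps S1 to S3, following the reconstructed formulas above. The row-major reshape, the alignment of the 32 × 32 matrix M with the larger tile RB_out (here its top-left block), and the example weights µ and ξ are assumptions not fixed by the text.

```python
import numpy as np

def fuse(rb_out: np.ndarray, n_vec: np.ndarray, mu: float = 0.9, xi: float = 0.1) -> np.ndarray:
    """S1-S3 of the fusion module.

    rb_out : watermarked tile from the ByeNa module (e.g. 320 x 320)
    n_vec  : 256-component quantized vector N from the FMN module
    """
    # S1: reshape N into an 8 x 32 matrix M' (row-major order assumed)
    m_prime = n_vec.reshape(8, 32)

    # S2: insert 24 all-zero rows between the 4th and 5th rows -> 32 x 32 matrix M
    m = np.vstack([m_prime[:4], np.zeros((24, 32), dtype=m_prime.dtype), m_prime[4:]])

    # S3: weighted summation C = mu * RB_out + xi * M; how M is aligned with the
    # larger tile is not specified, so it is added to the top-left 32 x 32 block here.
    c = mu * rb_out.astype(np.float64)
    c[:32, :32] += xi * m
    return c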
Embodiment 1:
First, a user browses a text at the client. The current page retrieval module retrieves the position of the browsed page and uploads it to the cloud watermark loading module in real time. The cloud watermark loading module loads the watermark according to the mirror image text and the style text of the text browsed by the user (the watermark text to be loaded): the text preprocessing module performs slicing, image sharpening and image contrast enhancement on the mirror image text and the style text respectively, dividing the whole page of text into small rectangular areas, and the ByeNa watermark loading network module then performs watermark loading on each small rectangular area one by one to obtain the image RB_out. The audio sample data is processed by the encoder of the FMN audio watermark loading module to obtain the intermediate data Y, the fusion module fuses the image RB_out with the intermediate data Y and outputs the result, and the client updating module returns the calculation result to the user's client browser. In other words, watermark loading in both the text and audio aspects is performed at the cloud end for each small rectangular area, each watermarked area is returned in real time as soon as it is loaded, and the watermark is displayed in the user's browser in real time.
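Putting the pieces together, a hedged end-to-end sketch of the cloud-side flow of Embodiment 1 is given below, reusing the illustrative helpers slice_into_tiles, gradient_update, FMNAutoEncoder, quantize and fuse defined in the earlier sketches; the grayscale page assumption, the channel replication to match the ByeNa sketch, and the use of the first 256 quantized components are assumptions.

```python
import numpy as np
import torch

def watermark_page(mirror_page: np.ndarray, style_page: np.ndarray,
                   model, fmn, audio_samples: torch.Tensor):
    """Watermark one browsed page (2-D grayscale page images assumed).

    audio_samples : (1, m, 1) tensor with m >= 256 samples in [-1, 1]
    Returns a list of (tile position, watermarked 320 x 320 tile) pairs.
    """
    # audio branch: intermediate data Y and quantized vector N (shared by all tiles here)
    with torch.no_grad():
        y = fmn.encode(audio_samples)
    n_vec = quantize(y).flatten().numpy()[:256]     # first 256 components, an assumption

    results = []
    for (pos, po), (_, ps) in zip(slice_into_tiles(mirror_page), slice_into_tiles(style_page)):
        # replicate the grayscale tile to 3 channels to match the ByeNaFeatures sketch input
        po_t = torch.from_numpy(po).float().unsqueeze(0).unsqueeze(0).repeat(1, 3, 1, 1)
        ps_t = torch.from_numpy(ps).float().unsqueeze(0).unsqueeze(0).repeat(1, 3, 1, 1)
        rb_out = gradient_update(po_t, ps_t, model)
        rb_img = rb_out.squeeze(0).mean(dim=0).numpy()   # back to one 320 x 320 tile
        results.append((pos, fuse(rb_img, n_vec)))
    return results
```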
The specific working process of the invention is described above, and the steps are repeated when the device is used next time.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
The present invention and its embodiments have been described above, and the description is not intended to be limiting, and the drawings are only one embodiment of the present invention, and the actual structure is not limited thereto. In summary, those skilled in the art should appreciate that they can readily use the disclosed conception and specific embodiments as a basis for designing or modifying other structures for carrying out the same purposes of the present invention without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (6)

1. The document-based one-to-one full-text watermark encryption adding technology comprises a watermark text to be loaded, a current page retrieval module, a cloud watermark loading module and a client updating module; the cloud watermark loading module comprises a mirror image text, a style text, a text preprocessing module, a ByeNa watermark loading network module, an FMN audio watermark loading module, a loss calculation module and a fusion module; the ByeNa watermark loading network module comprises a ByeNa feature-specific model, a loss calculation model and a gradient updating model; the ByeNa watermark loading network module comprises a first feature layer, a second feature layer, a third feature layer, a fourth feature layer and a fifth feature layer, each formed from one or more CBL operations; a CBL is an operation block comprising a convolution operation, a normalization operation and an activation function operation, and the convolution is calculated as follows:
Q(x, y) = \sum_{z=1}^{n} \sum_{x'} \sum_{y'} P(x + x', y + y', z) \cdot K(x', y', z)
the activation function uses the LeakyReLU activation function:
\mathrm{LeakyReLU}(x) = \max(\alpha x, \, x), \quad 0 < \alpha < 1
the first feature layer comprises only one CBL operation, and the output feature size after the first-feature-layer operation is 320 × 8; the second feature layer comprises two CBL operations, and the output feature size after the second-feature-layer operation is 160 × 32; the third feature layer comprises three CBL operations, and the output feature size after the third-feature-layer operation is 80 × 256; the fourth feature layer comprises three CBL operations, and the output feature size after the fourth-feature-layer operation is 40 × 64; the fifth feature layer comprises three CBL operations, and the output feature size after the fifth-feature-layer operation is 20 × 8.
2. The document-based one-to-one full-text watermark encryption adding technology according to claim 1, wherein: the watermark text to be loaded refers to the target text being browsed on a webpage; the current page retrieval module returns the current page on which the user is browsing on the webpage or the client; the mirror image text is a backup, at the cloud end, of the target file the user is browsing on the webpage or client; the watermark text to be loaded may be in any text format, but the mirror image text is a corresponding image in JPG or PNG format; the style text is used for adding a watermark to the watermark text to be loaded and is an oversized text added, as a style, to the watermark text to be loaded; the text preprocessing module performs slicing, image sharpening and image contrast enhancement, its function being to cut a whole page of text into small rectangular areas, each 320 × 320 pixels in size, which are then watermarked one by one by the ByeNa watermark loading network module; the small rectangular area image obtained after the mirror image text is processed by the text preprocessing module is denoted PO, and the small rectangular area image obtained after the style text is processed by the preprocessing module is denoted PS.
3. The document-based one-to-one full-text watermark encryption adding technology according to claim 2, wherein: the loss calculation module is used for fusing the image PO and the image PS, the degree of fusion is measured by a loss value on the feature images, and the specific calculation formula is as follows:
Cost = \sum_{l} \sum_{i,j} \left( F_l(i, j) - P_l(i, j) \right)^2
In the above formula, F_l denotes the l-th layer feature image obtained by inputting into the ByeNa feature-specific model the small rectangular area obtained after the style text is processed by the preprocessing module, P_l denotes the l-th layer feature image obtained by inputting into the ByeNa feature-specific model the small rectangular area obtained after the mirror image text is processed by the text preprocessing module, and i and j denote the row and column in which a pixel is located.
4. The document-based one-to-one full-text watermark encryption adding technology according to claim 3, wherein: the gradient updating model computes the gradient of the total loss value Cost with respect to the small rectangular area image obtained after the mirror image text is processed by the text preprocessing module, and performs iterative updating with a gradient descent algorithm to obtain the final image RB_out; the calculation formula is as follows:
P_{i,j,z} \leftarrow P_{i,j,z} - \lambda \cdot \frac{\partial Cost}{\partial P_{i,j,z}}
In the above formula, P_{i,j,z} denotes the pixel value, at row i, column j and channel z, of the image corresponding to the small rectangular area obtained after the mirror image text is processed by the text preprocessing module, and λ is the learning rate.
5. The document-based one-to-one full-text watermark encryption adding technology according to claim 4, wherein: the FMN audio watermark loading module comprises a self-encoder and a quantizer; the self-encoder is a multi-input multi-output network model with a four-layer recurrent neural network structure, its input dimension and output dimension are the same, and it comprises an encoder and a decoder; the encoder comprises a first encoding layer and a second encoding layer; the decoder comprises a first decoding layer and a second decoding layer; the quantizer quantizes a vector whose components lie between -1 and 1 into a vector whose components lie between 0 and 255; the input data of the FMN audio watermark loading module is the audio sample data, denoted X:
X = (x_1, x_2, \ldots, x_m)
the output value of the FMN audio watermark loading module is expressed by Z as:
Z = (z_1, z_2, \ldots, z_m)
the intermediate output of the audio sample data after the encoder operation is represented by Y:
Y = (y_1, y_2, \ldots, y_m)
the specific process of the FMN audio watermark loading module is as follows:
s0, training the self-encoder, inputting the audio sampling data X into the self-encoder to obtain an output value Z, and enabling the input and the output of the self-encoder to be equal as much as possible:
\min \frac{1}{m} \sum_{i=1}^{m} \left( x_i - z_i \right)^2
the above equation represents minimizing a mean square error loss function between the audio sample data X and the output value Z;
s1, processing the audio sampling data by a first coding layer and a second coding layer to obtain intermediate data Y, wherein the dimensionality of the intermediate data is the same as that of the audio sampling data;
S2, the intermediate data Y is processed by the quantizer to obtain a quantized vector with each component between 0 and 255, denoted N:
N = (n_1, n_2, \ldots, n_m), \quad 0 \le n_i \le 255
In the above formula, each component n_i is an integer value.
6. The document-based one-to-one full-text watermark encryption adding technology according to claim 5, wherein: the input data of the fusion module comprises the image RB_out obtained from the ByeNa watermark loading network module and the quantized vector N obtained from the FMN audio watermark loading module, and the calculation steps of the fusion module are as follows:
S1, a reshape operation is performed on the quantized vector N to obtain an 8 × 32 matrix M':
M' = \begin{pmatrix} n_1 & n_2 & \cdots & n_{32} \\ n_{33} & n_{34} & \cdots & n_{64} \\ \vdots & \vdots & & \vdots \\ n_{225} & n_{226} & \cdots & n_{256} \end{pmatrix}
S2, a zero-padding operation is performed on the matrix M'; the padding rule is to insert 24 all-zero rows between the 4th row and the 5th row of M', yielding a 32 × 32 matrix M;
S3, a weighted summation operation: the matrix corresponding to the image RB_out and the matrix M are weighted and summed; the specific calculation formula is as follows:
C = \mu \cdot RB\_out + \xi \cdot M
In the above formula, µ and ξ are the weighting coefficients, and C represents the final small rectangular area watermark text image.
CN202210714146.7A 2022-06-23 2022-06-23 One-to-one full text watermark encryption adding technology based on document Active CN114936962B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210714146.7A CN114936962B (en) 2022-06-23 2022-06-23 One-to-one full text watermark encryption adding technology based on document

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210714146.7A CN114936962B (en) 2022-06-23 2022-06-23 One-to-one full text watermark encryption adding technology based on document

Publications (2)

Publication Number Publication Date
CN114936962A true CN114936962A (en) 2022-08-23
CN114936962B CN114936962B (en) 2023-06-23

Family

ID=82868565

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210714146.7A Active CN114936962B (en) 2022-06-23 2022-06-23 One-to-one full text watermark encryption adding technology based on document

Country Status (1)

Country Link
CN (1) CN114936962B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101057743B1 (en) * 2011-01-21 2011-08-19 (주)와우소프트 A system for distributing secured documents to outside users
CN113689320A (en) * 2021-08-16 2021-11-23 南京英诺森软件科技有限公司 Image watermarking method based on deep learning model
CN113990330A (en) * 2021-10-26 2022-01-28 随锐科技集团股份有限公司 Method and device for embedding and identifying audio watermark based on deep network
CN114003870A (en) * 2021-09-28 2022-02-01 合肥高维数据技术有限公司 Audio copyright protection method and system based on two-dimensional code and digital watermark technology
CN114549273A (en) * 2022-02-28 2022-05-27 中山大学 Self-adaptive robust watermark embedding method and system based on deep neural network

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101057743B1 (en) * 2011-01-21 2011-08-19 (주)와우소프트 A system for distributing secured documents to outside users
CN113689320A (en) * 2021-08-16 2021-11-23 南京英诺森软件科技有限公司 Image watermarking method based on deep learning model
CN114003870A (en) * 2021-09-28 2022-02-01 合肥高维数据技术有限公司 Audio copyright protection method and system based on two-dimensional code and digital watermark technology
CN113990330A (en) * 2021-10-26 2022-01-28 随锐科技集团股份有限公司 Method and device for embedding and identifying audio watermark based on deep network
CN114549273A (en) * 2022-02-28 2022-05-27 中山大学 Self-adaptive robust watermark embedding method and system based on deep neural network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JIN S. SEO: "An Asymmetric Matching Method for a Robust Binary Audio Fingerprinting", vol. 21, no. 7, page 844, XP011546829, DOI: 10.1109/LSP.2014.2310237 *
郗艳华; 张敏瑞: "Fragile digital document watermarking technology" (脆弱性数字文档水印技术), no. 01, pages 1-5 *

Also Published As

Publication number Publication date
CN114936962B (en) 2023-06-23

Similar Documents

Publication Publication Date Title
US7817161B2 (en) Texture synthesis using dimensionality-reduced appearance space
CN112150450B (en) Image tampering detection method and device based on dual-channel U-Net model
CN113536990A (en) Deep fake face data identification method
Chowdhuri et al. Secured steganographic scheme for highly compressed color image using weighted matrix through DCT
CN113077377A (en) Color image steganography method based on generation countermeasure network
Su et al. Hierarchical image resampling detection based on blind deconvolution
Chen et al. JSNet: a simulation network of JPEG lossy compression and restoration for robust image watermarking against JPEG attack
Yang et al. Xception-based general forensic method on small-size images
Jana et al. A new DCT based robust image watermarking scheme using cellular automata
CN114936962B (en) One-to-one full text watermark encryption adding technology based on document
CN116912130A (en) Image defogging method based on multi-receptive field feature fusion and mixed attention
Wang et al. Deep neural network watermarking based on texture analysis
Nataraj et al. Seam carving detection and localization using two-stage deep neural networks
CN102184516A (en) Digital watermarking method based on 2DPCA (two-dimensional principal component analysis)
Mansour et al. A Robust Deep Learning-Based Video Watermarking Using Mosaic Generation.
Yakoh et al. Re‐shooting Resistant Blind Watermarking Framework Based on Feature Separation With Gaussian Mixture Model
Chadha et al. Image steganography using Karhunen-Loève transform and least bit substitution
Cristin et al. Image tampering detection in image forensics using earthworm‐rider optimization
Chen et al. A lenet based convolution neural network for image steganalysis on multiclass classification
Wang et al. Median filtering detection using LBP encoding pattern★
Kubal et al. Image Manipulation Detection Using Error Level Analysis and Deep Learning
CN112862655A (en) JPEG image steganalysis method based on channel space attention mechanism
Kim et al. Two‐stream neural networks to detect manipulation of JPEG compressed images
Siopi et al. A Multi-Stream Fusion Network for Image Splicing Localization
Mahale et al. Copy-Move Image Forgery Detection Using Discrete Wavelet Transform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240119

Address after: Room 0606, 6th Floor, Building A, Berlin International Business Center, No. 85 Binhe West Road, Wanbailin District, Taiyuan City, Shanxi Province, 030024

Patentee after: Forest Fantasy (Taiyuan) Digital Technology Co.,Ltd.

Address before: Room 302, Unit 2, Building 5, Agricultural Bank Residential Area, Nancheng District, Xinshi East Street, Jincheng City, Shanxi Province 048000

Patentee before: Jincheng Darui Jinma Engineering Design Consulting Co.,Ltd.
