CN114596187B - Double-domain robust watermark extraction method for diffusion weighted image - Google Patents

Double-domain robust watermark extraction method for diffusion weighted image

Info

Publication number
CN114596187B
CN114596187B (Application CN202210099627.1A)
Authority
CN
China
Prior art keywords
watermark
image
weighted image
diffusion
domain
Prior art date
Legal status
Active
Application number
CN202210099627.1A
Other languages
Chinese (zh)
Other versions
CN114596187A (en)
Inventor
李智
刘程萌
樊缤
Current Assignee
Guizhou University
Original Assignee
Guizhou University
Priority date
Filing date
Publication date
Application filed by Guizhou University filed Critical Guizhou University
Priority to CN202210099627.1A priority Critical patent/CN114596187B/en
Publication of CN114596187A publication Critical patent/CN114596187A/en
Application granted
Publication of CN114596187B publication Critical patent/CN114596187B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/0021Image watermarking
    • G06T1/005Robust watermarking, e.g. average attack or collusion attack resistant
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4053Super resolution, i.e. output image resolution higher than sensor resolution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2201/00General purpose image data processing
    • G06T2201/005Image watermarking
    • G06T2201/0051Embedding of the watermark in the spatial domain
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2201/00General purpose image data processing
    • G06T2201/005Image watermarking
    • G06T2201/0052Embedding of the watermark in the frequency domain
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2201/00General purpose image data processing
    • G06T2201/005Image watermarking
    • G06T2201/0065Extraction of an embedded watermark; Reliable detection

Abstract

The invention discloses a double-domain robust watermark extraction method for diffusion weighted images, which comprises the following steps: performing feature extraction on a diffusion weighted image I_o in the spatial domain, embedding a watermark W into the resulting entity features with an encoder, and reconstructing a diffusion weighted image containing the watermark W; performing frequency-domain transformation on the reconstructed image and extracting the global texture features of the diffusion weighted image from its spectral coefficients; taking the frequency-domain information of the diffusion weighted image I_o as prior information, redundantly embedding the watermark W into the global texture features with an encoder, and reconstructing a frequency-domain image; performing inverse frequency-domain transformation on the frequency-domain image to obtain a diffusion weighted image I_kspace embedded with the watermark W, and visually enhancing it with a BEGAN network; adding noise to the visually enhanced diffusion weighted image I_kspace to generate a noise image I_knoise; and extracting the watermark W from the noise image I_knoise with a watermark extraction decoder. The invention effectively improves the accuracy and robustness of watermark extraction.

Description

Double-domain robust watermark extraction method for diffusion weighted image
Technical Field
The invention relates to the technical field of robust watermark extraction, and in particular to a double-domain robust watermark extraction method for diffusion weighted images.
Background
Medical imaging technology is now mature, and medical images have become an important basis for doctors to assess a patient's condition. Diffusion-Weighted Imaging (DWI), currently the only modality that can noninvasively observe the movement of water molecules in living tissue, is of major clinical significance and research value for tasks such as brain segmentation and tumor detection. Because of the demands of telemedicine, large numbers of medical images must be transmitted over networks, but unprotected medical images are easily attacked and misused during transmission. This can prevent doctors from making a correct diagnosis, and unauthorized use of the images can leak patient privacy. To protect the integrity and reliability of medical images under remote-diagnosis requirements, it is urgent both to provide accurate and complete medical images to remote specialists and to prevent the images from being used by unauthorized persons. A robust watermarking algorithm oriented to DWI images can effectively realize copyright protection of diffusion weighted images.
In research on robust watermarking algorithms for medical images, the grayscale information of the medical image must not be damaged, which could lead to misdiagnosis, while the watermark information must remain strongly robust. How to embed a robust watermark while guaranteeing high medical-image quality has therefore become a major concern for researchers. In traditional robust watermarking algorithms for medical images, the watermark is usually embedded and extracted in the frequency domain of the image; frequency-domain embedding offers certain advantages in watermark imperceptibility, but such algorithms must be designed specifically for the characteristics of each attack mode and therefore lack generality.
Deep learning has changed this situation: medical-image robust watermarking algorithms based on deep learning frameworks embed the watermark information in the spatial domain of the image and then use a neural network to extract the watermark from the attacked watermarked image. This breaks through the limitation of traditional algorithms that a protection algorithm must be designed separately for each attack mode. However, current deep-learning-based robust watermarking algorithms for medical images usually embed and extract the watermark in a single frequency domain or a single spatial domain, and feature extraction in a single domain is usually insufficient, so the watermark embedding capacity is small, the watermark robustness is poor, and the quality of the image reconstructed after embedding the watermark information is low.
Disclosure of Invention
This section summarizes some aspects of embodiments of the invention and briefly introduces some preferred embodiments. Simplifications or omissions may be made in this section, in the abstract, and in the title of the application to avoid obscuring their purpose; such simplifications or omissions are not intended to limit the scope of the invention.
The present invention has been made in view of the above-mentioned problems.
In order to solve the above technical problems, the present invention provides the following technical solution: performing feature extraction on a diffusion weighted image I_o in the spatial domain to obtain the entity features of the diffusion weighted image I_o, embedding a watermark W into the entity features with an encoder, and reconstructing a diffusion weighted image containing the watermark W to obtain a diffusion weighted image I_ispace; performing frequency-domain transformation on the diffusion weighted image I_ispace and performing feature extraction on the spectral coefficients of the transformation to obtain the global texture features of the diffusion weighted image I_ispace; taking the frequency-domain information of the diffusion weighted image I_o as prior information, redundantly embedding the watermark W into the global texture features with an encoder, and reconstructing a frequency-domain image containing the watermark W; performing inverse frequency-domain transformation on the frequency-domain image to obtain a diffusion weighted image I_kspace embedded with the watermark W, and visually enhancing it with a BEGAN network; adding noise to the visually enhanced diffusion weighted image I_kspace to generate a noise image I_knoise; and extracting the watermark W from the noise image I_knoise with a watermark extraction decoder.
As a preferred scheme of the double-domain robust watermark extraction method for diffusion weighted images, reconstructing the diffusion weighted image containing the watermark W comprises: building a reconstruction module based on a super-resolution reconstruction algorithm; performing channel splicing on the entity features through the reconstruction module, forming pyramid features through an attention mechanism, and then convolving the pyramid features; and adding the convolution result to the entity features and obtaining the diffusion weighted image I_ispace through convolution.
As a preferred scheme of the double-domain robust watermark extraction method for diffusion weighted images, the method comprises: performing feature extraction on the diffusion weighted image I_o in the spatial domain using an SDRDB; wherein the SDRDB applies hole (dilated) convolutions with different dilation rates to the diffusion weighted image I_o to obtain entity features at n scales.
As a preferred scheme of the double-domain robust watermark extraction method for diffusion weighted images, the method comprises: during reconstruction, corresponding loss functions are respectively designed:

$$L_{mae}(I_{kspace}) = \frac{1}{B \times C \times K}\sum_{b=1}^{B}\sum_{c=1}^{C}\sum_{k=1}^{K}\left|I_{out}(b,c,k) - I_{o}(b,c,k)\right|$$

L_w = P(I_ispace) + L_mae(I_kspace)

L_en = L_w + L_BEGAN

where L_mae(I_kspace) is the loss function of the double-domain reconstruction, the two domains being the spatial domain and the frequency domain; P(I_ispace) is the loss function of the spatial-domain reconstruction; C is the image length, K is the image width, B is the number of images processed in a batch, I_out is the image after embedding the watermark W, P is the perceptual loss, L_BEGAN is the adversarial loss, and L_en is the total embedding loss of the watermark W.
As a preferred scheme of the double-domain robust watermark extraction method for diffusion weighted images, the method comprises: the watermark W is a 0/1 sequence.
As a preferred scheme of the double-domain robust watermark extraction method for diffusion weighted images: the watermark extraction decoder comprises a BasicBlock module and an attention mechanism; the abstraction level of the extracted features is raised by the BasicBlock module; the BasicBlock module is combined with the attention mechanism, the watermark features are extracted cyclically multiple times, and the watermark features are then fused by convolution so that the number of channels equals the length of the watermark sequence, yielding the watermark W.
As a preferred scheme of the double-domain robust watermark extraction method for diffusion weighted images, the method comprises: combining the BasicBlock module with the attention mechanism so that the extracted features learn the channel weights by themselves, where the channel weights are calculated as:

$$F_{weight} = \sigma\left(w_2\,\delta\left(w_1\,\frac{1}{W \times H}\sum_{i=1}^{W}\sum_{j=1}^{H} F_{basic}(i,j)\right)\right)$$

where F_weight is the channel weight, F_basic is the feature value output by the BasicBlock module, w_1 and w_2 are the parameters of the fully connected layers, δ is the relu activation function, σ is the sigmoid activation function, W is the feature-map width, H is the feature-map height, and i and j are the indices traversed in the summation.
As a preferred scheme of the double-domain robust watermark extraction method for diffusion weighted images, the method comprises: during gradient descent of the encoder, the diffusion weighted image I_ispace and the frequency-domain image share the parameters of the same watermark extraction decoder for training, as follows:

W_d1 = D(I_inoise), W_d2 = D(I_knoise)

L_bce(W_d) = -(W_o × lg(W_d) + (1 - W_o) × lg(1 - W_d))

L_de = L_bce(W_d1) + L_bce(W_d2)

where D is the watermark extraction decoder, L_de is the loss function of the watermark extraction decoder, W_d1 is the watermark extracted from I_inoise, W_d2 is the watermark extracted from I_knoise, W_d is the input to the loss, W_o is the watermark embedded in the image, and L_bce is the cross-entropy loss.
The invention has the beneficial effects that: the method re-extracts feature information from the context of global features at different scales using hole convolution, learns the distribution correlation of the watermark information by combining residual blocks with an attention mechanism, and effectively improves the accuracy and robustness of watermark extraction.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present invention, and other drawings can be obtained from them by those skilled in the art without inventive effort. Wherein:
FIG. 1 is a comparison graph of the reconstruction experiment according to the second embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, specific embodiments accompanied with figures are described in detail below, and it is apparent that the described embodiments are a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making creative efforts based on the embodiments of the present invention, shall fall within the protection scope of the present invention.
In the following description, numerous specific details are set forth to provide a thorough understanding of the present invention. However, the invention may be implemented in ways other than those specifically described here; those skilled in the art can make similar generalizations without departing from the spirit of the invention, and the invention is therefore not limited to the specific embodiments disclosed below.
Furthermore, reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one implementation of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
The present invention will be described in detail below with reference to the drawings. For convenience of illustration, the cross-sectional views illustrating the structure are not partially enlarged to a general scale; the drawings are only examples and should not be construed as limiting the scope of the present invention. In addition, the three dimensions of length, width, and depth should be included in actual fabrication.
Meanwhile, in the description of the present invention, it should be noted that the terms "upper, lower, inner and outer" and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of describing the present invention and simplifying the description, but do not indicate or imply that the referred device or element must have a specific orientation, be constructed in a specific orientation and operate, and thus, cannot be construed as limiting the present invention. Furthermore, the terms first, second, or third are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
The terms "mounted, connected" and "connected" in the present invention are to be construed broadly, unless otherwise explicitly specified or limited, for example: can be fixedly connected, detachably connected or integrally connected; they may be mechanically, electrically, or directly connected, or indirectly connected through intervening media, or may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
Example 1
This embodiment provides a double-domain robust watermark extraction method for diffusion weighted images, comprising the following steps:
s1: weighting diffusion image I in spatial domain o Carrying out feature extraction to obtain a diffusion weighted image I o Embedding watermark W in the entity characteristic by using an encoder, and reconstructing a diffusion weighted image containing watermark W to obtain a diffusion weighted image I ispace
(1) Feature extraction is performed on the diffusion weighted image I_o in the spatial domain using an SDRDB (a squeeze-and-excitation dilated residual dense block).
The SDRDB applies hole (dilated) convolutions with different dilation rates to the diffusion weighted image I_o to obtain entity features at n scales.
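As an illustration only, the following PyTorch sketch shows multi-scale feature extraction with dilated (hole) convolutions followed by a squeeze-and-excitation channel gate; the module name, channel counts, and the three dilation rates are assumptions made for this sketch and are not taken from the patent.

```python
# Minimal sketch (assumed shapes/dilation rates), not the patent's exact SDRDB.
import torch
import torch.nn as nn

class DilatedSEBlock(nn.Module):
    """Extracts multi-scale 'entity' features with dilated (hole) convolutions,
    then re-weights channels with a squeeze-and-excitation gate."""
    def __init__(self, in_ch=1, feat_ch=32, dilations=(1, 2, 4)):
        super().__init__()
        # One dilated conv per scale; padding keeps the spatial size unchanged.
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, feat_ch, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        ])
        fused = feat_ch * len(dilations)
        # Squeeze-and-excitation: global average pool -> FC -> ReLU -> FC -> sigmoid.
        self.se = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(fused, fused // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(fused // 4, fused, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        feats = [torch.relu(b(x)) for b in self.branches]   # n-scale features
        fused = torch.cat(feats, dim=1)                      # channel splicing
        return fused * self.se(fused)                        # channel re-weighting

# Example: a batch of 4 single-channel 128x128 DWI slices.
features = DilatedSEBlock()(torch.randn(4, 1, 128, 128))
print(features.shape)  # torch.Size([4, 96, 128, 128])
```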
(2) The watermark W is embedded into the entity features using an encoder.
The watermark W is a 0/1 sequence.
(3) Reconstruction.
It should be noted that Shannon's theorem indicates that information redundancy is a necessary condition for watermark robustness, but embedding the watermark inevitably distorts the image. Therefore, to guarantee image quality while still allowing the watermark W to be embedded redundantly, this embodiment reconstructs the diffusion weighted image containing the watermark W with a reconstruction module, specifically:
(1) Building a reconstruction module based on a super-resolution reconstruction algorithm;
the reconstruction module improves a reconstruction module DRDB in a super-resolution reconstruction algorithm to enable the reconstruction module DRDB to be embedded with a watermark W, and in order to reduce the influence of the watermark W on a reconstruction network, the reconstruction module replaces one characteristic channel with watermark information after the DRDB extracts complete local characteristics.
(2) Channel splicing is performed on the entity features through the reconstruction module, pyramid features are formed through an attention mechanism, and the pyramid features are then convolved; the convolution result is added to the entity features to prevent network degradation caused by an overly deep network, and the diffusion weighted image I_ispace is obtained through convolution.
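A minimal sketch of this splice-attend-convolve-and-add step follows; the layer sizes, the way the watermark replaces a feature channel, and the attention gate are illustrative assumptions, not the patent's exact reconstruction module.

```python
# Illustrative reconstruction step (assumed sizes), not the exact patent module.
import torch
import torch.nn as nn

class WatermarkReconstruction(nn.Module):
    def __init__(self, feat_ch=96, wm_len=64, img_size=128):
        super().__init__()
        self.img_size = img_size
        # Maps the 0/1 watermark sequence to one spatial feature channel.
        self.wm_proj = nn.Linear(wm_len, img_size * img_size)
        self.attn = nn.Sequential(            # simple spatial attention gate
            nn.Conv2d(feat_ch, 1, kernel_size=1), nn.Sigmoid())
        self.conv1 = nn.Conv2d(feat_ch, feat_ch, 3, padding=1)
        self.conv_out = nn.Conv2d(feat_ch, 1, 3, padding=1)

    def forward(self, feats, watermark):
        b = feats.size(0)
        # Replace one feature channel with the watermark plane after extraction.
        wm_plane = self.wm_proj(watermark).view(b, 1, self.img_size, self.img_size)
        spliced = torch.cat([feats[:, :-1], wm_plane], dim=1)   # channel splicing
        attended = spliced * self.attn(spliced)                  # attention weighting
        residual = self.conv1(attended) + spliced                # skip connection
        return self.conv_out(residual)                           # I_ispace (1 channel)

recon = WatermarkReconstruction()
img = recon(torch.randn(4, 96, 128, 128), torch.randint(0, 2, (4, 64)).float())
print(img.shape)  # torch.Size([4, 1, 128, 128])
```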
In order to ensure that the spatial-domain features and frequency-domain features are sufficiently learned during reconstruction, this embodiment designs corresponding loss functions for the two stages (spatial domain and double domain):

$$L_{mae}(I_{kspace}) = \frac{1}{B \times C \times K}\sum_{b=1}^{B}\sum_{c=1}^{C}\sum_{k=1}^{K}\left|I_{out}(b,c,k) - I_{o}(b,c,k)\right|$$

L_w = P(I_ispace) + L_mae(I_kspace)

L_en = L_w + L_BEGAN

where L_mae(I_kspace) is the loss function of the double-domain reconstruction, the two domains being the spatial domain and the frequency domain; P(I_ispace) is the loss function of the spatial-domain reconstruction; C is the image length, K is the image width, B is the number of images processed in a batch, I_out is the image after embedding the watermark W, P is the perceptual loss, L_BEGAN is the adversarial loss, and L_en is the total embedding loss of the watermark W.
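The sketch below shows one way these losses could be wired together in PyTorch; the exact form of the MAE term and the perceptual-loss backbone are assumptions, since the patent text only names them.

```python
# Hedged sketch of the embedding losses; perceptual(.) and began_adv(.) stand in
# for the perceptual loss P and the BEGAN adversarial loss named in the text.
import torch
import torch.nn.functional as F

def l_mae(i_out: torch.Tensor, i_o: torch.Tensor) -> torch.Tensor:
    """Mean absolute error over batch B and image size C x K (double-domain term)."""
    return (i_out - i_o).abs().mean()

def embedding_loss(i_ispace, i_kspace, i_o, perceptual, began_adv):
    """L_en = L_w + L_BEGAN with L_w = P(I_ispace) + L_mae(I_kspace)."""
    l_w = perceptual(i_ispace, i_o) + l_mae(i_kspace, i_o)
    return l_w + began_adv(i_kspace)

# Example with placeholder loss callables.
perceptual = lambda x, y: F.l1_loss(x, y)   # stand-in for the perceptual loss P
began_adv = lambda x: torch.zeros(())        # stand-in for the BEGAN adversarial loss
i_o = torch.rand(4, 1, 128, 128)
loss = embedding_loss(i_o + 0.01, i_o + 0.02, i_o, perceptual, began_adv)
print(float(loss))
```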
Preferably, by constructing the pyramid features, a sufficient range of receptive fields in the convolution is ensured.
S2: diffusion weighted image I ispace Carrying out frequency domain transformation, carrying out characteristic extraction on the frequency spectrum coefficient of the frequency domain transformation to obtain a diffusion weighted image I ispace The global texture feature of (1).
S3: diffusion weighted image I o The frequency domain information of (a) is used as prior information, and then the watermark W is redundantly embedded in the global texture characteristic by using an encoder, and a frequency domain image containing the watermark W is reconstructed.
S4: carrying out frequency domain inverse transformation on the frequency domain image to obtain a diffusion weighted image I embedded with a watermark W kspace And visually enhanced using the BEGAN network.
In order to further improve the similarity between the watermarked diffusion weighted image and the diffusion weighted image I_o, this embodiment uses the discriminator of the BEGAN network to perform visual enhancement on the diffusion weighted image I_kspace embedded with the watermark W.
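For reference, a compact sketch of a BEGAN-style adversarial objective is given below (autoencoder-style discriminator with the boundary-equilibrium update). The discriminator architecture and the hyperparameters gamma and lambda_k are generic BEGAN defaults, not values taken from the patent.

```python
# Generic BEGAN objective sketch; architecture and gamma/lambda_k are assumptions.
import torch
import torch.nn as nn

class AEDiscriminator(nn.Module):
    """BEGAN discriminator: a small convolutional autoencoder-style network
    (no bottleneck, for brevity); its loss is the reconstruction error."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ELU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ELU(),
            nn.Conv2d(ch, 1, 3, padding=1))

    def recon_error(self, x):
        return (self.net(x) - x).abs().mean()

def began_step(disc, real, fake, k, gamma=0.5, lambda_k=0.001):
    """Returns discriminator loss, generator-side loss, and the updated balance term k."""
    l_real = disc.recon_error(real)
    l_fake_d = disc.recon_error(fake.detach())   # detached for the discriminator update
    l_fake_g = disc.recon_error(fake)            # used as L_BEGAN on the encoder side
    d_loss = l_real - k * l_fake_d
    g_loss = l_fake_g
    k_next = k + lambda_k * (gamma * l_real - l_fake_g).item()
    return d_loss, g_loss, max(0.0, min(1.0, k_next))

disc = AEDiscriminator()
d_loss, g_loss, k = began_step(disc, torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64), k=0.0)
```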
S5: diffusion weighted image I after visual enhancement ksapce Adding noise to generate a noise image I knoise
S6: from noisy images I by means of watermark extraction decoders knoise Extracting the watermark W.
The watermark extraction decoder comprises a BasicBlock module and an attention mechanism; the specific extraction steps are as follows:
(1) The abstraction level of the extracted features is raised by the BasicBlock module.
in order to extract complete watermark features, corresponding watermark features need to be extracted from a spatial domain and a frequency domain respectively, a module for extracting features in the existing classification network generally increases the depth or width of the network to extract high-level semantic features, and the high-level semantic features are abstract features.
The watermark features to be extracted in the invention are in fact abstract features, so to improve the accuracy of watermark extraction this embodiment provides a watermark feature extraction module for extracting the watermark features in the spatial or frequency domain of the image, namely the BasicBlock module of ResNet.
(2) The BasicBlock module is combined with the attention mechanism, the watermark features are extracted cyclically multiple times, and the watermark features are then fused by convolution so that the number of channels equals the length of the watermark sequence, yielding the watermark W.
Existing feature extraction methods extract global features through top-down convolution, but this limits the receptive field of the convolution, so the extraction of global features is insufficient.
Specifically, the BasicBlock module is combined with the attention mechanism, the watermark features are extracted cyclically multiple times, and convolution is then used to fuse the watermark features so that their number of channels equals the length of the watermark sequence.
It should be noted that, in the process of extracting the watermark, this embodiment adds the extracted features together to obtain the complete watermark features, so that the weights of the watermark-information distribution in the spatial domain and the frequency domain (i.e. the channel weights) are learned automatically during training, thereby solving the problem that the two watermark embeddings conflict with each other.
Specifically, the BasicBlock module is combined with the attention mechanism so that the extracted features learn the channel weights by themselves, computed as follows:

$$F_{weight} = \sigma\left(w_2\,\delta\left(w_1\,\frac{1}{W \times H}\sum_{i=1}^{W}\sum_{j=1}^{H} F_{basic}(i,j)\right)\right)$$

where F_weight is the channel weight, F_basic is the feature value output by the BasicBlock module, w_1 and w_2 are the parameters of the fully connected layers, δ is the relu activation function, σ is the sigmoid activation function, W is the feature-map width, H is the feature-map height, and i and j are the indices traversed in the summation.
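The channel-weight computation above is a squeeze-and-excitation style gate; a direct PyTorch rendering is sketched below, with the reduction ratio of the two fully connected layers as an assumption.

```python
# Sketch of the channel-weight formula; the reduction ratio (4) is an assumption.
import torch
import torch.nn as nn

class ChannelWeight(nn.Module):
    def __init__(self, channels=64, reduction=4):
        super().__init__()
        self.w1 = nn.Linear(channels, channels // reduction)   # first FC layer (w_1)
        self.w2 = nn.Linear(channels // reduction, channels)   # second FC layer (w_2)

    def forward(self, f_basic: torch.Tensor) -> torch.Tensor:
        # (1 / (W*H)) * sum over spatial positions i, j  (global average pooling)
        squeeze = f_basic.mean(dim=(-2, -1))                   # (B, C)
        weight = torch.sigmoid(self.w2(torch.relu(self.w1(squeeze))))
        return f_basic * weight[:, :, None, None]              # re-weighted features

weighted = ChannelWeight()(torch.randn(2, 64, 32, 32))
```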
During training, if only the finally reconstructed watermarked image is considered, the watermarked image reconstructed in the spatial domain will be driven to minimize the L_mae(I_ispace) loss, which may reduce the watermark information embedded in the spatial domain as much as possible. To prevent this, this embodiment proposes a multi-feature training method: during gradient descent of the encoder, the diffusion weighted image I_ispace and the frequency-domain image share the parameters of the same watermark extraction decoder for training, as follows:
W_d1 = D(I_inoise), W_d2 = D(I_knoise)

L_bce(W_d) = -(W_o × lg(W_d) + (1 - W_o) × lg(1 - W_d))

L_de = L_bce(W_d1) + L_bce(W_d2)

where D is the watermark extraction decoder, L_de is the loss function of the watermark extraction decoder, W_d1 is the watermark extracted from I_inoise, W_d2 is the watermark extracted from I_knoise, W_d is the input to the loss, W_o is the watermark embedded in the image, and L_bce is the cross-entropy loss.
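A minimal sketch of this shared-decoder BCE objective follows; the decoder body is abbreviated to a few layers and the watermark length is an assumed value, so it illustrates the parameter sharing rather than the patent's exact decoder.

```python
# Shared-decoder BCE training sketch; decoder depth and watermark length are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WatermarkDecoder(nn.Module):
    """One decoder D whose parameters are shared by both noised images."""
    def __init__(self, wm_len=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, wm_len, 3, padding=1))
        # Fuse so the channel count equals the watermark length, then pool to a vector.
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        return torch.sigmoid(self.pool(self.features(x)).flatten(1))  # (B, wm_len)

def decoder_loss(decoder, i_inoise, i_knoise, w_o):
    """L_de = L_bce(W_d1) + L_bce(W_d2), with the same decoder D for both domains."""
    w_d1, w_d2 = decoder(i_inoise), decoder(i_knoise)
    return F.binary_cross_entropy(w_d1, w_o) + F.binary_cross_entropy(w_d2, w_o)

decoder = WatermarkDecoder()
w_o = torch.randint(0, 2, (2, 64)).float()
loss = decoder_loss(decoder, torch.rand(2, 1, 128, 128), torch.rand(2, 1, 128, 128), w_o)
```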
Example 2
In order to prove the effectiveness of the method, this embodiment verifies it in two directions, image reconstruction quality and watermark robustness, by adjusting the network framework and running experiments in the spatial domain, the frequency domain, and the double domain of the image.
In the comparison experiment on image reconstruction quality, the reconstruction module built in this embodiment is used as the watermark embedding framework in the spatial domain and in the frequency domain, with the watermark extraction decoder as the extraction module; the double-domain experiment uses the proposed method as the framework. The quantitative results are shown in Table 1, where experiment No. 1 embeds the watermark by convolution only in the spatial domain of the image, experiment No. 2 embeds it by convolution only in the frequency domain, and experiment No. 3 embeds it by convolution in the double domain; the qualitative results are shown in FIG. 1.
Table 1: Quantitative experimental results.
Serial number  Category  Average PSNR (dB)
1 Spatial domain 75.93
2 Frequency domain 75.00
3 Spatial domain + frequency domain 81.68
From Table 1 it can be seen that the average PSNR of the image obtained by training the method in the double domain is the highest: the PSNR of the watermarked image reconstructed by the method reaches 81.68 dB, the diffusion characteristic parameters change little, and the tensor-imaging requirements of DWI images are fully met.
To verify the effectiveness of the watermark extraction decoder, this embodiment performs a watermark robustness experiment under a cropping pixel-replacement attack with strength 0.2. The results are shown in Table 2: experiment No. 1 uses a decoder with only ResNet residual blocks, experiment No. 2 a decoder with a pyramid feature network and ResNet residual blocks, experiment No. 3 a decoder with ResNet residual blocks, a pyramid feature network, and an attention-mechanism network, and experiment No. 4 the watermark extraction decoder of the proposed method. Table 2 shows that the proposed decoder achieves the highest watermark extraction accuracy.
Table 2: Watermark extraction comparison.
Serial number  Category  Watermark extraction accuracy
1  ResNet residual block  97%
2 ResNet residual block + pyramid network 97.11%
3 ResNet residual block + pyramid network + attention mechanism 99.13%
4 Watermark extraction decoder 99.43%
Compared with existing robust watermarking algorithms (HiDDeN and DwiMark), the proposed method reconstructs higher image quality in the region of interest of the DWI image; the quantitative results are shown in Table 3.
Table 3: Quantitative results of different reconstruction methods.
Serial number  Network framework  Average PSNR (dB)
1 HiDDeN 39.08
2 DwiMark 54.86
3 Proposed algorithm 81.68
It should be recognized that embodiments of the present invention can be realized and implemented in computer hardware, a combination of hardware and software, or by computer instructions stored in a non-transitory computer readable memory. The methods may be implemented in a computer program using standard programming techniques, including a non-transitory computer-readable storage medium configured with the computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner according to the methods and figures described in the detailed description. Each program may be implemented in a high level procedural or object oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language. Furthermore, the program can be run on a programmed application specific integrated circuit for this purpose.
Further, the operations of processes described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The processes described herein (or variations and/or combinations thereof) may be performed under the control of one or more computer systems configured with executable instructions and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) collectively executed on one or more processors, by hardware, or combinations thereof. The computer program includes a plurality of instructions executable by one or more processors.
Further, the method may be implemented in any type of computing platform operatively connected to a suitable connection, including but not limited to a personal computer, mini computer, mainframe, workstation, networked or distributed computing environment, separate or integrated computer platform, or in communication with a charged particle tool or other imaging device, or the like. Aspects of the invention may be embodied in machine-readable code stored on a non-transitory storage medium or device, whether removable or integrated into a computing platform, such as a hard disk, optically read and/or write storage medium, RAM, ROM, or the like, such that it may be read by a programmable computer, which when read by the storage medium or device, is operative to configure and operate the computer to perform the procedures described herein. Further, the machine-readable code, or portions thereof, may be transmitted over a wired or wireless network. The invention described herein includes these and other different types of non-transitory computer-readable storage media when such media includes instructions or programs that implement the steps described above in conjunction with a microprocessor or other data processor. The invention also includes the computer itself when programmed according to the methods and techniques described herein. A computer program can be applied to input data to perform the functions described herein to transform the input data to generate output data that is stored to non-volatile memory. The output information may also be applied to one or more output devices, such as a display. In a preferred embodiment of the invention, the transformed data represents physical and tangible objects, including particular visual depictions of physical and tangible objects produced on a display.
As used in this application, the terms "component," "module," "system," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, or software in execution. For example, a component may be, but is not limited to being: a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of example, both an application running on a computing device and the computing device can be a component. One or more components can reside within a process and/or thread of execution and a component can be localized on one computer and/or distributed between two or more computers. In addition, these components can execute from various computer readable media having various data structures thereon. The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the internet with other systems by way of the signal).
It should be noted that the above-mentioned embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention, which should be covered by the claims of the present invention.

Claims (7)

1. A double-domain robust watermark extraction method for diffusion weighted images is characterized by comprising the following steps:
performing feature extraction on a diffusion weighted image I_o in the spatial domain to obtain the entity features of the diffusion weighted image I_o, embedding a watermark W into the entity features by using an encoder, and reconstructing a diffusion weighted image containing the watermark W to obtain a diffusion weighted image I_ispace;
performing frequency-domain transformation on the diffusion weighted image I_ispace, and performing feature extraction on the spectral coefficients to obtain the global texture features of the diffusion weighted image I_ispace;
taking the frequency-domain information of the diffusion weighted image I_o as prior information, then redundantly embedding the watermark W into the global texture features by using an encoder, and reconstructing a frequency-domain image containing the watermark W;
performing inverse frequency-domain transformation on the frequency-domain image to obtain a diffusion weighted image I_kspace embedded with the watermark W, and performing visual enhancement on it by using a BEGAN network;
adding noise to the visually enhanced diffusion weighted image I_kspace to generate a noise image I_knoise;
extracting the watermark W from the noise image I_knoise by using a watermark extraction decoder, wherein the watermark extraction decoder comprises a BasicBlock module and an attention mechanism; the abstraction level of the extracted features is raised by the BasicBlock module; the BasicBlock module is combined with the attention mechanism, the watermark features are extracted cyclically multiple times, and the watermark features are then fused by convolution so that the number of channels equals the length of the watermark sequence, obtaining the watermark W.
2. The double-domain robust watermark extraction method for diffusion weighted images according to claim 1, wherein reconstructing the diffusion weighted image containing the watermark W comprises:
building a reconstruction module based on a super-resolution reconstruction algorithm;
performing channel splicing on the entity features through the reconstruction module, forming pyramid features through an attention mechanism, and then convolving the pyramid features; and adding the convolution result to the entity features and obtaining the diffusion weighted image I_ispace through convolution.
3. The double-domain robust watermark extraction method for diffusion weighted images according to claim 1 or 2, comprising:
performing feature extraction on the diffusion weighted image I_o in the spatial domain by using an SDRDB;
wherein the SDRDB applies hole (dilated) convolutions with different dilation rates to the diffusion weighted image I_o to obtain entity features at n scales.
4. The double-domain robust watermark extraction method for diffusion weighted images according to claim 3, comprising:
during reconstruction, corresponding loss functions are respectively designed:
$$L_{mae}(I_{kspace}) = \frac{1}{B \times C \times K}\sum_{b=1}^{B}\sum_{c=1}^{C}\sum_{k=1}^{K}\left|I_{out}(b,c,k) - I_{o}(b,c,k)\right|$$

L_w = P(I_ispace) + L_mae(I_kspace)

L_en = L_w + L_BEGAN

wherein L_mae(I_kspace) is the loss function of the double-domain reconstruction, the two domains being the spatial domain and the frequency domain; P(I_ispace) is the loss function of the spatial-domain reconstruction; C is the image length, K is the image width, B is the number of images processed in a batch, I_out is the image after embedding the watermark W, P is the perceptual loss, L_BEGAN is the adversarial loss, and L_en is the total embedding loss of the watermark W.
5. The double-domain robust watermark extraction method for diffusion weighted images according to claim 4, comprising:
the watermark W is a 0/1 sequence.
6. The double-domain robust watermark extraction method for diffusion weighted images according to claim 5, comprising:
combining the BasicBlock module with the attention mechanism so that the extracted features learn the channel weights by themselves, wherein the channel weights are calculated as follows:
$$F_{weight} = \sigma\left(w_2\,\delta\left(w_1\,\frac{1}{W \times H}\sum_{i=1}^{W}\sum_{j=1}^{H} F_{basic}(i,j)\right)\right)$$

wherein F_weight is the channel weight, F_basic is the feature value output by the BasicBlock module, w_1 and w_2 are the parameters of the fully connected layers, δ is the relu activation function, σ is the sigmoid activation function, W is the feature-map width, H is the feature-map height, and i and j are the indices traversed in the summation.
7. The double-domain robust watermark extraction method for diffusion weighted images according to claim 6, comprising:
during gradient descent of the encoder, the diffusion weighted image I_ispace and the frequency-domain image share the parameters of the same watermark extraction decoder for training, as follows:
W_d1 = D(I_inoise), W_d2 = D(I_knoise)

L_bce(W_d) = -(W_o × lg(W_d) + (1 - W_o) × lg(1 - W_d))

L_de = L_bce(W_d1) + L_bce(W_d2)

where D is the watermark extraction decoder, L_de is the loss function of the watermark extraction decoder, W_d1 is the watermark extracted from I_inoise, W_d2 is the watermark extracted from I_knoise, W_d is the input to the loss, W_o is the watermark embedded in the image, and L_bce is the cross-entropy loss.
CN202210099627.1A 2022-01-27 2022-01-27 Double-domain robust watermark extraction method for diffusion weighted image Active CN114596187B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210099627.1A CN114596187B (en) 2022-01-27 2022-01-27 Double-domain robust watermark extraction method for diffusion weighted image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210099627.1A CN114596187B (en) 2022-01-27 2022-01-27 Double-domain robust watermark extraction method for diffusion weighted image

Publications (2)

Publication Number Publication Date
CN114596187A CN114596187A (en) 2022-06-07
CN114596187B true CN114596187B (en) 2023-04-07

Family

ID=81806279

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210099627.1A Active CN114596187B (en) 2022-01-27 2022-01-27 Double-domain robust watermark extraction method for diffusion weighted image

Country Status (1)

Country Link
CN (1) CN114596187B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116308985B (en) * 2023-05-23 2023-07-25 Guizhou University Robust watermarking method for diffusion tensor image

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110211015B (en) * 2018-02-28 2022-12-20 Foshan University of Science and Technology Watermark method based on characteristic object protection
CN109584178A (en) * 2018-11-29 2019-04-05 Tencent Technology (Shenzhen) Co., Ltd. Image repair method, device and storage medium
CN113033379A (en) * 2021-03-18 2021-06-25 Guizhou University Intra-frame forensics deep learning method based on two-stream CNN
CN113095987B (en) * 2021-03-26 2022-02-01 Guizhou University Robust watermarking method of diffusion weighted image based on multi-scale feature learning

Also Published As

Publication number Publication date
CN114596187A (en) 2022-06-07

Similar Documents

Publication Publication Date Title
Sorin et al. Creating artificial images for radiology applications using generative adversarial networks (GANs)–a systematic review
Gupta et al. Visibility improvement and mass segmentation of mammogram images using quantile separated histogram equalisation with local contrast enhancement
Madabhushi et al. New methods of MR image intensity standardization via generalized scale
CN112132959B (en) Digital rock core image processing method and device, computer equipment and storage medium
Wolterink et al. Generative adversarial networks: a primer for radiologists
CN109492668B (en) MRI (magnetic resonance imaging) different-phase multimode image characterization method based on multi-channel convolutional neural network
CN113095987B (en) Robust watermarking method of diffusion weighted image based on multi-scale feature learning
Bai et al. Probabilistic self‐learning framework for low‐dose CT denoising
JP2023540910A (en) Connected Machine Learning Model with Collaborative Training for Lesion Detection
Laino et al. Generative adversarial networks in brain imaging: A narrative review
CN112470190A (en) System and method for improving low dose volume contrast enhanced MRI
Li et al. Normalization of multicenter CT radiomics by a generative adversarial network method
WO2021061710A1 (en) Systems and methods for improving low dose volumetric contrast-enhanced mri
Koshino et al. Narrative review of generative adversarial networks in medical and molecular imaging
CN114596187B (en) Double-domain robust watermark extraction method for diffusion weighted image
CN111275686A (en) Method and device for generating medical image data for artificial neural network training
Gong et al. Robust medical zero‐watermarking algorithm based on Residual‐DenseNet
Wei et al. A robust image watermarking approach using cycle variational autoencoder
Fan et al. DwiMark: a multiscale robust deep watermarking framework for diffusion-weighted imaging images
Li et al. Low-dose CT image synthesis for domain adaptation imaging using a generative adversarial network with noise encoding transfer learning
Bauer et al. Integrated segmentation of brain tumor images for radiotherapy and neurosurgery
Kumar et al. Robust Medical Image Watermarking Scheme Using PSO, LWT, and Hessenberg Decomposition
Rezaee et al. A wavelet-based robust medical image watermarking technique using whale optimization algorithm for data exchange through internet of medical things
Reichman et al. Medical image tampering detection: A new dataset and baseline
Wang et al. DPAM-PSPNet: ultrasonic image segmentation of thyroid nodule based on dual-path attention mechanism

Legal Events

Date  Code  Title
PB01  Publication
SE01  Entry into force of request for substantive examination
GR01  Patent grant