CN115690014A - Medical image tampering detection and self-recovery method based on texture degree cross embedding - Google Patents

Medical image tampering detection and self-recovery method based on texture degree cross embedding

Info

Publication number
CN115690014A
CN115690014A (application CN202211278309.8A)
Authority
CN
China
Prior art keywords: block, pixel, ROI, sub, tampered
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211278309.8A
Other languages
Chinese (zh)
Inventor
石慧
颜克勋
周梓怡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Liaoning Normal University
Original Assignee
Liaoning Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Liaoning Normal University filed Critical Liaoning Normal University
Priority to CN202211278309.8A
Publication of CN115690014A
Legal status: Pending

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a medical image tampering detection and self-recovery method based on texture-degree cross embedding. The medical image is divided into a region of interest (ROI) and a region of non-interest (RONI). Within the ROI, texture complexity is computed from several features, sub-blocks are classified into texture blocks and smooth blocks, and a compressed-sensing technique extracts different features from the two kinds of blocks as recovery information. Pixel-level and block-level detection bits are set in the ROI. In the RONI, the recovery information is hidden using a reference matrix and a cross-embedding technique. A gradient histogram together with the DCT detects whether the whole image has suffered a copy-paste attack; tampered blocks in the ROI are detected and located at both the pixel level and the block level, and the recovery information is extracted to restore the tampered image.

Description

Medical image tampering detection and self-recovery method based on texture degree cross embedding
Technical Field
The invention relates to a medical image tampering detection and self-recovery method, in particular to a medical image tampering detection and self-recovery method based on texture degree cross embedding.
Background
5G networks give telemedicine far better connectivity, but the secure transmission of medical image data still faces malicious tampering, illegal copying, privacy disclosure and similar threats. If a maliciously tampered medical image is used for diagnosis, a serious crisis of public trust can follow, so research on medical image tamper detection (medical image integrity authentication) is imperative. Early tamper-detection algorithms only judged whether an image had been tampered with; current algorithms are expected both to locate the tampered region accurately and to recover it approximately.
A medical image can generally be divided into a region of interest (ROI) and a region of non-interest (RONI). Osborne et al. first proposed processing the ROI and RONI separately, compressing the RONI while preserving the image quality of the ROI; many later works extract ROI features as a watermark and embed it into the RONI.
Medical images can be tampered with in many ways, and copy-move (copy-paste) tampering is among the most common. Fridrich et al. first defined copy-paste tampering and proposed a block-matching detection algorithm; later works described image-block characteristics with the DWT, FWT and similar transforms. Block-based detection methods, however, are not very robust.
Huang et al. detect and describe key points of medical images with SIFT and search for similar feature vectors with the Best-Bin-First algorithm to determine the positions of matched key points; many SIFT- and SURF-based algorithms followed. Compared with block matching, key-point matching avoids a global search and is more robust, but it can be defeated when post-processing hides the tampering traces, and its detection efficiency is low.
Disclosure of Invention
The invention provides a medical image tampering detection and self-recovery method based on texture degree cross embedding, aiming at solving the problems in the prior art.
The technical solution of the invention is as follows: a medical image tampering detection method based on texture degree cross embedding is sequentially carried out according to the following steps:
step 1, dividing a medical image into an ROI area and an RONI area;
step 2, calculating texture complexity in the ROI from several features, dividing ROI sub-blocks into texture blocks and smooth blocks according to the texture complexity, and extracting different features from the two kinds of blocks with a compressed-sensing technique as recovery information;
step 3, setting pixel-level and block-level detection bits in the ROI;
step 4, restoring information hiding is realized in the RONI area based on a reference matrix and a cross embedding technology;
step 5, detecting image copying and pasting attacks;
step 6, dual tamper detection and localization in the ROI.
The step 1 is specifically as follows:
step 1.1, converting the medical image img_mark into the grayscale image img_origin;
step 1.2, identifying the position of the ROI edge line manually marked by the doctor, and in a binary matrix img_edge assigning the edge-line pixels to 1 and the remaining pixels to 0;
step 1.3, scanning the matrix img_edge and filling the region inside the edge with 1 to form the ROI region img_area; the remaining part is the RONI region;
step 1.4, dividing the ROI img_area into 4×4 blocks and checking whether any pixel value in a block is 1; if so, setting all pixels of that block to 1, otherwise leaving the block unchanged;
step 1.5, constructing auxiliary information aux_ROI_area, consisting of the number of ROI regions area_num and the upper-left and lower-right corner coordinates loc_LRs of each ROI region;
the step 2 is specifically as follows:
step 2.1, calculating the texture degree of each sub-block of the ROI region img_area:
step 2.1.1, calculating the sub-block energy according to formula (1):
J = Σ_{i=1}^{k} Σ_{j=1}^{k} g(i, j | d, θ)²  (1)
where g is the gray-level co-occurrence matrix, d and θ are the distance and direction between two gray levels, and k is the sub-block size;
step 2.1.2, calculating the sub-block entropy according to formula (2):
H = −Σ_{i=1}^{k} Σ_{j=1}^{k} g(i, j | d, θ) log g(i, j | d, θ)  (2)
step 2.1.3, calculating the sub-block contrast according to formula (3):
D = Σ_{i=1}^{k} Σ_{j=1}^{k} (i − j)² g(i, j | d, θ)  (3)
step 2.1.4, assigning different weights, derived from the mean square errors, to the energy, entropy and contrast values of all the sub-blocks, and calculating the final feature values J, H and D;
step 2.1.5, assigning weights w_1, w_2, w_3 to the three final feature values and computing the texture complexity f, where w_1, w_2, w_3 are determined by a global optimization algorithm:
f = w_1 × J + w_2 × H + w_3 × D  (4)
step 2.2, if the texture complexity f of a sub-block is greater than the threshold T_c, the sub-block is a texture block, otherwise it is a smooth block;
step 2.3, generating a position map map_ROI_complexity in which 1 and 0 mark texture blocks and smooth blocks respectively, compressing it with Huffman coding, storing the compressed length in 14 binary bits, and splicing the length information onto the auxiliary information to form aux_complexity;
step 2.4, partitioning each 4×4 sub-block of the ROI region in a checkerboard pattern into two disjoint parts B_m^(1) and B_m^(2), the first part stored in forward order and the second part in reverse order:
B_m^(1) = {P(i, j) | (i + j) mod 2 = 0},  B_m^(2) = {P(i, j) | (i + j) mod 2 = 1}  (5)
step 2.5, taking out the first part (stored in forward order) and applying a different kind of compressed sensing according to the texture class of the block:
step 2.5.1, for a texture block, calculating the average ave_m of all pixels, the high average h_m and the low average l_m as in formulas (6)-(8), where N_p is the number of pixels considered, x_n is the number of pixels above the average and P_m is a pixel value:
ave_m = (1/N_p) Σ P_m  (6)
h_m = (1/x_n) Σ_{P_m ≥ ave_m} P_m  (7)
l_m = (1/(N_p − x_n)) Σ_{P_m < ave_m} P_m  (8)
marking pixels at the high average h_m as 1 and pixels at the low average l_m as 0 to generate the bitmap block_loc_map, and combining the bitmap with the binary high average h_m and binary low average l_m to generate the sub-block recovery information recovery_part_1;
step 2.5.2, for a smooth block, calculating the average with formula (6), converting it to binary and using it as the recovery information recovery_part_1;
step 2.6, taking out the second part (stored in reverse order), repeating steps 2.1-2.4 to judge the texture class of the block, compressing texture blocks and smooth blocks as in 2.5.1 or 2.5.2 respectively, and finally generating the binary recovery information recovery_part_2;
step 2.7, compressing {recovery_part_1, recovery_part_2} separately with Huffman coding and storing the compressed lengths {recovery_part_1_length, recovery_part_2_length} in 20 binary bits;
the step 3 is specifically as follows:
step 3.1, partitioning each 4×4 sub-block of the ROI region in a checkerboard pattern into the two disjoint parts B_m^(1) and B_m^(2) as in step 2.4;
step 3.2, constructing the check bits of the first part:
step 3.2.1, first converting each pixel value P_m to binary form with formula (9); for the pixels of the first part of the sub-block, except the last pixel, keeping the first 7 bits unchanged and setting the last bit to zero; for the last pixel, keeping the first 6 bits unchanged and setting the last two bits to zero;
P_m = (b_1 b_2 … b_8)_2  (9)
step 3.2.2, counting the 0s and 1s among the first 7 bits of each pixel with formula (10); if there are more 1s the check bit is 1, otherwise 0, and the check bit P_I is placed in the 8th bit of the pixel;
P_I = 1 if Σ_{i=1}^{7} b_i ≥ 4, otherwise P_I = 0  (10)
step 3.2.3, XORing bitwise the first 7 bits of every pixel except the last one to form the check bit B_I, which is assigned to the 7th bit of the last pixel;
step 3.3, constructing the check bits of the second part:
step 3.3.1, for the pixels of the second part of the sub-block, keeping the first 7 bits of each pixel unchanged and setting the last bit to zero;
step 3.3.2, converting the first seven bits of the first pixel to decimal, dividing by 128 to obtain a decimal fraction between 0 and 1, keeping three decimal places and generating the first parameter para_1;
step 3.3.3, converting the first 7 bits of the other pixels to decimal and summing them to generate the second parameter para_2;
step 3.3.4, letting Z_0 = para_1 and a = para_2, and generating a random sequence with the ICMIC map as in formula (11):
Z_{n+1} = sin(a / Z_n)  (11)
step 3.3.5, sorting the first 4 distinct values of the random sequence, converting the sort index values 0-3 into 2-bit binary numbers and splicing them into 8-bit block-level check bits, which are placed in the last bit of each pixel;
the step 4 is specifically as follows:
step 4.1, constructing the embedded reference matrix C as in formulas (12) and (13), where i and j both belong to [0,255], the initial value is C(0,0)=0, and the constraint is that the values within every 3×3 block of C are 0-8 without repetition;
C(i+1, j) = (C(i, j) + 1) mod 9  (12)
C(i, j+1) = (C(i, j) + 3) mod 9  (13)
step 4.2, dividing the RONI region into 2×2 sub-blocks; in each sub-block the upper-left and lower-right pixels form one pixel pair and the lower-left and upper-right pixels form another;
step 4.3, embedding the auxiliary information aux_ROI_area into the LSBs of rows 1 and 2 of the image;
step 4.4, merging the auxiliary information aux_complexity and the recovery information recovery_part_1 as the first part of the information to be embedded, in forward order; recovery_part_2 is the second part of the information to be embedded, in reverse order;
step 4.5, converting every 3 binary bits of the information to be embedded into one octal digit;
step 4.6, calculating the difference X_d of a sub-block pixel pair with formula (14), X_L and X_R being the two pixel values of the pair;
X_d = X_L − X_R  (14)
step 4.7, performing substitution embedding with the reference matrix C according to the relation between the difference and the threshold;
if X_d is less than or equal to the threshold T_dif, proceeding as in (1)-(3) below:
(1) overflow handling: if the pixel pair contains the value 0 or 255, 0 is set to 1 and 255 to 254;
(2) locating: mapping the pixel pair (X_L, X_R) to the coordinates of the reference matrix C and finding the corresponding point R_t(X_L, X_R);
(3) embedding: within the 3×3 matrix centred at R_t(X_L, X_R), finding the position of the octal digit b to be embedded and its coordinates (X_L′, X_R′); the original pixel pair (X_L, X_R) is modified to (X_L′, X_R′) with formula (15), where M ∈ [−1, 1] and i, j are position indices;
(X_L′, X_R′) = (X_L ± M_i, X_R ± M_j)  if C(X_L ± M_i, X_R ± M_j) == b  (15)
if X_d is greater than the threshold T_dif, proceeding as in (1)-(4) below:
(1) overflow handling: if the pixel pair contains the value 0 or 255, 0 is set to 2 and 255 to 253;
(2) locating: mapping the pixel pair (X_L, X_R) to the reference matrix C and finding the corresponding point R_t(X_L, X_R);
(3) determining the embedding range: for the octal digit b to be embedded, the range is chosen according to the sign of X_d:
if X_d is positive, searching for the position of b within the rectangle from (X_L, X_R) to (X_L+2, X_R−2);
if X_d is negative, searching for the position of b within the rectangle from (X_L, X_R) to (X_L−2, X_R+2);
(4) embedding: finding the corresponding coordinates (X_L′, X_R′) with formula (16) and modifying the original pixel pair (X_L, X_R) to (X_L′, X_R′), where U ∈ [0, 2] and V ∈ [−2, 0] when X_d is positive (the roles of U and V are exchanged when X_d is negative) and i, j are position indices;
(X_L′, X_R′) = (X_L + U_i, X_R + V_j)  if C(X_L + U_i, X_R + V_j) == b  (16)
the step 5 is specifically as follows:
step 5.1, dividing the whole image into 4×4 sub-blocks p and calculating the sub-block gradient magnitude and gradient direction with formulas (17)-(20):
p_h = (p(i, j−1) − p(i, j+1)) + (p(i+1, j) − p(i−1, j))  (17)
p_o = (p(i−1, j−1) − p(i+1, j+1)) + (p(i+1, j−1) − p(i−1, j+1))  (18)
grad_size(f) = |p_h| + |p_o|  (19)
grad_dir(f) = arctan(p_o / p_h)  (20)
step 5.2, scanning the blocks in an overlapping manner with a scanning step of 1;
step 5.3, dividing each 4×4 sub-block into four 2×2 sub-units, calculating the gradient magnitude and direction within each sub-unit and expressing them with a gradient histogram;
step 5.4, constructing the sub-block feature:
concatenating the gradient histograms of the four sub-units to form sub-block feature 1; then applying the DCT to the 4×4 sub-block and taking the value in the first row and first column (upper-left corner) of the coefficient matrix as sub-block feature 2; the two features together form the sub-block feature character_all;
step 5.5, subtracting the feature values of all 4×4 blocks pairwise to obtain dif_value_ij, where i ≠ j, as in formula (21):
dif_value_ij = character_all_i − character_all_j  (21)
step 5.6, setting the threshold mark_yy and comparing the difference dif_value_ij with it; formula (22) decides whether a block is a copy-paste tampered block: if dif_value_ij is less than or equal to mark_yy the two blocks are the same, i.e. a copy-paste tampered block, and the detection result S_ij is set to 1; otherwise S_ij is set to 0, indicating a non-copy-paste tampered block;
S_ij = 1 if dif_value_ij ≤ mark_yy, otherwise S_ij = 0  (22)
step 5.7, marking the copy-paste tampered blocks belonging to the ROI region: d_i = {d_i | i = 1, 2, …, n};
step 5.8, further locating the copy-paste tampered blocks of the ROI region with a neighbourhood method:
counting the tampered blocks in the 8-neighbourhood of every ROI sub-block; if the count is greater than or equal to 5 the block is judged tampered, otherwise not, as in formula (23), where N_8(d_i) denotes the number of tampered blocks in the 8-neighbourhood of d_i;
d_i is judged tampered if N_8(d_i) ≥ 5, otherwise not tampered  (23)
the step 6 is specifically as follows:
step 6.1, extracting the ROI auxiliary information aux_ROI_area from the LSBs of the first and second rows of the image, and determining the ROI regions from the number of regions area_num and the upper-left and lower-right corner coordinates loc_LRs of each region;
step 6.2, partitioning each 4×4 sub-block of the ROI region according to formula (5) into the two disjoint parts B_m^(1) and B_m^(2), the first part stored in forward order and the second part in reverse order;
step 6.3, first tamper detection and localization:
step 6.3.1, taking, in forward order, the 7th bit of the last pixel of the first part of the sub-block to obtain the detection bit B_I;
step 6.3.2, XORing bitwise the first 7 bits of the remaining pixels to obtain the detection bit B_I′;
step 6.3.3, block-level detection: comparing B_I and B_I′; if they differ the sub-block is judged tampered, otherwise not, realizing block-level tamper detection and localization;
step 6.3.4, extracting the last bit of each pixel as the detection bit P_I;
step 6.3.5, counting the 1s and 0s among the first 7 bits of each pixel; if there are more 1s the detection bit P_I′ is 1, otherwise 0;
step 6.3.6, comparing P_I′ and P_I; if they differ the pixel is judged tampered, otherwise not, realizing pixel-level tamper detection and localization;
step 6.3.7, generating a position map LP_1 of the same size as the ROI region of the image, marking whether each pixel is tampered: tampered pixels are marked 1, otherwise 0;
step 6.4, second tamper detection and localization:
step 6.4.1, taking, in reverse order, the 8th bit of every pixel of the second part of the sub-block and splicing them together to form W_I;
step 6.4.2, taking the first 7 bits of the first pixel, converting them to decimal, dividing by 128 to obtain a decimal fraction between 0 and 1, keeping three decimal places and generating the first parameter para_1;
step 6.4.3, converting the first 7 bits of the other pixels to decimal and summing them to generate the second parameter para_2;
step 6.4.4, letting Z_0 = para_1 and a = para_2, and generating a random sequence with the ICMIC map as in formula (11);
step 6.4.5, sorting the first 4 distinct values of the random sequence, converting the sort index values 0-3 into 2-bit binary numbers and splicing them into the 8-bit block-level check bits W_I′;
step 6.4.6, judging whether W_I′ and W_I are the same, and generating a position map LP_2 of the same size as the ROI region of the image, marking whether each pixel is tampered: tampered pixels are marked 1, otherwise 0;
step 6.5, merging the position maps LP_1 and LP_2 into LP: if LP_1 or LP_2 marks a pixel as tampered it is regarded as tampered and set to 1, otherwise it is regarded as untampered and set to 0, as in formula (24);
LP = LP_1 ∨ LP_2  (24)
step 6.6, further determining the tampered blocks with directional bands:
for each sub-block, four directional bands are defined, namely (S, SW, W), (W, NW, N), (N, NE, E) and (E, SE, S), where S denotes the block to the south of the sub-block, W the block to its west, SW the block to its south-west, and so on; a block is considered tampered only if all four of its directional bands have been tampered with, otherwise it is considered untampered, as in formula (25), where N_4(d_i) denotes the number of tampered directional bands among the 4 bands of d_i;
d_i is judged tampered if N_4(d_i) = 4, otherwise not tampered  (25)
a tamper recovery method corresponding to the above medical image tampering detection method based on texture-degree cross embedding is carried out according to the following steps:
step 7.1, constructing the embedded reference matrix C as in formulas (12) and (13), where i and j both belong to [0,255], the initial value is C(0,0)=0, and the values within every 3×3 block of C are 0-8 without repetition;
step 7.2, dividing the RONI region into 2×2 sub-blocks; in each sub-block the upper-left and lower-right pixels form one pixel pair and the lower-left and upper-right pixels form another;
step 7.3, mapping each pixel pair (X_L, X_R) to the coordinates of the reference matrix C and extracting the corresponding value R_t(X_L, X_R) to obtain the secret information;
step 7.4, taking the first 5 extracted digits, converting them to decimal and computing the length of the compressed position map;
step 7.5, continuing to extract the compressed position map of that length and decompressing it into the texture-classification position map;
step 7.6, extracting the secret information of the first part in forward order and the information of the second part in reverse order;
step 7.7, determining the compressed-sensing class from the marks of the position map: a texture block is represented by 24 bits and a smooth block by 8 bits; for a texture block the 24 bits are split into 8+8+8 bits, the first 8 bits representing the high average, the second 8 bits the low average, and the third 8 bits the position bitmap of the value distribution within the block;
step 7.8, for a texture block, replacing the 1s in the position bitmap with the high average and the 0s with the low average to form the restored block; for a smooth block, converting the 8 bits directly to decimal and replacing the first part of pixels in the block, completing the image recovery.
The invention determines the embedding mode from the relation between the pixels, so the embedded information adapts to the content of the medical image: more information is embedded where the pixel difference is large and less where it is small. This raises the embedding capacity while enhancing the contrast of the medical image, giving a better visual effect and aiding the doctor's diagnosis. The invention provides three detection mechanisms at the block level and the pixel level, so that after any attack the tampered region of the image can be located accurately. The gradient histogram and the DCT detect whether the whole image has suffered a copy-paste attack, tampered blocks in the ROI are detected and located at the pixel level and the block level, and the recovery information is extracted to restore the tampered image.
Drawings
Fig. 1 is a general flowchart of an embodiment of the present invention, fig. 1 (a) is an embedding flowchart, and fig. 1 (b) is a detection and recovery flowchart.
Fig. 2 shows the ROI region information of an embodiment of the present invention: fig. 2(a) shows the ROI boundary line manually marked by the doctor, and figs. 2(b) and 2(c) show the ROI auxiliary information.
FIG. 3 is a schematic diagram of the texture-block/smooth-block partition according to an embodiment of the present invention.
FIG. 4 is a schematic view of checkerboard cross-segmentation in accordance with an embodiment of the present invention.
FIG. 5 is a flow chart of compressed sensing according to an embodiment of the present invention.
FIG. 6 is a flow chart of constructing the check bits according to an embodiment of the present invention.
Fig. 7 is a reference matrix diagram of an embodiment of the invention.
Fig. 8 is a schematic diagram of an embodiment of the invention.
FIG. 9 shows histograms of the watermarked images and the original images according to an embodiment of the invention.
FIG. 10 illustrates the overlapping block scan according to an embodiment of the present invention.
Fig. 11 shows the results of resisting the copy-paste attack according to the embodiment of the present invention.
Detailed Description
Fig. 1 shows the medical image tampering detection and self-recovery method based on texture-degree cross embedding, where fig. 1(a) is the embedding flow chart and fig. 1(b) is the detection and recovery flow chart. The method is performed sequentially according to the following steps:
Step 1, dividing the medical image into an ROI region and an RONI region:
step 1.1, converting the medical image img_mark into the grayscale image img_origin;
step 1.2, identifying the position of the ROI edge line manually marked by the doctor, and in a binary matrix img_edge assigning the edge-line pixels to 1 and the remaining pixels to 0, as shown in fig. 2(a);
step 1.3, scanning the matrix img_edge and filling the region inside the edge with 1 to form the ROI region img_area; the remaining part is the RONI region;
step 1.4, dividing the ROI img_area into 4×4 blocks and checking whether any pixel value in a block is 1; if so, setting all pixels of that block to 1, otherwise leaving the block unchanged;
step 1.5, constructing auxiliary information aux_ROI_area, consisting of the number of ROI regions area_num and the upper-left and lower-right corner coordinates loc_LRs of each ROI region, as shown in fig. 2(b);
The ROI/RONI division results and the embeddable rates of the medical images are shown in Table 1.
TABLE 1
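As an illustration of step 1, the following Python/NumPy sketch builds the ROI mask from the doctor-drawn contour and snaps it to 4×4 blocks. It is a minimal sketch under stated assumptions, not the patented implementation: the function name, the use of scipy.ndimage.binary_fill_holes for the interior fill, and the boolean mask representation are choices added here for illustration.

```python
import numpy as np
from scipy.ndimage import binary_fill_holes

def build_roi_mask(img_edge: np.ndarray, block: int = 4) -> np.ndarray:
    """Fill the doctor-drawn contour (1 = edge pixel) and snap the ROI to 4x4 blocks."""
    roi = binary_fill_holes(img_edge.astype(bool))           # interior of the contour -> 1
    h, w = roi.shape
    mask = np.zeros_like(roi)
    # blocks that contain any ROI pixel become ROI blocks (step 1.4);
    # any remainder rows/columns smaller than one block are left out of the ROI
    for r in range(0, h - h % block, block):
        for c in range(0, w - w % block, block):
            if roi[r:r + block, c:c + block].any():
                mask[r:r + block, c:c + block] = True
    return mask.astype(np.uint8)
```

The RONI region is then simply the complement of the returned mask.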
Step 2, calculating texture complexity in the ROI from several features, dividing ROI sub-blocks into texture blocks and smooth blocks according to the texture complexity, and extracting different features from the two kinds of blocks with a compressed-sensing technique as recovery information:
step 2.1, calculating the texture degree of each sub-block of the ROI region img_area:
step 2.1.1, calculating the sub-block energy according to formula (1):
J = Σ_{i=1}^{k} Σ_{j=1}^{k} g(i, j | d, θ)²  (1)
where g is the gray-level co-occurrence matrix, d and θ are the distance and direction between two gray levels, and k is the sub-block size;
step 2.1.2, calculating the sub-block entropy according to formula (2):
H = −Σ_{i=1}^{k} Σ_{j=1}^{k} g(i, j | d, θ) log g(i, j | d, θ)  (2)
step 2.1.3, calculating the sub-block contrast according to formula (3):
D = Σ_{i=1}^{k} Σ_{j=1}^{k} (i − j)² g(i, j | d, θ)  (3)
step 2.1.4, assigning different weights, derived from the mean square errors, to the energy, entropy and contrast values of all the sub-blocks, and calculating the final feature values J, H and D;
for example, the final feature value J is obtained by weighting the sub-block energy values with weights derived from the sub-block mean square errors MSE_m, normalized over the k_num sub-blocks;
step 2.1.5, assigning weights w_1, w_2, w_3 to the three final feature values and computing the texture complexity f, where w_1, w_2, w_3 are determined by a global optimization algorithm:
f = w_1 × J + w_2 × H + w_3 × D  (4)
step 2.2, if the texture complexity f of a sub-block is greater than the threshold T_c, the sub-block is a texture block, otherwise it is a smooth block; here T_c = 0.7, as shown in fig. 3;
step 2.3, generating a position map map_ROI_complexity in which 1 and 0 mark texture blocks and smooth blocks respectively, compressing it with Huffman coding, storing the compressed length in 14 binary bits, and splicing the length information onto the auxiliary information to form aux_complexity, as shown in fig. 2(c);
step 2.4, partitioning each 4×4 sub-block of the ROI region in a checkerboard pattern into two disjoint parts B_m^(1) and B_m^(2), as shown in fig. 4; the first part is stored in forward order and the second part in reverse order:
B_m^(1) = {P(i, j) | (i + j) mod 2 = 0},  B_m^(2) = {P(i, j) | (i + j) mod 2 = 1}  (5)
step 2.5, taking out the first part (stored in forward order) and applying a different kind of compressed sensing according to the texture class of the block, as shown in fig. 5:
step 2.5.1, for a texture block, calculating the average ave_m of all pixels, the high average h_m and the low average l_m as in formulas (6)-(8), where N_p is the number of pixels considered, x_n is the number of pixels above the average and P_m is a pixel value:
ave_m = (1/N_p) Σ P_m  (6)
h_m = (1/x_n) Σ_{P_m ≥ ave_m} P_m  (7)
l_m = (1/(N_p − x_n)) Σ_{P_m < ave_m} P_m  (8)
marking pixels at the high average h_m as 1 and pixels at the low average l_m as 0 to generate the bitmap block_loc_map, and combining the bitmap with the binary high average h_m and binary low average l_m to generate the sub-block recovery information recovery_part_1;
step 2.5.2, for a smooth block, calculating the average with formula (6), converting it to binary and using it as the recovery information recovery_part_1;
step 2.6, taking out the second part (stored in reverse order), repeating steps 2.1-2.4 to judge the texture class of the block, compressing texture blocks and smooth blocks as in 2.5.1 or 2.5.2 respectively, and finally generating the binary recovery information recovery_part_2;
step 2.7, compressing {recovery_part_1, recovery_part_2} separately with Huffman coding and storing the compressed lengths {recovery_part_1_length, recovery_part_2_length} in 20 binary bits;
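The following Python sketch illustrates the two core computations of step 2: the GLCM-based texture features behind formula (4) and the high/low-average compression of formulas (6)-(8). It is only a sketch under assumptions: it takes a pre-computed co-occurrence matrix g and one 8-pixel checkerboard half of a 4×4 sub-block, and the placeholder weights, rounding and bit ordering are illustrative rather than the patent's normative definition.

```python
import numpy as np

def glcm_features(g: np.ndarray):
    """Energy J, entropy H and contrast D of a gray-level co-occurrence matrix g."""
    p = g / max(g.sum(), 1e-12)                              # normalize to probabilities
    i, j = np.indices(p.shape)
    J = float(np.sum(p ** 2))                                # energy, formula (1)
    H = float(-np.sum(p[p > 0] * np.log2(p[p > 0])))         # entropy, formula (2)
    D = float(np.sum((i - j) ** 2 * p))                      # contrast, formula (3)
    return J, H, D

def texture_complexity(J, H, D, w=(0.4, 0.3, 0.3)):
    """Formula (4); the weights here are placeholders for the optimized w1, w2, w3."""
    return w[0] * J + w[1] * H + w[2] * D

def compress_part(part: np.ndarray, is_texture: bool) -> str:
    """Recovery bits of one 8-pixel checkerboard half (formulas (6)-(8))."""
    ave = part.mean()
    if not is_texture:                                       # smooth block: 8-bit mean only
        return format(int(round(ave)), '08b')
    hi, lo = part[part >= ave], part[part < ave]
    h_m = int(round(hi.mean())) if hi.size else 0            # high average
    l_m = int(round(lo.mean())) if lo.size else 0            # low average
    loc = ''.join('1' if v >= ave else '0' for v in part)    # high/low position bitmap
    return format(h_m, '08b') + format(l_m, '08b') + loc     # 8 + 8 + 8 = 24 bits
```

These 24 bits per texture block and 8 bits per smooth block are exactly the quantities re-read during recovery in steps 7.7-7.8.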
Step 3, setting pixel-level and block-level detection bits in the ROI:
step 3.1, as in step 2.4, partitioning each 4×4 sub-block of the ROI region in a checkerboard pattern into the two disjoint parts B_m^(1) and B_m^(2);
step 3.2, constructing the check bits of the first part, as shown in fig. 6(a):
step 3.2.1, first converting each pixel value P_m to binary form with formula (9); for the pixels of the first part of the sub-block, except the last pixel, keeping the first 7 bits unchanged and setting the last bit to zero; for the last pixel, keeping the first 6 bits unchanged and setting the last two bits to zero;
P_m = (b_1 b_2 … b_8)_2  (9)
step 3.2.2, counting the 0s and 1s among the first 7 bits of each pixel with formula (10); if there are more 1s the check bit is 1, otherwise 0, and the check bit P_I is placed in the 8th bit of the pixel;
P_I = 1 if Σ_{i=1}^{7} b_i ≥ 4, otherwise P_I = 0  (10)
step 3.2.3, XORing bitwise the first 7 bits of every pixel except the last one to form the check bit B_I, which is assigned to the 7th bit of the last pixel;
step 3.3, constructing the check bits of the second part, as shown in fig. 6(b):
step 3.3.1, for the pixels of the second part of the sub-block, keeping the first 7 bits of each pixel unchanged and setting the last bit to zero;
step 3.3.2, converting the first seven bits of the first pixel to decimal, dividing by 128 to obtain a decimal fraction between 0 and 1, keeping three decimal places and generating the first parameter para_1;
step 3.3.3, converting the first 7 bits of the other pixels to decimal and summing them to generate the second parameter para_2;
step 3.3.4, letting Z_0 = para_1 and a = para_2, and generating a random sequence with the ICMIC map as in formula (11):
Z_{n+1} = sin(a / Z_n)  (11)
step 3.3.5, sorting the first 4 distinct values of the random sequence, converting the sort index values 0-3 into 2-bit binary numbers and splicing them into 8-bit block-level check bits, which are placed in the last bit of each pixel;
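A sketch of the check-bit construction of step 3 is given below. The majority-vote pixel bit and the XOR-based block bit follow formulas (9), (10) and step 3.2.3; the exact form of the ICMIC map, the folding of the 7-bit XOR into a single bit, and the ranking interpretation in block_level_bits are assumptions, since the patent only names the map and the bit positions.

```python
import math

def pixel_check_bit(p: int) -> int:
    """Formula (10): 1 if the first 7 bits (the 7 MSBs) of the pixel contain more 1s than 0s."""
    return 1 if bin((p >> 1) & 0x7F).count('1') >= 4 else 0

def block_check_bit(pixels) -> int:
    """Step 3.2.3: XOR of the first 7 bits of all pixels except the last one,
    folded to a single parity bit before writing it to bit 7 of the last pixel (assumption)."""
    acc = 0
    for p in pixels[:-1]:
        acc ^= (p >> 1) & 0x7F
    return bin(acc).count('1') & 1

def icmic_sequence(z0: float, a: float, n: int = 8):
    """Assumed ICMIC chaotic map Z_{k+1} = sin(a / Z_k) used in steps 3.3.4 / 6.4.4."""
    seq, z = [], z0
    for _ in range(n):
        z = math.sin(a / z) if z != 0 else math.sin(a)
        seq.append(z)
    return seq

def block_level_bits(seq) -> str:
    """Step 3.3.5: ranks of the first four distinct values, 2 bits each -> 8 bits.
    Assumes the sequence contains at least four distinct values."""
    first4 = list(dict.fromkeys(seq))[:4]
    order = sorted(range(4), key=lambda k: first4[k])   # argsort of the four values
    rank = [order.index(k) for k in range(4)]            # rank of each value after sorting
    return ''.join(format(r, '02b') for r in rank)
```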
Step 4, hiding the recovery information in the RONI region based on the reference matrix and the cross-embedding technique:
step 4.1, constructing the embedded reference matrix C as in formulas (12) and (13), where i and j both belong to [0,255], the initial value is C(0,0)=0, and the values within every 3×3 block of C are 0-8 without repetition, as shown in fig. 7;
C(i+1, j) = (C(i, j) + 1) mod 9  (12)
C(i, j+1) = (C(i, j) + 3) mod 9  (13)
step 4.2, dividing the RONI region into 2×2 sub-blocks; in each sub-block the upper-left and lower-right pixels form one pixel pair and the lower-left and upper-right pixels form another;
step 4.3, embedding the auxiliary information aux_ROI_area into the LSBs of rows 1 and 2 of the image;
step 4.4, merging the auxiliary information aux_complexity and the recovery information recovery_part_1 as the first part of the information to be embedded, in forward order (i.e. from left to right and top to bottom); recovery_part_2 is the second part of the information to be embedded, in reverse order (i.e. from right to left and bottom to top);
step 4.5, converting every 3 binary bits of the information to be embedded into one octal digit;
step 4.6, calculating the difference X_d of a sub-block pixel pair with formula (14), X_L and X_R being the two pixel values of the pair;
X_d = X_L − X_R  (14)
step 4.7, performing substitution embedding with the reference matrix C according to the relation between the difference and the threshold;
if X_d is less than or equal to the threshold T_dif, proceeding as in (1)-(3) below:
(1) overflow handling: if the pixel pair contains the value 0 or 255, 0 is set to 1 and 255 to 254;
(2) locating: mapping the pixel pair (X_L, X_R) to the coordinates of the reference matrix C and finding the corresponding point R_t(X_L, X_R);
(3) embedding: within the 3×3 matrix centred at R_t(X_L, X_R), finding the position of the octal digit b to be embedded and its coordinates (X_L′, X_R′); the original pixel pair (X_L, X_R) is modified to (X_L′, X_R′) with formula (15), where M ∈ [−1, 1] and i, j are position indices;
(X_L′, X_R′) = (X_L ± M_i, X_R ± M_j)  if C(X_L ± M_i, X_R ± M_j) == b  (15)
For example, suppose a pixel pair (4, 4), a secret digit 1 and a threshold T_dif = 3. Since X_d = 4 − 4 = 0 < T_dif, the position of the secret digit is found at the coordinates (5, 5) and the original pixel pair (4, 4) is modified to the marked pixel pair (5, 5), as shown in fig. 8(a).
if X_d is greater than the threshold T_dif, proceeding as in (1)-(4) below:
(1) overflow handling: if the pixel pair contains the value 0 or 255, 0 is set to 2 and 255 to 253;
(2) locating: mapping the pixel pair (X_L, X_R) to the reference matrix C and finding the corresponding point R_t(X_L, X_R);
(3) determining the embedding range: for the octal digit b to be embedded, the range is chosen according to the sign of X_d:
if X_d is positive, searching for the position of b within the rectangle from (X_L, X_R) to (X_L+2, X_R−2);
if X_d is negative, searching for the position of b within the rectangle from (X_L, X_R) to (X_L−2, X_R+2);
(4) embedding: finding the corresponding coordinates (X_L′, X_R′) with formula (16) and modifying the original pixel pair (X_L, X_R) to (X_L′, X_R′), where U ∈ [0, 2] and V ∈ [−2, 0] when X_d is positive (the roles of U and V are exchanged when X_d is negative) and i, j are position indices, as shown in fig. 8(b);
(X_L′, X_R′) = (X_L + U_i, X_R + V_j)  if C(X_L + U_i, X_R + V_j) == b  (16)
Histograms of the watermarked images and the original images are compared in fig. 9; in fig. 9, (a1-n1) are the original image histograms and (a2-n2) the watermarked image histograms.
The PSNR, SSIM and NCC values of the watermarked image and the original image are shown in table 2.
TABLE 2
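The reference matrix of formulas (12)-(13) has the closed form C(i, j) = (i + 3j) mod 9, which guarantees that every 3×3 window contains the digits 0-8 exactly once. The Python sketch below builds the matrix and performs the small-difference embedding of formula (15); the search order inside the 3×3 window and the row/column orientation of the matrix are assumptions here (the patent fixes them through fig. 7).

```python
import numpy as np

def build_reference_matrix(size: int = 256) -> np.ndarray:
    """Reference matrix C with C(0,0)=0, C(i+1,j)=(C(i,j)+1) mod 9, C(i,j+1)=(C(i,j)+3) mod 9."""
    i, j = np.indices((size, size))
    return (i + 3 * j) % 9                          # closed form of the two recurrences

def embed_digit_small_diff(xl: int, xr: int, b: int, C: np.ndarray):
    """Embed one octal digit b into a pixel pair whose difference does not exceed T_dif:
    search the 3x3 neighbourhood of (XL, XR) in C for the cell equal to b (formula (15))."""
    xl = min(max(xl, 1), 254)                       # overflow handling: 0 -> 1, 255 -> 254
    xr = min(max(xr, 1), 254)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if C[xl + di, xr + dj] == b:
                return xl + di, xr + dj
    return xl, xr                                   # unreachable: every 3x3 window holds 0..8 once
```

For a pair whose difference exceeds T_dif, the same search is applied over the larger rectangle of formula (16), which both hides a digit and widens the pixel difference, consistent with the contrast-enhancement effect claimed above.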
Step 5, detecting image copy-paste attacks:
step 5.1, to resist copy-paste attacks the detection is not limited to the ROI region: the whole image is divided into 4×4 sub-blocks p and the sub-block gradient magnitude and gradient direction are calculated with formulas (17)-(20):
p_h = (p(i, j−1) − p(i, j+1)) + (p(i+1, j) − p(i−1, j))  (17)
p_o = (p(i−1, j−1) − p(i+1, j+1)) + (p(i+1, j−1) − p(i−1, j+1))  (18)
grad_size(f) = |p_h| + |p_o|  (19)
grad_dir(f) = arctan(p_o / p_h)  (20)
step 5.2, scanning the blocks in an overlapping manner with a scanning step of 1, as shown in fig. 10;
step 5.3, dividing each 4×4 sub-block into four 2×2 sub-units, calculating the gradient magnitude and direction within each sub-unit and expressing them with a gradient histogram;
step 5.4, constructing the sub-block feature:
concatenating the gradient histograms of the four sub-units to form sub-block feature 1; then applying the DCT to the 4×4 sub-block and taking the value in the first row and first column (upper-left corner) of the coefficient matrix as sub-block feature 2; the two features together form the sub-block feature character_all;
step 5.5, subtracting the feature values of all 4×4 blocks pairwise to obtain dif_value_ij, where i ≠ j, as in formula (21):
dif_value_ij = character_all_i − character_all_j  (21)
step 5.6, setting the threshold mark_yy and comparing the difference dif_value_ij with it; formula (22) decides whether a block is a copy-paste tampered block: if dif_value_ij is less than or equal to mark_yy the two blocks are the same, i.e. a copy-paste tampered block, and the detection result S_ij is set to 1; otherwise S_ij is set to 0, indicating a non-copy-paste tampered block;
S_ij = 1 if dif_value_ij ≤ mark_yy, otherwise S_ij = 0  (22)
step 5.7, marking the copy-paste tampered blocks belonging to the ROI region: d_i = {d_i | i = 1, 2, …, n};
step 5.8, further locating the copy-paste tampered blocks of the ROI region with a neighbourhood method:
counting the tampered blocks in the 8-neighbourhood of every ROI sub-block; if the count is greater than or equal to 5 the block is judged tampered, otherwise not, as in formula (23), where N_8(d_i) denotes the number of tampered blocks in the 8-neighbourhood of d_i;
d_i is judged tampered if N_8(d_i) ≥ 5, otherwise not tampered  (23)
Fig. 11 shows the results of resisting the copy-paste attack. In fig. 11, (a, e, i) are three original images, (b, f, j) the images after the copy-paste attack, (c, g, k) the images with the identified tampered regions, and (d, h, l) the recovered images.
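A sketch of the block feature of steps 5.1-5.4 follows. The per-pixel differences implement formulas (17)-(18) on an edge-padded block; the use of arctan2 for the gradient direction, the 8-bin magnitude-weighted histogram, and scipy.fft.dctn for the DCT DC term are assumptions about details the patent leaves to formulas (19)-(20) and fig. 10.

```python
import numpy as np
from scipy.fft import dctn

def block_feature(block: np.ndarray, n_bins: int = 8) -> np.ndarray:
    """Feature of a 4x4 block: gradient-direction histograms of its four 2x2 sub-units
    concatenated with the upper-left DCT coefficient."""
    p = np.pad(block.astype(float), 1, mode='edge')
    # per-pixel differences of formulas (17) and (18), evaluated over the whole 4x4 block
    ph = (p[1:-1, :-2] - p[1:-1, 2:]) + (p[2:, 1:-1] - p[:-2, 1:-1])
    po = (p[:-2, :-2] - p[2:, 2:]) + (p[2:, :-2] - p[:-2, 2:])
    mag = np.abs(ph) + np.abs(po)                      # gradient magnitude, formula (19)
    ang = np.arctan2(po, ph)                           # assumed form of formula (20)
    feats = []
    for r in (0, 2):
        for c in (0, 2):                               # four 2x2 sub-units
            hist, _ = np.histogram(ang[r:r + 2, c:c + 2], bins=n_bins,
                                   range=(-np.pi, np.pi),
                                   weights=mag[r:r + 2, c:c + 2])
            feats.append(hist)
    dc = dctn(block.astype(float), norm='ortho')[0, 0]  # first-row, first-column DCT value
    return np.concatenate(feats + [np.array([dc])])
```

Two blocks are then declared a copy-paste pair when the difference of their features is below mark_yy (formula (22)), and the 8-neighbourhood vote of formula (23) removes isolated false matches.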
Step 6, dual tamper detection and localization in the ROI:
step 6.1, extracting the ROI auxiliary information aux_ROI_area from the LSBs of the first and second rows of the image, and determining the ROI regions from the number of regions area_num and the upper-left and lower-right corner coordinates loc_LRs of each region;
step 6.2, partitioning each 4×4 sub-block of the ROI region according to formula (5) into the two disjoint parts B_m^(1) and B_m^(2), the first part stored in forward order and the second part in reverse order;
step 6.3, first tamper detection and localization:
step 6.3.1, taking, in forward order, the 7th bit of the last pixel of the first part of the sub-block to obtain the detection bit B_I;
step 6.3.2, XORing bitwise the first 7 bits of the remaining pixels to obtain the detection bit B_I′;
step 6.3.3, block-level detection: comparing B_I and B_I′; if they differ the sub-block is judged tampered, otherwise not, realizing block-level tamper detection and localization;
step 6.3.4, extracting the last bit of each pixel as the detection bit P_I;
step 6.3.5, counting the 1s and 0s among the first 7 bits of each pixel; if there are more 1s the detection bit P_I′ is 1, otherwise 0;
step 6.3.6, comparing P_I′ and P_I; if they differ the pixel is judged tampered, otherwise not, realizing pixel-level tamper detection and localization;
step 6.3.7, generating a position map LP_1 of the same size as the ROI region of the image, marking whether each pixel is tampered: tampered pixels are marked 1, otherwise 0;
step 6.4, second tamper detection and localization:
step 6.4.1, taking, in reverse order, the 8th bit of every pixel of the second part of the sub-block and splicing them together to form W_I;
step 6.4.2, taking the first 7 bits of the first pixel, converting them to decimal, dividing by 128 to obtain a decimal fraction between 0 and 1, keeping three decimal places and generating the first parameter para_1;
step 6.4.3, converting the first 7 bits of the other pixels to decimal and summing them to generate the second parameter para_2;
step 6.4.4, letting Z_0 = para_1 and a = para_2, and generating a random sequence with the ICMIC map as in formula (11);
step 6.4.5, sorting the first 4 distinct values of the random sequence, converting the sort index values 0-3 into 2-bit binary numbers and splicing them into the 8-bit block-level check bits W_I′;
step 6.4.6, judging whether W_I′ and W_I are the same, and generating a position map LP_2 of the same size as the ROI region of the image, marking whether each pixel is tampered: tampered pixels are marked 1, otherwise 0;
step 6.5, merging the position maps LP_1 and LP_2 into LP: if LP_1 or LP_2 marks a pixel as tampered it is regarded as tampered and set to 1, otherwise it is regarded as untampered and set to 0, as in formula (24);
LP = LP_1 ∨ LP_2  (24)
step 6.6, further determining the tampered blocks with directional bands:
for each sub-block, four directional bands are defined, namely (S, SW, W), (W, NW, N), (N, NE, E) and (E, SE, S), where S denotes the block to the south of the sub-block, W the block to its west, SW the block to its south-west, and so on; a block is considered tampered only if all four of its directional bands have been tampered with, otherwise it is considered untampered, as in formula (25), where N_4(d_i) denotes the number of tampered directional bands among the 4 bands of d_i;
d_i is judged tampered if N_4(d_i) = 4, otherwise not tampered  (25)
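The following sketch shows how the two pixel-level maps are merged (formula (24)) and how the directional-band rule of formula (25) is applied at the block level. Treating a band as "tampered" when any of its three neighbouring blocks is tampered, and the concrete north/south/east/west offsets, are assumptions made only for this illustration.

```python
import numpy as np

def merge_and_refine(lp1: np.ndarray, lp2: np.ndarray, block: int = 4) -> np.ndarray:
    """Merge LP1 and LP2 (formula (24)), then keep a block tampered only if all four
    directional bands around it also contain tampering (formula (25)).
    Assumes the map size is a multiple of the block size."""
    lp = (lp1.astype(bool) | lp2.astype(bool))
    h, w = lp.shape
    bh, bw = h // block, w // block
    tampered = lp.reshape(bh, block, bw, block).any(axis=(1, 3))   # block-level map
    refined = np.zeros_like(tampered)
    # directional bands (S,SW,W), (W,NW,N), (N,NE,E), (E,SE,S) as (row, col) block offsets
    bands = [((1, 0), (1, -1), (0, -1)), ((0, -1), (-1, -1), (-1, 0)),
             ((-1, 0), (-1, 1), (0, 1)), ((0, 1), (1, 1), (1, 0))]
    for r in range(bh):
        for c in range(bw):
            if not tampered[r, c]:
                continue
            hit = sum(
                any(0 <= r + dr < bh and 0 <= c + dc < bw and tampered[r + dr, c + dc]
                    for dr, dc in band)
                for band in bands)
            refined[r, c] = (hit == 4)
    return refined
```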
the tamper recovery method corresponding to the medical image tamper detection method based on texture degree cross embedding of the present invention, as shown in fig. 1 (b), is performed according to the following steps:
step 7.1 constructs an embedded reference matrix C, as shown in equation (12, 13), where i and j both belong to [0,255], sets an initial value C (0, 0) =0, with the constraint that the values in each 3 × 3 block in the reference matrix C are not repeated and are 0-8;
step 7.2, dividing the RONI area into 2 multiplied by 2 sub-blocks, forming a pixel pair by the upper left and the lower right of each sub-block, and forming another pixel pair by the lower left and the upper right;
step 7.3 by pixel pair (X) L ,X R ) Corresponding to the reference matrix C in a coordinate mode, and extracting a corresponding value R t (X L ,X R ) Obtaining secret information;
step 7.4, taking the first 5 extracted information, converting the extracted information into a decimal system, and calculating the length of the compressed position map;
7.5, continuously extracting the compressed position map with the same length from the back, decompressing to form a texture classification position map;
step 7.6, extracting the secret information of the first part in a positive sequence, and extracting the information of the second part in a reverse sequence;
7.7, determining the classification of compressed sensing according to the marks of the position map, wherein texture blocks are represented by 24 bits, and smooth blocks are represented by 8 bits; if the block is a texture block, splitting 24 bits into 8+8 bits, the first 8 bits representing a high average value, the second 8 bits representing a low average value, and the third 8 bits representing a value distribution position diagram in the corresponding block;
7.8, for the texture block, replacing 1 in the position map with the high average value, and replacing 0 with the low average value to finally form a reduction block; for the flat sliding block, directly converting 8 bits into the first part of pixels in the decimal replacing block, and completing the image recovery.
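Finally, a sketch of the extraction and block-restoration side of steps 7.3 and 7.7-7.8, reusing the reference matrix C built above and mirroring compress_part. The 8+8+8-bit layout for texture blocks and the plain 8-bit mean for smooth blocks follow step 7.7; everything else (names, dtypes) is an illustrative assumption.

```python
import numpy as np

def extract_digit(xl: int, xr: int, C: np.ndarray) -> int:
    """Step 7.3: read one hidden octal digit from a RONI pixel pair as the value of C at (XL, XR)."""
    return int(C[xl, xr])

def rebuild_block_part(bits: str, is_texture: bool, n: int = 8) -> np.ndarray:
    """Steps 7.7-7.8: rebuild one checkerboard half of a tampered 4x4 sub-block from its
    recovery bits (24 bits = high mean, low mean, position bitmap; 8-bit mean otherwise)."""
    if not is_texture:
        return np.full(n, int(bits[:8], 2), dtype=np.uint8)        # smooth: repeat the mean
    h_m, l_m = int(bits[:8], 2), int(bits[8:16], 2)
    loc = bits[16:16 + n]
    return np.array([h_m if c == '1' else l_m for c in loc], dtype=np.uint8)
```

The restored half is written back into the checkerboard positions of the tampered 4×4 sub-block; the other half is restored from recovery_part_2 in the same way.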
Comparison data between the restored images and the original images are shown in Table 3, and a comparison with the reference methods [1]-[3] is shown in Table 4.
TABLE 3
TABLE 4
[1] Geetha R, Geetha S. Embedding electronic patient information in clinical images: an improved and efficient reversible data hiding technique. Multimedia Tools and Applications, 2020, 79(8).
[2] Geetha R, Geetha S. Improved reversible data embedding in medical images using I-IWT and pairwise pixel difference expansion. In: Bhattacharyya P, Sastry H, Marriboyina V, Sharma R (eds) Smart and Innovative Trends in Next Generation Computing Technologies (NGCT 2017). Communications in Computer and Information Science, vol 828. Springer, Singapore. https://doi.org/10.1007/978-981-10-8660-1_45
[3] Yang Y, Zhang W M, Liang D, et al. A ROI-based high capacity reversible data hiding scheme with contrast enhancement for medical images. Multimedia Tools and Applications, 2018, 77(14): 18043-18065. DOI: 10.1007/s11042-017-4444-0

Claims (3)

1. A medical image tampering detection method based on texture degree cross embedding, characterized by being carried out sequentially according to the following steps:
step 1, dividing a medical image into an ROI (region of interest) and an RONI (region of non-interest);
step 2, calculating texture complexity in the ROI from several features, dividing ROI sub-blocks into texture blocks and smooth blocks according to the texture complexity, and extracting different features from the two kinds of blocks with a compressed-sensing technique as recovery information;
step 3, setting pixel-level and block-level detection bits in the ROI;
step 4, hiding the recovery information in the RONI region based on a reference matrix and a cross-embedding technique;
step 5, detecting image copy-paste attacks;
step 6, performing dual tamper detection and localization in the ROI.
2. The medical image tampering detection method based on texture degree cross embedding of claim 1, characterized in that:
the step 1 is specifically as follows:
step 1.1, converting the medical image img_mark into the grayscale image img_origin;
step 1.2, identifying the position of the ROI edge line manually marked by the doctor, and in a binary matrix img_edge assigning the edge-line pixels to 1 and the remaining pixels to 0;
step 1.3, scanning the matrix img_edge and filling the region inside the edge with 1 to form the ROI region img_area; the remaining part is the RONI region;
step 1.4, dividing the ROI img_area into 4×4 blocks and checking whether any pixel value in a block is 1; if so, setting all pixels of that block to 1, otherwise leaving the block unchanged;
step 1.5, constructing auxiliary information aux_ROI_area, consisting of the number of ROI regions area_num and the upper-left and lower-right corner coordinates loc_LRs of each ROI region;
the step 2 is specifically as follows:
step 2.1, calculating the texture degree of each sub-block of the ROI region img_area:
step 2.1.1, calculating the sub-block energy according to formula (1):
J = Σ_{i=1}^{k} Σ_{j=1}^{k} g(i, j | d, θ)²  (1)
where g is the gray-level co-occurrence matrix, d and θ are the distance and direction between two gray levels, and k is the sub-block size;
step 2.1.2, calculating the sub-block entropy according to formula (2):
H = −Σ_{i=1}^{k} Σ_{j=1}^{k} g(i, j | d, θ) log g(i, j | d, θ)  (2)
step 2.1.3, calculating the sub-block contrast according to formula (3):
D = Σ_{i=1}^{k} Σ_{j=1}^{k} (i − j)² g(i, j | d, θ)  (3)
step 2.1.4, assigning different weights, derived from the mean square errors, to the energy, entropy and contrast values of all the sub-blocks, and calculating the final feature values J, H and D;
step 2.1.5, assigning weights w_1, w_2, w_3 to the three final feature values and computing the texture complexity f, where w_1, w_2, w_3 are determined by a global optimization algorithm:
f = w_1 × J + w_2 × H + w_3 × D  (4)
step 2.2, if the texture complexity f of a sub-block is greater than the threshold T_c, the sub-block is a texture block, otherwise it is a smooth block;
step 2.3, generating a position map map_ROI_complexity in which 1 and 0 mark texture blocks and smooth blocks respectively, compressing it with Huffman coding, storing the compressed length in 14 binary bits, and splicing the length information onto the auxiliary information to form aux_complexity;
step 2.4, partitioning each 4×4 sub-block of the ROI region in a checkerboard pattern into two disjoint parts B_m^(1) and B_m^(2), the first part stored in forward order and the second part in reverse order:
B_m^(1) = {P(i, j) | (i + j) mod 2 = 0},  B_m^(2) = {P(i, j) | (i + j) mod 2 = 1}  (5)
step 2.5, taking out the first part (stored in forward order) and applying a different kind of compressed sensing according to the texture class of the block:
step 2.5.1, for a texture block, calculating the average ave_m of all pixels, the high average h_m and the low average l_m as in formulas (6)-(8), where N_p is the number of pixels considered, x_n is the number of pixels above the average and P_m is a pixel value:
ave_m = (1/N_p) Σ P_m  (6)
h_m = (1/x_n) Σ_{P_m ≥ ave_m} P_m  (7)
l_m = (1/(N_p − x_n)) Σ_{P_m < ave_m} P_m  (8)
marking pixels at the high average h_m as 1 and pixels at the low average l_m as 0 to generate the bitmap block_loc_map, and combining the bitmap with the binary high average h_m and binary low average l_m to generate the sub-block recovery information recovery_part_1;
step 2.5.2, for a smooth block, calculating the average with formula (6), converting it to binary and using it as the recovery information recovery_part_1;
step 2.6, taking out the second part (stored in reverse order), repeating steps 2.1-2.4 to judge the texture class of the block, compressing texture blocks and smooth blocks as in 2.5.1 or 2.5.2 respectively, and finally generating the binary recovery information recovery_part_2;
step 2.7, compressing {recovery_part_1, recovery_part_2} separately with Huffman coding and storing the compressed lengths {recovery_part_1_length, recovery_part_2_length} in 20 binary bits;
the step 3 is specifically as follows:
step 3.1, partitioning each 4×4 sub-block of the ROI region in a checkerboard pattern into the two disjoint parts B_m^(1) and B_m^(2) as in step 2.4;
step 3.2, constructing the check bits of the first part:
step 3.2.1, first converting each pixel value P_m to binary form with formula (9); for the pixels of the first part of the sub-block, except the last pixel, keeping the first 7 bits unchanged and setting the last bit to zero; for the last pixel, keeping the first 6 bits unchanged and setting the last two bits to zero;
P_m = (b_1 b_2 … b_8)_2  (9)
step 3.2.2, counting the 0s and 1s among the first 7 bits of each pixel with formula (10); if there are more 1s the check bit is 1, otherwise 0, and the check bit P_I is placed in the 8th bit of the pixel;
P_I = 1 if Σ_{i=1}^{7} b_i ≥ 4, otherwise P_I = 0  (10)
step 3.2.3, XORing bitwise the first 7 bits of every pixel except the last one to form the check bit B_I, which is assigned to the 7th bit of the last pixel;
step 3.3, constructing the check bits of the second part:
step 3.3.1, for the pixels of the second part of the sub-block, keeping the first 7 bits of each pixel unchanged and setting the last bit to zero;
step 3.3.2, converting the first seven bits of the first pixel to decimal, dividing by 128 to obtain a decimal fraction between 0 and 1, keeping three decimal places and generating the first parameter para_1;
step 3.3.3, converting the first 7 bits of the other pixels to decimal and summing them to generate the second parameter para_2;
step 3.3.4, letting Z_0 = para_1 and a = para_2, and generating a random sequence with the ICMIC map as in formula (11):
Z_{n+1} = sin(a / Z_n)  (11)
step 3.3.5, sorting the first 4 distinct values of the random sequence, converting the sort index values 0-3 into 2-bit binary numbers and splicing them into 8-bit block-level check bits, which are placed in the last bit of each pixel;
the step 4 is specifically as follows:
step 4.1, constructing an embedded reference matrix C, as shown in formula (12, 13), where i and j both belong to [0,255], setting an initial value C (0, 0) =0, and constraining conditions that values in each 3 × 3 block in the reference matrix C are not repeated and are 0 to 8;
C(i+1,j)=(C(i,j)+1)mod9 (12)
C(i,j+1)=(C(i,j)+3)mod9 (13)
step 4.2, dividing the RONI area into 2 multiplied by 2 sub-blocks, forming a pixel pair by the upper left and the lower right of each sub-block, and forming another pixel pair by the lower left and the upper right;
step 4.3 embedding the auxiliary information aux _ ROI _ area into LSB of line 1,2 of the image;
step 4.4, the auxiliary information aux _ complexity and the recovery information recovery _ part _1 are combined to be used as the first part of information to be embedded in the positive sequence; the reduction _ part _2 is used as a second part to be embedded into the information to be embedded in a reverse order;
step 4.5, converting each 3-bit binary system of the information to be embedded into a 1-bit octal system;
step 4.6 calculating the difference X between sub-block pixel pairs by equation (14) d ,X L And X R Two pixel values corresponding to the pixel pair, respectively;
X d =X L -X R (14)
step 4.7, performing replacement embedding with the reference matrix C according to the relation between the difference and the threshold;
if X_d is less than or equal to the threshold T_dif, performing the following (1) to (3):
(1) Overflow handling
If there is a pixel value of 0 or 255 in the pixel pair, 0 is set to 1 and 255 is set to 254;
(2) Positioning
mapping the pixel pair (X_L, X_R) to coordinates in the reference matrix C and finding the corresponding point R_t(X_L, X_R);
(3) Embedding
in the 3 × 3 matrix centered at R_t(X_L, X_R), finding the position whose value equals the octal digit b to be embedded, obtaining the corresponding coordinate (X_L′, X_R′), and modifying the original pixel pair (X_L, X_R) to (X_L′, X_R′) using formula (15), where M ∈ [−1, 1] and i, j are position indices;
(X_L′, X_R′) = (X_L ± M_i, X_R ± M_j), if C(X_L ± M_i, X_R ± M_j) == b   (15)
if X_d is greater than the threshold T_dif, performing the following (1) to (4):
(1) Overflow handling
If there is a pixel value of 0 or 255 in the pixel pair, set the 0 value to 2 and 255 to 253;
(2) Positioning
mapping the pixel pair (X_L, X_R) to coordinates in the reference matrix C and finding the corresponding point R_t(X_L, X_R);
(3) Determining an embedding range
For the octal digit b to be embedded, determining the embedding range according to the sign of X_d:
if X_d is positive, searching for the position of b within the rectangular range from (X_L, X_R) to (X_L + 2, X_R − 2);
if X_d is negative, searching for the position of b within the rectangular range from (X_L, X_R) to (X_L − 2, X_R + 2);
(4) Embedding
finding the corresponding coordinate (X_L′, X_R′) according to formula (16) and modifying the original pixel pair (X_L, X_R) to (X_L′, X_R′), where U ∈ [0, 2], V ∈ [−2, 0], and i, j are position indices;
(X_L′, X_R′) = (X_L + U_i, X_R + V_j), if C(X_L + U_i, X_R + V_j) == b and X_d > 0; (X_L′, X_R′) = (X_L + V_i, X_R + U_j), if C(X_L + V_i, X_R + U_j) == b and X_d < 0   (16)
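Putting formulas (14)–(16) together, embedding one octal digit b amounts to nudging the pixel pair to a nearby coordinate of C that holds the value b. The sketch below follows the search ranges stated in the claim; since formula (16) is only available as an image, the search order, the absolute-value comparison against T_dif (needed for the sign-dependent branch to be reachable), and the names are assumptions.

```python
def embed_octal_digit(C, x_l, x_r, b, t_dif):
    """Sketch of step 4.7: return a modified pixel pair (x_l', x_r') with C[x_l', x_r'] == b."""
    def fix_overflow(v, low, high):
        # Step 4.7(1): only the extreme values 0 and 255 are pulled inward.
        return low if v == 0 else (high if v == 255 else v)

    x_d = x_l - x_r                                        # formula (14)

    if abs(x_d) <= t_dif:                                  # |X_d| comparison assumed
        x_l, x_r = fix_overflow(x_l, 1, 254), fix_overflow(x_r, 1, 254)
        offs_l = offs_r = (0, 1, -1)                       # 3x3 window around (x_l, x_r), formula (15)
    else:
        x_l, x_r = fix_overflow(x_l, 2, 253), fix_overflow(x_r, 2, 253)
        if x_d > 0:                                        # search toward (x_l + 2, x_r - 2)
            offs_l, offs_r = (0, 1, 2), (0, -1, -2)
        else:                                              # search toward (x_l - 2, x_r + 2)
            offs_l, offs_r = (0, -1, -2), (0, 1, 2)

    for di in offs_l:
        for dj in offs_r:
            if C[x_l + di, x_r + dj] == b:
                return x_l + di, x_r + dj
    # Unreachable in practice: every 3x3 window of C contains each digit 0-8 exactly once.
    raise RuntimeError("no embedding position found")
```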
step 5 is specifically as follows:
step 5.1, dividing the whole image into 4 × 4 sub-blocks p, and calculating the gradient magnitude and gradient direction of each sub-block using formulas (17)–(20):
p_h = (p(i, j−1) − p(i, j+1)) + (p(i+1, j) − p(i−1, j))   (17)
p_o = (p(i−1, j−1) − p(i+1, j+1)) + (p(i+1, j−1) − p(i−1, j+1))   (18)
grad_size(f) = |p_h| + |p_o|   (19)
[Formula (20), given only as an image in the source: the gradient direction grad_dir(f) of the sub-block, computed from p_h and p_o.]
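The per-pixel gradient of step 5.1 can be sketched as below; since formula (20) is only given as an image, the direction is assumed here to be the angle of the (p_h, p_o) vector, and the function name is illustrative. Steps 5.2–5.4 then apply this computation over overlapping blocks and combine the results into the feature character_all.

```python
import numpy as np

def gradient_at(p, i, j):
    """Gradient magnitude and direction at pixel (i, j) per formulas (17)-(19),
    with the direction of formula (20) assumed to be atan2(p_o, p_h)."""
    p = np.asarray(p, dtype=np.int32)   # signed type so the differences can be negative
    p_h = (p[i, j - 1] - p[i, j + 1]) + (p[i + 1, j] - p[i - 1, j])                    # (17)
    p_o = (p[i - 1, j - 1] - p[i + 1, j + 1]) + (p[i + 1, j - 1] - p[i - 1, j + 1])    # (18)
    grad_size = abs(p_h) + abs(p_o)                                                    # (19)
    grad_dir = np.arctan2(p_o, p_h)                                                    # (20), assumed form
    return grad_size, grad_dir
```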
step 5.2, scanning the blocks in an overlapping manner with a scanning step of 1;
step 5.3, dividing each 4 × 4 sub-block into four 2 × 2 sub-units, calculating the gradient magnitude and direction in each sub-unit, and representing them with a gradient histogram;
step 5.4, building the sub-block features:
concatenating the gradient histograms of the four sub-units to form sub-block feature 1; performing a DCT on the 4 × 4 sub-block and taking the value in the first row and first column (the upper-left corner) of the coefficient matrix as sub-block feature 2; combining the two features to construct the sub-block feature character_all;
step 5.5, subtracting the feature values of all 4 × 4 blocks pairwise to obtain dif_value_ij, where i ≠ j, as shown in formula (21):
dif_value_ij = character_all_i − character_all_j   (21)
step 5.6, setting a threshold mark_yy, comparing the difference dif_value_ij with it, and judging whether a block is a copy-paste tampered block by formula (22): if dif_value_ij is less than or equal to mark_yy, the two blocks are identical, i.e. a copy-paste tampered pair, and the detection result S_ij is set to 1; otherwise S_ij is set to 0, indicating a non-copy-paste tampered block;
S_ij = 1, if dif_value_ij ≤ mark_yy; S_ij = 0, otherwise   (22)
step 5.7, marking the copy-paste tampered blocks belonging to the ROI area: D = { d_i | i = 1, 2, …, n };
step 5.8, further locating copy-paste tampered blocks in the ROI area by the neighborhood method:
for every sub-block of the ROI area, counting the number of tampered blocks in its 8-neighborhood; if the number is greater than or equal to 5, the block is judged to be tampered, otherwise non-tampered, as shown in formula (23), where N_8(d_i) denotes the number of tampered blocks in the 8-neighborhood of d_i;
d_i is judged tampered if N_8(d_i) ≥ 5, and non-tampered otherwise   (23)
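The neighborhood vote of step 5.8 / formula (23) over the block-level detection map can be sketched as follows; the function and argument names are illustrative, and `mark` is assumed to be the 0/1 map of suspected copy-paste blocks in the ROI area.

```python
import numpy as np

def refine_by_8_neighborhood(mark):
    """Formula (23): a block is classified as tampered if at least 5 of the
    8 surrounding blocks are marked in `mark` (2-D 0/1 array of block results)."""
    mark = np.asarray(mark)
    padded = np.pad(mark, 1, mode="constant")
    refined = np.zeros_like(mark)
    rows, cols = mark.shape
    for r in range(rows):
        for c in range(cols):
            window = padded[r:r + 3, c:c + 3]
            n8 = int(window.sum()) - int(padded[r + 1, c + 1])   # N_8(d_i)
            refined[r, c] = 1 if n8 >= 5 else 0
    return refined
```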
step 6 is specifically as follows:
step 6.1, extracting the ROI area auxiliary information aux_ROI_area from the LSBs of the first and second rows of the image, and determining the ROI areas from the number of ROI areas area_num and the upper-left and lower-right coordinates loc_LR of each ROI area;
step 6.2, dividing each 4 × 4 sub-block of the ROI region into the two disjoint parts defined by formula (5), a first part and a second part; the first part is stored in forward order and the second part in reverse order;
step 6.3, first round of tamper detection and localization:
step 6.3.1, taking, in forward order, the 7th bit of the last pixel of the first part of the sub-block to obtain the detection bit B_I;
step 6.3.2, bitwise XOR-ing the first 7 bits of the remaining pixels to obtain the detection bit B_I′;
step 6.3.3, block-level detection: comparing B_I with B_I′; if they differ, the sub-block is judged tampered, otherwise not tampered, achieving block-level tamper detection and localization;
step 6.3.4, extracting the last bit of each pixel as the detection bit P_I;
step 6.3.5, counting the numbers of 1s and 0s among the first 7 bits of each pixel; if there are more 1s, the detection bit P_I′ is 1, otherwise 0;
step 6.3.6, comparing P_I′ with P_I; if they differ, the pixel is judged tampered, otherwise not tampered, achieving pixel-level tamper detection and localization;
step 6.3.7, generating a position map LP_1 of the same size as the ROI area of the image, in which a tampered pixel is marked as 1 and an untampered pixel as 0;
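The first-round detection of steps 6.3.1–6.3.7 simply recomputes the check bits of step 3.2 and compares them with the embedded ones; a sketch under the same assumptions as the embedding sketch above (in particular, the same one-bit reduction of the XOR word):

```python
def detect_first_part(pixels):
    """Sketch of steps 6.3.1-6.3.6 for the first-part pixels of one 4x4 sub-block.
    Returns (block_tampered, per-pixel tamper flags); bit 8 is the LSB, bit 7 the next bit."""
    pixels = [int(p) for p in pixels]

    # Block level (steps 6.3.1-6.3.3): stored B_I vs. recomputed B_I'.
    b_i = (pixels[-1] >> 1) & 1                    # bit 7 of the last pixel
    acc = 0
    for p in pixels[:-1]:
        acc ^= p >> 1                              # XOR of the first 7 bits
    b_i_new = bin(acc).count("1") & 1              # same one-bit reduction as at embedding time
    block_tampered = (b_i != b_i_new)

    # Pixel level (steps 6.3.4-6.3.6): stored P_I vs. recomputed P_I'.
    pixel_flags = []
    for p in pixels:
        p_i = p & 1                                # bit 8 holds the stored check bit
        ones = bin(p >> 1).count("1")
        p_i_new = 1 if ones > 7 - ones else 0
        pixel_flags.append(1 if p_i != p_i_new else 0)

    return block_tampered, pixel_flags
```

The position map LP_1 of step 6.3.7 is then just these per-pixel flags written back at the corresponding pixel positions of the ROI area.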
step 6.4, second round of tamper detection and localization:
step 6.4.1, taking, in reverse order, the 8th bit of all pixels of the second part of the sub-block and concatenating them to form W_I;
step 6.4.2, converting the first 7 bits of the first pixel into decimal, dividing by 128 to obtain a decimal between 0 and 1, and keeping three decimal places to generate the first parameter para_1;
step 6.4.3, converting the first 7 bits of the other pixels into decimal and adding them up to generate the second parameter para_2;
step 6.4.4, letting Z_0 = para_1 and a = para_2, and generating a random sequence with the ICMC map as in formula (11);
step 6.4.5, sorting the first 4 distinct values of the random sequence, converting each sorting index value (0–3) into a 2-bit binary number, and concatenating them to form the 8-bit block-level check bits W_I′;
step 6.4.6, determining whether W_I′ and W_I are the same, and generating a position map LP_2 of the same size as the ROI area of the image, in which a tampered pixel is marked as 1 and an untampered pixel as 0;
step 6.5, merging the position maps LP_1 and LP_2 into LP: if LP_1 or LP_2 marks a pixel as tampered, the pixel is regarded as tampered and set to 1, otherwise it is regarded as not tampered and set to 0, as in formula (24);
LP(i, j) = 1, if LP_1(i, j) = 1 or LP_2(i, j) = 1; LP(i, j) = 0, otherwise   (24)
step 6.6, further determining tampered blocks using the direction bands:
for each sub-block, defining four direction bands, (S, SW, W), (W, NW, N), (N, NE, E) and (E, SE, S), where S denotes the block to the south of the current block, W the block to the west, SW the block to the south-west, and so on; a block is considered tampered if all four of its direction bands contain tampering, and non-tampered otherwise, as shown in formula (25), where N_4(d_i) denotes the number of tampered bands among the 4 direction bands of d_i;
d_i is judged tampered if N_4(d_i) = 4, and non-tampered otherwise   (25)
3. A tamper recovery method corresponding to the medical image tamper detection method based on texture degree cross embedding of claim 2, characterized by comprising the following steps:
step 7.1, constructing the embedding reference matrix C as shown in formulas (12) and (13), where i and j both belong to [0, 255], with the initial value C(0, 0) = 0 and the constraint that the values in each 3 × 3 block of the reference matrix C are distinct and lie in 0–8;
step 7.2, dividing the RONI area into 2 × 2 sub-blocks; the upper-left and lower-right pixels of each sub-block form one pixel pair, and the lower-left and upper-right pixels form another pixel pair;
step 7.3, mapping the pixel pair (X_L, X_R) to coordinates in the reference matrix C and extracting the corresponding value R_t(X_L, X_R) to obtain the secret information;
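Because the reference matrix is deterministic, extraction in step 7.3 is a plain table lookup: the digit carried by a pair is simply C(X_L, X_R). A minimal sketch, reusing the build_reference_matrix() helper assumed earlier; the base-8 interpretation in step 7.4 is likewise an assumption.

```python
def extract_octal_digits(C, pixel_pairs):
    """Step 7.3 sketch: one octal digit per RONI pixel pair (X_L, X_R)."""
    return [int(C[x_l, x_r]) for x_l, x_r in pixel_pairs]

# Step 7.4 sketch: the first 5 extracted digits, read as one base-8 number,
# give the length of the compressed position map (interpretation assumed).
# digits = extract_octal_digits(C, pairs)
# length = int("".join(str(d) for d in digits[:5]), 8)
```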
step 7.4, taking the first 5 extracted digits, converting them into decimal, and calculating the length of the compressed position map;
step 7.5, continuing to extract the compressed position map of that length, and decompressing it to form the texture classification position map;
step 7.6, extracting the first part of the secret information in forward order and the second part in reverse order;
step 7.7, determining the compressed-sensing classification according to the marks of the position map, where texture blocks are represented by 24 bits and smooth blocks by 8 bits; if the block is a texture block, splitting the 24 bits into 8 + 8 + 8 bits, where the first 8 bits represent the high average value, the second 8 bits represent the low average value, and the third 8 bits represent the value distribution position map within the corresponding block;
step 7.8, for a texture block, replacing 1s in the position map with the high average value and 0s with the low average value to form the restored block; for a smooth block, directly converting the 8 bits into decimal to replace the first-part pixels of the block, completing the image recovery.
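Steps 7.7–7.8 rebuild each block's first-part pixels from the extracted bits. The sketch below follows the stated bit layout (8-bit high mean, 8-bit low mean, 8-bit in-block position map for texture blocks; a single 8-bit value for smooth blocks); the assumption that the first part holds 8 pixels, and the pixel ordering, are illustrative.

```python
import numpy as np

def restore_first_part(bits, is_texture):
    """Sketch of steps 7.7-7.8 for one block.
    `bits` is a '0'/'1' string: 24 bits for a texture block, 8 bits for a smooth block."""
    if is_texture:
        high = int(bits[0:8], 2)      # high average value
        low = int(bits[8:16], 2)      # low average value
        pos_map = bits[16:24]         # which first-part pixels take the high value
        return np.array([high if b == "1" else low for b in pos_map], dtype=np.uint8)
    # Smooth block: the 8 bits encode one mean value, copied to all first-part pixels.
    mean = int(bits, 2)
    return np.full(8, mean, dtype=np.uint8)
```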
CN202211278309.8A 2022-10-19 2022-10-19 Medical image tampering detection and self-recovery method based on texture degree cross embedding Pending CN115690014A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211278309.8A CN115690014A (en) 2022-10-19 2022-10-19 Medical image tampering detection and self-recovery method based on texture degree cross embedding

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211278309.8A CN115690014A (en) 2022-10-19 2022-10-19 Medical image tampering detection and self-recovery method based on texture degree cross embedding

Publications (1)

Publication Number Publication Date
CN115690014A true CN115690014A (en) 2023-02-03

Family

ID=85066155

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211278309.8A Pending CN115690014A (en) 2022-10-19 2022-10-19 Medical image tampering detection and self-recovery method based on texture degree cross embedding

Country Status (1)

Country Link
CN (1) CN115690014A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117055790A (en) * 2023-08-11 2023-11-14 广东盈科电子有限公司 Interactive control method and device applied to image test area and storage medium
CN117055790B (en) * 2023-08-11 2024-02-13 广东盈科电子有限公司 Interactive control method and device applied to image test area and storage medium

Similar Documents

Publication Publication Date Title
Huang et al. A reversible data hiding method by histogram shifting in high quality medical images
Jo et al. A digital image watermarking scheme based on vector quantisation
Deng et al. Local histogram based geometric invariant image watermarking
Al-Qershi et al. Two-dimensional difference expansion (2D-DE) scheme with a characteristics-based threshold
CN102147912B (en) Adaptive difference expansion-based reversible image watermarking method
CN108280797B (en) Image digital watermarking algorithm system based on texture complexity and JND model
Huo et al. Alterable-capacity fragile watermarking scheme with restoration capability
Gul et al. A novel triple recovery information embedding approach for self-embedded digital image watermarking
Kiani et al. A multi-purpose digital image watermarking using fractal block coding
Lu et al. Reversible data hiding using local edge sensing prediction methods and adaptive thresholds
CN106485640A (en) A kind of reversible water mark computational methods based on multi-level IPVO
CN105741225A (en) Reversible watermark method of multi-dimensional prediction error extension
CN108805788B (en) Reversible watermarking method based on image topological structure
Sarkar et al. Large scale image tamper detection and restoration
CN115690014A (en) Medical image tampering detection and self-recovery method based on texture degree cross embedding
Shen et al. A self-embedding fragile image authentication based on singular value decomposition
CN113032813B (en) Reversible information hiding method based on improved pixel local complexity calculation and multi-peak embedding
Wang et al. A novel image restoration scheme based on structured side information and its application to image watermarking
Su et al. Reversible data hiding using the dynamic block-partition strategy and pixel-value-ordering
CN115766963A (en) Encrypted image reversible information hiding method based on self-adaptive predictive coding
CN114399419B (en) Reversible image watermarking algorithm based on prediction error expansion
CN115330582A (en) Reversible watermarking algorithm based on unidirectional extreme value prediction error expansion
Rijati Nested block based double self-embedding fragile image watermarking with super-resolution recovery
CN111127288B (en) Reversible image watermarking method, reversible image watermarking device and computer readable storage medium
US8340343B2 (en) Adaptive video fingerprinting

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination