CN115049541A - Reversible gray scale method, system and device based on neural network and image steganography - Google Patents
- Publication number
- CN115049541A (application CN202210834416.8A)
- Authority
- CN
- China
- Prior art keywords
- image
- reversible
- code stream
- super
- gray
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T3/04
- G06T1/0021: Image watermarking
- G06T9/002: Image coding using neural networks
- G06T2207/10024: Color image
- G06T2207/20084: Artificial neural networks [ANN]
Abstract
The invention discloses a reversible grayscale method, system and device based on a neural network and image steganography. The method comprises: performing a reversible conversion on an original color image to obtain a gray component Y and color components U and V; performing neural network coding and arithmetic coding on the color components to obtain a feature code stream and a hyperprior code stream; steganographically embedding the feature code stream and the hyperprior code stream into the gray component Y according to an image steganography method, generating a reversible grayscale image G; reading the feature code stream and the hyperprior code stream from the reversible grayscale image G, and taking the read grayscale image G as the gray component Y_R of the color image to be reconstructed; performing neural network decoding and arithmetic decoding on the feature code stream and the hyperprior code stream to convert them into the color components U_R and V_R of the color image to be reconstructed; and combining the gray component and the color components of the color image to be reconstructed and applying the inverse conversion to obtain the reconstructed color image I_R. The invention realizes reversible grayscale conversion through the neural network and image steganography.
Description
Technical Field
The invention relates to the field of reversible gray scale, in particular to a reversible gray scale method, a reversible gray scale system and a reversible gray scale device based on neural network and image steganography.
Background
Generating a grayscale image from a color image has important applications in many fields, such as printing, engraving, monochrome display, and image processing. Conventional grayscale-generation methods focus on perceptual factors such as contrast and texture features. Another approach, called reversible grayscale, aims not only to generate a grayscale image but also to encode the color information of the color image invisibly within it, so that the original color image can be restored as faithfully as possible when needed.
In 2018, Xia et al. proposed a reversible grayscale method in ACM Transactions on Graphics, which models the image decoloring and coloring process as a closed loop through an encoding-decoding network. The method embeds the color information into the generated grayscale image, so that the decoded image reconstructs colors more consistent with the original.
In 2020, Ye et al. proposed a dual feature set network in IEEE Access, using dense residual representations, integrating local residual learning and local feature fusion, and suppressing redundant features generated by the dual-path module with an attention mechanism, thereby obtaining a grayscale image and a reconstructed color image with better consistency.
In 2021, Liu et al. proposed a JPEG-robust reversible grayscale system in IEEE Transactions on Visualization and Computer Graphics, which introduced adversarial training and a JPEG simulator into the encoder-decoder network to make the generated grayscale image robust to JPEG compression and to reduce coding texture in the generated image. Zhao et al. proposed a new reversible grayscale method in IEEE Transactions on Image Processing, which forward-maps a color image into a grayscale image and latent variables through an invertible neural network, and then converts the grayscale image and a set of random variables following a Gaussian distribution back into a color image close to the original through the reverse mapping.
For a reversible grayscale method, the most critical performance measure is how closely the generated grayscale image and the reconstructed color image match the target grayscale image and the original color image, respectively. Existing techniques adopt an end-to-end framework and remain imperfect on this measure, leaving clear room for improvement, mainly because of two defects: first, the redundancy of the color information is not truly and efficiently eliminated, so the amount of content to be encoded is large; second, encoding the color information causes considerable information loss in the grayscale image.
Disclosure of Invention
The invention aims to provide a reversible gray scale method, a reversible gray scale system and a reversible gray scale device based on a neural network and image steganography, and aims to solve the problem of reversible gray scale of an image.
The invention provides a reversible gray scale method based on neural network and image steganography, which comprises the following steps:
S1, performing reversible RGB2YUV conversion on the original color image to obtain a gray component Y and color components U and V;
S2, performing neural network coding and arithmetic coding on the color components U and V to obtain a feature code stream and a hyperprior code stream;
S3, steganographically embedding the feature code stream and the hyperprior code stream into the gray component Y according to the image steganography method, generating a reversible grayscale image G;
S4, reading the feature code stream and the hyperprior code stream from the reversible grayscale image G, and taking the read grayscale image G as the gray component Y_R of the color image to be reconstructed;
S5, performing neural network decoding and arithmetic decoding on the feature code stream and the hyperprior code stream to convert them into the color components U_R and V_R of the color image to be reconstructed;
S6, combining the gray component and the color components of the color image to be reconstructed and performing reversible YUV2RGB conversion to obtain the reconstructed color image I_R.
The invention also provides a reversible gray scale system based on neural network and image steganography, which comprises:
A conversion module: used for performing reversible RGB2YUV conversion on the original color image to obtain a gray component Y and color components U and V;
A coding module: used for performing neural network coding and arithmetic coding on the color components U and V to obtain a feature code stream and a hyperprior code stream;
A steganography module: used for steganographically embedding the feature code stream and the hyperprior code stream into the gray component Y according to the image steganography method, generating a reversible grayscale image G;
A reading module: used for reading the feature code stream and the hyperprior code stream from the reversible grayscale image G and taking the read grayscale image G as the gray component Y_R of the color image to be reconstructed;
A decoding module: used for performing neural network decoding and arithmetic decoding on the feature code stream and the hyperprior code stream to convert them into the color components U_R and V_R of the color image to be reconstructed;
A reconstruction module: used for combining the gray component and the color components of the color image to be reconstructed and performing reversible YUV2RGB conversion to obtain the reconstructed color image I_R.
The embodiment of the invention also provides a reversible grayscale device based on the neural network and image steganography, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, the computer program implementing the steps of the above method when executed by the processor.
An embodiment of the present invention further provides a computer-readable storage medium, where an implementation program for information transfer is stored, and when the implementation program is executed by a processor, the steps of the foregoing method are implemented.
In the first aspect, the invention designs a neural network to encode the color components of the color image: it extracts the key features of the color components, models their probability distribution, and encodes the features into a binary code stream through arithmetic coding. Compared with the prior art, this strategy eliminates the redundancy of the color information more efficiently and reduces its loss, solving the problem of poor generated-grayscale quality caused by hiding too much information in the intermediate process.
In a second aspect, the present invention writes (or reads) color information into (or out of) a grayscale component by an image steganography technique, embedding the color information therein with minimal modification on the grayscale component, thereby generating a target grayscale image. Compared with the prior art, the gray level image generated by the invention has more ideal visual effect and less loss of gray level information.
In the third aspect, the invention utilizes a reversible component conversion method to decompose the color image into a gray component and a color component for orthogonal processing, and combines a neural network and image steganography, thereby remarkably improving the comprehensive performance indexes of generating a gray image and reconstructing the color image.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more comprehensible.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flow chart of a neural network and image steganography based reversible grayscale method according to an embodiment of the present invention;
FIG. 2 is a block diagram of a neural network and image steganography-based reversible grayscale method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the neural network in the reversible grayscale method based on neural network and image steganography according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of pixel modification in the reversible grayscale method based on neural network and image steganography according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a neural network and image steganography-based reversible grayscale system according to an embodiment of the present invention;
FIG. 6 is a diagram of a neural network and image steganography-based reversible grayscale device according to an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the following embodiments, and it should be understood that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Method embodiment
According to an embodiment of the present invention, a reversible gray scale method based on a neural network and image steganography is provided, fig. 1 is a flowchart of the reversible gray scale method based on the neural network and image steganography according to the embodiment of the present invention, as shown in fig. 1, specifically including:
S1, performing reversible RGB2YUV conversion on the original color image to obtain a gray component Y and color components U and V;
S2, performing neural network coding and arithmetic coding on the color components U and V to obtain a feature code stream and a hyperprior code stream;
S3, steganographically embedding the feature code stream and the hyperprior code stream into the gray component Y according to the image steganography method, generating a reversible grayscale image G;
S4, reading the feature code stream and the hyperprior code stream from the reversible grayscale image G, and taking the read grayscale image G as the gray component Y_R of the color image to be reconstructed;
S5, performing neural network decoding and arithmetic decoding on the feature code stream and the hyperprior code stream to convert them into the color components U_R and V_R of the color image to be reconstructed;
S6, combining the gray component and the color components of the color image to be reconstructed and performing reversible YUV2RGB conversion to obtain the reconstructed color image I_R.
Fig. 2 is a schematic diagram of the reversible grayscale method based on neural network and image steganography according to an embodiment of the present invention. As shown in fig. 2,
the process of generating the grayscale image from the color image comprises the following steps:
Step a: performing reversible RGB2YUV conversion on the original color image I and decomposing it to obtain a gray component Y and color components U and V;
Step b: the color components U, V are analyzed and encoded by the neural network to obtain a feature and a hyperprior; a prior probability model of the feature and an independent probability model of the hyperprior are established; then, using arithmetic coding, the feature and the hyperprior are converted into a feature code stream and a hyperprior code stream, respectively;
Step c: using the image steganography method, the feature code stream and the hyperprior code stream are steganographically embedded into the gray component Y, generating the reversible grayscale image G;
The process of reconstructing a color image from the grayscale image comprises the following steps:
Step d: the feature code stream and the hyperprior code stream are read out of the grayscale image G, and the read grayscale image G is taken as the gray component Y_R of the color image to be reconstructed;
Step e: according to the independent probability model, the hyperprior code stream is arithmetically decoded into the hyperprior, which is then decoded by the neural network to obtain the probability-model parameters of the feature and hence the prior probability model; the feature code stream is arithmetically decoded into the feature, and the feature is then synthesized by the neural network to obtain the color components U_R, V_R of the color image to be reconstructed;
Step f: the gray component Y_R and the color components U_R, V_R are combined and reversible YUV2RGB conversion is performed to obtain the reconstructed color image I_R;
Preferably, the reversible RGB2YUV conversion in step a maps the pixel values R, G, B of the original color image to a gray component value Y and color component values U, V, with ⌊·⌋ denoting rounding down.
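The conversion formula itself is not reproduced in this text; a common choice consistent with the description (integer-valued, floor-rounded, and exactly invertible) is the JPEG2000-style reversible color transform. The sketch below assumes that form and is illustrative only, not the patent's exact formula:

```python
def rgb2yuv_reversible(r, g, b):
    """Lossless integer RGB -> (Y, U, V).

    Assumed JPEG2000-style reversible color transform; the patent's
    own formula is not reproduced in this text, only that it is
    integer-valued and uses floor rounding.
    """
    y = (r + 2 * g + b) >> 2  # floor((R + 2G + B) / 4): gray component
    u = b - g                 # color component U
    v = r - g                 # color component V
    return y, u, v
```

For a gray pixel (R = G = B) the color components U and V are exactly zero, so the gray component Y carries all the information.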
Preferably, the neural network is implemented as follows:
Fig. 3 is a schematic diagram of the neural network in the reversible grayscale method based on neural network and image steganography according to an embodiment of the present invention. As shown in fig. 3,
the neural network comprises four parts: a feature analysis network, a feature synthesis network, a hyperprior encoding network, and a hyperprior decoding network.
The feature analysis network extracts the main feature x of the color components, which after rounding quantization is recorded as the quantized feature; the quantized feature should enable effective reconstruction of the color components by the feature synthesis network while keeping its information entropy as small as possible. The feature analysis network consists of convolutional layers and GDN (generalized divisive normalization) nonlinear layers; assuming the dimension of the input color components is H × W × 2, the feature analysis network outputs a correspondingly downsampled feature.
The hyperprior encoding network further encodes and quantizes the feature x, computing a hyperprior whose information entropy is as small as possible; the hyperprior is input to the hyperprior decoding network to accurately model the probability distribution of the feature x. The hyperprior encoding network consists of convolutional layers and ReLU nonlinear activation layers, and outputs a hyperprior of correspondingly reduced dimension.
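As a concrete illustration of the GDN nonlinearity named above, the sketch below applies generalized divisive normalization across channels at a single spatial position; the parameter values are illustrative, not taken from the patent:

```python
import math

def gdn(x, beta, gamma):
    """Generalized divisive normalization for one spatial position:
    y_i = x_i / sqrt(beta_i + sum_j gamma[i][j] * x_j**2).
    Minimal sketch; beta and gamma are learned parameters in practice."""
    n = len(x)
    return [x[i] / math.sqrt(beta[i] + sum(gamma[i][j] * x[j] ** 2 for j in range(n)))
            for i in range(n)]
```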
The hyperprior decoding network decodes the hyperprior; the decoded variables are the normal-distribution probability-model parameters of the feature x, from which the prior probability model of the quantized feature is constructed. The hyperprior decoding network consists of transposed-convolution layers and ReLU nonlinear activation layers, and outputs the probability-model parameters; the prior probability model takes the form of a normal distribution parameterized by these decoded parameters.
The feature synthesis network synthesizes, from the input quantized feature, reconstructed color components as close as possible to the original color components; the feature synthesis network consists of transposed-convolution layers and IGDN (inverse generalized divisive normalization) nonlinear layers, and the dimension of the output reconstructed components is H × W × 2;
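The normal-distribution prior described above can be made concrete in the standard way used by hyperprior-based compression: the probability of an integer-quantized feature value is the Gaussian mass on a unit-width bin around it, and its negative log base 2 is the arithmetic-coding cost in bits. The patent's exact formula is not reproduced in this text, so the sketch below is an assumed, standard form:

```python
import math

def gaussian_bin_prob(x, mu, sigma):
    """P(x) for an integer-quantized value under N(mu, sigma^2):
    the Gaussian probability mass on [x - 0.5, x + 0.5]."""
    def cdf(t):
        return 0.5 * (1.0 + math.erf((t - mu) / (sigma * math.sqrt(2.0))))
    return cdf(x + 0.5) - cdf(x - 0.5)

def code_length_bits(x, mu, sigma):
    """Estimated arithmetic-coding cost of x in bits: -log2 P(x)."""
    return -math.log2(gaussian_bin_prob(x, mu, sigma))
```

Values near the predicted mean cost few bits while outliers cost more, which is what drives the hyperprior network to predict accurate parameters.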
Preferably, the image steganography method is implemented as follows:
The pixels of the carrier grayscale image are scanned column by column and denoted P_1, P_2, P_3, …, P_m; the feature code stream and the hyperprior code stream are combined into a binary code stream O, denoted b_1, b_2, b_3, …, b_n. Every 3 pixels and every 3 binary bits are taken in turn as a group; the 3 pixels of the i-th group are denoted Pi_1, Pi_2, Pi_3, and the 3 binary bits bi_1, bi_2, bi_3.
In step c, the steganography process embeds 3 bits of binary-code information into each group of 3 pixels, as follows:
First, a 3-bit prediction code is calculated from the values of the 3 pixels, where Bi_1, Bi_2, Bi_3 denote the 3-bit prediction code of the i-th group, computed by exclusive-or (⊕) combinations of the low-order bits of the binary values of the pixels Pi_1, Pi_2, Pi_3.
If the prediction code Bi_1, Bi_2, Bi_3 equals the binary code bi_1, bi_2, bi_3, no pixel needs to be modified; otherwise the pixels Pi_1, Pi_2, Pi_3 are modified so that the prediction code becomes equal to the binary code, thereby completing the steganography of the binary code;
Fig. 4 is a schematic diagram of pixel modification in the reversible grayscale method based on neural network and image steganography according to an embodiment of the present invention. As shown in fig. 4, when Bi_1 ≠ bi_1, Bi_2 = bi_2 and Bi_3 = bi_3: if Pi_2 mod 2 = 0, then Pi_2 = Pi_2 - 1; if Pi_2 mod 2 = 1, then Pi_2 = Pi_2 + 1.
In addition, any trailing bits of the code stream that do not fill a complete 3-bit group are embedded by directly replacing the least significant bits of the corresponding remaining pixels.
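The 3-pixel prediction-code formula appears only in the figures of the source and is not reproduced in this text, so the sketch below illustrates only the surrounding mechanics (packing a code stream into pixels and reading it back) using plain LSB replacement, a deliberately simplified stand-in for the patent's scheme:

```python
def embed_lsb(pixels, bits):
    """Simplified embedding sketch: write each bit into one pixel's
    least significant bit. The patent's actual scheme embeds 3 bits
    per group of 3 pixels via a prediction code, changing at most one
    pixel by +/-1; that formula is not reproduced here."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # replace the least significant bit
    return out

def extract_lsb(pixels, n):
    """Read back the first n embedded bits."""
    return [p & 1 for p in pixels[:n]]
```

A round trip recovers the code stream exactly while changing each carrier pixel by at most 1, which is the property the reading process in step d relies on.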
In step d, the reading process is as follows:
Following the prediction method above, the prediction codes of every 3 pixels Pi_1, Pi_2, Pi_3 are calculated in turn from P_1 to P_m; for the last pixels that do not fill a complete 3-bit group, the least significant bits of the pixels are read directly in place of the corresponding prediction codes. The binary code stream O hidden in the grayscale image is thereby read out and decomposed into the feature code stream and the hyperprior code stream.
Preferably, the reversible YUV2RGB conversion in step f maps the gray component value Y and the color component values U, V back to the pixel values R, G, B of the color image, with ⌊·⌋ denoting rounding down.
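A matching round trip can be sketched under the same assumption as before (a JPEG2000-style reversible transform; the patent's own formulas are not reproduced in this text). The inverse recovers G first, then R and B, exactly:

```python
def rgb2yuv(r, g, b):
    # Assumed JPEG2000-style reversible forward transform
    return (r + 2 * g + b) >> 2, b - g, r - g

def yuv2rgb(y, u, v):
    # Exact integer inverse: G = Y - floor((U + V) / 4), then R = V + G, B = U + G
    g = y - ((u + v) >> 2)
    return v + g, g, u + g
```

Because every step is integer arithmetic with a matched floor, the round trip is lossless for all 24-bit RGB values, which is exactly the reversibility that steps a and f rely on.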
Specifically, the embodiment is implemented in Python; the neural network is built with the PyTorch deep-learning framework and trained with the Adam optimizer. The training set is 20000 images randomly drawn from the Pascal VOC2012 public data set, cropped to a resolution of 512 × 512, and the test set is the Kodak Photo CD image data set. The initial learning rate is set to 1 × 10^-4 for 2 × 10^6 iterations, then decreased to 1 × 10^-5 for a further 5 × 10^5 iterations. The training process constrains the optimization direction of the neural network through a loss function that jointly reduces the conversion loss of the color components and the coded stream lengths of the feature and the hyperprior.
In the loss function used for training, the first term is the mean-square error between the color components U, V before conversion and the reconstructed color components U_R, V_R, and the second and third terms are the information entropies of the feature and of the hyperprior, respectively.
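The loss just described can be sketched as a rate-distortion objective: distortion (mean-square error of the color components) plus the two rate terms. The weighting `lam` and the additive form are assumptions, since the patent's exact formula is not reproduced in this text:

```python
def rd_loss(uv, uv_rec, feat_bits, hyper_bits, lam=1.0):
    """Rate-distortion training objective sketch:
    lam * MSE(U,V ; U_R,V_R) + H(feature) + H(hyperprior).
    'lam' and the exact additive form are assumptions."""
    mse = sum((a - b) ** 2 for a, b in zip(uv, uv_rec)) / len(uv)
    return lam * mse + feat_bits + hyper_bits
```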
In summary, this embodiment realizes the following: the original color image generates a corresponding reversible grayscale image, from which a color image essentially consistent with the original can then be reconstructed.
Compared with the prior art, the technical scheme provided by the invention at least has the following advantages:
In the first aspect, the invention designs a neural network to encode the color components of the color image: it extracts the key features of the color components, models their probability distribution, and encodes the features into a binary code stream through arithmetic coding. Compared with the prior art, this strategy eliminates the redundancy of the color information more efficiently and reduces its loss, solving the problem of poor generated-grayscale quality caused by hiding too much information in the intermediate process.
In a second aspect, the present invention writes (or reads) color information into (or out of) a grayscale component by an image steganography technique, embedding the color information therein with minimal modification on the grayscale component, thereby generating a target grayscale image. Compared with the prior art, the gray level image generated by the invention has more ideal visual effect and less loss of gray level information.
In the third aspect, the invention utilizes a reversible component conversion method to decompose the color image into a gray component and a color component for orthogonal processing, and combines a neural network and image steganography, thereby remarkably improving the comprehensive performance indexes of generating a gray image and reconstructing the color image.
System embodiment
According to an embodiment of the present invention, a reversible gray scale system based on a neural network and image steganography is provided, and fig. 5 is a schematic diagram of the reversible gray scale system based on the neural network and image steganography according to the embodiment of the present invention, as shown in fig. 5, specifically including:
A conversion module: used for performing reversible RGB2YUV conversion on the original color image to obtain a gray component Y and color components U and V;
A coding module: used for performing neural network coding and arithmetic coding on the color components U and V to obtain a feature code stream and a hyperprior code stream;
A steganography module: used for steganographically embedding the feature code stream and the hyperprior code stream into the gray component Y according to the image steganography method, generating a reversible grayscale image G;
A reading module: used for reading the feature code stream and the hyperprior code stream from the reversible grayscale image G and taking the read grayscale image G as the gray component Y_R of the color image to be reconstructed;
A decoding module: used for performing neural network decoding and arithmetic decoding on the feature code stream and the hyperprior code stream to convert them into the color components U_R and V_R of the color image to be reconstructed;
A reconstruction module: used for combining the gray component and the color components of the color image to be reconstructed and performing reversible YUV2RGB conversion to obtain the reconstructed color image I_R.
The conversion module is specifically configured to:
Reversible RGB2YUV conversion is performed on the original color image to obtain a gray component Y and color components U and V; the conversion maps the pixel values R, G, B of the original color image to a gray component value Y and color component values U, V, with ⌊·⌋ denoting rounding down;
The encoding module is specifically configured to:
analyze and encode the color components U and V through the neural network to obtain a feature and a hyperprior, establish a prior probability model of the feature and an independent probability model of the hyperprior, and then, in combination with arithmetic coding, convert the feature and the hyperprior into a feature code stream and a hyperprior code stream, respectively;
The steganography module is specifically configured to:
scan the pixels of the gray component Y column by column, denoted P_1, P_2, P_3, …, P_m, and combine the feature code stream and the hyperprior code stream into a binary code stream, denoted b_1, b_2, b_3, …, b_n;
calculate a 3-bit prediction code from every 3 pixel values, where Bi_1, Bi_2, Bi_3 denote the 3-bit prediction code of the i-th group, computed by exclusive-or (⊕) combinations of the low-order bits of the binary values of the pixels Pi_1, Pi_2, Pi_3;
if the binary code bi_1, bi_2, bi_3 equals the prediction code Bi_1, Bi_2, Bi_3, no pixel needs to be modified; otherwise the pixels Pi_1, Pi_2, Pi_3 are modified until the prediction code equals the binary code, completing the steganography of the binary code; trailing bits that do not fill a complete 3-bit group are embedded by directly replacing the least significant bits of the corresponding remaining pixels; having thus hidden the 3 binary bits bi_1, bi_2, bi_3 in every 3 pixels Pi_1, Pi_2, Pi_3, the reversible grayscale image G is generated;
the decoding module is specifically configured to:
the super-prior code stream is decoded into the super-prior according to the independent probability model; the super-prior is then decoded by a neural network to obtain the probability model parameters of the features, yielding the prior probability model; the feature code stream is decoded into the features according to this model, and the features are then synthesized by a neural network into the color components U_R and V_R of the color image to be reconstructed;
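The decode order above matters: the super-prior must be recovered first because it parameterizes the model used to decode the features. A sketch with stand-in callables (the arithmetic coder and the trained networks here are placeholders, not the patent's implementation):

```python
import numpy as np

def decode_pipeline(hyper_stream, feature_stream,
                    hyper_decoder, synthesis_net, arithmetic_decode):
    # 1. super-prior stream -> super-prior, under the fixed independent model
    z = arithmetic_decode(hyper_stream, model="independent")
    # 2. super-prior -> probability-model parameters of the features
    prior_params = hyper_decoder(z)
    # 3. feature stream -> features, under the conditional prior model
    y = arithmetic_decode(feature_stream, model=prior_params)
    # 4. features -> reconstructed color components U_R, V_R
    return synthesis_net(y)

# toy run with placeholder components, just to show the data flow
u_r, v_r = decode_pipeline(
    hyper_stream=b"\x01", feature_stream=b"\x02",
    hyper_decoder=lambda z: {"mean": 0.0, "scale": 1.0},
    synthesis_net=lambda y: (np.zeros((8, 8)), np.zeros((8, 8))),
    arithmetic_decode=lambda s, model: np.frombuffer(s, dtype=np.uint8),
)
assert u_r.shape == v_r.shape == (8, 8)
```

This mirrors the hyperprior architecture common in learned image compression; the encoder side runs the same stages in reverse.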
The reconstruction module is specifically configured to:
combining the gray component and the color components of the color image to be reconstructed and performing reversible YUV2RGB conversion to obtain a reconstructed color image I_R,
The reversible YUV2RGB conversion formula is as follows:
wherein R, G, B represent the pixel values of the color image, Y represents the gray component value, and U and V represent the color component values; ⌊·⌋ indicates rounding down.
The embodiment of the present invention is a system embodiment corresponding to the above method embodiment, and specific operations of each module may be understood with reference to the description of the method embodiment, which is not described herein again.
Apparatus embodiment one
An embodiment of the present invention provides a reversible gray scale device based on a neural network and image steganography, as shown in fig. 5, including: a memory 50, a processor 52 and a computer program stored on the memory 50 and executable on the processor 52, the computer program, when executed by the processor, implementing the steps of the above-described method embodiments.
Apparatus embodiment two
The embodiment of the present invention provides a computer-readable storage medium, on which a program for implementing information transmission is stored; when the program is executed by the processor 52, the steps in the above method embodiments are implemented.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present invention, not to limit them; although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.
Claims (10)
1. A reversible gray scale method based on neural network and image steganography is characterized by comprising the following steps,
S1, performing reversible RGB2YUV conversion on an original color image to obtain a gray component Y and color components U and V;
S2, performing neural network encoding and arithmetic encoding on the color components U and V to obtain a feature code stream and a super-prior code stream;
S3, steganographically writing the feature code stream and the super-prior code stream into the gray component Y by image steganography to generate a reversible gray image G;
S4, reading the feature code stream and the super-prior code stream in the reversible gray image G, and taking the read gray image G as the gray component Y_R of the color image to be reconstructed;
S5, performing neural network decoding and arithmetic decoding on the feature code stream and the super-prior code stream to convert them into color components U_R and V_R of the color image to be reconstructed;
S6, combining the gray component and the color components of the color image to be reconstructed and performing reversible YUV2RGB conversion to obtain a reconstructed color image I_R.
2. The method according to claim 1, wherein the S1 specifically includes:
reversible RGB2YUV conversion is carried out on the original color image to obtain a gray component Y and color components U and V, wherein the reversible RGB2YUV conversion formula is as follows:
3. The method according to claim 2, wherein the S2 specifically includes:
the color components U and V are passed through a neural analysis encoder to obtain features and a super-prior; a prior probability model of the features and an independent probability model of the super-prior are established, and arithmetic coding then converts the features and the super-prior into a feature code stream and a super-prior code stream, respectively.
4. The method according to claim 3, wherein the S3 specifically comprises:
the pixels of the gray component Y are arranged by column scan and denoted P1, P2, P3, …, Pm; the feature code stream and the super-prior code stream are combined into a single binary code stream denoted b1, b2, b3, …, bn;
every 3 pixels and every 3 binary bits are taken in sequence as a group; the 3 pixels of the i-th group are denoted Pi_1, Pi_2, Pi_3, and the corresponding 3-bit binary code is denoted bi_1, bi_2, bi_3;
in step c, the steganography process embeds 3 bits of binary code information into the 3 pixels of each group, implemented as follows:
first, a 3-bit prediction code is calculated from the values of the 3 pixels:
where Bi_1, Bi_2, Bi_3 denote the 3-bit prediction code of the i-th group, the bit-extraction operator gives the j-th lowest bit of the binary value of the n-th pixel Pi_n of the i-th group, and ⊕ denotes the exclusive-or operation;
if the prediction code Bi_1, Bi_2, Bi_3 equals the binary code bi_1, bi_2, bi_3, the pixels need not be modified; otherwise the pixels Pi_1, Pi_2, Pi_3 are modified until the prediction code equals the binary code, completing the steganography of that group;
any remaining code-stream bits that do not fill a 3-bit group directly replace the least significant bits of the corresponding pixels; each group of 3 pixels Pi_1, Pi_2, Pi_3 thus hides a 3-bit binary code bi_1, bi_2, bi_3, and the reversible gray image G is generated.
5. The method according to claim 4, wherein the S5 specifically includes:
the super-prior code stream is decoded into the super-prior according to the independent probability model; the super-prior is then decoded by a neural network to obtain the probability model parameters of the features, yielding the prior probability model; the feature code stream is decoded into the features according to this model, and the features are then synthesized by a neural network into the color components U_R and V_R of the color image to be reconstructed.
6. The method according to claim 5, wherein the S6 specifically comprises:
combining the gray component and the color components of the color image to be reconstructed and performing reversible YUV2RGB conversion to obtain a reconstructed color image I_R,
The reversible YUV2RGB conversion formula is as follows:
7. A reversible gray scale system based on neural network and image steganography is characterized by comprising,
a conversion module, configured to perform reversible RGB2YUV conversion on an original color image to obtain a gray component Y and color components U and V;
an encoding module, configured to perform neural network encoding and arithmetic encoding on the color components U and V to obtain a feature code stream and a super-prior code stream;
a steganography module, configured to steganographically write the feature code stream and the super-prior code stream into the gray component Y by image steganography to generate a reversible gray image G;
a reading module, configured to read the feature code stream and the super-prior code stream in the reversible gray image G and take the read gray image G as the gray component Y_R of the color image to be reconstructed;
a decoding module, configured to perform neural network decoding and arithmetic decoding on the feature code stream and the super-prior code stream and convert them into color components U_R and V_R of the color image to be reconstructed;
a reconstruction module, configured to combine the gray component and the color components of the color image to be reconstructed and perform reversible YUV2RGB conversion to obtain a reconstructed color image I_R.
8. The system of claim 7, wherein the conversion module is specifically configured to:
reversible RGB2YUV conversion is carried out on the original color image to obtain a gray component Y and color components U and V, wherein the reversible RGB2YUV conversion formula is as follows:
wherein Y represents the gray component value, U and V represent the color component values, and R, G, B represent the pixel values of the original color image; ⌊·⌋ represents rounding down;
the encoding module is specifically configured to:
the color components U and V are passed through a neural analysis encoder to obtain features and a super-prior; a prior probability model of the features and an independent probability model of the super-prior are established, and arithmetic coding then converts the features and the super-prior into a feature code stream and a super-prior code stream, respectively;
the steganographic module is specifically configured to:
the pixels of the gray component Y are arranged by column scan and denoted P1, P2, P3, …, Pm; the feature code stream and the super-prior code stream are combined into a single binary code stream denoted b1, b2, b3, …, bn;
every 3 pixels and every 3 binary bits are taken in sequence as a group; the 3 pixels of the i-th group are denoted Pi_1, Pi_2, Pi_3, and the corresponding 3-bit binary code is denoted bi_1, bi_2, bi_3;
in step c, the steganography process embeds 3 bits of binary code information into the 3 pixels of each group, implemented as follows:
first, a 3-bit prediction code is calculated from the values of the 3 pixels:
where Bi_1, Bi_2, Bi_3 denote the 3-bit prediction code of the i-th group, the bit-extraction operator gives the j-th lowest bit of the binary value of the n-th pixel Pi_n of the i-th group, and ⊕ denotes the exclusive-or operation;
if the prediction code Bi_1, Bi_2, Bi_3 equals the binary code bi_1, bi_2, bi_3, the pixels need not be modified; otherwise the pixels Pi_1, Pi_2, Pi_3 are modified until the prediction code equals the binary code, completing the steganography of that group;
any remaining code-stream bits that do not fill a 3-bit group directly replace the least significant bits of the corresponding pixels; each group of 3 pixels Pi_1, Pi_2, Pi_3 thus hides a 3-bit binary code bi_1, bi_2, bi_3, generating a reversible gray image G;
the decoding module is specifically configured to:
the super-prior code stream is decoded into the super-prior according to the independent probability model; the super-prior is then decoded by a neural network to obtain the probability model parameters of the features, yielding the prior probability model; the feature code stream is decoded into the features according to this model, and the features are then synthesized by a neural network into the color components U_R and V_R of the color image to be reconstructed;
The reconstruction module is specifically configured to:
combining the gray component and the color components of the color image to be reconstructed and performing reversible YUV2RGB conversion to obtain a reconstructed color image I_R,
The reversible YUV2RGB conversion formula is as follows:
9. A reversible grayscale device based on a neural network and image steganography, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the reversible grayscale method based on a neural network and image steganography according to any one of claims 1 to 6.
10. A computer-readable storage medium, on which a program for implementing information transmission is stored, the program, when executed by a processor, implementing the steps of the reversible grayscale method based on a neural network and image steganography according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210834416.8A CN115049541B (en) | 2022-07-14 | 2022-07-14 | Reversible gray scale method, system and device based on neural network and image steganography |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115049541A true CN115049541A (en) | 2022-09-13 |
CN115049541B CN115049541B (en) | 2024-05-07 |
Family
ID=83165878
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210834416.8A Active CN115049541B (en) | 2022-07-14 | 2022-07-14 | Reversible gray scale method, system and device based on neural network and image steganography |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115049541B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7253836B1 (en) * | 1998-06-30 | 2007-08-07 | Nikon Corporation | Digital camera, storage medium for image signal processing, carrier wave and electronic camera |
CN111696026A (en) * | 2020-05-06 | 2020-09-22 | 华南理工大学 | Reversible gray scale map algorithm and computing device based on L0 regular term |
CN111882476A (en) * | 2020-07-17 | 2020-11-03 | 广州大学 | Image steganography method for automatically learning embedded cost based on deep reinforcement learning |
CN112801922A (en) * | 2021-04-01 | 2021-05-14 | 暨南大学 | Color image-gray image-color image conversion method |
CN113259676A (en) * | 2020-02-10 | 2021-08-13 | 北京大学 | Image compression method and device based on deep learning |
WO2021164176A1 (en) * | 2020-02-20 | 2021-08-26 | 北京大学 | End-to-end video compression method and system based on deep learning, and storage medium |
Non-Patent Citations (8)
Title |
---|
HABIBEH NADERI et al.: "Generating High-Fidelity Images with Disentangled Adversarial VAEs and Structure-Aware Loss", 2020 International Joint Conference on Neural Networks (IJCNN), 28 September 2020 (2020-09-28) * |
MENGHAN XIA et al.: "Invertible Grayscale", ACM Transactions on Graphics, vol. 37, no. 6, 30 November 2018 (2018-11-30), pages 1-246 * |
MIN WU et al.: "Remote Sensing Image Colorization Based on Multiscale SEnet GAN", 2019 12th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), 23 January 2020 (2020-01-23) * |
LIN Huanran et al.: "Reversible grayscale method based on color coding and image steganography" (in Chinese), Application Research of Computers, vol. 41, no. 5, 17 November 2023 (2023-11-17) * |
LIN Huanran: "Reversible grayscale method based on neural network coding and image steganography" (in Chinese), China Master's Theses Full-text Database, Information Science and Technology, 15 February 2024 (2024-02-15), pages 138-134 * |
XIONG Xiangguang et al.: "Large-capacity reversible data hiding algorithm using interpolation and sorting" (in Chinese), Journal of Computer-Aided Design & Computer Graphics, no. 10, 15 October 2018 (2018-10-15), pages 173-184 * |
SHI Hui et al.: "Research on reversible watermarking in the Gray-code encrypted domain for piracy tracing" (in Chinese), Journal of Image and Graphics, no. 11, 16 November 2018 (2018-11-16), pages 23-39 * |
SU Yi: "Understanding the variational autoencoder (VAE) in one article" (in Chinese), Retrieved from the Internet <URL:https://zhuanlan.zhihu.com/p/64485020> * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11310509B2 (en) | Method and apparatus for applying deep learning techniques in video coding, restoration and video quality analysis (VQA) | |
CN109993678B (en) | Robust information hiding method based on deep confrontation generation network | |
CN109996073B (en) | Image compression method, system, readable storage medium and computer equipment | |
CN112801895B (en) | Two-stage attention mechanism-based GAN network image restoration algorithm | |
Chang et al. | Reversible steganographic method using SMVQ approach based on declustering | |
US7760399B2 (en) | Method and system for encoding color images to black-and-white bitmaps and decoding color images | |
CN111246206A (en) | Optical flow information compression method and device based on self-encoder | |
CN112132158A (en) | Visual picture information embedding method based on self-coding network | |
CN111464717B (en) | Reversible information hiding method with contrast ratio pull-up by utilizing histogram translation | |
Sadeeq et al. | Image compression using neural networks: a review | |
Wang et al. | A novel encryption-then-lossy-compression scheme of color images using customized residual dense spatial network | |
US20220335560A1 (en) | Watermark-Based Image Reconstruction | |
CN114332479A (en) | Training method of target detection model and related device | |
CN114037596A (en) | End-to-end image steganography method capable of resisting physical transmission deformation | |
CN115049541B (en) | Reversible gray scale method, system and device based on neural network and image steganography | |
CN111343458A (en) | Sparse gray image coding and decoding method and system based on reconstructed residual | |
CN116523985A (en) | Structure and texture feature guided double-encoder image restoration method | |
CN113949880B (en) | Extremely-low-bit-rate man-machine collaborative image coding training method and coding and decoding method | |
CN116416216A (en) | Quality evaluation method based on self-supervision feature extraction, storage medium and terminal | |
Nortje et al. | BINet: A binary inpainting network for deep patch-based image compression | |
CN114494387A (en) | Data set network generation model and fog map generation method | |
CN114900701A (en) | Video digital watermark embedding and extracting method and system based on deep learning | |
CN114677282A (en) | Image super-resolution reconstruction method and system | |
CN112887722A (en) | Lossless image compression method | |
Renji et al. | A Reversible Data Hiding Technique for Secure Image Transmission |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |