CN113329229A - High-capacity hiding method for H.265 video information with high-efficiency fidelity - Google Patents

High-capacity hiding method for H.265 video information with high-efficiency fidelity

Info

Publication number
CN113329229A
CN113329229A (application CN202110470483.1A)
Authority
CN
China
Prior art keywords
factor
embedding
block
video
compensation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110470483.1A
Other languages
Chinese (zh)
Inventor
刘秀萍
王灿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingmen Huiyijia Information Technology Co ltd
Original Assignee
Jingmen Huiyijia Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jingmen Huiyijia Information Technology Co ltd filed Critical Jingmen Huiyijia Information Technology Co ltd
Priority to CN202110470483.1A priority Critical patent/CN113329229A/en
Publication of CN113329229A publication Critical patent/CN113329229A/en
Pending legal-status Critical Current


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46: Embedding additional information in the video signal during the compression process
    • H04N19/467: Embedding additional information in the video signal during the compression process characterised by the embedded information being invisible, e.g. watermarking
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/625: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using discrete cosine transform [DCT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Discrete Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention reviews existing video information hiding algorithms and analyzes the H.265 intra-frame prediction coding method, and proposes an H.265 video information hiding method based on coupling coefficient pairs: 1 bit of data is embedded in the embedding coefficient of a DST/DCT coupling coefficient pair and the corresponding compensation coefficient is adjusted, which controls the transmission and accumulation of the error within the block and greatly improves video quality for the same embedding amount. The invention further proposes an H.265 video information hiding method based on texture features: by analyzing the texture features of the image and combining the coupling coefficient pairs with these texture features, none of the reference pixels of a reference block transmits embedding errors to the adjacent prediction blocks, so the intra-frame distortion drift problem is fundamentally solved. The method is simple and efficient, is suitable for both high-definition and low-resolution videos, achieves high information embedding capacity and good visual concealment while the embedded secret information is almost imperceptible to the naked eye, and has little influence on the code stream.

Description

High-capacity hiding method for H.265 video information with high-efficiency fidelity
Technical Field
The invention relates to a high-capacity hiding method for H.265 video information, in particular to a high-capacity hiding method for H.265 video information with high efficiency and fidelity, and belongs to the technical field of video information hiding.
Background
In recent years, as high-definition and ultra-high-definition video at resolutions of 4K × 2K and 8K × 4K has gradually been popularized and applied on a large scale, video compression technology has faced great challenges. In addition, various digital video applications have entered people's daily lives: mobile short video, digital video broadcasting, remote monitoring, medical imaging, portable photography and so on. The diversification and high-definition trend of video applications places higher requirements on video compression performance, and under these circumstances the new-generation high-efficiency video coding standard H.265 has developed rapidly.
Because the volume of raw video data is huge, it cannot be transmitted over the network directly, so a key technology of video applications is video coding, whose aim is to remove redundant components from the video as far as possible and reduce the data volume through compression coding. With its excellent performance, H.265 was formally accepted as an international standard by the ITU-T, and the H.265 video coding standard roughly doubles the compression efficiency of H.264/AVC. H.265 adopts a hybrid coding framework comprising transform, entropy coding, quantization, intra-frame prediction, inter-frame prediction, loop filtering and other modules, but new coding techniques are introduced in almost every module.
Nowadays, mobile communication technology, multimedia and computer networks are more and more widespread, and digital multimedia is deeply woven into daily life; common digital multimedia content includes images, sound, video and the like. While powerful sharing and transmission tools develop vigorously, some hidden information needs to be stored and transmitted over networks, and illegal access to or arbitrary tampering with such hidden information causes information security problems.
Information hiding technology prevents user information from being used illegally, which is particularly important today when the amount of information on the Internet is huge, so information hiding has attracted attention all over the world. Information hiding embeds the hidden information into an ordinary file that does not attract attention, so that an unauthorized person cannot tell whether hidden information exists at all; the hidden information is carried by a popular digital carrier so as to guarantee its safe transmission. Information hiding has been successfully applied in many scenarios, such as covert communication, digital content authentication and copyright protection, and its value is increasing. Information hiding refers to hiding information in media such as images, video or audio; the technique can be traced back to ancient Greece, where it was called steganography, and the hiding methods of that time included invisible ink, microphotography, homing pigeons, hidden messages in ancient Chinese poems, and the like.
The information hiding discipline has four major branches: covert channels, steganography, anonymity and digital watermarking. Steganography conceals the very act of transferring information from anyone outside the intended parties; a covert channel transmits hidden information by exploiting a protocol or a vulnerability in system design; digital watermarking adds copyright information to digital works to protect digital copyright, or adds authentication information to authenticate the works; anonymity conceals the transmission route of the covert message, i.e. its sender, its receiver, or both.
An information hiding framework comprises an encoder, a channel and a decoder, and forms the basis of different analysis methods for the data hiding problem, such as information-theoretic and game-theoretic analysis. The information hiding process consists of the encoder and the decoder, and the carrier data signal comes from images, video or audio. Depending on the application, the carrier signal may represent signal characteristics in the spatial, temporal or transform domain, and the hidden information may be encrypted before embedding to further improve security.
There are currently few H.265 video information hiding methods, and the video information hiding methods in the prior art have many defects; the difficulties in the prior art and the problems solved by the present invention mainly focus on the following aspects:
firstly, the H.265 coding standard adopts an intra-frame prediction module to perform pixel value prediction, whose principle is to use already coded pixels as reference pixels to predict the pixels of the adjacent prediction unit. After the hidden information is embedded, the pixel values are increased or decreased; if a unit block embedded with hidden information happens to be used as the reference block of an adjacent block, the error is transferred to the current prediction block, and so on, so that the error is transmitted continuously and accumulates without limit, which seriously degrades the video quality, is unfavorable to the embedding of hidden information, greatly affects information security, and leaves the subjective visual effect of the video unable to meet requirements;
secondly, in the H.265 standard video information capacity hiding process, if information is only embedded in the qualified luminance blocks (4 × 4 and 8 × 8 luminance prediction blocks) without compensation, a serious intra-frame distortion drift phenomenon is produced: errors are transmitted in turn to adjacent luminance blocks, the video quality is finally seriously degraded, the security of video hiding is reduced, and the embedded hidden information becomes perceptible to the naked eye;
thirdly, in the intra prediction process, the last row or column of pixels of the current reference block is used as the reference pixels of its five adjacent luminance prediction blocks (the right, upper-right, lower-right, lower and lower-left adjacent blocks), so if these reference pixels carry embedding errors, the errors may (depending on the intra prediction mode actually selected) be transferred to these five adjacent blocks and then onward in the same way, resulting in the intra-frame distortion drift phenomenon, the transfer and accumulation of errors, severe video quality degradation, poor embedding capacity and visual concealment, and a large influence on the code rate of the video stream;
fourth, in the prior-art video information hiding algorithms and H.265 intra prediction encoding methods, an intra-block embedding error is produced when information is embedded in the DST/DCT coefficients, and this error is transmitted to adjacent luminance blocks through the intra prediction process, so intra-frame distortion drift occurs and the subjective visual effect of the video is affected.
Disclosure of Invention
Aiming at the defects of the prior art, the invention theoretically analyzes the effect of texture features on high-capacity hiding of video information and derives two intra-frame prediction mode constraint conditions based on two texture features; the feasibility of the texture-feature-based hiding algorithm for high-definition video applications is verified by experiments. By combining the coupling factor pair with the texture characteristics, the transmission of the error within the block is completely prevented and the intra-frame distortion drift phenomenon is completely eliminated. The algorithm is simple and efficient with low time complexity; while obtaining higher information embedding capacity and better visual concealment, the video retains good subjective quality and the embedding of hidden information is almost imperceptible to the naked eye, and the binary code stream file grows only slightly after the information is embedded. The method is also suitable for low-resolution videos and achieves excellent objective performance indexes and a subjective visual effect indistinguishable by the naked eye.
In order to achieve the technical effects, the technical scheme adopted by the invention is as follows:
the high-capacity hiding method for high-efficiency fidelity H.265 video information combines DST/DCT coupling factor pair with video image texture characteristics (namely, intra-frame prediction direction) to perform information hiding, and specifically comprises the following steps: firstly, analyzing errors generated in the embedding of hidden information in quantization factors obtained after partial decoding of H.265 video code streams based on H.265 coding characteristics, further analyzing and obtaining a coupling relation between an integer DST and a DCT factor matrix, providing an H.265 video information high-capacity hiding method based on coupling factor pairs based on embedding factors and compensation factors of the coupling factor pairs, and performing corresponding compensation while embedding information to reduce error transmission; secondly, analyzing the texture features of the image on the basis of the coupling factor to the video information high-capacity hiding method, analyzing the intra-frame prediction mode direction specified in the H.265 standard, obtaining an intra-frame prediction mode constraint condition I and a constraint condition II on the basis of the vertical texture and the horizontal texture features of the image, and combining the coupling factor pair and the texture features to completely block error transmission in the H.265 brightness block; thirdly, a super-clear video information high-capacity hiding algorithm based on texture features is provided, and the H.265 video information high-capacity hiding method is also suitable for low-resolution videos;
the first, the H.265 video information hiding method based on the coupling coefficient pair, includes embedding based on the coupling coefficient pair and extraction based on the coupling coefficient pair; a coupling relation exists in the DST/DCT coefficient matrix that can make the embedding error of the last row or last column of reference pixels in the DST/DCT matrix 0, so 1 bit of data is embedded in the embedding coefficient of the DST/DCT coupling coefficient pair and the corresponding compensation coefficient is compensated, controlling the transmission and accumulation of the error within the block;
secondly, the H & 265 video information hiding method based on the texture features comprises embedding based on the texture feature pairs and extracting based on the texture feature pairs, on the basis of the H & 265 video information hiding method based on the coupling coefficient pairs, an intra-frame prediction mode constraint condition I and a constraint condition II are obtained by analyzing the texture features of an image, and the coupling coefficient pairs are combined with the texture features, so that all reference pixels of a reference block cannot transmit embedding errors to adjacent prediction blocks, and the method is suitable for high-definition videos and low-resolution videos, can not detect the embedding of secret information by naked eyes while obtaining high information embedding capacity and good visual secret performance, and has small influence on code streams;
the embedding process based on the texture features comprises the following steps: the method comprises the steps that an original H.265 video binary code stream is partially decoded by a sender to obtain a quantized DST factor or DCT factor, then screening is carried out, the screening condition is that according to a critical value defined by the sender, the larger the critical value is, the larger the difference between a current prediction block and a reference block is, the more complex the texture of the region is shown, then whether an adjacent block of a current brightness block meets a constraint condition I or a constraint condition II is judged, if yes, 1-bit data is embedded into an embedding factor of a factor matrix of the brightness block meeting the condition, compensation is carried out on the compensation factor to eliminate the error of the last line or the last column of the embedding block, and finally, partial coding is carried out on all quantized DST or DCT factors again to obtain the H.265 video binary code stream embedded with hidden information;
the extraction process based on the texture features comprises the following steps: the receiver receives the binary code stream embedded with hidden information transmitted by the sender, partially decodes the H.265 video binary code stream to obtain the quantized DST or DCT factors, and then performs screening, where the screening condition is that 1 bit of hidden information is extracted from each qualified luminance block according to the critical value defined by the sender, the intra-frame prediction modes of the blocks adjacent to the current block, and the embedding factor agreed upon by the sender and the receiver.
High-fidelity H.265 video information high-capacity hiding method, further, H.265 video information high-capacity hiding method based on coupling factor pair: by utilizing the coupling relation in the quantization transformation factor matrix, the other DST factor or DCT factor in the coupling factor pair is adjusted while information is embedded into the embedding factor in the coupling factor pair, the intra-frame distortion drift error caused by embedding hidden information is compensated, the visual concealment is improved, and the embedded information capacity is improved;
the H.265 video information hiding method based on the coupling coefficient pair comprises embedding based on the coupling coefficient pair and extracting based on the coupling coefficient pair.
The high-capacity hiding method of the H.265 video information with high efficiency and fidelity further comprises the following embedding process based on the coupling factor pair: the original H.265 video binary code stream is first partially decoded by the sender to obtain the quantized DST or DCT factors, and screening is then performed according to a critical value defined by the sender (the larger the critical value, the larger the difference between the current prediction block and the reference block, indicating more complex texture in the region); next, 1 bit of data is embedded at a specific position of the factor matrix of each luminance block meeting the condition and compensation is carried out at the position corresponding to the compensation factor, eliminating the error of the last row or column of the embedded block and weakening the propagation of the error; finally, all quantized DST or DCT factors are partially re-encoded to obtain the H.265 video binary code stream embedded with hidden information.
High-fidelity H.265 video information high-capacity hiding method, and further, a quantized DST factor embedding method based on 4 x 4 luminance blocks:
judging whether the current frame is a brightness I frame or not, and if so, switching to a process two; if not, the flow is switched to the sixth flow;
a second flow is to obtain a quantized DST factor, an intra-frame prediction mode, a coding block depth and a transformation block depth of a current coding unit CU of the I frame, traverse all 4 x 4 basic units, judge whether the prediction size of the current brightness block is 4 x 4, and if so, switch to a third flow; if not, the flow is switched to the fifth flow;
calculating whether the absolute value of the maximum value in the quantization factors of the current brightness blocks is larger than a user-defined critical value R, and if so, switching to a flow four; if not, the flow is switched to the fifth flow;
embedding 1-bit data into the corresponding embedding factors, correspondingly adjusting the compensation factors, and turning to the process five;
step five, reading the next basic unit;
a sixth procedure, reading the next frame;
Assume that the quantized DST factor matrix of a 4 × 4 luma prediction block contains an embedding factor and its corresponding compensation factors (the members of the coupling factor pair). The specific embedding operation method for each qualified 4 × 4 luma prediction block of the H.265 original video is as follows:
first, if the information bit to be embedded is 1 and the embedding factor is even: when the embedding factor is greater than 0, it is reduced by 1, the first compensation factor is increased by 1 and the second compensation factor is reduced by 1; when the embedding factor is less than or equal to 0, it is increased by 1, the first compensation factor is reduced by 1 and the second compensation factor is increased by 1;
second, if the information bit to be embedded is 1 and the embedding factor is odd, the values of the embedding factor and the compensation factors are not changed;
third, if the information bit to be embedded is 0 and the embedding factor is odd: when the embedding factor is greater than 0, it is reduced by 1, the first compensation factor is increased by 1 and the second compensation factor is reduced by 1; when the embedding factor is less than or equal to 0, it is increased by 1, the first compensation factor is reduced by 1 and the second compensation factor is increased by 1;
fourth, if the information bit to be embedded is 0 and the embedding factor is even, the values of the embedding factor and the compensation factors are not changed.
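As an illustration of the rules above, a minimal Python sketch of the parity-based embedding operation for one qualified 4 × 4 luminance block is given below. The coefficient positions EMB, COMP1 and COMP2 are placeholders for the embedding factor and the two compensation factors of the DST coupling factor pair; their actual positions are fixed by the coupling relation in the factor matrix and are only assumed here for illustration.

```python
# Hedged sketch of the 4x4 parity embedding rule; the positions below are assumptions.
EMB, COMP1, COMP2 = (0, 0), (0, 1), (0, 2)  # assumed coupling-pair positions in the 4x4 matrix

def embed_bit_4x4(q, bit):
    """Embed one bit into a 4x4 quantized DST factor matrix q (list of lists).

    An even embedding factor encodes bit 0 and an odd one encodes bit 1; when the
    parity already matches the bit, nothing is changed (rules two and four)."""
    e = q[EMB[0]][EMB[1]]
    if (bit == 1) != (e % 2 == 1):          # parity disagrees with the bit (rules one and three)
        if e > 0:
            q[EMB[0]][EMB[1]] -= 1           # embedding factor minus 1
            q[COMP1[0]][COMP1[1]] += 1       # first compensation factor plus 1
            q[COMP2[0]][COMP2[1]] -= 1       # second compensation factor minus 1
        else:
            q[EMB[0]][EMB[1]] += 1           # embedding factor plus 1
            q[COMP1[0]][COMP1[1]] -= 1       # first compensation factor minus 1
            q[COMP2[0]][COMP2[1]] += 1       # second compensation factor plus 1
    return q
```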
High-fidelity H.265 video information high-capacity hiding method, and further, a quantization DCT factor embedding method based on 8 x 8 luminance blocks:
the process 1 is to judge whether the current frame is a brightness I frame, if so, the process is switched to the process 2; if not, then go to flow 6;
the flow 2, obtaining the quantization DCT factor, the intra-frame prediction mode, the coding block depth and the transformation block depth of the current coding unit CU of the I frame, then traversing all the basic units 8 × 8 to judge whether the prediction size of the current brightness block is 8 × 8, if so, switching to the flow 3; if not, go to flow 5;
a flow 3, calculating whether the absolute value of the maximum value in the quantization factors of the current brightness block is larger than a user-defined critical value R, if so, turning to a flow 4; if not, then go to flow 5;
a process 4, embedding 1bit data into the corresponding embedding factor, and correspondingly adjusting the compensation factor, and then turning to a process 5;
flow 5, reading the next basic unit;
flow 6, reading the next frame;
Assume that the quantized DCT factor matrix of an 8 × 8 luma prediction block contains an embedding factor and its corresponding compensation factor (the members of the coupling factor pair). The specific embedding operation method for each qualified 8 × 8 luma prediction block of the H.265 original video is as follows:
first, if the information bit to be embedded is 1 and the embedding factor is even: when the embedding factor is greater than 0, it is reduced by 1 and the compensation factor is increased by 1; when the embedding factor is less than or equal to 0, it is increased by 1 and the compensation factor is reduced by 1;
second, if the information bit to be embedded is 1 and the embedding factor is odd, the values of the embedding factor and the compensation factor are not changed;
third, if the information bit to be embedded is 0 and the embedding factor is odd: when the embedding factor is greater than 0, it is reduced by 1 and the compensation factor is increased by 1; when the embedding factor is less than or equal to 0, it is increased by 1 and the compensation factor is reduced by 1;
fourth, if the information bit to be embedded is 0 and the embedding factor is even, the values of the embedding factor and the compensation factor are not changed.
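The 8 × 8 case follows the same parity rule but adjusts a single compensation factor; a corresponding sketch, again with assumed coefficient positions, might look as follows.

```python
EMB8, COMP8 = (0, 0), (0, 1)  # assumed positions of the 8x8 DCT coupling pair

def embed_bit_8x8(q, bit):
    """Embed one bit into an 8x8 quantized DCT factor matrix q (list of lists)."""
    e = q[EMB8[0]][EMB8[1]]
    if (bit == 1) != (e % 2 == 1):      # parity disagrees with the bit
        if e > 0:
            q[EMB8[0]][EMB8[1]] -= 1    # embedding factor minus 1
            q[COMP8[0]][COMP8[1]] += 1  # compensation factor plus 1
        else:
            q[EMB8[0]][EMB8[1]] += 1    # embedding factor plus 1
            q[COMP8[0]][COMP8[1]] -= 1  # compensation factor minus 1
    return q
```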
The high-capacity hiding method of the high-efficiency fidelity H.265 video information further comprises the following extraction process based on the coupling factor pair: the receiving party receives the binary code stream of the embedded hidden information transmitted by the transmitting party, the H.265 video binary code stream is partially decoded to obtain a quantized DST factor or DCT factor, and then screening is carried out, wherein the screening condition is to extract 1-bit hidden information from each qualified brightness block according to a critical value defined by the transmitting party and the embedded factor determined by the transmitting party and the receiving party together.
High-fidelity H.265 video information high-capacity hiding method, further, a quantized DST factor extraction method based on 4 x 4 luminance blocks:
the flow (I) judges whether the current frame is a brightness I frame, if so, the flow (II) is switched to; if not, the flow is switched to the sixth step;
the second flow is to obtain the quantization DST factor, the intra-frame prediction mode, the coding block depth and the transformation block depth of the current coding unit CU of the I frame, then traverse all the basic units 4 × 4 to judge whether the prediction size of the current brightness block is 4 × 4, if so, then the third flow is carried out; if not, the flow is changed to the fifth flow;
the third flow is to calculate whether the absolute value of the maximum value in the quantization factors of the current brightness blocks is larger than a self-defined critical value R, and if so, the fourth flow is switched; if not, the flow is changed to the fifth flow;
a flow (IV) of extracting one bit of data from the corresponding embedding factor and then switching to a flow (V);
a fifth step of reading the next basic unit;
a sixth step of reading a next frame;
Assume that the quantized DST factor matrix of a 4 × 4 luma prediction block contains an embedding factor and its corresponding compensation factors. The specific extraction operation method for each qualified 4 × 4 luma prediction block of the H.265 video is as follows:
first, if the embedding factor is even, the extracted 1-bit data is 0;
second, if the embedding factor is odd, the extracted 1-bit data is 1.
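Extraction only inspects the parity of the embedding factor, as the two rules above indicate; a minimal sketch (reusing the assumed position EMB from the embedding sketch):

```python
def extract_bit_4x4(q):
    """Return the hidden bit of a qualified 4x4 block: an even embedding factor
    decodes to 0, an odd one decodes to 1."""
    return q[EMB[0]][EMB[1]] % 2
```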
High-fidelity H.265 video information high-capacity hiding method, further, H.265 video information high-capacity hiding method based on textural features: after embedding the embedding factors in the quantization factor matrix and compensating in the compensation factors, analyzing the relationship between the video texture features and the 33 intra-frame angular prediction modes, and finding two constraint conditions which should be satisfied by the intra-frame prediction modes of the adjacent luminance prediction blocks, wherein the two constraint conditions respectively enable the last row or the last column of pixels in the reference block not to be used as the reference pixels of the prediction block, so that the pixel errors of the last row or the last column cannot be transferred, and the intra-frame distortion drift phenomenon cannot be generated.
If the adjacent luminance blocks of the current embedded block meet one of the two constraint conditions, information embedding and compensation can be carried out in the corresponding coupling factor pair. If the vertical-texture-like constraint (constraint condition one) is satisfied, meaning that the last column of edge pixels will not be used as prediction reference pixels of the adjacent blocks, the HS coupling factor pair is selected for embedding and corresponding compensation is carried out, so that the last row carries no embedding error; if the horizontal-texture-like constraint (constraint condition two) is satisfied, meaning that the last row of edge pixels will not be used as prediction reference pixels of the adjacent blocks, the VS coupling factor pair is selected for embedding and corresponding compensation is carried out, so that the last column carries no embedding error.
Intra prediction mode constraints: in the intra prediction process, the last row or column of pixels of the current reference block is used as reference pixels for the five adjacent luminance prediction blocks, and the intra prediction modes of these five adjacent blocks reflect the texture characteristics of the region. If the intra prediction modes of the five adjacent blocks meet certain conditions, either the last row or the last column of pixels of the current reference block is not used as prediction reference pixels of the adjacent luminance blocks; an appropriate coupling factor pair is then selected for embedding and compensation so that the remaining last column or row of pixels of the current reference block carries no embedding error, and the intra-block error generated during embedding is not transmitted to the adjacent blocks;
Vertical-texture-like intra prediction mode condition one: the intra prediction mode of the right adjacent block of the current luminance block belongs to modes 26 to 34 and the intra prediction mode of the upper-right adjacent block belongs to modes 10 to 34. If condition one is satisfied, the edge pixels of the last column of the current luminance block are not used as prediction reference pixels of its right and upper-right adjacent blocks, so when information is embedded in this luminance block the intra-block embedding error is not transferred to the right and upper-right adjacent blocks; and since the embedding error of the last row of the current block is 0, no error is transferred to the lower-left, lower and lower-right adjacent blocks.
Horizontal-texture-like intra prediction mode condition two: the intra prediction mode of the lower adjacent block of the current luminance block belongs to modes 2 to 10 and the intra prediction mode of the lower-left adjacent block belongs to modes 2 to 26. If condition two is satisfied, the edge pixels of the last row of the current luminance block are not used as prediction reference pixels of its lower and lower-left adjacent blocks, so when information is embedded in this luminance block the intra-block embedding error is not transferred to the lower and lower-left adjacent blocks; and since the embedding error of the last column of the current block is 0, no error is transferred to the upper-right, right and lower-right adjacent blocks.
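The two constraint conditions reduce to simple range checks on the intra prediction modes of the relevant adjacent blocks; a sketch of these checks (the mode numbers follow the H.265 intra mode indexing quoted above):

```python
def satisfies_constraint_one(mode_right, mode_upper_right):
    """Vertical-texture-like condition one: the right neighbor's intra mode lies in
    26..34 and the upper-right neighbor's mode lies in 10..34, so the last column
    of the current block is never used as a prediction reference."""
    return 26 <= mode_right <= 34 and 10 <= mode_upper_right <= 34

def satisfies_constraint_two(mode_lower, mode_lower_left):
    """Horizontal-texture-like condition two: the lower neighbor's intra mode lies in
    2..10 and the lower-left neighbor's mode lies in 2..26, so the last row of the
    current block is never used as a prediction reference."""
    return 2 <= mode_lower <= 10 and 2 <= mode_lower_left <= 26
```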
High-fidelity H.265 video information high-capacity hiding method, and further, a quantized DST factor embedding method based on the texture characteristics of 4 x 4 luminance blocks:
step one, judging whether the current frame is a brightness I frame, if so, turning to step two; if not, turning to the seventh step;
step two, obtaining a quantization DST factor, an intra-frame prediction mode, a coding block depth and a transformation block depth of a current coding unit CU of the I frame, traversing all basic units 4 × 4 to judge whether the prediction size of the current brightness block is 4 × 4, and if so, turning to step three; if not, turning to step six;
step three, calculating whether the absolute value of the maximum value in the quantization factors of the current brightness block is larger than a user-defined critical value R, and if so, turning to step four; if not, turning to step six;
step four, judging whether the adjacent blocks of the current brightness block meet constraint condition one or constraint condition two, and if so, turning to step five; if not, turning to step six;
step five, embedding one bit of data into the corresponding embedding factor, and correspondingly adjusting the compensation factor, and turning to step six;
reading the next basic unit;
step seven, reading the next frame;
Assume that the quantized DST factor matrix of a 4 × 4 luma prediction block contains an embedding factor and its corresponding compensation factors (the members of the coupling factor pair). The specific embedding operation method for each qualified 4 × 4 luma prediction block of the H.265 original video is as follows:
first, if the information bit to be embedded is 1 and the embedding factor is even: when the embedding factor is greater than 0, it is reduced by 1, the first compensation factor is increased by 1 and the second compensation factor is reduced by 1; when the embedding factor is less than or equal to 0, it is increased by 1, the first compensation factor is reduced by 1 and the second compensation factor is increased by 1;
second, if the information bit to be embedded is 1 and the embedding factor is odd, the values of the embedding factor and the compensation factors are not changed;
third, if the information bit to be embedded is 0 and the embedding factor is odd: when the embedding factor is greater than 0, it is reduced by 1, the first compensation factor is increased by 1 and the second compensation factor is reduced by 1; when the embedding factor is less than or equal to 0, it is increased by 1, the first compensation factor is reduced by 1 and the second compensation factor is increased by 1;
fourth, if the information bit to be embedded is 0 and the embedding factor is even, the values of the embedding factor and the compensation factors are not changed.
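Putting the steps above together, a hedged sketch of the texture-feature-based embedding loop for the 4 × 4 case is shown below. The block object and its attributes (the quantized matrix q, the prediction size and the neighboring intra modes) are assumptions standing in for the data obtained from partial decoding; embed_bit_4x4 and the two constraint checks are the sketches given earlier.

```python
def embed_frame_4x4_texture(luma_blocks, hidden_bits, R):
    """Traverse the 4x4 luma prediction blocks of one I frame and embed bits only
    where the threshold and one of the two constraint conditions are satisfied."""
    it = iter(hidden_bits)
    for blk in luma_blocks:                                   # step two: traverse the basic units
        if blk.size != 4:
            continue                                          # not a 4x4 prediction block
        if max(abs(c) for row in blk.q for c in row) <= R:
            continue                                          # step three: critical-value screening
        if not (satisfies_constraint_one(blk.mode_right, blk.mode_upper_right)
                or satisfies_constraint_two(blk.mode_lower, blk.mode_lower_left)):
            continue                                          # step four: neither constraint holds
        try:
            bit = next(it)
        except StopIteration:
            break                                             # all hidden bits embedded
        # Constraint one selects the HS coupling pair, constraint two the VS pair;
        # that positional detail is folded into embed_bit_4x4 in this sketch.
        embed_bit_4x4(blk.q, bit)                             # step five: embed and compensate
    return luma_blocks
```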
High-fidelity H.265 video information high-capacity hiding method, and further, a quantization DCT factor embedding method based on texture characteristics of 8 x 8 luminance blocks:
step 1, judging whether the current frame is a brightness I frame, if so, turning to step 2; if not, the step 7 is carried out;
step 2, obtaining a quantization DCT (discrete cosine transformation) factor, an intra-frame prediction mode, a coding block depth and a transformation block depth of a current coding unit CU of the I frame, traversing all basic units 8 × 8 to judge whether the prediction size of the current brightness block is 8 × 8, and if so, turning to step 3; if not, the step 6 is carried out;
step 3, calculating whether the absolute value of the maximum value in the quantization factor of the current brightness block is larger than a user-defined critical value R, and if so, turning to step 4; if not, the step 6 is carried out;
step 4, judging whether the adjacent blocks of the current brightness block meet constraint condition one or constraint condition two, and if so, turning to step 5; if not, turning to step 6;
step 5, embedding a bit of data into the corresponding embedding factor, correspondingly adjusting the compensation factor, and turning to step 6;
step 6, reading the next basic unit;
step 7, reading the next frame;
Assume that the quantized DCT factor matrix of an 8 × 8 luma prediction block contains an embedding factor and its corresponding compensation factor (the members of the coupling factor pair). The specific embedding operation method for each qualified 8 × 8 luma prediction block of the H.265 original video is as follows:
first, if the information bit to be embedded is 1 and the embedding factor is even: when the embedding factor is greater than 0, it is reduced by 1 and the compensation factor is increased by 1; when the embedding factor is less than or equal to 0, it is increased by 1 and the compensation factor is reduced by 1;
second, if the information bit to be embedded is 1 and the embedding factor is odd, the values of the embedding factor and the compensation factor are not changed;
third, if the information bit to be embedded is 0 and the embedding factor is odd: when the embedding factor is greater than 0, it is reduced by 1 and the compensation factor is increased by 1; when the embedding factor is less than or equal to 0, it is increased by 1 and the compensation factor is reduced by 1;
fourth, if the information bit to be embedded is 0 and the embedding factor is even, the values of the embedding factor and the compensation factor are not changed.
Compared with the prior art, the invention has the following contributions and innovation points:
firstly, the invention provides an embedding and extraction method suitable for high-capacity hiding of H.265 video information, which solves the problem of intra-frame distortion drift and effectively improves video quality; the method combines DST/DCT coupling factor pairs with the texture characteristics of the video image (namely the intra-frame prediction direction) to perform information steganography, and experimental results show that the method is simple and efficient, the peak signal-to-noise ratio changes only slightly before and after embedding at a relatively high embedding capacity, the video shows no subjective visual difference after the information is embedded, and the embedded information can be completely extracted, so the method has high practical value and broad market application prospects;
secondly, the method analyzes the error generated when the hidden information is embedded in the quantization factor obtained after the H.265 video code stream is partially decoded based on the H.265 coding characteristic, further analyzes and obtains the coupling relation between the integer DST and the DCT factor matrix, provides an H.265 video information high-capacity hiding method based on the coupling factor pair based on the embedding factor and the compensation factor of the coupling factor pair, performs corresponding compensation while embedding information, and reduces the transmission of the error; under the same embedding capacity, compared with the video which is not compensated after being embedded, the PSNR1 value and the PSNR2 value of the video which is compensated after being embedded are greatly improved, and the subjective visual effect is also obviously improved;
thirdly, on the basis of the coupling-factor-pair high-capacity video information hiding method, the texture characteristics of the image are analyzed together with the intra-frame prediction mode directions specified in the H.265 standard; intra-frame prediction mode constraint condition one and constraint condition two are obtained based on the vertical-texture and horizontal-texture characteristics of the image, and the coupling factor pair is combined with the texture characteristics, so the intra-frame distortion drift phenomenon is completely eliminated and error transmission within H.265 luminance blocks is completely blocked; an ultra-high-definition video information high-capacity hiding algorithm based on texture features is provided, a good subjective visual effect is obtained together with a high embedding amount, the embedding of hidden information cannot be perceived by the naked eye, the bit rate of the binary video code stream increases only by a small percentage, the choice of coupling factor pair used for information embedding has little influence on the subjective and objective evaluation indexes of the video, and the H.265 video information high-capacity hiding method is also suitable for low-resolution videos;
fourthly, the invention provides an embedding and extraction method based on high-capacity hiding of H.265 video information, which effectively controls intra-frame distortion drift, weakens the accumulation effect and improves the visual effect of the H.265 standard video embedded with hidden information; using the coupling relation in the quantized transform factor matrix, information is embedded in the embedding factor of the coupling factor pair while the other DST or DCT factor in the pair (namely the compensation factor) is adjusted at the same time, compensating the intra-frame distortion drift error caused by embedding the hidden information, improving visual concealment and further increasing the capacity of the embedded information;
fifth, the present invention provides an H.265 video information high-capacity concealment algorithm that completely solves the intra-frame distortion drift problem; the distortion error of the currently embedded luminance block may be transmitted to its five adjacent luminance blocks through the intra prediction process, and in order to prevent the intra-block embedding error from propagating, after the embedding factor in the quantized factor matrix is embedded and the compensation factor is adjusted, the relationship between the video texture features and the 33 intra-frame angular prediction modes is analyzed, and two constraint conditions that the intra prediction modes of the adjacent luminance prediction blocks should satisfy are found; these two constraint conditions ensure that the last row or the last column of pixels in the reference block is not used as reference pixels of a prediction block, so the pixel error of that last row or column is not transmitted and the intra-frame distortion drift phenomenon does not occur, fundamentally preventing the transfer and accumulation of errors; the method maintains video quality well, obtains a higher embedding capacity and a good visual hiding effect, and has little influence on the code rate of the video stream.
Drawings
Fig. 1 is a flow chart of the embedding operation of the h.265 video concealment coupling coefficient pair method of the present invention.
Fig. 2 is a flow chart of the extraction operation of the h.265 video concealment coupling coefficient pair method of the present invention.
FIG. 3 is a diagram illustrating the position relationship between a current block and five adjacent non-encoded predicted luma blocks according to the present invention.
Fig. 4 is a structural diagram of a first intra prediction mode constraint condition according to the present invention.
Fig. 5 is a schematic structural diagram of an intra prediction mode constraint two according to the present invention.
FIG. 6 is a structural schematic of the coupling coefficient pair HS and constraint one of the present invention.
FIG. 7 is a structural diagram of the coupling coefficient pair VS and constraint two of the present invention.
Fig. 8 is an embedding flowchart of the h.265 video hiding texture-based coupling factor pair method.
Fig. 9 is a flow chart of quantized DST coefficient embedding based on 4 × 4 luma block texture features.
Fig. 10 is a flow chart of quantized DCT coefficient embedding based on 8 x 8 luminance block texture features.
Fig. 11 is a flow chart of quantized DST coefficient extraction based on texture features of a 4 × 4 luma block.
Fig. 12 is a flowchart of quantized DCT coefficient extraction based on texture features of an 8 × 8 luminance block.
Detailed Description
The technical solution of the h.265 video information high-capacity hiding method with high fidelity provided by the present invention will be further described below with reference to the accompanying drawings, so that those skilled in the art can better understand the present invention and can implement the same.
The invention provides an embedding and extraction method suitable for high-capacity hiding of H.265 video information, which eliminates intra-frame distortion drift of the video, improves video quality, and performs information steganography by combining DST/DCT coupling factor pairs with the texture characteristics of the video image (namely the intra-frame prediction direction); experimental results show that the method is practical and efficient, the change in peak signal-to-noise ratio before and after embedding is very small while the embedding capacity is high, the video shows no subjective visual difference after the information is embedded, and the embedded information can be completely extracted. The method specifically comprises the following steps:
firstly, the method analyzes the error generated when the hidden information is embedded in the quantization factor obtained after the H.265 video code stream is partially decoded based on the H.265 coding characteristic, further analyzes and obtains the coupling relation between the integer DST and the DCT factor matrix, provides an H.265 video information high-capacity hiding method based on the coupling factor pair based on the embedding factor and the compensation factor of the coupling factor pair, performs corresponding compensation while embedding information, and reduces the transmission of the error; the following results were obtained by HM10.0 standard platform test: under the same embedding capacity, compared with the video which is not compensated after being embedded, the PSNR1 value and the PSNR2 value of the video which is compensated after being embedded are greatly improved, and the subjective visual effect is also obviously improved;
secondly, analyzing the texture characteristics of the image on the basis of the coupling factor to a video information high-capacity hiding method, analyzing the intra-frame prediction mode direction specified in the H.265 standard, obtaining an intra-frame prediction mode constraint condition I and a constraint condition II on the basis of the vertical texture and the horizontal texture characteristics of the image, combining the coupling factor pair and the texture characteristics, completely eliminating the intra-frame distortion drift phenomenon, and completely blocking the error transmission in the H.265 brightness block;
thirdly, an ultra-high-definition video information high-capacity hiding algorithm based on texture features is provided; experiments are carried out on the HM10.0 standard platform, testing 4 × 4 luminance blocks, 8 × 8 luminance blocks, and 4 × 4 and 8 × 8 luminance blocks together, and the results show that the video has good objective evaluation performance after information embedding: the average embedding capacity of an I frame is 1149 to 14651 bits, the difference in PSNR1 before and after embedding is 0.1002 to 0.4859 dB, and the average PSNR2 is above 48 dB; a good subjective visual effect is obtained together with a high embedding amount, the embedded hidden information cannot be perceived by the naked eye, the percentage increase in the bit rate of the binary video code stream is small, and the choice of coupling factor pair used for information embedding has little influence on the subjective and objective evaluation indexes of the video;
fourthly, the H.265 video information high-capacity hiding method proposed by the present invention is applied to low-resolution (QCIF, CIF and 4CIF) video sequences, and the results show that, with the critical value R set to 0 and the chosen coupling factor pairs used for the quantized DST factors and the quantized DCT factors respectively, the video quality after information embedding is essentially unchanged: the average embedding capacity of an I frame is 151 to 1611 bits, the difference in PSNR1 before and after embedding is small (0.0981 to 0.3996 dB), and the average PSNR2 is above 48 dB, so the H.265 video information high-capacity hiding method is also suitable for low-resolution videos.
H.265 video information high-capacity hiding method based on coupling factor pair
In the H.265 standard video information high-capacity concealment process, if information is only embedded in the qualified luminance blocks (4 × 4 and 8 × 8 luminance prediction blocks) without compensation, an intra-frame distortion drift phenomenon occurs: errors are transmitted in turn to adjacent luminance blocks, the video quality is finally seriously degraded, and the security of video hiding is reduced.
After the residual matrix R of the luminance block is subjected to the integer DCT/DST transform at the encoding end, the quantized DCT/DST coefficient matrix is obtained as
Ỹ = round(Y / Q_step),
where Y is the integer-transformed residual matrix, Q_step is the quantization step size whose value is determined by the QP quantization parameter, / represents the dot (element-wise) division operation in matrix operations, and round() is the rounding function.
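A small numerical illustration of this quantization step, with made-up values for the transformed residual matrix and the quantization step size:

```python
import numpy as np

Y = np.array([[52, -7,  3, 0],
              [ 6,  4, -2, 0],
              [ 1,  0,  0, 0],
              [ 0,  0,  0, 0]])              # integer DST/DCT-transformed residual (toy values)
Q_step = 10                                  # quantization step size derived from the QP
Y_tilde = np.round(Y / Q_step).astype(int)   # element-wise division, then rounding
print(Y_tilde)                               # quantized factor matrix used for embedding
```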
The invention provides a built-in embedding extraction method based on H.265 video information high-capacity hiding, which can effectively control intra-frame distortion drift, weaken accumulation effect and improve the visual effect of an H.265 standard video embedded with hidden information.
Coupling factor pair-based embedding extraction method
1. Embedding based on coupling factor pairs
Fig. 1 is the embedding flow chart of the H.265 video hiding coupling factor pair method. The sender (hidden information embedder) first partially decodes (CABAC decoding and the like) the original H.265 video binary code stream to obtain the quantized DST or DCT factors, and then performs screening according to a critical value defined by the sender (the larger the critical value, the larger the difference between the current prediction block and the reference block, indicating more complex texture in the region). Next, 1 bit of data is embedded at a specific position (the embedding factor) of the factor matrix of each luminance block meeting the condition, and compensation is carried out at the position corresponding to the compensation factor, eliminating the error of the last row or column of the embedded block, weakening the propagation of the error and effectively controlling intra-frame distortion drift. Finally, all quantized DST or DCT factors are partially re-encoded (CABAC encoding and the like) to obtain the H.265 video binary code stream embedded with hidden information.
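The sender-side pipeline of Fig. 1 can be summarized by the following hedged sketch; partial_decode, is_qualified, embed_and_compensate and partial_encode are hypothetical helpers standing in for CABAC partial decoding, the screening rule, the coupling-pair embedding of the previous sections and CABAC partial re-encoding, not real library calls.

```python
def embed_hidden_info(bitstream_in, hidden_bits, R):
    """Sender-side flow of Fig. 1 (sketch): partial decode, screen, embed, re-encode."""
    blocks = partial_decode(bitstream_in)        # quantized DST/DCT factor blocks (hypothetical)
    it = iter(hidden_bits)
    for blk in blocks:
        if not is_qualified(blk, R):             # size and critical-value screening (hypothetical)
            continue
        try:
            bit = next(it)
        except StopIteration:
            break                                # all hidden bits embedded
        embed_and_compensate(blk, bit)           # coupling-pair embedding plus compensation
    return partial_encode(blocks)                # stego H.265 binary code stream (hypothetical)
```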
(1) Quantized DST factor embedding method based on 4 x 4 luminance block
Quantized DST factor embedding procedure based on 4 × 4 luma block:
judging whether the current frame is a brightness I frame or not, and if so, switching to a process two; if not, the flow is switched to the sixth flow;
a second flow is to obtain a quantized DST factor, an intra-frame prediction mode, a coding block depth and a transformation block depth of a current coding unit CU of the I frame, traverse all 4 x 4 basic units, judge whether the prediction size of the current brightness block is 4 x 4, and if so, switch to a third flow; if not, the flow is switched to the fifth flow;
calculating whether the absolute value of the maximum value in the quantization factors of the current brightness blocks is larger than a user-defined critical value R, and if so, switching to a flow four; if not, the flow is switched to the fifth flow;
embedding 1-bit data into the corresponding embedding factors, correspondingly adjusting the compensation factors, and turning to the process five;
step five, reading the next basic unit;
and a sixth step of reading the next frame.
Assume that the quantized DST factor matrix of a 4 × 4 luma prediction block contains an embedding factor and its corresponding compensation factors (the members of the coupling factor pair). The specific embedding operation method for each qualified 4 × 4 luma prediction block of the H.265 original video is as follows:
first, if the information bit to be embedded is 1 and the embedding factor is even: when the embedding factor is greater than 0, it is reduced by 1, the first compensation factor is increased by 1 and the second compensation factor is reduced by 1; when the embedding factor is less than or equal to 0, it is increased by 1, the first compensation factor is reduced by 1 and the second compensation factor is increased by 1;
second, if the information bit to be embedded is 1 and the embedding factor is odd, the values of the embedding factor and the compensation factors are not changed;
third, if the information bit to be embedded is 0 and the embedding factor is odd: when the embedding factor is greater than 0, it is reduced by 1, the first compensation factor is increased by 1 and the second compensation factor is reduced by 1; when the embedding factor is less than or equal to 0, it is increased by 1, the first compensation factor is reduced by 1 and the second compensation factor is increased by 1;
fourth, if the information bit to be embedded is 0 and the embedding factor is even, the values of the embedding factor and the compensation factors are not changed.
(2) Quantized DCT factor embedding method based on 8 × 8 luminance blocks
Quantized DCT factor embedding procedure based on 8 × 8 luminance blocks:
Flow 1: determine whether the current frame is an I frame; if so, go to flow 2; if not, go to flow 6;
Flow 2: obtain the quantized DCT factors, the intra prediction mode, the coding-block depth and the transform-block depth of the current coding unit (CU) of the I frame, traverse all 8 × 8 basic units and determine whether the prediction size of the current luminance block is 8 × 8; if so, go to flow 3; if not, go to flow 5;
Flow 3: determine whether the maximum absolute value of the quantized factors of the current luminance block is greater than the user-defined critical value R; if so, go to flow 4; if not, go to flow 5;
Flow 4: embed 1 bit of data into the corresponding embedding factor and adjust the compensation factor accordingly, then go to flow 5;
Flow 5: read the next basic unit;
Flow 6: read the next frame.
Assume that an 8 × 8 luma prediction block has an embedding factor and a compensation factor, whose positions in the quantized factor matrix are determined by the selected coupling factor pair. The specific embedding operation for each qualified 8 × 8 luma prediction block of the H.265 original video is as follows:
First, if the information bit to be embedded is 1 and the embedding factor is even: when the embedding factor is greater than 0, it is decreased by 1 and the compensation factor is increased by 1; when the embedding factor is less than or equal to 0, it is increased by 1 and the compensation factor is decreased by 1.
Second, if the information bit to be embedded is 1 and the embedding factor is odd, the embedding factor and the compensation factor are left unchanged.
Third, if the information bit to be embedded is 0 and the embedding factor is odd: when the embedding factor is greater than 0, it is decreased by 1 and the compensation factor is increased by 1; when the embedding factor is less than or equal to 0, it is increased by 1 and the compensation factor is decreased by 1.
Fourth, if the information bit to be embedded is 0 and the embedding factor is even, the embedding factor and the compensation factor are left unchanged.
2. Extraction based on coupling factor pairs
Fig. 2 is a flow chart of the extraction operation of the h.265 video hiding coupling factor pair method.
The receiver (hidden-information extractor) receives the binary code stream with embedded hidden information from the sender (hidden-information embedder). It first partially decodes the H.265 video binary code stream (CABAC decoding) to obtain the quantized DST or DCT factors and then screens the blocks using the critical value defined by the sender; from each qualified luminance block, 1 bit of hidden information is extracted at the embedding factor position agreed upon by the sender and receiver.
(1) Quantized DST factor extraction method based on 4 × 4 luminance blocks
Quantized DST factor extraction procedure based on 4 × 4 luminance blocks:
Flow (one): determine whether the current frame is an I frame; if so, go to flow (two); if not, go to flow (six);
Flow (two): obtain the quantized DST factors, the intra prediction mode, the coding-block depth and the transform-block depth of the current coding unit (CU) of the I frame, traverse all 4 × 4 basic units and determine whether the prediction size of the current luminance block is 4 × 4; if so, go to flow (three); if not, go to flow (five);
Flow (three): determine whether the maximum absolute value of the quantized factors of the current luminance block is greater than the user-defined critical value R; if so, go to flow (four); if not, go to flow (five);
Flow (four): extract one bit of data from the corresponding embedding factor, then go to flow (five);
Flow (five): read the next basic unit;
Flow (six): read the next frame.
Assume that a 4 × 4 luma prediction block has an embedding factor and compensation factors at the positions determined by the coupling factor pair. The specific extraction operation for each qualified 4 × 4 luma prediction block of the H.265 video is as follows:
First, if the embedding factor is even, the extracted 1-bit data is 0;
Second, if the embedding factor is odd, the extracted 1-bit data is 1.
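The extraction side only needs the parity of the embedding factor. A minimal sketch follows; the position E of the embedding factor is a placeholder agreed upon by sender and receiver, and the same rule applies to the 8 × 8 case described next.

```python
def extract_bit(coeffs, E):
    """Extract one hidden bit from a qualified luma block: even -> 0, odd -> 1."""
    return abs(coeffs[E[0]][E[1]]) % 2
```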
(2) Quantized DCT factor extraction method based on 8 × 8 luminance blocks
Quantized DCT factor extraction procedure based on 8 × 8 luminance blocks:
Flow (1): determine whether the current frame is an I frame; if so, go to flow (2); if not, go to flow (6);
Flow (2): obtain the quantized DCT factors, the intra prediction mode, the coding-block depth and the transform-block depth of the current coding unit (CU) of the I frame, traverse all 8 × 8 basic units and determine whether the prediction size of the current luminance block is 8 × 8; if so, go to flow (3); if not, go to flow (5);
Flow (3): determine whether the maximum absolute value of the quantized factors of the current luminance block is greater than the user-defined critical value R; if so, go to flow (4); if not, go to flow (5);
Flow (4): extract 1 bit of data from the corresponding embedding factor, then go to flow (5);
Flow (5): read the next basic unit;
Flow (6): read the next frame.
Assume that an 8 × 8 luma prediction block has an embedding factor and a compensation factor at the positions determined by the coupling factor pair. The specific extraction operation for each qualified 8 × 8 luma prediction block of the H.265 video is as follows:
First, if the embedding factor is even, the extracted 1-bit data is 0;
Second, if the embedding factor is odd, the extracted 1-bit data is 1.
3. Results and analysis of the experiments
The method was implemented on the H.265 video standard reference software HM, and the quantized DST factors of 4 × 4 blocks and the quantized DCT factors of 8 × 8 blocks were tested separately.
The experimental environment was as follows: the YUV test video sequences are encoded with a quantization parameter of 28 for all I frames; the frame rate of the test videos at resolutions 1280 × 720, 1920 × 1080 and 2560 × 1600 is 30 frames per second; the I-frame interval is 4; 100 frames are encoded, giving 25 I frames per video. PSNR1 denotes the peak signal-to-noise ratio between the video with embedded hidden information and the original YUV video before encoding, and PSNR2 denotes the peak signal-to-noise ratio between the video with embedded hidden information and the video obtained by decoding the H.265 video code stream. The peak signal-to-noise ratio is used to measure the objective quality of the video with embedded hidden information so that a more realistic and objective conclusion can be drawn; PSNR1 and PSNR2 are averaged over all frames.
The embedding capacity represents the number of bits embedded on average per I frame, and R is a critical value.
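As a reference for how these two measures could be computed, the sketch below averages the per-frame PSNR over all frames. The 8-bit peak value of 255, the use of numpy arrays and the restriction to the luma plane are assumptions for illustration, not specifics from the patent.

```python
import numpy as np

def psnr(frame_a, frame_b, peak=255.0):
    """Peak signal-to-noise ratio between two 8-bit luma frames (numpy arrays)."""
    mse = np.mean((frame_a.astype(np.float64) - frame_b.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak * peak / mse)

def average_psnr(frames_a, frames_b):
    """Average per-frame PSNR, as used for PSNR1 and PSNR2 above."""
    values = [psnr(a, b) for a, b in zip(frames_a, frames_b)]
    return sum(values) / len(values)
```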
(1) Quantized DST factor embedding results and analysis based on 4 x 4 luma block
The experimental results show that, when no compensation is applied after embedding, the images suffer serious distortion, many shadows and block artifacts appear, and the colors differ from the original video. Compared with these uncompensated videos, the visual effect of the proposed method is clearly improved and the shadows and block artifacts are eliminated. As the critical value R increases, fewer blocks pass the screening, the embedding capacity becomes smaller, and the visual effect becomes better. Embedding the information in the embedding factors of the quantized DST factor matrix of the 4 × 4 luminance block while adjusting the corresponding compensation factors yields a much better video effect than embedding without compensation: PSNR1 and PSNR2 are greatly improved, which agrees with the theoretical analysis and shows that intra-frame distortion drift is controlled to a certain extent. The embedding capacity is directly related to the critical value R; the larger the embedding capacity, the poorer the video effect, and vice versa.
(2) Quantized DCT-factor embedding results and analysis based on 8 x 8 luminance blocks
The information hiding method for the quantized DCT factors of 8 × 8 luminance blocks uses an embedding factor and a compensation factor taken from a coupling factor pair of the 8 × 8 factor matrix. In the uncompensated video after the hidden information is embedded, the images show serious distortion, more shadows and block artifacts appear, and the colors differ noticeably from the original video; in the compensated video the visual effect is greatly improved. Since there are far fewer 8 × 8 luminance prediction blocks than 4 × 4 blocks, the embedding capacity is correspondingly smaller.
Compared with the method that information is embedded in embedding factors in a quantized DCT factor matrix of an 8 x 8 luminance block and compensation is not carried out, the video effect is much better, PSNR1 and PSNR2 are greatly improved, and the method is consistent with theoretical analysis and controls intra-frame distortion drift to a certain extent. The size of the video embedding capacity is directly related to the critical value R, and the larger the embedding capacity is, the poorer the video effect is; and vice versa.
Experiments verify the feasibility of the coupling-factor-based information hiding algorithm. Compared with embedding without compensation, embedding and compensating in the coupling factor pairs of the DST factors of 4 × 4 luminance blocks and the DCT factors of 8 × 8 luminance blocks greatly improves PSNR1 and PSNR2 at the same embedding capacity, and the subjective visual effect of the video is clearly better. The algorithm is simple and efficient, has low time complexity and a very large information embedding capacity, and the video effect is greatly improved. However, careful observation of the video can still reveal a certain amount of distortion, which poses a potential risk to the security of the hidden information.
H.265 video information high-capacity hiding method based on texture features
The invention provides an H.265 video information high-capacity hiding algorithm that completely solves the intra-frame distortion drift problem. Theoretical analysis shows that the distortion error of the currently embedded luminance block may be propagated to five adjacent luminance blocks through the intra-frame prediction process. To prevent the intra-block embedding error from propagating further, after embedding in the embedding factors of the quantized factor matrix and compensating in the compensation factors, the relationship between the video texture features and the 33 intra angular prediction modes is analyzed, and two constraint conditions that the intra prediction modes of the adjacent luminance prediction blocks should satisfy are found. The two constraints respectively ensure that the last row or the last column of pixels of the reference block is not used as reference pixels of the prediction blocks, so that the pixel errors of the last row or last column cannot propagate and the intra-frame distortion drift phenomenon does not occur.
If an adjacent luminance block of the current embedded block satisfies one of the two constraint conditions, information embedding and compensation can be carried out in the corresponding coupling factor pair. If the vertical-texture-like constraint, namely constraint condition one, is satisfied, indicating that the last column of key pixels will not be used as prediction reference pixels of the adjacent blocks, the HS coupling factor pair is selected for embedding and corresponding compensation is performed so that the last row carries no embedding error; if the horizontal-texture-like constraint, namely constraint condition two, is satisfied, indicating that the last row of key pixels will not be used as prediction reference pixels of the adjacent blocks, the VS coupling factor pair is selected for embedding and corresponding compensation is performed so that the last column carries no embedding error. Both ways of embedding the hidden information ensure that none of the pixels in the last row and last column is used as a reference pixel of the adjacent luminance prediction blocks, thus fundamentally preventing the propagation and accumulation of errors. The experimental results show that the video quality is well preserved, a high embedding capacity and a good visual hiding effect are obtained, and the influence on the bit rate of the video stream is small.
(I) Intra prediction mode
H.265 luma intra prediction supports five prediction unit sizes: 4 × 4, 8 × 8, 16 × 16, 32 × 32 and 64 × 64. A PU of each size has 35 prediction modes, comprising the Planar mode, the DC mode and 33 angular modes: mode 0 is the Planar mode, mode 1 is the DC mode, and modes 2 to 34 are the 33 angular modes, of which modes 2 to 17 are called horizontal modes and modes 18 to 34 are called vertical modes.
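As a small illustration of this mode numbering, the helper below classifies a given intra prediction mode; the ranges follow the description above and the function name is only an illustrative choice.

```python
def classify_intra_mode(mode):
    """Classify an H.265 luma intra prediction mode (0..34) per the text above."""
    if mode == 0:
        return "Planar"
    if mode == 1:
        return "DC"
    if 2 <= mode <= 17:
        return "angular-horizontal"
    if 18 <= mode <= 34:
        return "angular-vertical"
    raise ValueError("intra prediction mode must be in 0..34")
```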
(II) Intra prediction mode constraints
In the intra prediction process, the last row or last column of pixels of the current reference block is used as reference pixels for its five neighboring luma prediction blocks (i.e., the right, upper-right, lower, lower-left and lower-right neighboring blocks). If these reference pixels contain embedding errors, the errors may, depending on the intra prediction mode actually selected, be passed on to these five neighboring blocks, and from there propagate further in the same way.
Fig. 3 shows the positional relationship between the current block and its five neighboring, not-yet-encoded predicted luma blocks. The intra prediction modes of the five neighboring blocks reflect the texture characteristics of the region. If these modes satisfy certain conditions, the last row or column of pixels of the current reference block will not be used as prediction reference pixels of its neighboring luma blocks; an appropriate coupling factor pair is then selected for embedding with compensation so that the remaining last column or row of pixels of the current reference block is not used as prediction reference pixels of its neighboring luma blocks either. The intra-block error generated during embedding is therefore not propagated to the neighboring blocks, and the intra-frame distortion drift phenomenon is avoided.
1. Similar vertical texture
Vertical-texture-like intra prediction mode constraint (condition one): the intra prediction mode of the right neighboring block of the current luma block belongs to modes 26 to 34, and the intra prediction mode of the upper-right neighboring block belongs to modes 10 to 34. As shown in fig. 4, if condition one is satisfied, the pixels of the last column of the current luma block are not used as prediction reference pixels of its right and upper-right neighbors, so when information is embedded in this luma block the intra-block embedding error is not propagated to the right and upper-right neighbors; moreover, the embedding error of the last row of the current block is 0, so no error is propagated to the lower-left, lower and lower-right neighbors.
2. Level-like texture
Horizontal-texture-like intra prediction mode constraint (condition two): the intra prediction mode of the lower neighboring block of the current luma block belongs to modes 2 to 10, and the intra prediction mode of the lower-left neighboring block belongs to modes 2 to 26. As shown in fig. 5, if condition two is satisfied, the pixels of the last row of the current luma block are not used as prediction reference pixels of its lower and lower-left neighbors, so when information is embedded in this luma block the intra-block embedding error is not propagated to the lower and lower-left neighbors; moreover, the embedding error of the last column of the current block is 0, so no error is propagated to the right, upper-right and lower-right neighbors.
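A compact sketch of how the two constraints and the corresponding coupling factor pair could be checked follows; the neighbor-mode arguments and the "HS"/"VS" labels are illustrative names, not identifiers from the patent.

```python
def select_coupling_pair(right_mode, upper_right_mode, lower_mode, lower_left_mode):
    """Return which coupling factor pair to use for the current luma block,
    or None if neither constraint on the neighbors' intra modes holds."""
    # Constraint one (vertical-like texture): the last column of the current
    # block is never referenced by the right / upper-right neighbors, so the
    # HS pair is used to zero the embedding error of the last row.
    if 26 <= right_mode <= 34 and 10 <= upper_right_mode <= 34:
        return "HS"
    # Constraint two (horizontal-like texture): the last row of the current
    # block is never referenced by the lower / lower-left neighbors, so the
    # VS pair is used to zero the embedding error of the last column.
    if 2 <= lower_mode <= 10 and 2 <= lower_left_mode <= 26:
        return "VS"
    return None
```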
(III) Intra prediction mode and coupling factor pairs
On the basis of the coupling factor pair, the problem of intra-frame distortion drift is completely solved by combining with a prediction mode constraint condition. The embodiment is described with a 4 × 4 luminance block, and the same applies to an 8 × 8 luminance block, where HS in the pair of coupling factors is embedded and compensated so that the last row value of the embedded error matrix becomes 0. As shown in fig. 6, in the case that the embedding error of the last row is equal to 0, if the constraint condition one is satisfied, the pixels in the last column of the reference luminance block are not used as the reference pixels of the right neighboring block and the upper right neighboring block, and no error is transmitted to five neighboring blocks of the current luminance block, thereby completely avoiding the generation of the intra-frame distortion drift phenomenon.
Similarly, to change the value of the last column of the embedded error matrix to 0, embedding and compensation are performed for VS in the coupling factor pair. As shown in fig. 7, in the case where the embedding error of the last column is equal to 0, if the constraint condition two is satisfied, the pixel of the last row of the reference luminance block is not taken as the reference pixel of its left and lower neighboring blocks. Therefore, errors can not be transmitted to five adjacent blocks of the current brightness block, and the generation of the intra-frame distortion drift phenomenon is completely avoided.
(IV) H.265 video information high-capacity embedding and extracting method based on texture characteristics
1. Texture feature based embedding
FIG. 8 is the embedding flow chart of the texture-feature-based coupling factor pair method for H.265 video hiding. The sender (hidden-information embedder) first partially decodes the original H.265 video binary code stream (CABAC decoding) to obtain the quantized DST or DCT factors, and then screens them. The screening condition is a critical value defined by the sender: the larger the critical value, the larger the difference between the current prediction block and its reference block, indicating complex texture in that region. It is then determined whether the neighboring blocks of the current luminance block satisfy constraint condition one or constraint condition two; if so, 1 bit of data is embedded at a specific position (the embedding factor) of the factor matrix of the qualified luminance block and compensation is applied at the corresponding position (the compensation factor), which removes the error in the last row or last column of the embedded block, weakens error propagation and effectively controls intra-frame distortion drift. Finally, all quantized DST or DCT factors are partially re-encoded (CABAC coding) to obtain the H.265 video code stream with the hidden information embedded.
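Relative to the earlier pipeline sketch, the only change in the texture-feature method is the extra constraint check before embedding. A minimal sketch, reusing the hypothetical helpers and the select_coupling_pair function sketched above:

```python
def embed_bitstream_texture(h265_stream, secret_bits, critical_value_R):
    """Sketch of the texture-feature embedding pipeline (all helpers hypothetical)."""
    blocks = cabac_partial_decode(h265_stream)
    bit_iter = iter(secret_bits)
    for blk in blocks:
        if not blk.is_luma_intra:
            continue
        if max(abs(c) for c in blk.coeffs) <= critical_value_R:
            continue
        pair = select_coupling_pair(blk.right_mode, blk.upper_right_mode,
                                    blk.lower_mode, blk.lower_left_mode)
        if pair is None:                      # neither constraint holds: skip this block
            continue
        bit = next(bit_iter, None)
        if bit is None:
            break
        embed_with_compensation(blk, bit, pair)   # embed in the HS or VS pair of the matrix
    return cabac_partial_encode(blocks)
```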
(1) Quantized DST factor embedding method based on texture features of 4 x 4 luminance blocks
As shown in fig. 9, the quantized DST factor embedding procedure based on the texture feature of 4 × 4 luma block:
Step one: determine whether the current frame is an I frame; if so, go to step two; if not, go to step seven;
Step two: obtain the quantized DST factors, the intra prediction mode, the coding-block depth and the transform-block depth of the current coding unit (CU) of the I frame, traverse all 4 × 4 basic units and determine whether the prediction size of the current luminance block is 4 × 4; if so, go to step three; if not, go to step six;
Step three: determine whether the maximum absolute value of the quantized factors of the current luminance block is greater than the user-defined critical value R; if so, go to step four; if not, go to step six;
Step four: determine whether the neighboring blocks of the current luminance block satisfy constraint condition one or constraint condition two; if so, go to step five; if not, go to step six;
Step five: embed one bit of data into the corresponding embedding factor and adjust the compensation factors accordingly, then go to step six;
Step six: read the next basic unit;
Step seven: read the next frame.
Assume that a 4 × 4 luma prediction block has an embedding factor and two compensation factors, whose positions in the quantized factor matrix are determined by the selected coupling factor pair. The specific embedding operation for each qualified 4 × 4 luma prediction block of the H.265 original video is as follows:
First, if the information bit to be embedded is 1 and the embedding factor is even: when the embedding factor is greater than 0, it is decreased by 1, the first compensation factor is increased by 1 and the second compensation factor is decreased by 1; when the embedding factor is less than or equal to 0, it is increased by 1, the first compensation factor is decreased by 1 and the second compensation factor is increased by 1.
Second, if the information bit to be embedded is 1 and the embedding factor is odd, the embedding factor and the compensation factors are left unchanged.
Third, if the information bit to be embedded is 0 and the embedding factor is odd: when the embedding factor is greater than 0, it is decreased by 1, the first compensation factor is increased by 1 and the second compensation factor is decreased by 1; when the embedding factor is less than or equal to 0, it is increased by 1, the first compensation factor is decreased by 1 and the second compensation factor is increased by 1.
Fourth, if the information bit to be embedded is 0 and the embedding factor is even, the embedding factor and the compensation factors are left unchanged.
(2) Quantized DCT factor embedding method based on texture features of 8 × 8 luminance blocks
As shown in fig. 10, the quantized DCT factor embedding procedure based on texture features of 8 × 8 luminance blocks is:
Step 1: determine whether the current frame is an I frame; if so, go to step 2; if not, go to step 7;
Step 2: obtain the quantized DCT factors, the intra prediction mode, the coding-block depth and the transform-block depth of the current coding unit (CU) of the I frame, traverse all 8 × 8 basic units and determine whether the prediction size of the current luminance block is 8 × 8; if so, go to step 3; if not, go to step 6;
Step 3: determine whether the maximum absolute value of the quantized factors of the current luminance block is greater than the user-defined critical value R; if so, go to step 4; if not, go to step 6;
Step 4: determine whether the neighboring blocks of the current luminance block satisfy constraint condition one or constraint condition two; if so, go to step 5; if not, go to step 6;
Step 5: embed one bit of data into the corresponding embedding factor and adjust the compensation factor accordingly, then go to step 6;
Step 6: read the next basic unit;
Step 7: read the next frame.
Assume that an 8 × 8 luma prediction block has an embedding factor and a compensation factor, whose positions in the quantized factor matrix are determined by the selected coupling factor pair. The specific embedding operation for each qualified 8 × 8 luma prediction block of the H.265 original video is as follows:
First, if the information bit to be embedded is 1 and the embedding factor is even: when the embedding factor is greater than 0, it is decreased by 1 and the compensation factor is increased by 1; when the embedding factor is less than or equal to 0, it is increased by 1 and the compensation factor is decreased by 1.
Second, if the information bit to be embedded is 1 and the embedding factor is odd, the embedding factor and the compensation factor are left unchanged.
Third, if the information bit to be embedded is 0 and the embedding factor is odd: when the embedding factor is greater than 0, it is decreased by 1 and the compensation factor is increased by 1; when the embedding factor is less than or equal to 0, it is increased by 1 and the compensation factor is decreased by 1.
Fourth, if the information bit to be embedded is 0 and the embedding factor is even, the embedding factor and the compensation factor are left unchanged.
2. Extraction based on textural features
Fig. 2 is the extraction flow chart of the texture-feature-based coupling factor pair method for H.265 video hiding. The receiver (hidden-information extractor) receives the binary code stream with embedded hidden information from the sender (hidden-information embedder). It first partially decodes the H.265 video binary code stream (CABAC decoding) to obtain the quantized DST or DCT factors and then screens the blocks: according to the critical value defined by the sender and the intra prediction modes of the blocks neighboring the current block, 1 bit of hidden information is extracted from each qualified luminance block at the embedding factor position agreed upon by the sender and receiver.
(1) Quantized DST factor extraction method based on texture features of 4 × 4 luminance blocks
As shown in fig. 11, the quantized DST factor extraction procedure based on 4 × 4 luminance blocks is:
Step (one): determine whether the current frame is an I frame; if so, go to step (two); if not, go to step (seven);
Step (two): obtain the quantized DST factors, the intra prediction mode, the coding-block depth and the transform-block depth of the current coding unit (CU) of the I frame, traverse all 4 × 4 basic units and determine whether the prediction size of the current luminance block is 4 × 4; if so, go to step (three); if not, go to step (six);
Step (three): determine whether the maximum absolute value of the quantized factors of the current luminance block is greater than the user-defined critical value R; if so, go to step (four); if not, go to step (six);
Step (four): determine whether the intra prediction modes of the neighboring blocks of the current luminance block satisfy constraint condition one or constraint condition two; if so, go to step (five); if not, go to step (six);
Step (five): extract one bit of data from the corresponding embedding factor, then go to step (six);
Step (six): read the next basic unit;
Step (seven): read the next frame.
Assume that a 4 × 4 luma prediction block has an embedding factor and compensation factors at the positions determined by the coupling factor pair. The specific extraction operation for each qualified 4 × 4 luma prediction block of the H.265 video is as follows:
First, if the embedding factor is even, the extracted 1-bit data is 0;
Second, if the embedding factor is odd, the extracted 1-bit data is 1.
(2) Quantized DCT factor extraction method based on texture features of 8 × 8 luminance blocks
As shown in fig. 12, the quantized DCT factor extraction procedure based on 8 × 8 luminance blocks is:
Step (1): determine whether the current frame is an I frame; if so, go to step (2); if not, go to step (7);
Step (2): obtain the quantized DCT factors, the intra prediction mode, the coding-block depth and the transform-block depth of the current coding unit (CU) of the I frame, traverse all 8 × 8 basic units and determine whether the prediction size of the current luminance block is 8 × 8; if so, go to step (3); if not, go to step (6);
Step (3): determine whether the maximum absolute value of the quantized factors of the current luminance block is greater than the user-defined critical value R; if so, go to step (4); if not, go to step (6);
Step (4): determine whether the intra prediction modes of the neighboring blocks of the current luminance block satisfy constraint condition one or constraint condition two; if so, go to step (5); if not, go to step (6);
Step (5): extract one bit of data from the corresponding embedding factor, then go to step (6);
Step (6): read the next basic unit;
Step (7): read the next frame.
Assume that an 8 × 8 luma prediction block has an embedding factor and a compensation factor at the positions determined by the coupling factor pair. The specific extraction operation for each qualified 8 × 8 luma prediction block of the H.265 video is as follows:
First, if the embedding factor is even, the extracted 1-bit data is 0;
Second, if the embedding factor is odd, the extracted 1-bit data is 1.
(V) results and analysis of the experiments
The experiments were carried out on the H.265 video standard reference software HM, testing the quantized DST factors of 4 × 4 blocks and the quantized DCT factors of 8 × 8 blocks separately. The experimental environment was as follows: the YUV test video sequences are encoded with a quantization parameter of 28 for all I frames; the frame rate of the test videos at resolutions 1280 × 720, 1920 × 1080 and 2560 × 1600 is 30 frames per second; the I-frame interval is 4, 100 frames are encoded, and each video contains 25 I frames. PSNR1 denotes the peak signal-to-noise ratio between the video with embedded hidden information and the original YUV video before encoding, and PSNR2 denotes the peak signal-to-noise ratio between the video with embedded hidden information and the video obtained by decoding the H.265 video code stream. The peak signal-to-noise ratio is used to measure the objective quality of the video with embedded hidden information so that a more realistic and objective conclusion can be drawn; PSNR1 and PSNR2 are averaged over all frames.
1. Quantized DST factor video hiding algorithm based on texture features of 4 x 4 luminance blocks
The 4 x 4 luminance block which meets the condition is selected to embed the hidden information, the coupling factor pair is K1, the algorithm has good visual effect, and the video images before and after embedding have no obvious distortion which can be detected by naked eyes.
When the critical value R is 0 and the coupling factor pair is K1, the PSNR1 value of the quantized DST factor high-capacity hiding algorithm based on the texture features of 4 × 4 luminance blocks drops by 0.0751 to 0.4291 dB after embedding, the PSNR2 values are all above 48 dB, and the hiding capacity is 896 to 11049 bits per I frame. Video sequences with rich detail have a relatively large hiding capacity, because detailed regions are mostly coded with 4 × 4 luminance prediction blocks, whereas relatively smooth sequences use fewer 4 × 4 luminance prediction blocks and therefore have a smaller hiding capacity. When the hiding capacity is large, the PSNR1 value drops correspondingly more and the file-size increment is larger.
2. Quantized DCT factor video hiding algorithm based on texture features of 8 × 8 luminance blocks
The method selects the eligible 8 × 8 luminance blocks for embedding the hidden information with coupling factor pair K3; the algorithm has a good visual effect, and the videos before and after embedding show no distortion detectable by the naked eye. When the critical value R is 0, the PSNR1 value of the quantized DCT factor high-capacity hiding algorithm based on the texture features of 8 × 8 luminance blocks drops by 0.0256 to 0.0671 dB after embedding, the PSNR2 values are all above 56 dB, and the hiding capacity is 255 to 3916 bits per I frame. The number of 8 × 8 luminance prediction blocks that satisfy the constraints is much smaller than that of 4 × 4 blocks, because one 8 × 8 block covers the same area as four 4 × 4 blocks and each qualified luminance block carries only 1 bit of information. Compared with the 4 × 4 case, fewer 8 × 8 prediction blocks satisfy the constraints, so the embedding capacity is smaller; the difference between the PSNR1 values before and after embedding and the file-size increment are much lower, and the video has a better subjective effect.
3. H.265 video information high-capacity hiding method based on textural features
From the above embedding of the 4 × 4 and 8 × 8 luminance blocks alone, it can be seen that the embedding capacity of the 4 × 4 block is high, while the embedding capacity of the 8 × 8 block, although small, has almost no influence on the visual effect of the video, so in order to increase the embedding capacity and reduce the degradation of the video quality as much as possible, the eligible 4 × 4 and 8 × 8 luminance blocks are selected for simultaneous embedding.
(1) Embedded capacity
The embedding capacity is an important index for evaluating an information hiding algorithm, and the video hiding has the advantage of larger embedding capacity. When the condition for screening the luminance block is more strict, that is, the critical value R is larger, the number of embedded blocks meeting the condition is smaller, and the embedding capacity under different coupling factor pairs is the same, because the condition is judged to be the same.
(2) PSNR1 value
Looking at a single result chart, the larger the critical value R, the smaller the embedding capacity and the smaller the difference between PSNR1 before and after embedding, which means the embedded video stays closer to the original unencoded video. When different coupling factor pair groups are selected for embedding, the PSNR1 values before and after embedding show the same trend, and for the same video sequence and critical value R the differences between the PSNR1 values are small, indicating that the choice of coupling factor pair group has little influence on the concealment performance of the video. The smaller the difference between the PSNR1 values before and after embedding, the better the concealment of the video and the better the subjective visual effect. The difference in PSNR1 before and after embedding decreases as the critical value R increases, and selecting different coupling factor groups for embedding video information has little effect on the PSNR1 value.
(3) PSNR2 value
Looking at a single result chart, the larger the critical value R, the smaller the embedding capacity and the larger the PSNR2 value, which means the embedded video becomes increasingly similar to the video obtained by decoding the video code stream. When different coupling factor pair groups are selected for embedding, the PSNR2 values show the same trend, and for the same video sequence and critical value R the differences between the PSNR2 values are small, indicating that the choice of coupling factor pair group has little influence on the concealment performance of the video. The smaller the embedding capacity, the larger the PSNR2 value, the better the subjective visual effect of the video, the higher the similarity to the unembedded video, the stronger the visual concealment, and the safer the hidden information.
(4) Bit rate
The larger the critical value R, the less the file bit rate increases. The bit rate increase values before and after embedding have similar variation trends for different coupling factors, and the bit rate increase values before and after embedding have similar values under the same video sequence and the same critical value R.
(5) Complexity of algorithm
In the information embedding process, the algorithm takes a CU of size 64 × 64 as the basic unit. Assume the video has C frames in total, each frame contains M CUs, and each CU contains N 4 × 4 and 8 × 8 luminance blocks; the time complexity is then O(C × M × N). During information embedding only some basic condition checks and additions and subtractions are performed, so the time complexity is low and the algorithm is efficient and simple.
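The complexity bound simply reflects the three nested traversal levels. A schematic sketch (all helper names and object fields are hypothetical placeholders):

```python
def embed_all(frames, secret_bits, critical_value_R):
    """Schematic traversal showing the O(C * M * N) structure:
    C frames, M CUs per frame, N candidate luma blocks per CU."""
    bit_iter = iter(secret_bits)
    for frame in frames:                    # C iterations
        if not frame.is_I_frame:
            continue
        for cu in frame.cus:                # M iterations
            for blk in cu.luma_blocks:      # N iterations (4x4 and 8x8 blocks)
                if max(abs(c) for c in blk.coeffs) <= critical_value_R:
                    continue
                bit = next(bit_iter, None)
                if bit is None:
                    return
                embed_with_compensation(blk, bit)   # constant-time adjustments only
```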
(6) Subjective visual effect
The same video sequences before and after embedding are compared to find that the subjective visual effects before and after embedding are almost the same, and the existence of the hidden information cannot be perceived by naked eyes, so that the safe transmission of the hidden information can be ensured.
The invention theoretically analyzes the effect of texture features on the high-capacity hiding of video information and derives two intra prediction mode constraint conditions from the two texture features. Experiments verify the feasibility of the texture-feature-based hiding algorithm for high-definition video: by combining the coupling factor pairs with the texture features, intra-block error propagation is completely prevented and the intra-frame distortion drift phenomenon is eliminated. The algorithm is simple and efficient with low time complexity; while achieving a high information embedding capacity and good visual concealment, the video retains good subjective quality, the embedding of the hidden information is almost imperceptible to the naked eye, and the binary code stream file grows only slightly after the information is embedded. The method is also suitable for low-resolution video and achieves excellent objective performance indicators together with a subjective visual effect indistinguishable by the naked eye.
The invention summarizes the existing video information hiding algorithms and analyzes the H.265 intra-frame prediction coding method. When information is embedded in the DST/DCT coefficients, an intra-block embedding error is generated; through the intra-frame prediction process this error is propagated to the adjacent luminance blocks, producing intra-frame distortion drift and degrading the subjective visual effect of the video.
First, an H.265 video information hiding method based on coupling coefficient pairs is proposed. A coupling relation exists in the DST/DCT coefficient matrix that can make the embedding error of the last row or last column of reference pixels in the DST/DCT matrix equal to 0; 1 bit of data is embedded in the embedding coefficient of the DST/DCT coupling coefficient pair and the corresponding compensation coefficient is adjusted, which controls the propagation and accumulation of the error within the block.
Second, in order to thoroughly prevent the propagation of errors, on the basis of the H.265 video information hiding method based on coupling coefficient pairs, constraint condition one and constraint condition two for the intra prediction mode are obtained by analyzing the texture features of the image; by combining the coupling coefficient pairs with the texture features, none of the reference pixels of a reference block propagates embedding errors to the adjacent prediction blocks, and the intra-frame distortion drift problem is fundamentally solved.

Claims (10)

1. The high-capacity hiding method for high-efficiency fidelity H.265 video information is characterized in that DST/DCT coupling factor pairs are combined with texture features (namely intra-frame prediction directions) of video images to perform information hiding, and specifically comprises the following steps: firstly, analyzing errors generated in the embedding of hidden information in quantization factors obtained after partial decoding of H.265 video code streams based on H.265 coding characteristics, further analyzing and obtaining a coupling relation between an integer DST and a DCT factor matrix, providing an H.265 video information high-capacity hiding method based on coupling factor pairs based on embedding factors and compensation factors of the coupling factor pairs, and performing corresponding compensation while embedding information to reduce error transmission; secondly, analyzing the texture features of the image on the basis of the coupling factor to the video information high-capacity hiding method, analyzing the intra-frame prediction mode direction specified in the H.265 standard, obtaining an intra-frame prediction mode constraint condition I and a constraint condition II on the basis of the vertical texture and the horizontal texture features of the image, and combining the coupling factor pair and the texture features to completely block error transmission in the H.265 brightness block; thirdly, a super-clear video information high-capacity hiding algorithm based on texture features is provided, and the H.265 video information high-capacity hiding method is also suitable for low-resolution videos;
the first, H.265 video information hiding method based on coupling coefficient pair includes embedding based on coupling coefficient pair and extracting based on coupling coefficient pair, there is coupling relation in DST/DCT coefficient matrix, can make embedding error of last row or last column reference pixel in DST/DCT matrix 0, imbed 1bit data in embedding coefficient of DST/DCT coupling coefficient pair, and compensate corresponding compensation coefficient, control transmission and accumulation of error in block;
secondly, the H.265 video information hiding method based on the texture features comprises embedding based on the texture feature pairs and extracting based on the texture feature pairs; on the basis of the H.265 video information hiding method based on the coupling coefficient pairs, an intra-frame prediction mode constraint condition one and a constraint condition two are obtained by analyzing the texture features of an image, and the coupling coefficient pairs are combined with the texture features, so that none of the reference pixels of a reference block transmits embedding errors to adjacent prediction blocks; the method is suitable for high-definition videos and low-resolution videos, the embedding of secret information can hardly be perceived by the naked eye while a high information embedding capacity and good visual concealment are obtained, and the influence on the code stream is small;
the embedding process based on the texture features comprises the following steps: the method comprises the steps that an original H.265 video binary code stream is partially decoded by a sender to obtain a quantized DST factor or DCT factor, then screening is carried out, the screening condition is that according to a critical value defined by the sender, the larger the critical value is, the larger the difference between a current prediction block and a reference block is, the more complex the texture of the region is shown, then whether an adjacent block of a current brightness block meets a constraint condition I or a constraint condition II is judged, if yes, 1-bit data is embedded into an embedding factor of a factor matrix of the brightness block meeting the condition, compensation is carried out on the compensation factor to eliminate the error of the last line or the last column of the embedding block, and finally, partial coding is carried out on all quantized DST or DCT factors again to obtain the H.265 video binary code stream embedded with hidden information;
the extraction process based on the texture features comprises the following steps: the receiving party receives the binary code stream embedded with the hidden information transmitted by the transmitting party, the H.265 video binary code stream is partially decoded to obtain a quantized DST factor or DCT factor, and then screening is carried out, wherein the screening condition is that 1-bit hidden information is extracted from each qualified brightness block according to a critical value defined by the transmitting party and an intra-frame prediction mode of a current block adjacent to the current block, and the embedding factor determined by the transmitting party and the receiving party together.
2. The fidelity-efficient h.265 video information high-capacity concealment method according to claim 1, wherein the h.265 video information high-capacity concealment method based on the coupling factor pair: by utilizing the coupling relation in the quantization transformation factor matrix, the other DST factor or DCT factor in the coupling factor pair is adjusted while information is embedded into the embedding factor in the coupling factor pair, the intra-frame distortion drift error caused by embedding hidden information is compensated, the visual concealment is improved, and the embedded information capacity is improved;
the H.265 video information hiding method based on the coupling coefficient pair comprises embedding based on the coupling coefficient pair and extracting based on the coupling coefficient pair.
3. The fidelity-efficient H.265 video information high-capacity concealment method according to claim 2, wherein the embedding procedure based on the coupling factor pair is: the sender first partially decodes the original H.265 video binary code stream to obtain the quantized DST or DCT factors and then screens them, the screening condition being a critical value defined by the sender, where the larger the critical value, the larger the difference between the current prediction block and the reference block, indicating that the texture of the region is complex; then 1-bit data is embedded at a specific position of the factor matrix of each luminance block that meets the condition, compensation is performed at the position corresponding to the compensation factor to eliminate the error of the last row or last column of the embedded block and weaken error propagation, and finally all quantized DST or DCT factors are partially re-encoded to obtain the H.265 video binary code stream with the embedded hidden information.
4. The fidelity-efficient h.265 video information high-capacity concealment method according to claim 3, wherein the 4 x 4 luminance block based quantized DST factor embedding method:
flow one: determine whether the current frame is an I frame; if so, go to flow two; if not, go to flow six;
flow two: obtain the quantized DST factors, the intra prediction mode, the coding-block depth and the transform-block depth of the current coding unit CU of the I frame, traverse all 4 × 4 basic units and determine whether the prediction size of the current luminance block is 4 × 4; if so, go to flow three; if not, go to flow five;
flow three: determine whether the maximum absolute value of the quantized factors of the current luminance block is greater than the user-defined critical value R; if so, go to flow four; if not, go to flow five;
flow four: embed 1 bit of data into the corresponding embedding factor and adjust the compensation factors accordingly, then go to flow five;
flow five: read the next basic unit;
flow six: read the next frame;
assume that a 4 × 4 luma prediction block has an embedding factor and two compensation factors, whose positions in the quantized factor matrix are determined by the selected coupling factor pair; the specific embedding operation for each qualified 4 × 4 luma prediction block of the H.265 original video is as follows:
first, if the information bit to be embedded is 1 and the embedding factor is even: when the embedding factor is greater than 0, it is decreased by 1, the first compensation factor is increased by 1 and the second compensation factor is decreased by 1; when the embedding factor is less than or equal to 0, it is increased by 1, the first compensation factor is decreased by 1 and the second compensation factor is increased by 1;
second, if the information bit to be embedded is 1 and the embedding factor is odd, the embedding factor and the compensation factors are left unchanged;
third, if the information bit to be embedded is 0 and the embedding factor is odd: when the embedding factor is greater than 0, it is decreased by 1, the first compensation factor is increased by 1 and the second compensation factor is decreased by 1; when the embedding factor is less than or equal to 0, it is increased by 1, the first compensation factor is decreased by 1 and the second compensation factor is increased by 1;
fourth, if the information bit to be embedded is 0 and the embedding factor is even, the embedding factor and the compensation factors are left unchanged.
5. The fidelity-efficient h.265 video information high-capacity concealment method according to claim 3, wherein the quantization DCT factor embedding method based on 8 x 8 luminance blocks:
flow 1: determine whether the current frame is an I frame; if so, go to flow 2; if not, go to flow 6;
flow 2: obtain the quantized DCT factors, the intra prediction mode, the coding-block depth and the transform-block depth of the current coding unit CU of the I frame, traverse all 8 × 8 basic units and determine whether the prediction size of the current luminance block is 8 × 8; if so, go to flow 3; if not, go to flow 5;
flow 3: determine whether the maximum absolute value of the quantized factors of the current luminance block is greater than the user-defined critical value R; if so, go to flow 4; if not, go to flow 5;
flow 4: embed 1 bit of data into the corresponding embedding factor and adjust the compensation factor accordingly, then go to flow 5;
flow 5: read the next basic unit;
flow 6: read the next frame;
assume that an 8 × 8 luma prediction block has an embedding factor and a compensation factor, whose positions in the quantized factor matrix are determined by the selected coupling factor pair; the specific embedding operation for each qualified 8 × 8 luma prediction block of the H.265 original video is as follows:
first, if the information bit to be embedded is 1 and the embedding factor is even: when the embedding factor is greater than 0, it is decreased by 1 and the compensation factor is increased by 1; when the embedding factor is less than or equal to 0, it is increased by 1 and the compensation factor is decreased by 1;
second, if the information bit to be embedded is 1 and the embedding factor is odd, the embedding factor and the compensation factor are left unchanged;
third, if the information bit to be embedded is 0 and the embedding factor is odd: when the embedding factor is greater than 0, it is decreased by 1 and the compensation factor is increased by 1; when the embedding factor is less than or equal to 0, it is increased by 1 and the compensation factor is decreased by 1;
fourth, if the information bit to be embedded is 0 and the embedding factor is even, the embedding factor and the compensation factor are left unchanged.
6. The method according to claim 2, wherein the extraction procedure based on the coupling factor pair is as follows: the receiving party receives the binary code stream of the embedded hidden information transmitted by the transmitting party, the H.265 video binary code stream is partially decoded to obtain a quantized DST factor or DCT factor, and then screening is carried out, wherein the screening condition is to extract 1-bit hidden information from each qualified brightness block according to a critical value defined by the transmitting party and the embedded factor determined by the transmitting party and the receiving party together.
7. The fidelity-efficient H.265 video information high-capacity concealment method according to claim 6, wherein the quantized DST factor extraction method based on 4 × 4 luminance blocks is:
flow (one): determine whether the current frame is an I frame; if so, go to flow (two); if not, go to flow (six);
flow (two): obtain the quantized DST factors, the intra prediction mode, the coding-block depth and the transform-block depth of the current coding unit CU of the I frame, traverse all 4 × 4 basic units and determine whether the prediction size of the current luminance block is 4 × 4; if so, go to flow (three); if not, go to flow (five);
flow (three): determine whether the maximum absolute value of the quantized factors of the current luminance block is greater than the user-defined critical value R; if so, go to flow (four); if not, go to flow (five);
flow (four): extract one bit of data from the corresponding embedding factor, then go to flow (five);
flow (five): read the next basic unit;
flow (six): read the next frame;
Assume a 4 × 4 luma prediction block has an agreed embedding factor and compensation factor; the specific extraction operation for each qualified 4 × 4 luma prediction block of the H.265 original video is as follows:
First, if the embedding factor is even, the extracted bit is 0;
Second, if the embedding factor is odd, the extracted bit is 1.
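A matching extraction sketch under the same assumptions; quant_coefs is the list of quantized DST/DCT factors of one partially decoded luma block, embed_pos is the agreed position of the embedding factor, and threshold_r is the sender-defined threshold R (all names illustrative, not part of the claims):

def extract_from_block(quant_coefs, embed_pos, threshold_r):
    # Screening: only blocks whose largest absolute quantized factor exceeds R
    # carry hidden data.
    if max(abs(c) for c in quant_coefs) <= threshold_r:
        return None
    # Parity of the embedding factor decodes the bit: even gives 0, odd gives 1.
    return abs(quant_coefs[embed_pos]) % 2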
8. The fidelity-efficient H.265 video information high-capacity concealment method according to claim 1, wherein the texture-feature-based H.265 video information high-capacity concealment method is as follows: after data is embedded in the embedding factor of the quantized factor matrix and compensated by the compensation factor, the relationship between the video texture features and the 33 intra-frame angular prediction modes is analyzed, and two constraint conditions that the intra-frame prediction modes of the adjacent luma prediction blocks should satisfy are identified; the two constraint conditions respectively ensure that the last row or the last column of pixels of the reference block is not used as reference pixels of the neighbouring prediction blocks, so the pixel errors of that row or column are not propagated and no intra-frame distortion drift is produced;
If an adjacent luma block of the current embedded block satisfies one of the two constraint conditions, information embedding and compensation can be carried out in the corresponding coupling factor pair. If the vertical-texture-like constraint is satisfied, i.e. constraint condition one, which indicates that the last column of edge pixels will not be used as prediction reference pixels of the adjacent blocks, the HS coupling factor pair is selected for embedding and the corresponding compensation keeps the embedding error of the last row at zero; if the horizontal-texture-like constraint is satisfied, i.e. constraint condition two, which indicates that the last row of edge pixels will not be used as prediction reference pixels of the adjacent blocks, embedding is performed in the VS coupling factor pair and the corresponding compensation keeps the embedding error of the last column at zero;
Intra prediction mode constraints: in the intra-frame prediction process, the last row or the last column of pixels of the current reference block serves as reference pixels for 5 adjacent luma prediction blocks, and the intra-frame prediction modes of these five adjacent blocks reflect the texture features of the region; if the intra-frame prediction modes of the five adjacent blocks satisfy certain conditions, the last row or the last column of pixels of the current reference block is not used as prediction reference pixels of the adjacent luma blocks, and an appropriate coupling factor pair is then selected for embedding and compensation so that the remaining last column or last row carries no embedding error; in this way the intra-block error generated by embedding is not transmitted to the adjacent blocks; the constraint conditions are checked as in the sketch after this list.
Vertical-texture-like intra prediction mode condition one: the intra prediction mode of the right adjacent block of the current luma block belongs to modes 26 to 34 and the intra prediction mode of the upper-right adjacent block belongs to modes 10 to 34. If condition one is satisfied, the last column of edge pixels of the current luma block is not used as prediction reference pixels of its right and upper-right adjacent blocks, so information can be embedded in the luma block without the intra-block embedding error being transmitted to the right and upper-right adjacent blocks; meanwhile the embedding error of the last row of the current block is 0, so no error is transmitted to the lower-left, lower and lower-right adjacent blocks;
Horizontal-texture-like intra prediction mode condition two: the intra prediction mode of the lower adjacent block of the current luma block belongs to modes 2 to 10 and the intra prediction mode of the lower-left adjacent block belongs to modes 2 to 26. If condition two is satisfied, the last row of edge pixels of the current luma block is not used as prediction reference pixels of its lower and lower-left adjacent blocks, so information can be embedded in the luma block without the intra-block embedding error being transmitted to the lower and lower-left adjacent blocks; meanwhile the embedding error of the last column of the current block is 0, so no error is transmitted to the upper-right, right and lower-right adjacent blocks.
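The two neighbour-mode constraints can be checked directly from the intra prediction modes of the adjacent blocks; a sketch follows, where the mode ranges are the H.265 angular mode indices quoted above, and the helper names and the "HS"/"VS" labels for the coupling factor pairs are illustrative stand-ins:

def meets_constraint_one(mode_right, mode_upper_right):
    # Vertical-texture-like: the last column of the current block is not
    # referenced by the right and upper-right neighbours.
    return 26 <= mode_right <= 34 and 10 <= mode_upper_right <= 34

def meets_constraint_two(mode_lower, mode_lower_left):
    # Horizontal-texture-like: the last row of the current block is not
    # referenced by the lower and lower-left neighbours.
    return 2 <= mode_lower <= 10 and 2 <= mode_lower_left <= 26

def choose_coupling_pair(mode_right, mode_upper_right, mode_lower, mode_lower_left):
    # Returns which coupling factor pair to embed in, or None to skip the block.
    if meets_constraint_one(mode_right, mode_upper_right):
        return "HS"  # compensation keeps the last-row embedding error at zero
    if meets_constraint_two(mode_lower, mode_lower_left):
        return "VS"  # compensation keeps the last-column embedding error at zero
    return None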
9. The fidelity-efficient H.265 video information high-capacity concealment method according to claim 1, wherein the quantized DST factor embedding method based on texture features of 4 × 4 luma blocks is as follows:
Step one: judge whether the current frame is a luma I frame; if so, go to Step two; if not, go to Step seven;
Step two: obtain the quantized DST factor, intra-frame prediction mode, coding block depth and transform block depth of the current coding unit (CU) of the I frame, traverse all 4 × 4 basic units and judge whether the prediction size of the current luma block is 4 × 4; if so, go to Step three; if not, go to Step six;
Step three: judge whether the maximum absolute value among the quantized factors of the current luma block is larger than the user-defined threshold R; if so, go to Step four; if not, go to Step six;
Step four: judge whether the adjacent blocks of the current luma block satisfy constraint condition one or constraint condition two; if so, go to Step five; if not, go to Step six;
Step five: embed one bit of data into the corresponding embedding factor, adjust the compensation factors accordingly, and go to Step six;
Step six: read the next basic unit;
Step seven: read the next frame;
Assume a 4 × 4 luma prediction block has an agreed embedding factor and its corresponding compensation factors; the specific embedding operation for each qualified 4 × 4 luma prediction block of the H.265 original video is as follows:
First, if the information bit to be embedded is 1 and the embedding factor is even: if the embedding factor is greater than 0, it is decreased by 1, one compensation factor is increased by 1 and the other compensation factor is decreased by 1; if the embedding factor is less than or equal to 0, it is increased by 1, one compensation factor is decreased by 1 and the other compensation factor is increased by 1.
Second, if the information bit to be embedded is 1 and the embedding factor is odd, the values of the embedding factor and the compensation factors are left unchanged.
Third, if the information bit to be embedded is 0 and the embedding factor is odd: if the embedding factor is greater than 0, it is decreased by 1, one compensation factor is increased by 1 and the other compensation factor is decreased by 1; if the embedding factor is less than or equal to 0, it is increased by 1, one compensation factor is decreased by 1 and the other compensation factor is increased by 1.
Fourth, if the information bit to be embedded is 0 and the embedding factor is even, the values of the embedding factor and the compensation factors are left unchanged.
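A sketch of Steps three to five for one 4 × 4 block. The claim's formula images that name the exact embedding and compensation positions are not reproduced in this text, so those positions are passed in as parameters; q is the block's 4 × 4 quantized DST factor matrix flattened to a list of 16 integers, pair is the HS/VS choice from the neighbour-mode check, and every name here is illustrative:

def embed_in_4x4_block(q, bit, threshold_r, pair, embed_pos, comp_pos_a, comp_pos_b):
    # Step three: threshold screening on the largest absolute quantized factor.
    if max(abs(c) for c in q) <= threshold_r:
        return False
    # Step four: pair is None when neither constraint condition holds,
    # so the block is skipped.
    if pair is None:
        return False
    # Step five: parity embedding with the two opposite compensation moves;
    # the positions comp_pos_a / comp_pos_b depend on the chosen pair.
    e = q[embed_pos]
    if abs(e) % 2 != bit:
        if e > 0:
            q[embed_pos] = e - 1
            q[comp_pos_a] += 1
            q[comp_pos_b] -= 1
        else:
            q[embed_pos] = e + 1
            q[comp_pos_a] -= 1
            q[comp_pos_b] += 1
    return True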
10. The fidelity-efficient H.265 video information high-capacity concealment method according to claim 1, wherein the quantized DCT factor embedding method based on texture features of 8 × 8 luma blocks is as follows:
Step 1: judge whether the current frame is a luma I frame; if so, go to Step 2; if not, go to Step 7;
Step 2: obtain the quantized DCT factor, intra-frame prediction mode, coding block depth and transform block depth of the current coding unit (CU) of the I frame, traverse all 8 × 8 basic units and judge whether the prediction size of the current luma block is 8 × 8; if so, go to Step 3; if not, go to Step 6;
Step 3: judge whether the maximum absolute value among the quantized factors of the current luma block is larger than the user-defined threshold R; if so, go to Step 4; if not, go to Step 6;
Step 4: judge whether the adjacent blocks of the current luma block satisfy constraint condition one or constraint condition two; if so, go to Step 5; if not, go to Step 6;
Step 5: embed one bit of data into the corresponding embedding factor, adjust the compensation factor accordingly, and go to Step 6;
Step 6: read the next basic unit;
Step 7: read the next frame;
Assume an 8 × 8 luma prediction block has an agreed embedding factor and compensation factor; the specific embedding operation for each eligible 8 × 8 luma prediction block of the H.265 original video is as follows:
First, if the information bit to be embedded is 1 and the embedding factor is even: if the embedding factor is greater than 0, it is decreased by 1 and the compensation factor is increased by 1; if the embedding factor is less than or equal to 0, it is increased by 1 and the compensation factor is decreased by 1.
Second, if the information bit to be embedded is 1 and the embedding factor is odd, the values of the embedding factor and the compensation factor are left unchanged.
Third, if the information bit to be embedded is 0 and the embedding factor is odd: if the embedding factor is greater than 0, it is decreased by 1 and the compensation factor is increased by 1; if the embedding factor is less than or equal to 0, it is increased by 1 and the compensation factor is decreased by 1.
Fourth, if the information bit to be embedded is 0 and the embedding factor is even, the values of the embedding factor and the compensation factor are left unchanged.
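Finally, a traversal skeleton for Steps 1 to 7 (and Steps one to seven of claim 9), assuming the partially decoded I frame exposes its basic units with their prediction size; embed_block stands for a block-level routine such as the 4 × 4 / 8 × 8 sketches above, and all attribute names are illustrative stand-ins rather than an H.265 decoder API:

def embed_frame(frame, bits, threshold_r, embed_block):
    # Steps 1 and 7: only luma I frames are processed; other frames are skipped.
    if not frame.is_luma_i_frame:
        return 0
    embedded = 0
    # Steps 2 and 6: traverse the basic units (prediction blocks) of the frame.
    for block in frame.basic_units():
        if embedded >= len(bits):
            break
        if block.size not in (4, 8):
            continue
        # Steps 3 to 5 run inside embed_block (threshold check, neighbour-mode
        # constraint check, parity embedding with compensation); it returns
        # True only when a bit was actually embedded in this block.
        if embed_block(block, bits[embedded], threshold_r):
            embedded += 1
    return embedded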
CN202110470483.1A 2021-04-29 2021-04-29 High-capacity hiding method for H.265 video information with high-efficiency fidelity Pending CN113329229A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110470483.1A CN113329229A (en) 2021-04-29 2021-04-29 High-capacity hiding method for H.265 video information with high-efficiency fidelity

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110470483.1A CN113329229A (en) 2021-04-29 2021-04-29 High-capacity hiding method for H.265 video information with high-efficiency fidelity

Publications (1)

Publication Number Publication Date
CN113329229A true CN113329229A (en) 2021-08-31

Family

ID=77413955

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110470483.1A Pending CN113329229A (en) 2021-04-29 2021-04-29 High-capacity hiding method for H.265 video information with high-efficiency fidelity

Country Status (1)

Country Link
CN (1) CN113329229A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115150627A (en) * 2022-06-30 2022-10-04 四川大学 DST-based video compression robustness blind watermarking resisting method
CN115150627B (en) * 2022-06-30 2024-04-19 四川大学 DST-based video compression robustness-resistant blind watermarking method
CN116320471A (en) * 2023-05-18 2023-06-23 中南大学 Video information hiding method, system, equipment and video information extracting method
CN116320471B (en) * 2023-05-18 2023-08-22 中南大学 Video information hiding method, system, equipment and video information extracting method

Similar Documents

Publication Publication Date Title
Yang et al. An information hiding algorithm based on intra-prediction modes and matrix coding for H.264/AVC video stream
US10368051B2 (en) 3D-HEVC inter-frame information hiding method based on visual perception
Noorkami et al. Compressed-domain video watermarking for H.264
Yin et al. Error concealment using data hiding
CN113329229A (en) High-capacity hiding method for H.265 video information with high-efficiency fidelity
CN107318022B (en) video steganography method based on H.265 standard undistorted drift
Wang et al. An Information Hiding Algorithm for HEVC Based on Angle Differences of Intra Prediction Mode.
CN109819260B (en) Video steganography method and device based on multi-embedded domain fusion
CN104581176A (en) H.264/AVC (advanced video coding) compressed domain robust video watermark embedding and extracting methods free from intra-frame error drift
CN104602016A (en) HEVC video information hiding method based on intra-frame prediction mode difference
Bouchama et al. H.264/AVC data hiding based on intra prediction modes for real-time applications
Zhou et al. An intra-drift-free robust watermarking algorithm in high efficiency video coding compressed domain
He et al. HEVC video information hiding scheme based on adaptive double-layer embedding strategy
Mansouri et al. Toward a secure video watermarking in compressed domain
Fan et al. Adaptive QIM with minimum embedding cost for robust video steganography on social networks
CN109361926B (en) Lossless reversible information hiding method for H.264/AVC video visual quality
Chen et al. Intra-frame error concealment scheme using 3D reversible data hiding in mobile cloud environment
Sakazawa et al. H.264 native video watermarking method
CN111770334A (en) Data encoding method and device, and data decoding method and device
Hsu et al. Blind video watermarking for H.264
Zhang et al. A video watermarking algorithm of H.264/AVC for content authentication
Lie et al. Error resilient coding based on reversible data embedding technique for H.264/AVC video
Yao et al. Adaptive video error concealment using reversible data hiding
Sakib et al. A robust DWT-based compressed domain video watermarking technique
CN114598887A (en) Anti-recompression video watermarking method for controlling bit rate increase

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination