WO2014198021A1 - Universal video steganalysis method based on spatiotemporal correlation of video pixels (基于视频像素时空相关性的通用视频隐写分析方法) - Google Patents

Universal video steganalysis method based on spatiotemporal correlation of video pixels (基于视频像素时空相关性的通用视频隐写分析方法)

Info

Publication number
WO2014198021A1
WO2014198021A1 PCT/CN2013/077092 CN2013077092W
Authority
WO
WIPO (PCT)
Prior art keywords
video
image
slice
difference
images
Prior art date
Application number
PCT/CN2013/077092
Other languages
English (en)
French (fr)
Inventor
谭铁牛
董晶
许锡锴
Original Assignee
中国科学院自动化研究所 (Institute of Automation, Chinese Academy of Sciences)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国科学院自动化研究所 (Institute of Automation, Chinese Academy of Sciences)
Priority to PCT/CN2013/077092 priority Critical patent/WO2014198021A1/zh
Publication of WO2014198021A1 publication Critical patent/WO2014198021A1/zh


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/0021 Image watermarking
    • G06T1/0085 Time domain based watermarking, e.g. watermarks spread over several images
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/32 Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device

Definitions

  • The present invention relates to the field of passive blind forensics for images and video, and more particularly to a steganalysis method for video steganography based on the spatiotemporal correlation of video pixels.
  • Steganography hides secret information inside carrier information. Without affecting the perceptual quality or usability of the host signal, it makes it difficult for a potential attacker to judge whether secret information exists at all, let alone intercept it, thereby ensuring the security of information transmission. Because Internet communication is open and widely used, steganography may also be exploited by criminals and even terrorist organizations to evade monitoring and spread illegal intelligence. Research on steganalysis therefore has important theoretical and practical significance.
  • Images are the most common steganographic carriers, and image-based steganography and steganalysis techniques are developing rapidly; however, the limited size of an image inevitably places a strong limit on embedding capacity.
  • A video, by contrast, consists of a sequence of frames and contains a large amount of information redundancy, so using video as the carrier for secret messages can greatly increase the embedding capacity.
  • Steganography using video as the carrier has therefore become increasingly important, and a large number of video steganography algorithms have appeared. At the same time, steganalysis techniques for video have developed slowly. Given the abundance of video resources and the fact that video steganography can realize large-capacity covert communication, its security is a particularly important issue.
  • Detecting hidden information in a carrier medium is called steganalysis. According to the prior knowledge they require, steganalysis methods are divided into targeted methods and universal methods.
  • A targeted steganalysis method exploits particular characteristics of a specific steganographic scheme and does not apply to steganographic methods other than that scheme.
  • A universal steganalysis method does not need to know in advance which steganographic scheme was applied to the video to be analyzed, so it can be used against arbitrary steganography algorithms and is more broadly applicable. For an image, pixels within a local region are strongly correlated, and steganographic embedding weakens this correlation; describing this correlation therefore enables steganalysis. A Markov model is commonly used to describe it, as in D. Zou, Y. Q. Shi, W. Su, and G. Xuan, "Steganalysis based on Markov Model of Thresholded Prediction-Error Image," Proc. IEEE International Conference on Multimedia and Expo, pp. 1365-1368, 2006.
  • The method of the present invention comprises two processes: extracting omnidirectional neighborhood-correlation features from video frames and video slices, and training a video steganalysis classifier based on these features.
  • Exploiting pixel correlation is currently the main means of steganalysis. For an image, pixels within a local region are strongly correlated, steganographic embedding weakens this correlation, and measuring the statistical change in this correlation achieves the goal of steganalysis.
  • Because the correlation among N pixels can be described by the joint probability distribution of N-1 difference values, the present invention computes the joint probability distribution matrices of all difference-value combinations and merges them into a one-dimensional vector used as the steganalysis feature; a classifier model is then trained with machine learning, steganalysis is performed on the video to be analyzed, and the analysis results of its multiple frames or slices are fused to decide whether the whole video has undergone steganographic embedding.
  • The universal video steganalysis method based on the spatiotemporal correlation of video pixels proposed by the present invention includes the following steps:
  • Step S1: decompress each video in the training set, extract multiple image segments of a fixed length, and slice each image segment to obtain multiple slice images corresponding to it;
  • Step S2: apply differential filtering to each slice image obtained in step S1 using a plurality of different templates, obtaining the corresponding difference images;
  • Step S3: threshold the difference images obtained in step S2 with a predetermined threshold T, obtaining new difference images;
  • Step S4: from all the difference images corresponding to each slice image obtained in step S3, arbitrarily select two or more; for each resulting difference-image pair or group, use the description of the pixel neighborhood relationship, i.e., compute the joint probability distribution of the pixel values at corresponding positions, to obtain a joint probability distribution matrix for each pair or group and hence multiple joint probability distribution matrices corresponding to the multiple slice images; the elements of each matrix are merged into a one-dimensional vector used as the feature vector for steganalysis;
  • Step S5: label the feature vectors obtained in step S4 with category information, and input all labeled feature vectors into the classifier for training to obtain a classifier model;
  • Step S6: extract the feature vector for steganalysis from the video to be analyzed, following steps analogous to steps S1-S4;
  • Step S7: input the feature vector of the video to be analyzed into the classifier model obtained in step S5 for classification; after obtaining the category information of each image segment, fuse the category information of all image segments to output the category of the whole video, i.e., the analysis result indicating whether the video to be analyzed is a stego video.
  • FIG. 1 is a flow chart of a general video steganalysis method based on spatiotemporal correlation of video pixels according to the present invention.
  • FIG. 2 is a schematic diagram of a video slicing operation in accordance with an embodiment of the present invention.
  • Figure 3 shows a frame of a video to be analyzed that contains hidden information, as used in an embodiment of the present invention.
  • The universal video steganalysis method based on the spatiotemporal correlation of video pixels includes the following steps, where steps S1-S5 constitute the training process and steps S6-S7 the classification process:
  • Step S1: decompress each video in the training set, extract multiple image segments of a fixed length, and slice each image segment to obtain multiple slice images corresponding to it;
  • the length of the image segment can be adjusted as needed. In one embodiment of the invention, the length of the image segment is equal to the width of each frame of image.
  • Obtaining the slice images means that each extracted image segment is regarded as a three-dimensional signal (a cube). The cube is then sliced along each row of the video frames: the same row taken from all frames forms one row slice, giving as many row slices as there are rows. In the same way, as many column slices as there are columns can be obtained. Each slice can be viewed as a single image, and the features for steganalysis are then extracted from these slice images.
  • a schematic diagram of a video slicing operation according to an embodiment of the present invention is shown in FIG. 2.
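As an illustration of the slicing operation just described (not part of the patent; the array shape and the function name are assumptions for this sketch), an image segment of L consecutive frames of size H×W can be treated as an L×H×W cube and split into row and column slices with NumPy:

```python
import numpy as np

def extract_slices(segment):
    """Split an image segment (L frames of size H x W) into row and column slices.

    segment: ndarray of shape (L, H, W) -- the "cube" formed by L consecutive frames.
    Returns H row slices of shape (L, W) followed by W column slices of shape (L, H).
    """
    L, H, W = segment.shape
    row_slices = [segment[:, r, :] for r in range(H)]  # the same row taken from every frame
    col_slices = [segment[:, :, c] for c in range(W)]  # the same column taken from every frame
    return row_slices + col_slices

# Example: a segment whose length equals the frame width, as in the described embodiment.
segment = np.random.randint(0, 256, size=(64, 48, 64), dtype=np.uint8)
slices = extract_slices(segment)
print(len(slices))       # 48 row slices + 64 column slices = 112 slice images
print(slices[0].shape)   # each row slice is a (64, 64) image
```

Each returned 2-D array is then treated exactly like an ordinary image in the subsequent filtering steps.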
  • Step S2: each slice image obtained in step S1 is differentially filtered with a plurality of different templates to obtain the corresponding difference images.
  • The differential filtering convolves the slice image Y with a differential filter template h_k to obtain a difference image D_k = h_k ⊗ Y.
  • The differential filter templates include templates of various scales and differencing directions; specifically, within a set neighborhood scale, a difference is computed between a central pixel and any other pixel of the neighborhood.
  • In one embodiment of the invention, eight 3×3 templates are used, each containing exactly two non-zero elements, 1 and -1, so that eight difference images are obtained for each slice image.
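The eight 3×3 templates themselves are given only as a figure in the patent, so the particular templates below are an assumption: each places a 1 at the centre and a -1 at one of the eight neighbours, which is consistent with the statement that every template contains exactly the two non-zero elements 1 and -1. The convolution producing D_k = h_k ⊗ Y might then look as follows:

```python
import numpy as np
from scipy.ndimage import convolve

def directional_templates():
    """Eight illustrative 3x3 differential filter templates h_k (an assumed set:
    1 at the centre, -1 at one of the eight neighbouring positions)."""
    templates = []
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            h = np.zeros((3, 3), dtype=int)
            h[1, 1] = 1              # central pixel
            h[1 + di, 1 + dj] = -1   # one neighbouring pixel
            templates.append(h)
    return templates

def difference_images(slice_image, templates):
    """D_k = h_k (x) Y: convolve the slice image with each differential filter template."""
    Y = slice_image.astype(np.int32)
    return [convolve(Y, h, mode='nearest') for h in templates]
```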
  • Step S3: threshold the difference images obtained in step S2 with a predetermined threshold T, obtaining new difference images.
  • The predetermined threshold T is a positive integer, such as 4 or 3.
  • The thresholding process replaces values greater than T in a difference image with T and values less than -T with -T.
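A minimal sketch of this truncation step, assuming the difference image is a NumPy integer array; np.clip performs exactly the replacement described above:

```python
import numpy as np

def threshold_difference(D, T=4):
    """Truncate a difference image to [-T, T]: values >= T become T, values <= -T become -T."""
    return np.clip(D, -T, T)
```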
  • the joint probability distribution matrix of two difference values can describe the relationship among three pixels
  • the joint probability distribution matrix of three difference values can describe the relationship among four pixels
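As a concrete illustration (not taken from the patent) of why N-1 difference values describe N pixels: for three horizontally adjacent pixels, the two neighbouring differences determine the triple up to an additive constant,

$$d_1 = x_{i,j} - x_{i,j-1}, \qquad d_2 = x_{i,j+1} - x_{i,j}, \qquad (x_{i,j-1},\, x_{i,j},\, x_{i,j+1}) = (x_{i,j-1},\, x_{i,j-1}+d_1,\, x_{i,j-1}+d_1+d_2),$$

so the joint distribution P(d_1, d_2) captures the dependence among the three pixels once the absolute intensity level, which carries little steganalytic information, is discarded.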
  • The elements of these joint probability distribution matrices (all of them, or a selected subset) are merged together as the feature vector for video steganalysis.
  • For a video frame, the joint probability distribution of its difference images reflects the correlation of the video pixels in the spatial domain.
  • For a slice image, the joint probability distribution of its difference images reflects the correlation of the video pixels in both space and time (in a slice image, pixels in the same row come from the same frame and are spatially correlated, while pixels in different rows come from different frames and are temporally correlated).
  • The present invention can therefore be said to obtain the feature vector for steganalysis from the spatiotemporal correlation of video pixels.
  • In one embodiment, the joint probability distribution of two difference images D_k and D_l is calculated as

    $$P_{k,l}(m,n)=P\{D_k(i,j)=m,\;D_l(i,j)=n\}=\frac{\sum_{i}\sum_{j}\delta\bigl(D_k(i,j)=m,\;D_l(i,j)=n\bigr)}{H\times W},$$

    where m, n ∈ {-T, ..., T}, k ≠ l, H and W are the height and width of the difference image, and δ(·) is the impulse (indicator) function, equal to 1 when its condition holds and 0 otherwise; joint probability distribution matrices of more than two difference images are computed analogously.
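A hedged NumPy sketch of this computation (function and variable names are mine): the joint probability distribution matrix of two thresholded difference images is their normalised 2-D co-occurrence histogram over the value range {-T, ..., T}:

```python
import numpy as np

def joint_probability_matrix(Dk, Dl, T=4):
    """P_{k,l}(m, n) = #{(i, j): Dk[i, j] == m and Dl[i, j] == n} / (H * W).

    Dk, Dl: thresholded difference images of identical shape with values in {-T, ..., T}.
    Returns a (2T+1) x (2T+1) matrix whose entries sum to 1.
    """
    assert Dk.shape == Dl.shape
    H, W = Dk.shape
    edges = np.arange(-T, T + 2)  # bin edges so that each integer value gets its own bin
    P, _, _ = np.histogram2d(Dk.ravel(), Dl.ravel(), bins=(edges, edges))
    return P / (H * W)
```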
  • Step S5: label the feature vectors obtained in step S4 with category information, and input all labeled feature vectors into the classifier for training to obtain a classifier model.
  • The classifier model is obtained by extracting features from video samples with known category labels (stego video or non-stego video) and then training the classifier.
  • In one embodiment, the classifier is an SVM classifier with a radial basis function kernel, and the optimal classifier model parameters are found by exhaustive (grid) search.
  • The SVM classifier is a classifier commonly used in the prior art; it seeks a decision boundary that separates samples with different category labels in the feature space as well as possible.
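A minimal training sketch using scikit-learn, assuming the feature vectors have already been extracted and labelled (1 for stego segments, 0 for cover segments); the RBF-kernel SVM and the exhaustive parameter search follow the embodiment, but the particular parameter grid is my own choice:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

def train_steganalysis_classifier(features, labels):
    """Train an RBF-kernel SVM, choosing C and gamma by exhaustive grid search.

    features: (n_samples, n_features) array of steganalysis feature vectors.
    labels:   (n_samples,) array with 1 for stego image segments and 0 for cover segments.
    """
    param_grid = {"C": [2.0 ** k for k in range(-3, 10)],
                  "gamma": [2.0 ** k for k in range(-10, 3)]}
    search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
    search.fit(features, labels)
    return search.best_estimator_
```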
  • Step S6: extract the feature vector for steganalysis from the video to be analyzed, following steps analogous to steps S1-S4. Specifically:
  • The video to be analyzed is decompressed and its image segments are extracted, and the video is sliced according to each extracted image segment to obtain the corresponding slice images; then each slice image is differentially filtered with the multiple different templates to obtain the corresponding difference images, which are thresholded with the predetermined threshold T to obtain new difference images;
  • Two or more of the new difference images corresponding to each slice image are arbitrarily selected; for each resulting difference-image pair or group, the joint probability distribution of the pixel values at corresponding positions is computed, giving the joint probability distribution matrix of each pair or group and hence multiple joint probability distribution matrices corresponding to the multiple slice images; the elements of each matrix are merged into a one-dimensional vector used as the feature vector for steganalysis.
  • FIG. 3 shows a frame of a video to be analyzed containing hidden information, as used in an embodiment of the present invention, where FIG. 3(a) is a frame of the video to be analyzed, (b) is a row slice of one of its image segments, and (c) is a column slice of one of its image segments.
  • Step S7: input the feature vector of the video to be analyzed into the classifier model obtained in step S5 for classification; after obtaining the category information of each image segment, fuse the category information of all image segments to output the category of the whole video, i.e., the analysis result indicating whether the video to be analyzed is a stego video.
  • When fusing the category information of all image segments, a threshold is set, such as a percentage threshold.
  • When the proportion of image segments classified as stego in the video to be analyzed exceeds this threshold, the video is judged to be a stego video.
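A minimal sketch of this fusion rule, assuming each image segment has already been classified as 1 (stego) or 0 (cover); the particular percentage threshold is illustrative, not prescribed by the patent:

```python
def fuse_segment_decisions(segment_labels, ratio_threshold=0.5):
    """Declare the whole video stego when the fraction of segments classified as
    stego exceeds the chosen percentage threshold."""
    stego_ratio = sum(segment_labels) / len(segment_labels)
    return stego_ratio > ratio_threshold
```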
  • The invention overcomes the shortcoming of current correlation-based methods, which can describe the correlation of adjacent pixels only in a few specific directions; it fully exploits the omnidirectional dependence between adjacent pixels and thereby achieves better video steganalysis performance.
  • The present invention also proposes slicing the video and extracting features from the slices, which fully exploits the temporal correlation of the video and improves the steganalysis performance.
  • the present invention does not need to know in advance the steganographic algorithm used for the video to be analyzed, and thus can be applied to a system for analyzing a plurality of different types of video steganography algorithms.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present invention discloses a universal video steganalysis method based on the spatiotemporal correlation of video pixels. The method comprises: decompressing the videos in a training set, extracting multiple image segments, and slicing each image segment to obtain multiple slice images; applying differential filtering and thresholding to each slice image; arbitrarily selecting two or more of all the difference images of each slice image and, using a description of the pixel neighborhood relationship, obtaining multiple joint probability distribution matrices corresponding to the slice images, then merging the elements of each matrix into a one-dimensional vector used as a feature vector; labeling the feature vectors with category information and inputting them into a classifier to obtain a classifier model; and extracting the feature vector of the video to be analyzed following the above steps and inputting it into the classifier model for classification, yielding the steganalysis result. The invention fully exploits the temporal correlation of video, improves steganalysis performance, and can be applied in systems that analyze many different types of video steganography algorithms.

Description

Universal video steganalysis method based on spatiotemporal correlation of video pixels
Technical Field: The present invention relates to the field of passive blind forensics for images and video, and in particular to a steganalysis method for video steganography based on the spatiotemporal correlation of video pixels.
Background Art: Steganography hides secret information inside carrier information. Without affecting the perceptual quality or usability of the host signal, it makes it difficult for a potential attacker to judge whether secret information is present, let alone intercept it, thereby ensuring the security of information transmission. Because Internet communication is open and widely used, steganography may also be exploited by criminals and even terrorist organizations to evade monitoring and spread illegal intelligence. Research on steganalysis therefore has important theoretical value and practical significance.
Images are the most common steganographic carriers, and image-based steganography and steganalysis techniques have developed rapidly; however, the limited size of an image inevitably places a strong limit on embedding capacity. A video consists of a sequence of frames and contains a large amount of information redundancy, so using video as the carrier for secret messages can greatly increase the embedding capacity. With the progress of network technology, streaming-media services have grown rapidly and video transmission over networks has become increasingly common, making video-based steganography increasingly important; a large number of video steganography algorithms have appeared in recent years. At the same time, steganalysis techniques for video have developed slowly. Given the abundance of video resources and the fact that video steganography can realize large-capacity covert communication, its security is a particularly important issue.
Detecting hidden information in a carrier medium is called steganalysis. According to the prior knowledge they require, steganalysis methods are divided into targeted methods and universal methods. A targeted steganalysis method exploits particular characteristics of a specific steganographic scheme and does not apply to steganographic methods other than that scheme, whereas a universal steganalysis method does not need to know in advance which steganographic scheme was applied to the video to be analyzed and can be used against arbitrary steganography algorithms, so it is more broadly applicable. For an image, pixels within a local region are strongly correlated, and steganographic embedding weakens this correlation; describing this correlation makes steganalysis possible. A Markov model is commonly used for this purpose, as in D. Zou, Y. Q. Shi, W. Su, and G. Xuan, "Steganalysis based on Markov Model of Thresholded Prediction-Error Image," Proc. IEEE International Conference on Multimedia and Expo, pp. 1365-1368, 2006. In image steganalysis there are many methods based on neighborhood correlation, typical examples being T. Pevny, P. Bas, and J. Fridrich, "Steganalysis by Subtractive Pixel Adjacency Matrix," IEEE Transactions on Information Forensics and Security, vol. 5, no. 2, pp. 215-224, 2010, and Qingxiao Guan, Jing Dong, and Tieniu Tan, "An effective image steganalysis method based on neighborhood information of pixels," Proc. ICIP, pp. 2721-2724, IEEE, 2011. These methods can be applied directly to video steganalysis: the above features are extracted from the video frame by frame, and each frame is then judged for the presence of hidden information. However, when describing the correlation between adjacent pixels, these methods consider only a small number of directions and cannot describe that correlation comprehensively. Because video codecs are complex and diverse, video steganalysis is more difficult than image steganalysis; the present invention therefore fully exploits the spatiotemporal correlation between neighboring pixels and proposes a more effective and more robust video steganalysis method.
Summary of the Invention: The object of the present invention is to provide a universal video steganalysis method based on the spatiotemporal correlation of video pixels, capable of effective steganalysis against a variety of video steganography algorithms.
The method of the present invention comprises two processes: extracting omnidirectional neighborhood-correlation features from video frames and video slices, and training a video steganalysis classifier based on these features. Exploiting pixel correlation is currently the main means of steganalysis: for an image, pixels within a local region are strongly correlated, steganographic embedding weakens this correlation, and measuring the statistical change in this correlation achieves the goal of steganalysis. Because the correlation among N pixels can be described by the joint probability distribution of N-1 difference values, the present invention computes the joint probability distribution matrices of all difference-value combinations, merges them into a one-dimensional vector used as the steganalysis feature, trains a classifier model by machine learning, performs steganalysis on the video to be analyzed, and fuses the analysis results of its multiple frames or slices to decide whether the whole video has undergone steganographic embedding. The universal video steganalysis method based on the spatiotemporal correlation of video pixels proposed by the present invention comprises the following steps:
Step S1: decompress each video in the training set, extract multiple image segments of a fixed length, and slice each image segment to obtain multiple slice images corresponding to it;
Step S2: apply differential filtering to each slice image obtained in step S1 using a plurality of different templates, obtaining the corresponding difference images;
Step S3: threshold the difference images obtained in step S2 with a predetermined threshold T, obtaining new difference images;
Step S4: from all the difference images corresponding to each slice image obtained in step S3, arbitrarily select two or more; for each resulting difference-image pair or group, use the description of the pixel neighborhood relationship, i.e., compute the joint probability distribution of the pixel values at corresponding positions, to obtain a joint probability distribution matrix for each pair or group and hence multiple joint probability distribution matrices corresponding to the multiple slice images; merge the elements of each matrix into a one-dimensional vector used as the feature vector for steganalysis;
Step S5: label the feature vectors obtained in step S4 with category information, and input all labeled feature vectors into a classifier for training to obtain a classifier model;
Step S6: extract the feature vector for steganalysis from the video to be analyzed, following steps analogous to steps S1-S4;
Step S7: input the feature vector of the video to be analyzed into the classifier model obtained in step S5 for classification; after obtaining the category information of each image segment, fuse the category information of all image segments to output the category of the whole video, i.e., the analysis result indicating whether the video to be analyzed is a stego video.
The method of the present invention can be used to determine whether a video contains hidden information and to monitor whether important data is being leaked. Because the method does not rely on the characteristics of any specific hiding algorithm, it can serve as a universal method for analyzing whether a digital video contains hidden information. Brief Description of the Drawings: FIG. 1 is a flowchart of the universal video steganalysis method based on the spatiotemporal correlation of video pixels according to the present invention.
FIG. 2 is a schematic diagram of the video slicing operation according to an embodiment of the present invention.
FIG. 3 is a frame of a video to be analyzed containing hidden information, as used in an embodiment of the present invention.
Detailed Description: To make the object, technical solution, and advantages of the present invention clearer, the invention is described in further detail below with reference to specific embodiments and the accompanying drawings.
FIG. 1 is the flowchart of the universal video steganalysis method based on the spatiotemporal correlation of video pixels according to the present invention. As shown in FIG. 1, the method comprises the following steps, where steps S1-S5 constitute the training process and steps S6-S7 the classification process:
Step S1: decompress each video in the training set, extract multiple image segments of a fixed length, and slice each image segment to obtain multiple slice images corresponding to it.
The length of the image segments can be adjusted as needed; in one embodiment of the invention, the length of an image segment is equal to the width of each frame.
In step S1, obtaining the slice images means that each extracted image segment is regarded as a three-dimensional signal (a cube), which is then sliced along each row of the video frames: the same row taken from all frames forms one row slice, giving as many row slices as there are rows. In the same way, as many column slices as there are columns can be obtained. Each slice can be viewed as an image, and the features for steganalysis are then extracted from these slice images. A schematic diagram of the video slicing operation according to an embodiment of the present invention is shown in FIG. 2. Step S2: each slice image obtained in step S1 is differentially filtered with a plurality of different templates to obtain the corresponding difference images.
The differential filtering convolves the slice image Y with a differential filter template to obtain a difference image D_k:

$$D_k = h_k \otimes Y \qquad (1)$$

where h_k denotes the differential filter template.
The differential filter templates include templates of various scales and differencing directions; specifically, within a set neighborhood scale, a difference is computed between a central pixel and any other pixel of the neighborhood. In one embodiment of the invention, the number of differential filter templates h_k is eight, so that eight difference images D_k (k = 1, 2, ..., 8) are obtained for each slice image. [The eight 3×3 templates are shown as an image in the original document; each contains exactly two non-zero elements, 1 and -1.]
In practice, differential filter templates of larger scale (for example 4×4 or 5×5) can also be used; like the 3×3 templates, they contain only two non-zero elements, 1 and -1. Step S3: the difference images obtained in step S2 are thresholded with a predetermined threshold T to obtain new difference images.
The predetermined threshold T is a positive integer, such as 4 or 3.
The thresholding replaces values greater than T in a difference image with T and values less than -T with -T:

$$D'(i,j)=\begin{cases} D(i,j), & -T < D(i,j) < T \\ T, & D(i,j)\ge T \\ -T, & D(i,j)\le -T \end{cases} \qquad (2)$$

where D(i,j) denotes the value of the difference image at position (i,j). Step S4: from all the difference images corresponding to each slice image, arbitrarily select two or more; for each resulting difference-image pair or group, use the description of the pixel neighborhood relationship, i.e., compute the joint probability distribution of the pixel values at corresponding positions, to obtain a joint probability distribution matrix for each pair or group and hence multiple joint probability distribution matrices corresponding to the multiple slice images; merge the elements of each matrix into a one-dimensional vector used as the feature vector for steganalysis.
The joint probability distribution matrix of two difference values can describe the relationship among three pixels, that of three difference values can describe the relationship among four pixels, and so on; the elements of these joint probability distribution matrices (all of them, or a selected subset) are merged together as the feature vector for video steganalysis. For a video frame, the joint probability distribution of its difference images reflects the correlation of the video pixels in the spatial domain; for a slice image, it reflects the correlation of the video pixels in both space and time (in a slice image, pixels in the same row come from the same frame and are spatially correlated, while pixels in different rows come from different frames and are temporally correlated). The present invention can therefore be said to obtain the feature vector for steganalysis from the spatiotemporal correlation of video pixels.
In one embodiment of the invention, the joint probability distribution of any two of the eight difference images D_k (k = 1, 2, ..., 8) is computed; after removing duplicate combinations there are 20 different choices, including (D1,D4), (D1,D5), (D1,D6), (D1,D7), (D2,D4), (D2,D5), (D2,D6), (D2,D7), (D3,D5), (D3,D6), (D4,D6), (D4,D7), (D4,D8), (D5,D7) and (D5,D8) (the remaining pairs are illegible in this copy). This yields 20 joint probability distribution matrices, each of size (2T+1)×(2T+1), i.e. 9×9, for a total of 20×9×9 = 1620 feature dimensions.
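To make the dimensionality count concrete, the sketch below (function names and the pair list are assumptions; the full 20-pair selection of the embodiment is partly illegible above, so it is passed in as an argument) concatenates the joint probability matrices of the chosen difference-image pairs; with 20 pairs and T = 4 this yields the 20 × 9 × 9 = 1620 dimensions mentioned above:

```python
import numpy as np

def joint_matrix(Dk, Dl, T=4):
    """(2T+1) x (2T+1) joint probability matrix of two thresholded difference images."""
    H, W = Dk.shape
    edges = np.arange(-T, T + 2)
    P, _, _ = np.histogram2d(Dk.ravel(), Dl.ravel(), bins=(edges, edges))
    return P / (H * W)

def steganalysis_feature(difference_images, pairs, T=4):
    """Concatenate the joint probability matrices of the selected pairs into one vector.

    difference_images: dict mapping k in 1..8 to the thresholded difference image D_k.
    pairs: list of (k, l) index pairs, e.g. [(1, 4), (1, 5), ..., (5, 8)]; 20 pairs in the embodiment.
    """
    blocks = [joint_matrix(difference_images[k], difference_images[l], T).ravel()
              for k, l in pairs]
    return np.concatenate(blocks)  # 20 pairs x (2T+1)^2 = 1620 dimensions when T = 4
```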
In one embodiment of the invention, the joint probability distribution of two difference images is calculated as

$$P_{k,l}(m,n)=P\{D_k(i,j)=m,\;D_l(i,j)=n\}=\frac{\sum_{i}\sum_{j}\delta\bigl(D_k(i,j)=m,\;D_l(i,j)=n\bigr)}{H\times W} \qquad (3)$$

where m, n ∈ {-T, ..., T} and k, l ∈ {1, 2, ..., 8}, k ≠ l; H and W are the height and width of the difference image; and δ(·) is the impulse (indicator) function, equal to 1 when its condition holds and 0 otherwise. Joint probability distribution matrices of more than two difference images can be computed in a similar way. Step S5: label the feature vectors obtained in step S4 with category information, and input all labeled feature vectors into the classifier for training to obtain the classifier model.
The classifier model is obtained by extracting features from video samples with known category labels (stego video or non-stego video) and then training the classifier.
In one embodiment of the invention, the classifier is an SVM classifier with a radial basis function kernel, and the optimal classifier model parameters are found by exhaustive (grid) search. The SVM classifier is a classifier commonly used in the prior art; it seeks a decision boundary that separates samples with different category labels in the feature space to the greatest possible extent. Step S6: extract the feature vector for steganalysis from the video to be analyzed, following steps analogous to steps S1-S4, specifically:
First, the video to be analyzed is decompressed to extract each image segment, and the video is sliced according to each extracted image segment to obtain the corresponding slice images; then each slice image is differentially filtered with the multiple different templates to obtain the corresponding difference images;
Next, the difference images are thresholded with the predetermined threshold T to obtain new difference images;
Finally, two or more of the new difference images corresponding to each slice image are arbitrarily selected; for each resulting difference-image pair or group, the joint probability distribution of the pixel values at corresponding positions is computed, giving the joint probability distribution matrix of each pair or group and hence multiple joint probability distribution matrices corresponding to the multiple slice images; the elements of each matrix are merged into a one-dimensional vector used as the feature vector for steganalysis.
FIG. 3 shows a frame of a video to be analyzed containing hidden information, as used in an embodiment of the present invention, where FIG. 3(a) is a frame of the video to be analyzed, (b) is a row slice of one of its image segments, and (c) is a column slice of one of its image segments. Step S7: input the feature vector of the video to be analyzed into the classifier model obtained in step S5 for classification; after obtaining the category information of each image segment, fuse the category information of all image segments to output the category of the whole video, i.e., the analysis result indicating whether the video to be analyzed is a stego video.
When fusing the category information of all image segments, a threshold such as a percentage threshold is set; when the proportion of image segments classified as stego in the video to be analyzed exceeds this threshold, the video is judged to be a stego video.
The present invention overcomes the shortcoming of current correlation-based methods, which can describe the correlation of adjacent pixels only in a few specific directions, and fully exploits the omnidirectional dependence between adjacent pixels, thereby achieving better video steganalysis performance. Besides extracting features from video frames, the invention also proposes slicing the video and extracting features from the slices, which fully exploits the temporal correlation of the video and improves the steganalysis performance. The invention does not require prior knowledge of the steganographic algorithm used on the video to be analyzed and can therefore be applied in systems that analyze many different types of video steganography algorithms.
The specific embodiments described above further illustrate the object, technical solution, and beneficial effects of the present invention. It should be understood that the above are merely specific embodiments of the invention and are not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims

Claims
1. A universal video steganalysis method based on the spatiotemporal correlation of video pixels, characterized in that the method comprises the following steps:
Step S1: decompress each video in the training set, extract multiple image segments of a fixed length, and slice each image segment to obtain multiple slice images corresponding to it;
Step S2: apply differential filtering to each slice image obtained in step S1 using a plurality of different templates, obtaining the corresponding difference images;
Step S3: threshold the difference images obtained in step S2 with a predetermined threshold T, obtaining new difference images;
Step S4: from all the difference images corresponding to each slice image obtained in step S3, arbitrarily select two or more; for each resulting difference-image pair or group, use the description of the pixel neighborhood relationship, i.e., compute the joint probability distribution of the pixel values at corresponding positions, to obtain a joint probability distribution matrix for each pair or group and hence multiple joint probability distribution matrices corresponding to the multiple slice images; merge the elements of each matrix into a one-dimensional vector used as the feature vector for steganalysis;
Step S5: label the feature vectors obtained in step S4 with category information, and input all labeled feature vectors into a classifier for training to obtain a classifier model;
Step S6: extract the feature vector for steganalysis from the video to be analyzed, following steps analogous to steps S1-S4;
Step S7: input the feature vector of the video to be analyzed into the classifier model obtained in step S5 for classification; after obtaining the category information of each image segment, fuse the category information of all image segments to output the category of the whole video, i.e., the analysis result indicating whether the video to be analyzed is a stego video.
2. The method according to claim 1, characterized in that, when obtaining the multiple slice images corresponding to each image segment in step S1, each extracted image segment is regarded as a three-dimensional signal, i.e., a cube; the cube is then sliced along each row of the video frames to obtain as many row slices as there are rows, and along each column of the video frames to obtain as many column slices as there are columns; the row slices and the column slices together constitute the multiple slice images.
3. The method according to claim 1, characterized in that the differential filtering convolves the slice image Y with a differential filter template to obtain a difference image:

$$D_k = h_k \otimes Y,$$

where h_k denotes the differential filter template.
4. The method according to claim 3, characterized in that the differential filter templates h_k include templates of various scales and differencing directions.
5. The method according to claim 4, characterized in that the number of differential filter templates h_k is eight. [The eight templates are shown as an image in the original document.]
6. The method according to claim 1, characterized in that the thresholding in step S3 replaces values greater than T in the difference image with T and values less than -T with -T:

$$D'(i,j)=\begin{cases} D(i,j), & -T < D(i,j) < T \\ T, & D(i,j)\ge T \\ -T, & D(i,j)\le -T \end{cases}$$

where D(i,j) denotes the value of the difference image at position (i,j).
7. The method according to claim 1, characterized in that, in step S4, the joint probability distribution of two difference images is calculated as

$$P_{k,l}(m,n)=P\{D_k(i,j)=m,\;D_l(i,j)=n\}=\frac{\sum_{i}\sum_{j}\delta\bigl(D_k(i,j)=m,\;D_l(i,j)=n\bigr)}{H\times W},$$

where m, n ∈ {-T, ..., T} and k, l ∈ {1, 2, ..., 8}, k ≠ l; H and W are the height and width of the difference image; and δ(·) is the impulse (indicator) function, equal to 1 when its condition holds and 0 otherwise.
8. The method according to claim 1, characterized in that the classifier is an SVM classifier with a radial basis function as its kernel function.
9. The method according to claim 1, characterized in that, when fusing the category information of all image segments in step S7, a threshold is set, and when the proportion of image segments classified as stego in the video to be analyzed exceeds this threshold, the video is judged to be a stego video.
10. The method according to claim 9, characterized in that the threshold is a percentage threshold.
PCT/CN2013/077092 2013-06-09 2013-06-09 Universal video steganalysis method based on spatiotemporal correlation of video pixels WO2014198021A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2013/077092 WO2014198021A1 (zh) 2013-06-09 2013-06-09 Universal video steganalysis method based on spatiotemporal correlation of video pixels

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2013/077092 WO2014198021A1 (zh) 2013-06-09 2013-06-09 Universal video steganalysis method based on spatiotemporal correlation of video pixels

Publications (1)

Publication Number Publication Date
WO2014198021A1 true WO2014198021A1 (zh) 2014-12-18

Family

ID=52021553

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2013/077092 WO2014198021A1 (zh) 2013-06-09 2013-06-09 Universal video steganalysis method based on spatiotemporal correlation of video pixels

Country Status (1)

Country Link
WO (1) WO2014198021A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190313114A1 (en) * 2018-04-06 2019-10-10 Qatar University System of video steganalysis and a method of using the same

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101151622A (zh) * 2005-01-26 2008-03-26 新泽西理工学院 用于隐写分析的系统和方法
US20080175429A1 (en) * 2007-01-19 2008-07-24 New Jersey Institute Of Technology Method and apparatus for steganalysis for texture images
CN102722858A (zh) * 2012-05-29 2012-10-10 中国科学院自动化研究所 基于对称邻域信息的盲隐写分析方法
CN103281473A (zh) * 2013-06-09 2013-09-04 中国科学院自动化研究所 基于视频像素时空相关性的通用视频隐写分析方法

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101151622A (zh) * 2005-01-26 2008-03-26 新泽西理工学院 用于隐写分析的系统和方法
US20080175429A1 (en) * 2007-01-19 2008-07-24 New Jersey Institute Of Technology Method and apparatus for steganalysis for texture images
CN102722858A (zh) * 2012-05-29 2012-10-10 中国科学院自动化研究所 基于对称邻域信息的盲隐写分析方法
CN103281473A (zh) * 2013-06-09 2013-09-04 中国科学院自动化研究所 基于视频像素时空相关性的通用视频隐写分析方法

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190313114A1 (en) * 2018-04-06 2019-10-10 Qatar University System of video steganalysis and a method of using the same
US11611773B2 (en) * 2018-04-06 2023-03-21 Qatar Foundation For Education, Science And Community Development System of video steganalysis and a method for the detection of covert communications

Similar Documents

Publication Publication Date Title
US11676408B2 (en) Identification of neural-network-generated fake images
Aloraini et al. Sequential and patch analyses for object removal video forgery detection and localization
D'Avino et al. Autoencoder with recurrent neural networks for video forgery detection
Moreira et al. Pornography classification: The hidden clues in video space–time
Lin et al. A passive approach for effective detection and localization of region-level video forgery with spatio-temporal coherence analysis
Kang et al. Robust median filtering forensics using an autoregressive model
CN103281473B (zh) 基于视频像素时空相关性的通用视频隐写分析方法
CN110853033B (zh) 基于帧间相似度的视频检测方法和装置
JP2020064637A (ja) 畳み込みニューラルネットワークを介してイメージ偽変造を探知するシステム、方法、及びこれを利用して無補正探知サービスを提供する方法
CN104504669B (zh) 一种基于局部二值模式的中值滤波检测方法
Meikap et al. Directional PVO for reversible data hiding scheme with image interpolation
Wahab et al. Passive video forgery detection techniques: A survey
Yang et al. A robust hashing algorithm based on SURF for video copy detection
Bourouis et al. Recent advances in digital multimedia tampering detection for forensics analysis
Fayyaz et al. An improved surveillance video forgery detection technique using sensor pattern noise and correlation of noise residues
Abdulrahman et al. Color images steganalysis using RGB channel geometric transformation measures
Feng et al. An energy-based method for the forensic detection of re-sampled images
Moliner et al. Bootstrapped representation learning for skeleton-based action recognition
Banerjee et al. Report on ugˆ2+ challenge track 1: Assessing algorithms to improve video object detection and classification from unconstrained mobility platforms
Hu et al. Effective forgery detection using DCT+ SVD-based watermarking for region of interest in key frames of vision-based surveillance
Roka et al. Deep stacked denoising autoencoder for unsupervised anomaly detection in video surveillance
Shit et al. An encoder‐decoder based CNN architecture using end to end dehaze and detection network for proper image visualization and detection
Gao et al. Real-time jellyfish classification and detection algorithm based on improved YOLOv4-tiny and improved underwater image enhancement algorithm
Li et al. A compact representation of sensor fingerprint for camera identification and fingerprint matching
Jin et al. Video logo removal detection based on sparse representation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13886915

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13886915

Country of ref document: EP

Kind code of ref document: A1