CN105160664A - Low-rank model based compressed sensing video reconstruction method - Google Patents

Low-rank model based compressed sensing video reconstruction method

Info

Publication number
CN105160664A
CN105160664A
Authority
CN
China
Prior art keywords
video
blocks
frame
block
Prior art date
Legal status
Granted
Application number
CN201510523631.6A
Other languages
Chinese (zh)
Other versions
CN105160664B (en)
Inventor
刘芳
李婉
郝红侠
焦李成
李玲玲
杨淑媛
尚荣华
马文萍
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201510523631.6A priority Critical patent/CN105160664B/en
Publication of CN105160664A publication Critical patent/CN105160664A/en
Application granted granted Critical
Publication of CN105160664B publication Critical patent/CN105160664B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses a compressed sensing video reconstruction method based on a low-rank model, which mainly addresses the inaccuracy and poor robustness of compressed sensing video reconstruction. The implementation process is: (1) receive measurement data; (2) initialize a set of single-frame covariance matrices; (3) construct an initial reconstructed video by piecewise linear estimation based on joint sparsity and Gaussian distributions; (4) search for the similar blocks of each image block using the inter-frame and intra-frame correlation of the video; (5) solve for the low-rank structure of the video; (6) update the reconstructed video; (7) judge whether the iteration stop condition is reached; (8) output the reconstructed video. Compared with existing video reconstruction techniques, the invention has the advantages of high reconstructed image quality and good robustness, and can be used for the reconstruction of natural scene video.

Description

Compressed Sensing Video Reconstruction Method Based on a Low-rank Model

Technical Field

The invention belongs to the technical field of image processing, and further relates to a video reconstruction method based on a compressed sensing system in the technical field of video coding. The invention exploits the intra-frame and inter-frame similarity of video sequences and reconstructs the video based on a low-rank model; it can be used to reconstruct natural-image video sequences.

Background Art

In recent years, a new data acquisition theory, compressed sensing (CS), has emerged in the field of signal processing. This theory achieves compression at the same time as data acquisition, breaking through the limits of the traditional Nyquist sampling theorem and bringing revolutionary change to data acquisition technology, which gives the theory broad application prospects in compressive imaging systems, military cryptography, wireless sensing and other fields. Compressed sensing theory mainly comprises three aspects: sparse representation of the signal, observation of the signal, and reconstruction of the signal. Among them, designing fast and effective reconstruction algorithms is a key link in successfully extending CS theory to practical data models and acquisition systems.

In applications ranging from science to sports, high-speed cameras play an important role in capturing fast motion, but measuring high-speed video poses a challenge for camera design. Through low-frame-rate compressive measurements, compressed sensing can capture high-frame-rate video information; it is therefore used for high-speed video capture, easing the difficulty of designing high-speed cameras.

In their paper "Solving Inverse Problems With Piecewise Linear Estimators: From Gaussian Mixture Models to Structured Sparsity" (IEEE Transactions on Image Processing, 2012, 21(5): 2481-2499), Guoshen Yu et al. proposed a method for solving image inverse problems using piecewise linear estimation. The method models the problem with a Gaussian mixture model, i.e., each image patch obeys one of several multivariate Gaussian distributions, and it achieves good results on image inverse problems including image reconstruction. Its shortcoming is that it targets two-dimensional images and cannot be applied directly to the reconstruction of video sequences, and its statistical way of capturing the similarity between image patches is not accurate.

In their paper "Video Compressive Sensing Using Gaussian Mixture Models" (IEEE Transactions on Image Processing, 2014, 23), Jianbo Yang et al. proposed a method based on Gaussian mixture models. By building a Gaussian mixture model on spatio-temporal video patches, the method reconstructs temporally compressed video sequences with good results. Its remaining shortcoming is that it only establishes a single Gaussian mixture model for the images and does not capture the correlation within and between video frames, so the video sequences it reconstructs are not sufficiently accurate.

Summary of the Invention

The object of the present invention is to address the failure of the above prior-art spatio-temporal video compressed sensing reconstruction techniques to exploit intra-frame and inter-frame correlation, by proposing a compressed sensing video reconstruction method based on a low-rank model, thereby improving the quality of the reconstructed images.

The technical idea for achieving the object of the present invention is: exploit the correlation between video frames, namely that the same positions of different frames are similar, and model the image blocks at the same position of different frames as sharing the same Gaussian distribution, thereby obtaining an initial reconstructed video sequence; then build a low-rank model over all similar image blocks within and across frames, and iteratively optimize the low-rank structure of the video and the reconstructed video sequence, achieving high-quality temporal video compressed sensing reconstruction.

The concrete steps for realizing the object of the present invention are as follows:

(1) Receive measurement data:

(1a) The compressed sensing sender observes the video data: every H frames of video data are observed once with a random mask observation matrix to form one frame of measurement data, and the sender transmits the measurement data and the random mask observation matrix, where H is a positive integer ranging from 1 to 20;

(1b) The receiver receives the measurement data and the random mask observation matrix sent by the sender;

(2) Initialize the set of single-frame covariance matrices:

(2a) Generate 18 artificial black-and-white images, each of size 65×65 pixels, each representing one direction;

(2b) Using a window of 8×8 pixels with a step size of 1 pixel, slide the window over the artificial black-and-white image of each direction and take all blocks of size 8×8 pixels, obtaining the set of directional blocks for each direction;

(2c) Perform principal component analysis (PCA) decomposition on the set of directional blocks of each direction to obtain the PCA orthogonal basis and eigenvalue matrix; retain the 8 largest eigenvalues of each direction together with the corresponding principal-component orthogonal basis vectors, obtaining the corresponding eigenvalue matrix and direction basis;

(2d) Compute the single-frame covariance matrix of the direction represented by each artificial black-and-white image, obtaining the set of single-frame covariance matrices;

(3) Construct the initial reconstructed video by piecewise linear estimation based on joint sparsity and Gaussian distributions:

(3a) For the direction represented by each artificial black-and-white image, place the H single-frame covariance matrices of that direction on the diagonal of a matrix, constructing the jointly sparse video covariance matrix of that direction used to reconstruct the three-dimensional video data, where the video covariance matrix of the k-th direction is:

$$\bar{P}_k = \begin{pmatrix} P_k & & \\ & \ddots & \\ & & P_k \end{pmatrix}$$

where \(\bar{P}_k\) denotes the video covariance matrix of the k-th direction (H copies of \(P_k\) on the diagonal), \(P_k\) denotes the single-frame covariance matrix of the k-th direction, and k = 1, 2, ..., 18 denotes the number of the direction represented by the artificial black-and-white image;

(3b) Initialize the M×N×H-dimensional reconstructed video as a zero matrix; divide each frame of the initial reconstructed video with step size p into S image blocks of dimension n×n, keeping the positions of the blocks; stack the image blocks at the same position of every frame into a video block, obtaining the set of video blocks \(\{x_1, \ldots, x_l, \ldots, x_S\}\), where \(x_l = (x_l^1, \ldots, x_l^t, \ldots, x_l^H)^T\) denotes the l-th video block, \(x_l^t\) denotes the l-th image block, of size n×n, of the t-th frame of the reconstructed video, and T denotes the transpose operation. The l-th video block \(x_l\) in direction k represented by an artificial black-and-white image obeys a Gaussian distribution with mean 0 and covariance matrix \(\bar{P}_k\), where \(\bar{P}_k\) denotes the video covariance matrix of the k-th direction; M, N, H denote the sizes of the first, second and third dimensions of the reconstructed video; p and n are positive integers no larger than the smaller of M and N; k = 1, 2, ..., 18 denotes the number of the direction represented by the artificial black-and-white image; l = 1, 2, ..., S; and S denotes the number of video blocks, i.e., the number of image blocks each frame is divided into;

(3c) Compute the estimated value of each video block in the direction represented by each artificial black-and-white image according to the following formula:

$$x_l^k = \bar{P}_k \Phi_l^T \left( \Phi_l \bar{P}_k \Phi_l^T + \sigma^2 I_d \right)^{-1} y_l$$

where \(x_l^k\) denotes the estimated value of the l-th video block in direction k represented by the artificial black-and-white image, \(\bar{P}_k\) denotes the k-th video covariance matrix, \(\Phi_l\) denotes the observation matrix of the l-th video block taken from the random mask observation matrix, \(y_l\) denotes the measurement data of the l-th video block taken from the measurement data, σ takes values in the range 0 to 1, \(I_d\) denotes the d-dimensional identity matrix, T denotes the transpose operation, \(\cdot^{-1}\) denotes matrix inversion, k = 1, 2, ..., 18 denotes the number of the direction represented by the artificial black-and-white image, l = 1, 2, ..., S denotes the number of the video block, and S denotes the number of video blocks, i.e., the number of image blocks each frame is divided into;

(3d) Compute the optimal direction among the directions represented by the artificial black-and-white images according to the following formula:

$$\tilde{k}_l = \arg\min_{k} \left( \left\| \Phi_l x_l^k - y_l \right\|^2 + \sigma^2 \left( x_l^k \right)^T \bar{P}_k^{-1} x_l^k + \sigma^2 \log \left| \bar{P}_k \right| \right)$$

where \(\tilde{k}_l\) denotes the optimal direction of the l-th video block among the directions represented by the artificial black-and-white images, \(\arg\min_k\) returns the value of k that minimizes the objective function, \(\Phi_l\) denotes the observation matrix of the l-th video block taken from the random mask observation matrix, \(x_l^k\) denotes the estimated value of the l-th video block in direction k represented by the artificial black-and-white image, \(y_l\) denotes the measurement data of the l-th video block taken from the measurement data, \(\bar{P}_k\) denotes the k-th video covariance matrix, σ takes values in the range 0 to 1, \(\|\cdot\|^2\) denotes the squared norm, \(|\cdot|\) denotes the determinant, T denotes the transpose operation, \(\cdot^{-1}\) denotes matrix inversion, k = 1, 2, ..., 18, l = 1, 2, ..., S, and S denotes the number of video blocks, i.e., the number of image blocks each frame is divided into;

(3e) Combine the estimated value of each video block in its optimal direction into the initial reconstructed video, according to the block positions kept when the video blocks were divided in step (3b);

(3f) Set the iteration counter s and the maximum number of outer iterations U; let the current iteration number s be 1, and take the initial reconstructed video as the reconstructed video of the first iteration, where U is a positive integer;

(4) Search for the similar blocks of each image block using inter-frame and intra-frame correlation of the video:

(4a) Divide each frame of the reconstructed video with step size p into blocks of dimension n×n, and gather the blocks of all frames into a two-dimensional image block set G<sub>1</sub>, where p and n are positive integers no larger than the smaller of M and N, and M, N denote the sizes of the first and second dimensions of the reconstructed video;

(4b) Divide each frame of the reconstructed video with step size 1 into blocks of dimension n×n, gather the blocks of all frames into a two-dimensional image block set G<sub>2</sub>, and record the index of each video block in G<sub>2</sub>, where n is a positive integer no larger than the smaller of M and N, and M, N denote the sizes of the first and second dimensions of the reconstructed video;

(4c) For each block in G<sub>1</sub>, take from G<sub>2</sub> all blocks inside the Z×Z×H window around that block and record them as its neighboring blocks, where Z denotes the size of the first and second dimensions of the window, H denotes the size of the third dimension of the window, G<sub>1</sub> denotes the set of two-dimensional image blocks divided with step size p, and G<sub>2</sub> denotes the set of two-dimensional image blocks divided with step size 1;

(4d) Compute the Euclidean distance between each block in the set G<sub>1</sub> and each of its neighboring blocks, sort the neighbors in ascending order of distance, select the first Q blocks as the similar blocks of the corresponding block, and record the indices of each block's similar blocks in G<sub>2</sub>, where Q is a positive integer less than half the number of neighboring blocks;
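The similar-block selection of step (4d) can be sketched as follows. This is an illustrative sketch only: the function name is ours, the reference block and candidates are assumed to be flattened vectors, and the window-based neighbor gathering of step (4c) is assumed to have been done already.

```python
import numpy as np

def find_similar_blocks(ref, candidates, Q):
    """Rank candidate blocks by Euclidean distance to a reference block
    and return the indices of the Q nearest, as in step (4d).
    `ref` is one flattened block; `candidates` has one candidate per row."""
    dists = np.linalg.norm(candidates - ref, axis=1)  # Euclidean distances
    return np.argsort(dists)[:Q]                      # Q smallest distances
```

The returned positions index into the candidate set, mirroring how the method records each block's similar-block indices in G<sub>2</sub>.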

(5) Solve for the low-rank structure of the video according to the following formula:

$$L_l^{t,s+1} = \arg\min_{L_l^t} \left\| \tilde{R}_l^t(X^s) - L_l^t \right\|_F^2 + \lambda \left\| L_l^t \right\|_*$$

where \(L_l^{t,s+1}\) denotes the low-rank structure of the l-th image block of the t-th frame at the (s+1)-th iteration, \(\arg\min\) denotes taking the value of the low-rank structure \(L_l^t\) that minimizes the objective, \(\tilde{R}_l^t(\cdot)\) denotes the extraction transform that extracts all similar blocks of the l-th image block of the t-th frame, \(X^s\) denotes the reconstructed video of the s-th iteration, \(\tilde{R}_l^t(X^s) = \left( R_l^{t,1}(X^s), \ldots, R_l^{t,q}(X^s), \ldots, R_l^{t,Q}(X^s) \right)\), \(R_l^{t,q}\) denotes the extraction matrix that extracts the q-th similar block of the l-th image block of the t-th frame, λ takes the value 0.75, \(\|\cdot\|_F^2\) denotes the squared Frobenius norm, \(\|\cdot\|_*\) denotes the nuclear norm, H denotes the size of the third dimension of the reconstructed video, t = 1, 2, ..., H denotes the number of the video frame, l = 1, 2, ..., S denotes the number of the video block, S denotes the number of image blocks each frame is divided into, and q = 1, 2, ..., Q, where Q denotes the number of similar blocks;
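The minimization in step (5) has a well-known closed form: soft-thresholding of singular values. A minimal sketch (function name ours; note the threshold is λ/2 here because the data term is the squared Frobenius norm without a 1/2 factor):

```python
import numpy as np

def low_rank_structure(M, lam=0.75):
    """Closed-form minimizer of ||M - L||_F^2 + lam * ||L||_* via
    singular value soft-thresholding. M stacks the Q similar blocks
    of one image block (e.g. one block per column)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_thr = np.maximum(s - lam / 2.0, 0.0)  # shrink singular values
    return (U * s_thr) @ Vt
```

Because similar blocks are nearly identical, the stacked matrix M is close to rank one, and thresholding suppresses the small singular values that carry noise.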

(6) Update the reconstructed video according to the following formula:

$$X^{s+1} = \arg\min_{X} \left\| y - \Phi X \right\|_2^2 + \eta \sum_{t} \sum_{l} \left\| \tilde{R}_l^t(X) - L_l^{t,s+1} \right\|_F^2$$

where \(X^{s+1}\) denotes the reconstructed video of the (s+1)-th iteration, \(\arg\min_X\) denotes taking the value of the reconstructed video X that minimizes the objective, y denotes the one-dimensional vector formed from the measurement data, Φ denotes the video observation matrix generated from the random mask observation matrix, \(\tilde{R}_l^t(X) = \left( R_l^{t,1}(X), \ldots, R_l^{t,q}(X), \ldots, R_l^{t,Q}(X) \right)\) denotes the extraction transform that extracts all similar blocks of the l-th image block of the t-th frame of X, \(R_l^{t,q}\) denotes the extraction matrix of the q-th similar block of the l-th image block of the t-th frame, \(L_l^{t,s+1}\) denotes the low-rank structure of the l-th image block of the t-th frame at the (s+1)-th iteration, η takes the value 1, Σ denotes summation, \(\|\cdot\|_2^2\) denotes the squared 2-norm, \(\|\cdot\|_F^2\) denotes the squared Frobenius norm, H denotes the size of the third dimension of the reconstructed video, t = 1, 2, ..., H denotes the number of the video frame, l = 1, 2, ..., S denotes the number of the video block, S denotes the number of image blocks each frame is divided into, and q = 1, 2, ..., Q, where Q denotes the number of similar blocks;
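The update of step (6) is a least-squares problem. The patent does not spell out a solver here, so the sketch below stands in with plain gradient descent on a flattened video; the dense Φ, the function name, and the step-size/iteration parameters are all illustrative assumptions, not the patent's method.

```python
import numpy as np

def update_video(X, y, Phi, patches_lr, extract_idx, eta=1.0, step=0.05, iters=50):
    """Gradient descent on  ||y - Phi x||^2 + eta * sum_q ||x[idx_q] - l_q||^2,
    where x is the flattened video, Phi a (dense, illustrative) observation
    matrix, and each extraction R_q is a row-index selection `idx_q` paired
    with its low-rank target patch `l_q`."""
    x = X.ravel().astype(float).copy()
    for _ in range(iters):
        grad = 2.0 * Phi.T @ (Phi @ x - y)              # data-fidelity term
        for idx, l_q in zip(extract_idx, patches_lr):   # low-rank penalty terms
            grad[idx] += 2.0 * eta * (x[idx] - l_q)
        x -= step * grad
    return x.reshape(X.shape)
```

With selection-type extraction matrices the exact minimizer is a per-pixel weighted average of the measurement back-projection and the low-rank patches; the iterative form above avoids assembling that large linear system.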

(7) Judge whether the current iteration number is greater than the maximum number of outer iterations; if so, execute step (8); otherwise add 1 to the current iteration number and execute step (4);

(8) Output the reconstructed video.

Compared with the prior art, the present invention has the following advantages:

First, the present invention constructs the initial reconstructed video by piecewise linear estimation based on joint sparsity and Gaussian distributions, overcoming the inability of prior-art piecewise linear reconstruction methods to be applied directly to the reconstruction of video sequences, so the invention has the advantage of improving the robustness of video reconstruction.

Second, the present invention searches for the similar blocks of each image block using inter-frame and intra-frame correlation, solves for the low-rank structure of the video, and obtains the reconstructed video by iterative optimization, overcoming the failure of the prior-art Gaussian-mixture-model method to exploit the correlation within and between video frames, so the invention has the advantage of improving the accuracy of the reconstructed video.

Brief Description of the Drawings

Fig. 1 is a flow chart of the present invention;

Fig. 2 shows the results of reconstructing a vehicle video with the present invention and the prior art when H = 8;

Fig. 3 is a line chart of the peak signal-to-noise ratio (PSNR) of the vehicle video reconstructed by the method of the present invention and by the prior art.

Detailed Description

The present invention is described further below in conjunction with the accompanying drawings.

With reference to Fig. 1, the concrete steps of the present invention are as follows.

Step 1: Receive measurement data.

The compressed sensing sender observes the video data: every H frames of video data are observed once with a random mask observation matrix to form one frame of measurement data, and the sender transmits the measurement data and the random mask observation matrix, where H is a positive integer ranging from 1 to 20.

The receiver receives the measurement data and the random mask observation matrix sent by the sender.

The M×N×H-dimensional original video data \(\hat{X}\) are observed with an M×N×H-dimensional random mask observation matrix A, yielding one frame of M×N-dimensional measurement data Y:

$$Y_{i,j} = \left[ A_{i,j,1}, A_{i,j,2}, \ldots, A_{i,j,t}, \ldots, A_{i,j,H} \right] \left[ \hat{x}_{i,j,1}, \hat{x}_{i,j,2}, \ldots, \hat{x}_{i,j,t}, \ldots, \hat{x}_{i,j,H} \right]^T$$

where \(\cdot^T\) denotes vector transposition, \(Y_{i,j}\) denotes the element in row i, column j of the measurement data Y, with i = 1, 2, ..., M and j = 1, 2, ..., N, \(\hat{x}_{i,j,t}\) denotes the element in row i, column j of frame t of the original video data \(\hat{X}\), with t = 1, 2, ..., H, \(A_{i,j,t}\) denotes the element in row i, column j of frame t of the random mask observation matrix A, and each \(A_{i,j,t}\) is randomly 0 or 1 with probability r.
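The per-pixel observation formula of Step 1 can be sketched as follows. This is an illustrative sketch: the function name is ours, and for simplicity every mask entry is drawn i.i.d. with the same probability r.

```python
import numpy as np

def measure_video(video, r=0.5, seed=0):
    """Collapse H frames into one frame of measurements with a random
    binary mask: Y[i,j] = sum_t A[i,j,t] * x[i,j,t].
    `video` has shape (M, N, H); A is 0/1 with probability r."""
    rng = np.random.default_rng(seed)
    A = (rng.random(video.shape) < r).astype(video.dtype)  # random mask observation matrix
    Y = np.sum(A * video, axis=2)                          # temporal collapse per pixel
    return Y, A
```

The receiver is assumed to be given both Y and A, matching steps (1a)-(1b).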

Step 2: Initialize the set of single-frame covariance matrices.

Generate 18 artificial black-and-white images, each of size 65×65 pixels, each representing one direction.

Using a window of 8×8 pixels with a step size of 1 pixel, slide the window over the artificial black-and-white image of each direction and take all blocks of size 8×8 pixels, obtaining the set of directional blocks for each direction.
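The sliding-window block extraction above can be sketched as follows (function name ours; one flattened block per row):

```python
import numpy as np

def sliding_blocks(img, n=8, step=1):
    """Slide an n-by-n window with the given step over a 2-D image and
    return all blocks, one flattened block per row. For a 65x65 image
    with n=8, step=1 this yields 58*58 blocks of 64 pixels each."""
    H, W = img.shape
    blocks = [img[i:i + n, j:j + n].ravel()
              for i in range(0, H - n + 1, step)
              for j in range(0, W - n + 1, step)]
    return np.array(blocks)
```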

Perform principal component analysis (PCA) decomposition on the set of directional blocks of each direction to obtain the PCA orthogonal basis and eigenvalue matrix; retain the 8 largest eigenvalues of each direction together with the corresponding principal-component orthogonal basis vectors, obtaining the corresponding eigenvalue matrix and direction basis.

Compute the single-frame covariance matrix of the direction represented by each artificial black-and-white image, obtaining the set of single-frame covariance matrices.

The single-frame covariance matrix of the direction represented by each black-and-white image is computed according to the following formula:

$$P_k = B_k D_k B_k^T$$

where \(P_k\) denotes the single-frame covariance matrix of the k-th direction, \(B_k\) denotes the direction basis of the k-th direction, \(D_k\) denotes the eigenvalue matrix of the k-th direction, T denotes the transpose operation, and k = 1, 2, ..., 18 denotes the direction number.
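Rebuilding the covariance from the truncated basis can be sketched as follows (function name ours; columns of \(B_k\) are the retained principal components, \(D_k\) the diagonal matrix of the retained eigenvalues):

```python
import numpy as np

def single_frame_covariance(B_k, D_k):
    """Form the single-frame covariance matrix P_k = B_k D_k B_k^T
    from a direction basis B_k and a diagonal eigenvalue matrix D_k."""
    return B_k @ D_k @ B_k.T
```

Because only the 8 largest eigenvalues are kept, the resulting \(P_k\) is a rank-8 approximation of the full block covariance.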

The steps of the principal component analysis (PCA) decomposition are as follows:

Select one direction from all the directions of the artificial black-and-white images, and compute the covariance matrix of that direction's block set according to the following formula:

$$P = E\left[ f_i f_i^T \right]$$

where P denotes the covariance matrix of the directional block set of the selected direction, E denotes mathematical expectation, \(f_i\) denotes the i-th block in the directional block set of the selected direction, and T denotes the transpose operation.

Diagonalize the covariance matrix according to the following formula, obtaining the PCA orthogonal basis and the eigenvalue matrix:

$$P = B D B^T$$

where P denotes the covariance matrix of the directional block set of the selected direction, B denotes the PCA orthogonal basis of the selected direction, D denotes the eigenvalue matrix of that direction, and T denotes the transpose operation.
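The two PCA steps above (empirical covariance, then diagonalization with truncation to the 8 largest eigenvalues) can be sketched as follows (function name ours; one flattened block per row):

```python
import numpy as np

def pca_decompose(blocks, keep=8):
    """PCA of a block set: estimate P = E[f f^T] empirically,
    diagonalize P = B D B^T, and keep the `keep` largest eigenvalues
    with their eigenvectors."""
    P = (blocks.T @ blocks) / blocks.shape[0]  # empirical E[f f^T]
    vals, vecs = np.linalg.eigh(P)             # ascending eigenvalues
    order = np.argsort(vals)[::-1][:keep]      # largest `keep` first
    D = np.diag(vals[order])
    B = vecs[:, order]
    return B, D
```

`numpy.linalg.eigh` is used because P is symmetric; its eigenvectors form the orthogonal basis B directly.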

Step 3: Construct the initial reconstructed video by piecewise linear estimation based on joint sparsity and Gaussian distributions.

For the direction represented by each artificial black-and-white image, place the H single-frame covariance matrices of that direction on the diagonal of a matrix, jointly sparsely representing the three-dimensional video data of that direction by the video covariance matrix, where the video covariance matrix of the k-th direction is:

$$\bar{P}_k = \begin{pmatrix} P_k & & \\ & \ddots & \\ & & P_k \end{pmatrix}$$

where \(\bar{P}_k\) denotes the video covariance matrix of the k-th direction (H copies of \(P_k\) on the diagonal), \(P_k\) denotes the single-frame covariance matrix of the k-th direction, and k = 1, 2, ..., 18 denotes the number of the direction represented by the artificial black-and-white image.

Initialize the M×N×H-dimensional reconstructed video as a zero matrix; divide each frame of the initial reconstructed video with step size p into S image blocks of dimension n×n, keeping the positions of the blocks; stack the image blocks at the same position of every frame into a video block, obtaining the set of video blocks \(\{x_1, \ldots, x_l, \ldots, x_S\}\), where \(x_l = (x_l^1, \ldots, x_l^t, \ldots, x_l^H)^T\) denotes the l-th video block, \(x_l^t\) denotes the l-th image block, of size n×n, of the t-th frame of the reconstructed video, and T denotes the transpose operation. The l-th video block \(x_l\) in direction k represented by an artificial black-and-white image obeys a Gaussian distribution with mean 0 and covariance matrix \(\bar{P}_k\), where \(\bar{P}_k\) denotes the video covariance matrix of the k-th direction; M, N, H denote the sizes of the first, second and third dimensions of the reconstructed video; p and n are positive integers no larger than the smaller of M and N; k = 1, 2, ..., 18; l = 1, 2, ..., S; and S denotes the number of video blocks, i.e., the number of image blocks each frame is divided into.

The steps for computing, by piecewise linear estimation, the optimal direction of a video block among the directions represented by the artificial black-and-white images are as follows:

First, compute the estimate of the video block in each direction represented by an artificial black-and-white image:

x_l^k = P̄_k Φ_l^T (Φ_l P̄_k Φ_l^T + σ² I_d)^{-1} y_l

where x_l^k is the estimate of the l-th video block in direction k, P̄_k is the k-th video covariance matrix, Φ_l is the observation matrix of the l-th video block taken from the random mask observation matrix, y_l is the measurement data of the l-th video block taken from the measurement data, σ takes a value between 0 and 1, I_d is the d-dimensional identity matrix, T denotes transposition, (·)^{-1} denotes matrix inversion, k = 1, 2, ..., 18 is the direction number represented by the artificial black-and-white image, and l = 1, 2, ..., S indexes the video blocks, with S the number of video blocks, i.e. the number of image blocks per frame.
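The estimate above is the standard linear (Wiener-style) estimator of a zero-mean Gaussian signal from noisy linear measurements. A minimal NumPy sketch of this step (names and shapes are illustrative assumptions, not the patent's MATLAB implementation):

```python
import numpy as np

def ple_estimate(P_bar, Phi_l, y_l, sigma):
    """Piecewise linear estimate of one video block under the prior
    x ~ N(0, P_bar):  x = P_bar Phi^T (Phi P_bar Phi^T + sigma^2 I)^{-1} y."""
    d = y_l.shape[0]
    G = Phi_l @ P_bar @ Phi_l.T + sigma**2 * np.eye(d)
    # solve() avoids forming the explicit inverse
    return P_bar @ Phi_l.T @ np.linalg.solve(G, y_l)
```

With σ = 0 and an invertible observation the estimate reproduces the measurements exactly; increasing σ shrinks the estimate toward the zero prior mean.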

Then, compute the optimal direction among those represented by the artificial black-and-white images:

k̃_l = argmin_k ( ‖Φ_l x_l^k − y_l‖² + σ² (x_l^k)^T P̄_k^{-1} x_l^k + σ² log|P̄_k| )

where k̃_l is the optimal direction of the l-th video block among the directions represented by the artificial black-and-white images, argmin_k returns the value of k minimizing the objective, Φ_l is the observation matrix of the l-th video block taken from the random mask observation matrix, x_l^k is the estimate of the l-th video block in direction k, y_l is the measurement data of the l-th video block, P̄_k is the k-th video covariance matrix, σ takes a value between 0 and 1, ‖·‖² is the squared norm, |·| is the determinant, T denotes transposition, (·)^{-1} denotes matrix inversion, k = 1, 2, ..., 18, and l = 1, 2, ..., S, with S the number of video blocks, i.e. the number of image blocks per frame.
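The per-block model selection can be sketched as a loop over the candidate direction models, keeping the one with the smallest objective (data misfit plus Gaussian prior penalty plus log-determinant term). A hedged NumPy sketch, with illustrative names:

```python
import numpy as np

def select_direction(P_bars, Phi_l, y_l, sigma):
    """Return (best model index, its estimate), minimising
    ||Phi x^k - y||^2 + sigma^2 (x^k)^T P_k^{-1} x^k + sigma^2 log|P_k|."""
    best_k, best_val, best_x = None, np.inf, None
    d = y_l.shape[0]
    for k, P in enumerate(P_bars):
        G = Phi_l @ P @ Phi_l.T + sigma**2 * np.eye(d)
        x = P @ Phi_l.T @ np.linalg.solve(G, y_l)      # PLE estimate under model k
        val = (np.sum((Phi_l @ x - y_l) ** 2)
               + sigma**2 * x @ np.linalg.solve(P, x)   # prior penalty
               + sigma**2 * np.linalg.slogdet(P)[1])    # sigma^2 log|P_k|
        if val < best_val:
            best_k, best_val, best_x = k, val, x
    return best_k, best_x
```

In the patent the candidate set has 18 direction models; the loop generalises to any number.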

Combine the estimate of each video block in its optimal direction into the initial reconstructed video, according to the block positions recorded when the video blocks were divided.

Set the iteration counter s and the maximum number of outer iterations U, let the current iteration s = 1, and take the initial reconstructed video as the reconstructed video of the first iteration, where U is a positive integer.

Step 4: search for the similar blocks of each video block, using the inter-frame and intra-frame correlation of the video.

Divide each frame of the reconstructed video into n×n blocks with step size p, and collect the blocks of all frames into the two-dimensional image block set G1, where p and n are positive integers no larger than the smaller of M and N, and M, N are the sizes of the first and second dimensions of the reconstructed video.

Divide each frame of the reconstructed video into n×n blocks with step size 1, collect the blocks of all frames into the two-dimensional image block set G2, and record the index of each video block in G2, where n is a positive integer no larger than the smaller of M and N, and M, N are the sizes of the first and second dimensions of the reconstructed video.

For each block in G1, take from G2 all blocks inside the Z×Z×H window around it and record them as its neighbor blocks, where Z is the size of the first and second window dimensions and H the size of the third; G1 is the set of two-dimensional image blocks taken with step size p and G2 the set taken with step size 1.

Compute the Euclidean distance between each block in the two-dimensional image block set G1 and each of its neighbor blocks, sort the neighbors by distance in ascending order, select the first Q blocks as the similar blocks of that block, and record the indices of these similar blocks in G2, where Q is a positive integer smaller than half the number of neighbor blocks.

The indices in G2 of the similar blocks of each block are written E_{l_t} = {l_t^1, ..., l_t^q, ..., l_t^Q}, where E_{l_t} is called the similar-block index set of the l-th block of the t-th frame and l_t^q is the index of the q-th block similar to it, q = 1, 2, ..., Q; t = 1, 2, ..., H indexes the video frames and l = 1, 2, ..., S indexes the video blocks, with H the size of the third dimension of the reconstructed video, S the number of video blocks, i.e. the number of image blocks per frame, and Q a positive integer smaller than half the number of neighbor blocks.
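The core of the similar-block search is ranking candidate patches by Euclidean distance and keeping the Q closest. A minimal sketch on vectorized patches, assuming the Z×Z×H windowing has already produced the candidate set (names are illustrative):

```python
import numpy as np

def find_similar(ref_patch, candidates, Q):
    """Return indices of the Q candidate patches closest to ref_patch
    in Euclidean distance; rows of `candidates` are flattened patches."""
    d = np.sqrt(((candidates - ref_patch) ** 2).sum(axis=1))
    order = np.argsort(d, kind="stable")  # ascending distance
    return order[:Q]
```

The reference block itself normally sits inside its own window, so (distance 0) it is always ranked first, which matches grouping each block with its own similar set.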

Step 5: solve for the low-rank structure of the video according to the following formula.

L_{l_t}^{s+1} = argmin_{L_{l_t}} ‖R̃_{l_t}(X^s) − L_{l_t}‖_F² + λ ‖L_{l_t}‖_*

where L_{l_t}^{s+1} is the low-rank structure of the l-th image block of the t-th frame at iteration s+1, argmin returns the value of the low-rank structure L_{l_t} minimizing the objective, R̃_{l_t} is the transform extracting all similar blocks of the l-th image block of the t-th frame, X^s is the reconstructed video of the s-th iteration, R̃_{l_t}(X^s) = (R_{l_t}^1(X^s), ..., R_{l_t}^q(X^s), ..., R_{l_t}^Q(X^s)) stacks all similar blocks of that image block extracted from X^s, R_{l_t}^q is the extraction matrix of the q-th similar block of the l-th image block of the t-th frame, λ takes the value 0.75, ‖·‖_F² is the squared Frobenius norm and ‖·‖_* the nuclear norm; t = 1, 2, ..., H indexes the video frames, l = 1, 2, ..., S indexes the video blocks, q = 1, 2, ..., Q, with H the size of the third dimension of the reconstructed video, S the number of video blocks per frame and Q the number of similar blocks.

Perform an SVD on R̃_{l_t}(X^s): R̃_{l_t}(X^s) = U Λ V^T, where U is the left unitary matrix, V the right unitary matrix, and Λ the singular value matrix.

Apply a soft threshold to the singular value matrix to obtain Λ̃:

Λ̃_{ij} = Λ_{ij} − λ  if Λ_{ij} > λ,  and  Λ̃_{ij} = 0  if Λ_{ij} ≤ λ,

where Λ̃_{ij} is the element in row i, column j of Λ̃.

Compute the low-rank structure of iteration s+1: L_{l_t}^{s+1} = U Λ̃ V^T.
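Steps 5's SVD, soft threshold and recomposition together are singular-value thresholding. A minimal NumPy sketch following the threshold rule stated above (shrink each singular value by λ, zeroing those below it); note that the textbook proximal operator of λ‖·‖_* with a ½-scaled data term shrinks by λ as well, but other scaling conventions differ:

```python
import numpy as np

def svt(M, lam):
    """Singular-value soft thresholding: SVD, shrink the singular
    values by lam (clipping at zero), then recompose U * shrunk * V^T."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - lam, 0.0)  # the soft-threshold rule
    return (U * s_shrunk) @ Vt
```

Because the similar blocks of a patch are nearly repeated copies, the stacked matrix R̃_{l_t}(X^s) is close to low rank, and thresholding removes the small singular values that mostly carry noise.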

Step 6: update the reconstructed video according to the following formula.

X^{s+1} = argmin_X ‖y − Φ X‖₂² + η Σ_t Σ_l ‖R̃_{l_t}(X) − L_{l_t}^{s+1}‖_F²

where X^{s+1} is the reconstructed video of iteration s+1, argmin_X returns the value of the reconstructed video X minimizing the objective, y is the one-dimensional vector formed from the measurement data, Φ is the video observation matrix generated from the random mask observation matrix, R̃_{l_t}(X) = (R_{l_t}^1(X), ..., R_{l_t}^q(X), ..., R_{l_t}^Q(X)) extracts all similar blocks of the l-th image block of the t-th frame of X, R_{l_t}^q is the extraction matrix of the q-th similar block of the l-th image block of the t-th frame, L_{l_t}^{s+1} is the low-rank structure of that image block at iteration s+1, η takes the value 1, Σ denotes summation, ‖·‖₂² is the squared 2-norm and ‖·‖_F² the squared Frobenius norm; t = 1, 2, ..., H, l = 1, 2, ..., S, q = 1, 2, ..., Q, where H is the size of the third dimension of the reconstructed video, S the number of video blocks per frame and Q the number of similar blocks.

The reconstructed video of iteration s+1 has the closed-form solution:

X^{s+1} = (Φ^T Φ + η Σ_t Σ_l R̃_{l_t}^T R̃_{l_t})^{-1} (Φ^T y + η Σ_t Σ_l R̃_{l_t}^T L_{l_t}^{s+1})

where R̃_{l_t}^T R̃_{l_t} = Σ_{l_t^q ∈ E_{l_t}} (R_{l_t}^q)^T R_{l_t}^q and R̃_{l_t} = (R_{l_t}^1, ..., R_{l_t}^q, ..., R_{l_t}^Q).
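The closed-form update is a normal-equations solve. In practice both Φ^T Φ (mask observations) and each R^T R (patch extraction, a 0/1 selection) are diagonal, so the inverse reduces to per-pixel division; the dense solve below is only a small-scale illustration under that caveat, with illustrative names:

```python
import numpy as np

def update_video(Phi, y, R_list, L_list, eta=1.0):
    """Closed-form least-squares update of the vectorised video x:
        x = (Phi^T Phi + eta*sum R^T R)^{-1} (Phi^T y + eta*sum R^T L)."""
    A = Phi.T @ Phi
    b = Phi.T @ y
    for R, L in zip(R_list, L_list):
        A = A + eta * (R.T @ R)   # accumulate patch-extraction Gram terms
        b = b + eta * (R.T @ L)   # accumulate low-rank patch estimates
    return np.linalg.solve(A, b)
```

Each pixel's final value is thus a weighted average of its measured value and every low-rank patch estimate that covers it.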

Step 7: judge whether the current iteration count exceeds the maximum number of outer iterations; if so, go to Step 8; otherwise increment the current iteration count by 1 and return to Step 4.

Step 8: output the reconstructed video.

The present invention is further described below in conjunction with the simulation figures.

1. Simulation conditions:

The simulation experiments of the present invention were run on an Intel(R) Core(TM) i5-3470 CPU at 3.20 GHz under 32-bit Windows 7, using MATLAB R2011b as the simulation software. The simulation parameters are set as follows.

The simulation uses 96 frames of 256×256 traffic vehicle images. One observation is taken for every H = 8 frames of video data; each element A_{i,j,n} of the random mask observation matrix A is set to 1 with probability r = 0.5; block size n×n = 8×8; step size p = 4; similar-block search window Z×Z×H = 50×50×8; number of similar blocks Q = 100; maximum number of iterations U = 10; given parameters λ = 0.75 and η = 1.
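The random-mask observation described above can be sketched as follows: a per-pixel binary mask drawn with probability r, with the H masked frames summed into one measurement frame. A minimal NumPy sketch (function names and the seed are our assumptions):

```python
import numpy as np

def random_mask(M, N, H, r=0.5, seed=0):
    """Binary mask A with P(A[i,j,n] = 1) = r, one entry per pixel per frame."""
    rng = np.random.default_rng(seed)
    return (rng.random((M, N, H)) < r).astype(np.float64)

def observe(video, A):
    """Collapse H masked frames into a single measurement frame."""
    return (video * A).sum(axis=2)
```

This is the coded-aperture-style compressive video model: H frames are compressed into one snapshot, for a measurement rate of 1/H before accounting for the mask.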

2. Simulation content and result analysis:

Under the above simulation conditions, the 96 traffic vehicle frames are reconstructed with three comparison methods:

Comparison method 1 is the prior-art Gaussian mixture model method, applied to reconstruct the 96 traffic vehicle frames;

Comparison method 2 uses Steps 1, 2 and 3 of the present invention to construct the initial reconstructed video of the 96 traffic vehicle frames and takes that initial reconstructed video as the reconstruction result;

Comparison method 3 uses the prior-art Gaussian mixture model method to construct the initial reconstructed video, in place of Step 3 of the present invention, and then reconstructs the 96 traffic vehicle frames.

The reconstructed visual effects of the three comparison methods and of the method of the present invention are shown in Fig. 2. Figs. 2(a)-2(d) are the original images of frames 1-4 of the 96-frame traffic vehicle video; Figs. 2(e)-2(h) are the reconstructions of frames 1-4 by comparison method 1; Figs. 2(i)-2(l) are the reconstructions of frames 1-4 by comparison method 2; Figs. 2(m)-2(p) are the reconstructions of frames 1-4 by comparison method 3; and Figs. 2(q)-2(t) are the reconstructions of frames 1-4 by the method of the present invention.

As the reconstructed images show, the reconstructions of the present invention have noticeably less noise near edges, and their visual quality is better than that of the other three comparison algorithms.

The first eight frames of the traffic vehicle images were reconstructed; the peak signal-to-noise ratio (PSNR) values of the present invention and of the three comparison methods are listed in Table 1.

As Table 1 shows, the PSNR values of the video reconstructed by the present invention are higher than those of all three comparison methods, indicating better reconstruction quality.

Table 1. Reconstruction results (PSNR) of the first eight frames of the traffic vehicle images under the different methods.
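For reference, the PSNR figure of merit used in Table 1 and Fig. 3 is computed per frame from the mean squared error against the original; a minimal sketch, assuming 8-bit imagery (peak 255):

```python
import numpy as np

def psnr(ref, rec, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference frame
    and its reconstruction: 10*log10(peak^2 / MSE)."""
    mse = np.mean((ref.astype(np.float64) - rec.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak**2 / mse)
```

Higher is better; a gain of a few dB corresponds to a visibly cleaner reconstruction.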

Fig. 3 is a line chart of the PSNR values obtained by reconstructing the 96 frames with the three comparison methods and with the method of the present invention. The horizontal axis is the video frame index of the traffic vehicle video and the vertical axis the PSNR in dB. The dashed line with asterisks marks the PSNR of comparison method 1, the solid line with asterisks that of comparison method 2, the dashed line with circles that of comparison method 3, and the solid line with circles that of the present invention.

As Fig. 3 shows, the per-frame PSNR of the reconstructions obtained with the method of the present invention is clearly higher than that of the other methods.

In summary, the present invention obtains clear reconstructed video; compared with other existing reconstruction methods, it improves the reconstruction quality.

Claims (4)

1. A compressed sensing video reconstruction method based on a low-rank model comprises the following steps:
(1) receiving measurement data:
(1a) a compressed sensing sender observes the video data: every H frames of video data are observed once using a random mask observation matrix to form one frame of measurement data, and the measurement data and the random mask observation matrix are sent, wherein H represents a positive integer in the range 1 to 20;
(1b) the receiver receives the measurement data and the random mask observation matrix sent by the sender;
(2) initializing a single-frame covariance matrix set:
(2a) generating 18 artificial black-and-white images, wherein each artificial black-and-white image has a size of 65×65 pixels and represents one direction;
(2b) sliding a window of size 8×8 pixels over the artificial black-and-white image of each direction with a step length of 1 pixel to select all blocks of size 8×8 pixels, obtaining a direction block set for each direction;
(2c) respectively carrying out Principal Component Analysis (PCA) decomposition on the direction block set of each direction to obtain Principal Component Analysis (PCA) orthogonal bases and eigenvalue matrixes, and reserving the first 8 maximum eigenvalues and corresponding principal component orthogonal bases in each direction to obtain corresponding eigenvalue matrixes and direction bases;
(2d) calculating a single-frame covariance matrix in the direction represented by each artificial black-and-white image to obtain a single-frame covariance matrix set;
(3) constructing an initial reconstruction video based on joint sparse and Gaussian distributed piecewise linear estimation:
(3a) for the direction represented by each artificial black-and-white image, putting H single-frame covariance matrices of that direction on the diagonal of a matrix, and constructing a jointly sparse video covariance matrix for reconstructing the three-dimensional video data in that direction, wherein the video covariance matrix of the k-th direction is:

P̄_k = diag(P_k, P_k, ..., P_k)   (H blocks on the diagonal)

wherein P̄_k represents the video covariance matrix of the k-th direction, P_k represents the single-frame covariance matrix of the k-th direction, k represents the direction number represented by the artificial black-and-white image, and k = 1, 2, ..., 18;
(3b) initializing the M×N×H-dimensional reconstructed video as a zero matrix, dividing each frame of the initial reconstructed video into S image blocks of n×n dimensions with a step size of p, reserving the positions of the blocks, and forming the image blocks at the same position of each frame into a video block to obtain the video block set {x_1, ..., x_l, ..., x_S}, wherein x_l represents the l-th video block, x_l^t represents the l-th image block of size n×n of the t-th frame of the reconstructed video, and T represents the transposition operation; the l-th video block in the direction k represented by the artificial black-and-white image obeys a Gaussian distribution with mean 0 and covariance matrix P̄_k, the video covariance matrix of the k-th direction; M, N, H respectively represent the sizes of the first, second and third dimensions of the reconstructed video, p and n respectively represent positive integers less than or equal to the minimum of the M, N dimensions, k = 1, 2, ..., 18, l = 1, 2, ..., S, and S represents the number of video blocks, i.e. the number of image blocks divided per frame;
(3c) calculating the estimated value of the video block in the direction represented by the artificial black-and-white image according to the following formula:

x_l^k = P̄_k Φ_l^T (Φ_l P̄_k Φ_l^T + σ² I_d)^{-1} y_l

wherein x_l^k represents the estimate of the l-th video block in the direction k represented by the artificial black-and-white image, P̄_k represents the k-th video covariance matrix, Φ_l represents the observation matrix of the l-th video block taken from the random mask observation matrix, y_l represents the measurement data of the l-th video block taken from the measurement data, σ takes a value in the range 0 to 1, I_d represents the d-dimensional identity matrix, T represents the transposition operation, (·)^{-1} represents matrix inversion, k = 1, 2, ..., 18 represents the direction number represented by the artificial black-and-white image, l = 1, 2, ..., S represents the number of the video block, and S represents the number of video blocks, i.e. the number of image blocks divided per frame;
(3d) calculating the optimal direction among the directions represented by the artificial black-and-white images according to the following formula:

k̃_l = argmin_k ( ‖Φ_l x_l^k − y_l‖² + σ² (x_l^k)^T P̄_k^{-1} x_l^k + σ² log|P̄_k| )

wherein k̃_l represents the optimal direction of the l-th video block among the directions represented by the artificial black-and-white images, argmin_k returns the value of k minimizing the objective function, ‖·‖² represents the square of the norm, |·| represents the value of the determinant, and the remaining symbols are as defined in step (3c);
(3e) combining the estimated value of each video block in the optimal direction into an initial reconstructed video according to the positions of the blocks reserved when the video blocks are divided in the step (3 b);
(3f) setting iteration times s and a maximum external iteration time U, setting the current iteration time s to be 1, and taking an initial reconstructed video as a reconstructed video of a first iteration, wherein U represents a positive integer;
(4) searching for the similar blocks of each image block using the inter-frame and intra-frame correlation of the video:
(4a) dividing each frame of the reconstructed video into n×n-dimensional blocks with a step size of p, and forming the blocks of all frames into a two-dimensional image block set G1, wherein p and n represent positive integers less than or equal to the minimum of the M, N dimensions, and M, N represent the sizes of the first and second dimensions of the reconstructed video;
(4b) dividing each frame of the reconstructed video into n×n-dimensional blocks with a step size of 1, forming the blocks of all frames into a two-dimensional image block set G2, and recording the index of each video block in G2, wherein n represents a positive integer less than or equal to the minimum of the M, N dimensions, and M, N represent the sizes of the first and second dimensions of the reconstructed video;
(4c) for each block in G1, taking from G2 all blocks inside the Z×Z×H window around that block and recording them as the block's neighbor blocks, wherein Z represents the size of the window's first and second dimensions, H represents the size of the window's third dimension, G1 represents the set of two-dimensional image blocks divided with step size p, and G2 represents the set of two-dimensional image blocks divided with step size 1;
(4d) computing the Euclidean distance between each block in the two-dimensional image block set G1 and each of its neighbor blocks, sorting by Euclidean distance from small to large, selecting the first Q blocks as the similar blocks of the corresponding block, and recording the indices in G2 of the similar blocks of each block, wherein Q represents a positive integer less than half the number of neighbor blocks;
(5) solving the low-rank structure of the video according to the following formula:

L_{l_t}^{s+1} = argmin_{L_{l_t}} ‖R̃_{l_t}(X^s) − L_{l_t}‖_F² + λ ‖L_{l_t}‖_*

wherein L_{l_t}^{s+1} represents the low-rank structure of the l-th image block of the t-th frame at the (s+1)-th iteration, argmin returns the value of the low-rank structure L_{l_t} minimizing the objective function, R̃_{l_t} represents the extraction transform taking all similar blocks of the l-th image block of the t-th frame, X^s represents the reconstructed video of the s-th iteration, R̃_{l_t}(X^s) = (R_{l_t}^1(X^s), ..., R_{l_t}^q(X^s), ..., R_{l_t}^Q(X^s)) represents the extraction of all similar blocks of the l-th image block of the t-th frame of X^s, R_{l_t}^q represents the extraction matrix of the q-th similar block of the l-th image block of the t-th frame, λ takes the value 0.75, ‖·‖_F² represents the square of the Frobenius norm, ‖·‖_* represents the nuclear norm operation, t = 1, 2, ..., H represents the number of the video frame, l = 1, 2, ..., S represents the number of the video block, q = 1, 2, ..., Q, wherein H represents the size of the third dimension of the reconstructed video, S represents the number of video blocks, i.e. the number of image blocks divided per frame, and Q represents the number of similar blocks;
(6) the reconstructed video is updated as follows:
$$X^{s+1} = \arg\min_{X} \left\| y - \Phi X \right\|_2^2 + \eta \sum_{t} \sum_{l} \left\| \tilde{R}_{l_t}(X) - L_{l_t}^{s+1} \right\|_F^2$$
wherein $X^{s+1}$ represents the video reconstructed at the $(s+1)$-th iteration; $\arg\min_X$ denotes taking the value of $X$ at which the objective function is minimal; $y$ represents the one-dimensional vector drawn from the measurement data; $\Phi$ represents the video observation matrix generated from the random-mask observation matrix; $\tilde{R}_{l_t}(X) = \big( R_{l_t}^{1}(X), \ldots, R_{l_t}^{q}(X), \ldots, R_{l_t}^{Q}(X) \big)$ represents the transform extracting all similar blocks of the $l$-th image block of the $t$-th frame of the reconstructed video $X$, where $R_{l_t}^{q}$ is the extraction matrix of the $q$-th similar block; $L_{l_t}^{s+1}$ represents the low-rank structure of the $l$-th image block of the $t$-th frame obtained at the $(s+1)$-th iteration; $\eta$ takes the value 1; $\Sigma$ denotes summation; $\|\cdot\|_2^2$ represents the squared 2-norm and $\|\cdot\|_F^2$ the squared Frobenius norm; $H$ represents the size of the third dimension of the reconstructed video, $t$ the frame number, $t = 1, 2, \ldots, H$; $l$ the image block number, $l = 1, 2, \ldots, S$, where $S$ is the number of image blocks each frame is divided into; and $q = 1, 2, \ldots, Q$, where $Q$ is the number of similar blocks;
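Setting the gradient of this quadratic objective to zero gives the normal equations $\big(\Phi^T\Phi + \eta \sum_{t,l} \tilde{R}_{l_t}^T \tilde{R}_{l_t}\big) X = \Phi^T y + \eta \sum_{t,l} \tilde{R}_{l_t}^T L_{l_t}^{s+1}$. A dense toy sketch, assuming the video $X$ is flattened to a vector and each low-rank block is flattened correspondingly (the function name and the direct solver are illustrative; the patent does not specify its solver):

```python
import numpy as np

def update_reconstruction(y, Phi, R_list, L_list, eta=1.0):
    """Solve argmin_X ||y - Phi X||_2^2 + eta * sum_i ||R_i X - L_i||^2
    via the normal equations, with each R_i an extraction matrix and
    each L_i the matching flattened low-rank target."""
    A = Phi.T @ Phi                 # data-fidelity part of the system matrix
    b = Phi.T @ y
    for R, L in zip(R_list, L_list):
        A += eta * R.T @ R          # accumulate the low-rank regularizer
        b += eta * R.T @ L
    return np.linalg.solve(A, b)
```

For instance, with $\Phi = I$, a single extraction $R = I$, target $L = 0$, and $\eta = 1$, the update halves the measurement vector, since $(I + I)X = y$.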
(7) judging whether the current iteration count is larger than the maximum number of outer iterations; if so, executing step (8); otherwise, adding 1 to the current iteration count and executing step (4);
(8) outputting the reconstructed video.
2. The low-rank model based compressed sensing video reconstruction method of claim 1, wherein in step (2a) the boundary between the black region and the white region of each artificial black-and-white image passes through the center coordinates (33, 33), and the boundary angles of the 18 artificial black-and-white images are uniformly sampled from 0 to 180 degrees.
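The construction in this claim can be sketched as follows; the image size (65 × 65), 1-based pixel indexing, and the 10-degree sampling step are assumptions, since the claim fixes only the center (33, 33) and the 18 uniformly sampled angles over 0 to 180 degrees:

```python
import numpy as np

def make_edge_image(angle_deg, size=65, center=(33, 33)):
    """Binary image whose black/white boundary is a straight line through
    `center` at angle `angle_deg`. Size and indexing are assumptions; the
    claim only fixes the center and the 18 sampled angles."""
    theta = np.deg2rad(angle_deg)
    n = np.array([-np.sin(theta), np.cos(theta)])   # normal of the boundary line
    rows, cols = np.mgrid[1:size + 1, 1:size + 1]   # 1-based pixel coordinates
    signed = (rows - center[0]) * n[0] + (cols - center[1]) * n[1]
    return (signed >= 0).astype(np.uint8)           # 1 = white side, 0 = black side

# 18 boundary angles uniformly sampled over [0, 180)
angles = np.arange(0, 180, 10)
images = [make_edge_image(a) for a in angles]
```

Each image is split into a black half-plane and a white half-plane by a line through the center, so the 18 images together represent 18 edge directions.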
3. The low-rank model based compressed sensing video reconstruction method of claim 1, wherein the principal component analysis (PCA) decomposition in step (2c) comprises the following steps:
Step 1, selecting one direction from all the directions of the artificial black-and-white images, and computing the covariance matrix of the directional block set of the selected direction according to the following formula:
$$P = E\left[ f_i f_i^T \right]$$
where $P$ represents the covariance matrix of the directional block set of the selected direction, $E$ represents the mathematical expectation, $f_i$ represents the $i$-th block in the directional block set of the selected direction, and $T$ represents the transpose operation;
Step 2, diagonalizing the covariance matrix according to the following formula to obtain the principal component analysis (PCA) orthogonal basis and the eigenvalue matrix:
$$P = B D B^T$$
wherein $P$ represents the covariance matrix of the directional block set of the selected direction, $B$ represents the principal component analysis (PCA) orthogonal basis of the selected direction, $D$ represents the eigenvalue matrix of that direction, and $T$ represents the transpose operation.
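The two steps of this claim can be sketched as follows, assuming each block $f_i$ is vectorized and stored as a column, with the sample covariance standing in for the expectation $E[f_i f_i^T]$ (the function name and data layout are illustrative):

```python
import numpy as np

def pca_basis(blocks):
    """Steps 1-2 of the PCA decomposition: estimate the covariance of a
    directional block set, then diagonalize it as P = B D B^T."""
    F = np.asarray(blocks, dtype=float)     # shape: (block_dim, num_blocks)
    P = (F @ F.T) / F.shape[1]              # sample estimate of E[f_i f_i^T]
    eigvals, B = np.linalg.eigh(P)          # P is symmetric, so eigh applies
    order = np.argsort(eigvals)[::-1]       # sort eigenvalues in descending order
    B, eigvals = B[:, order], eigvals[order]
    return B, np.diag(eigvals), P
```

The returned $B$ is the orthogonal PCA basis and $D$ the diagonal eigenvalue matrix, satisfying $P = B D B^T$ by construction.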
4. The low-rank model based compressed sensing video reconstruction method of claim 1, wherein the single-frame covariance matrix in the direction represented by each artificial black-and-white image in step (2d) is calculated according to the following formula:
$$P_k = B_k D_k B_k^T$$
wherein $P_k$ represents the single-frame covariance matrix in the $k$-th direction, $B_k$ represents the directional basis in the $k$-th direction, $D_k$ represents the eigenvalue matrix in the $k$-th direction, $T$ represents the transpose operation, and $k$ represents the number of the direction represented by the artificial black-and-white image, $k = 1, 2, \ldots, 18$.
CN201510523631.6A 2015-08-24 2015-08-24 Compressed sensing video reconstruction method based on low-rank model Active CN105160664B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510523631.6A CN105160664B (en) 2015-08-24 2015-08-24 Compressed sensing video reconstruction method based on low-rank model

Publications (2)

Publication Number Publication Date
CN105160664A true CN105160664A (en) 2015-12-16
CN105160664B CN105160664B (en) 2017-10-24

Family

ID=54801506

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510523631.6A Active CN105160664B (en) 2015-08-24 2015-08-24 Compressed sensing video reconstruction method based on low-rank model

Country Status (1)

Country Link
CN (1) CN105160664B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102722892A (en) * 2012-06-13 2012-10-10 西安电子科技大学 SAR (synthetic aperture radar) image change detection method based on low-rank matrix factorization
CN102821228A (en) * 2012-07-16 2012-12-12 西安电子科技大学 Low-rank video background reconstructing method
US8989465B2 (en) * 2012-01-17 2015-03-24 Mayo Foundation For Medical Education And Research System and method for medical image reconstruction and image series denoising using local low rank promotion

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIU Fang et al., "Research Advances on Structured Compressed Sensing", Acta Automatica Sinica *
WANG Rongfang et al., "Adaptive Block Compressed Sensing of Images Using Texture Information", Acta Electronica Sinica *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108881911A (en) * 2018-06-26 2018-11-23 电子科技大学 A kind of contexts restoration methods for compressed sensing backsight frequency data stream
CN108881911B (en) * 2018-06-26 2020-07-10 电子科技大学 Foreground and background recovery method for compressed sensing rear video data stream
WO2021229320A1 (en) * 2020-05-15 2021-11-18 International Business Machines Corporation Matrix sketching using analog crossbar architectures
US11520855B2 (en) 2020-05-15 2022-12-06 International Business Machines Corportation Matrix sketching using analog crossbar architectures
GB2610758A (en) * 2020-05-15 2023-03-15 Ibm Matrix Sketching using analog crossbar architectures

Also Published As

Publication number Publication date
CN105160664B (en) 2017-10-24

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant