CN105160664A - Low-rank model based compressed sensing video reconstruction method - Google Patents
Low-rank model based compressed sensing video reconstruction method
- Publication number
- CN105160664A (application number CN201510523631.6A)
- Authority
- CN
- China
- Prior art keywords
- video
- blocks
- frame
- block
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
Abstract
The invention discloses a compressed sensing video reconstruction method based on a low-rank model, which mainly addresses the inaccuracy and poor robustness of existing compressed sensing video reconstruction. The implementation process is: (1) receive the measurement data; (2) initialize the set of single-frame covariance matrices; (3) construct an initial reconstructed video by piecewise linear estimation based on joint sparsity and Gaussian distributions; (4) search for the similar blocks of each image block using the inter-frame and intra-frame correlation of the video; (5) solve for the low-rank structure of the video; (6) update the reconstructed video; (7) judge whether the iteration stop condition has been reached; (8) output the reconstructed video. Compared with existing video reconstruction techniques, the invention has the advantages of high reconstructed image quality and good robustness, and can be used for the reconstruction of natural scene video.
Description
Technical Field
The invention belongs to the technical field of image processing, and further relates to a video reconstruction method based on a compressed sensing system in the technical field of video coding. The invention exploits the intra-frame and inter-frame similarity of video sequences and reconstructs the video based on a low-rank model; it can be used to reconstruct natural-image video sequences.
Background Art
In recent years, a new data acquisition theory, compressed sensing (CS), has emerged in the field of signal processing. This theory performs compression at the same time as data acquisition, breaking through the limitation of the traditional Nyquist sampling theorem and bringing revolutionary changes to data acquisition technology, so that it has broad application prospects in compressive imaging systems, military cryptography, wireless sensing and other fields. Compressed sensing theory mainly covers three aspects: sparse representation of the signal, observation of the signal, and reconstruction of the signal. Among them, designing fast and effective reconstruction algorithms is a key step in successfully extending CS theory to practical data models and acquisition systems.
High-speed cameras play an important role in capturing fast motion in applications ranging from science to sports, but measuring high-speed video is a challenge for camera design. Compressed sensing can capture high-frame-rate video information from low-frame-rate compressive measurements, so it has been used for high-speed video acquisition, thereby easing the difficulty of high-speed camera design.
Guoshen Yu et al., in the paper "Solving Inverse Problems With Piecewise Linear Estimators: From Gaussian Mixture Models to Structured Sparsity" (IEEE Transactions on Image Processing, 2012, 21(5): 2481-2499), proposed a method for solving image inverse problems with piecewise linear estimation. The method models the problem as a Gaussian mixture model, i.e., each image patch obeys one of several multivariate Gaussian distributions, and it achieves good results on image inverse problems including image reconstruction. Its shortcoming is that it targets two-dimensional images and cannot be applied directly to the reconstruction of video sequences, and capturing the similarity between image patches with a purely statistical model is not accurate.
Jianbo Yang et al., in the paper "Video Compressive Sensing Using Gaussian Mixture Models" (IEEE Transactions on Image Processing, 2014, 23), proposed a method based on Gaussian mixture models. By fitting a Gaussian mixture model to spatio-temporal video patches, the method reconstructs temporally compressed video sequences and obtains good reconstruction results. Its remaining shortcoming is that it only builds a Gaussian mixture model for the patches and does not capture the intra-frame and inter-frame correlation of the video, so the reconstructed video sequences are not sufficiently accurate.
Summary of the Invention
The object of the present invention is to address the failure of the above spatio-temporal video compressed sensing reconstruction techniques to exploit the intra-frame and inter-frame correlation of video, and to propose a compressed sensing video reconstruction method based on a low-rank model that improves the quality of the reconstructed images.
The technical idea for achieving this object is: exploit the correlation between video frames, i.e., the same spatial position in different frames is similar, and model the image blocks at the same position in different frames as sharing the same Gaussian distribution, thereby obtaining an initial reconstructed video sequence; then build a low-rank model over all similar image blocks within and between frames, and iteratively optimize the low-rank structure of the video and the reconstructed video sequence, achieving high-quality temporal compressed sensing video reconstruction.
The specific steps for achieving the object of the invention are as follows:
(1) Receive the measurement data:
(1a) The compressed sensing sender observes the video data: every H frames of video data are observed once with a random mask observation matrix, the result forming one frame of measurement data, and the measurement data and the random mask observation matrix are sent, where H is a positive integer ranging from 1 to 20;
(1b) The receiver receives the measurement data and the random mask observation matrix sent by the sender;
(2) Initialize the set of single-frame covariance matrices:
(2a) Generate 18 artificial black-and-white images, each of size 65×65 pixels, each representing one direction;
(2b) Using a window of 8×8 pixels with a step size of 1 pixel, slide the window over the artificial black-and-white image of each direction and take all blocks of size 8×8 pixels, obtaining the set of direction blocks of each direction;
(2c) Perform principal component analysis (PCA) decomposition on the set of direction blocks of each direction to obtain the PCA orthogonal basis and the eigenvalue matrix; keep the 8 largest eigenvalues and the corresponding principal-component basis vectors of each direction, obtaining the corresponding eigenvalue matrix and direction basis;
(2d) Compute the single-frame covariance matrix of the direction represented by each artificial black-and-white image, obtaining the set of single-frame covariance matrices;
(3) Construct the initial reconstructed video by piecewise linear estimation based on joint sparsity and Gaussian distributions:
(3a) For the direction represented by each artificial black-and-white image, place the H single-frame covariance matrices of that direction on the diagonal of a matrix to construct the jointly sparse video covariance matrix of that direction, used for reconstructing the three-dimensional video data, where the video covariance matrix of the k-th direction is
Σ_k = diag(P_k, P_k, ..., P_k),
i.e., a block-diagonal matrix with H copies of P_k on its diagonal, where Σ_k denotes the video covariance matrix of the k-th direction, P_k denotes the single-frame covariance matrix of the k-th direction, and k denotes the number of the direction represented by an artificial black-and-white image, k = 1, 2, ..., 18;
(3b) Initialize the M×N×H-dimensional reconstructed video as a zero matrix. Divide each frame of the initial reconstructed video into S image blocks of size n×n with step size p and keep the positions of the blocks; group the image blocks at the same position of every frame into a video block, obtaining the video block set {x_1, ..., x_l, ..., x_S}, where x_l = [x_{1,l}^T, ..., x_{t,l}^T, ..., x_{H,l}^T]^T denotes the l-th video block, x_{t,l} denotes the l-th image block of size n×n of the t-th frame of the reconstructed video, and T denotes the transpose operation. For the direction k represented by an artificial black-and-white image, the l-th video block x_l is modeled as obeying a Gaussian distribution with mean 0 and covariance matrix Σ_k, where Σ_k denotes the video covariance matrix of the k-th direction; M, N, H denote the sizes of the first, second and third dimensions of the reconstructed video; p and n are positive integers no larger than the smaller of M and N; k denotes the number of the direction represented by an artificial black-and-white image, k = 1, 2, ..., 18; l = 1, 2, ..., S; and S denotes the number of video blocks, i.e., the number of image blocks each frame is divided into;
(3c) Compute the estimate of each video block in the direction represented by each artificial black-and-white image according to the following formula:
x̂_l^k = Σ_k Φ_l^T (Φ_l Σ_k Φ_l^T + σ² I_d)^{-1} y_l,
where x̂_l^k denotes the estimate of the l-th video block in the direction k represented by an artificial black-and-white image, Σ_k denotes the k-th video covariance matrix, Φ_l denotes the observation matrix of the l-th video block taken from the random mask observation matrix, y_l denotes the measurement data of the l-th video block taken from the measurement data, σ takes a value between 0 and 1, I_d denotes the d-dimensional identity matrix, T denotes the transpose operation, ^{-1} denotes matrix inversion, k denotes the number of the direction represented by an artificial black-and-white image, k = 1, 2, ..., 18, l denotes the number of the video block, l = 1, 2, ..., S, and S denotes the number of video blocks, i.e., the number of image blocks each frame is divided into;
(3d) Select the optimal direction among the directions represented by the artificial black-and-white images according to the following formula:
k̂_l = argmin_k ( ‖Φ_l x̂_l^k − y_l‖² / σ² + (x̂_l^k)^T Σ_k^{-1} x̂_l^k + log|Σ_k| ),
where k̂_l denotes the optimal direction of the l-th video block among the directions represented by the artificial black-and-white images, argmin_k returns the value of k that minimizes the objective function, Φ_l denotes the observation matrix of the l-th video block taken from the random mask observation matrix, x̂_l^k denotes the estimate of the l-th video block in the direction k represented by an artificial black-and-white image, y_l denotes the measurement data of the l-th video block taken from the measurement data, Σ_k denotes the k-th video covariance matrix, σ takes a value between 0 and 1, ‖·‖² denotes the squared norm, |·| denotes the determinant, T denotes the transpose operation, ^{-1} denotes matrix inversion, k = 1, 2, ..., 18, l denotes the number of the video block, l = 1, 2, ..., S, and S denotes the number of video blocks, i.e., the number of image blocks each frame is divided into;
(3e) Combine the estimates of the video blocks, each taken in its optimal direction, into the initial reconstructed video according to the block positions kept when dividing the video blocks in step (3b);
(3f) Set the iteration counter s and the maximum number of outer iterations U, let the current iteration number s be 1, and take the initial reconstructed video as the reconstructed video of the first iteration, where U is a positive integer;
(4) Search for the similar blocks of each image block using the inter-frame and intra-frame correlation of the video:
(4a) Divide each frame of the reconstructed video into n×n blocks with step size p and collect the blocks of all frames into the two-dimensional image block set G_1, where p and n are positive integers no larger than the smaller of M and N, and M and N denote the sizes of the first and second dimensions of the reconstructed video;
(4b) Divide each frame of the reconstructed video into n×n blocks with step size 1, collect the blocks of all frames into the two-dimensional image block set G_2, and record the index of each block in G_2, where n is a positive integer no larger than the smaller of M and N, and M and N denote the sizes of the first and second dimensions of the reconstructed video;
(4c) For each block in G_1, take from G_2 all blocks lying within the Z×Z×H window around that block and record them as the neighboring blocks of that block, where Z denotes the size of the first and second dimensions of the window, H denotes the size of the third dimension of the window, G_1 denotes the set of two-dimensional image blocks obtained with step size p, and G_2 denotes the set of two-dimensional image blocks obtained with step size 1;
(4d) Compute the Euclidean distance between each block in G_1 and each of its neighboring blocks, sort the neighbors in ascending order of Euclidean distance, select the first Q blocks as the similar blocks of the corresponding block, and record the indices in G_2 of the similar blocks of each block, where Q is a positive integer smaller than half the number of neighboring blocks;
(5) Solve for the low-rank structure of the video according to the following formula:
L_{t,l}^{s+1} = argmin_L ( (1/2) ‖R_{t,l} X^s − L‖_F² + λ ‖L‖_* ),
where L_{t,l}^{s+1} denotes the low-rank structure of the l-th image block of the t-th frame at the (s+1)-th iteration, argmin_L returns the value of the low-rank structure L that minimizes the objective function, R_{t,l} denotes the extraction transform that extracts all similar blocks of the l-th image block of the t-th frame and arranges them as the columns of a matrix, X^s denotes the reconstructed video of the s-th iteration, ‖·‖_F denotes the Frobenius norm, ‖·‖_* denotes the nuclear norm (the sum of the singular values), λ is a given parameter, t denotes the number of the video frame, t = 1, 2, ..., H, and l denotes the number of the image block, l = 1, 2, ..., S;
(6) Update the reconstructed video according to the following formula:
X^{s+1} = argmin_X ( ‖y − Φ X‖² + η Σ_{t,l} ‖R_{t,l} X − L_{t,l}^{s+1}‖_F² ),
where X^{s+1} denotes the reconstructed video of the (s+1)-th iteration, argmin_X returns the value of the reconstructed video X that minimizes the objective function, y denotes the one-dimensional vector formed from the measurement data, Φ denotes the video observation matrix generated from the random mask observation matrix, R_{t,l} denotes the extraction transform that extracts all similar blocks of the l-th image block of the t-th frame, X denotes the reconstructed video, L_{t,l}^{s+1} denotes the low-rank structure of the l-th image block of the t-th frame at the (s+1)-th iteration, and η is a given parameter;
(7) Judge whether the current iteration number is greater than the maximum number of outer iterations; if so, go to step (8); otherwise add 1 to the current iteration number and go to step (4);
(8) Output the reconstructed video.
Compared with the prior art, the present invention has the following advantages:
First, the invention constructs the initial reconstructed video by piecewise linear estimation based on joint sparsity and Gaussian distributions, overcoming the fact that existing piecewise linear reconstruction methods cannot be applied directly to the reconstruction of video sequences, so the invention improves the robustness of video reconstruction.
Second, the invention searches for the similar blocks of each image block using the inter-frame and intra-frame correlation of the video, solves for the low-rank structure of the video, and obtains the reconstructed video by iterative optimization, overcoming the shortcoming that existing methods based on Gaussian mixture models do not exploit the intra-frame and inter-frame correlation of the video, so the invention improves the accuracy of the reconstructed video.
Brief Description of the Drawings
Fig. 1 is a flow chart of the present invention;
Fig. 2 shows the vehicle video reconstruction results of the present invention and of the prior art when H = 8;
Fig. 3 is a line chart of the peak signal-to-noise ratio (PSNR) of the vehicle video reconstructed by the method of the present invention and by the prior art.
Detailed Description
The present invention is described further below with reference to the accompanying drawings.
With reference to Fig. 1, the specific steps of the present invention are as follows.
Step 1: Receive the measurement data.
The compressed sensing sender observes the video data: every H frames of video data are observed once with a random mask observation matrix, the result forming one frame of measurement data, and the measurement data and the random mask observation matrix are sent, where H is a positive integer ranging from 1 to 20.
The receiver receives the measurement data and the random mask observation matrix sent by the sender.
The M×N×H-dimensional original video data X is observed with the M×N×H-dimensional random mask observation matrix A, giving one frame of M×N-dimensional measurement data Y:
Y(i,j) = Σ_{n=1}^{H} A(i,j,n) · X(i,j,n), i = 1, ..., M, j = 1, ..., N,
i.e., each frame is multiplied element-wise by its mask and the masked frames are summed along the temporal dimension.
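The following numpy sketch illustrates this measurement model only; the names measure, X, A, Y are assumptions, and the Bernoulli probability r = 0.5 follows the simulation settings given later.

```python
import numpy as np

def measure(X, r=0.5, seed=0):
    """Collapse an M x N x H video cube X into one M x N measurement frame
    using a Bernoulli(r) random mask A of the same size as X."""
    rng = np.random.default_rng(seed)
    A = (rng.random(X.shape) < r).astype(np.float64)  # random mask observation matrix
    Y = np.sum(A * X, axis=2)                         # sum of the masked frames
    return Y, A

# example: H = 8 frames of a 256 x 256 video
X = np.random.rand(256, 256, 8)
Y, A = measure(X)
print(Y.shape)  # (256, 256)
```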
Step 2: Initialize the set of single-frame covariance matrices.
Generate 18 artificial black-and-white images, each of size 65×65 pixels, each representing one direction.
Using a window of 8×8 pixels with a step size of 1 pixel, slide the window over the artificial black-and-white image of each direction and take all blocks of size 8×8 pixels, obtaining the set of direction blocks of each direction.
Perform principal component analysis (PCA) decomposition on the set of direction blocks of each direction to obtain the PCA orthogonal basis and the eigenvalue matrix; keep the 8 largest eigenvalues and the corresponding principal-component basis vectors of each direction, obtaining the corresponding eigenvalue matrix and direction basis.
Compute the single-frame covariance matrix of the direction represented by each artificial black-and-white image, obtaining the set of single-frame covariance matrices.
The single-frame covariance matrix of the direction represented by each black-and-white image is computed according to the following formula:
P_k = B_k D_k B_k^T,
where P_k denotes the single-frame covariance matrix of the k-th direction, B_k denotes the direction basis of the k-th direction, D_k denotes the eigenvalue matrix of the k-th direction, T denotes the transpose operation, and k denotes the number of the direction, k = 1, 2, ..., 18.
The PCA decomposition proceeds as follows:
Select one direction among all the directions of the artificial black-and-white images and compute the covariance matrix of the direction block set of the selected direction according to the following formula:
P = E[f_i f_i^T],
where P denotes the covariance matrix of the direction block set of the selected direction, E denotes the mathematical expectation, f_i denotes the i-th block in the direction block set of the selected direction, and T denotes the transpose operation.
Diagonalize the covariance matrix according to the following formula to obtain the PCA orthogonal basis and the eigenvalue matrix:
P = B D B^T,
where P denotes the covariance matrix of the direction block set of the selected direction, B denotes the PCA orthogonal basis of the selected direction, D denotes the eigenvalue matrix of that direction, and T denotes the transpose operation.
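A minimal sketch of this PCA step for one direction, assuming the 8×8 direction blocks have been flattened into 64-dimensional column vectors; the function and variable names are illustrative, not taken from the patent.

```python
import numpy as np

def pca_direction_basis(F, keep=8):
    """F: d x m matrix whose columns are the flattened direction blocks f_i.
    Returns the direction basis B_k, eigenvalue matrix D_k and covariance P_k."""
    P = (F @ F.T) / F.shape[1]        # empirical covariance P = E[f_i f_i^T]
    w, V = np.linalg.eigh(P)          # diagonalization P = B D B^T
    idx = np.argsort(w)[::-1][:keep]  # keep the 8 largest eigenvalues
    B_k, D_k = V[:, idx], np.diag(w[idx])
    P_k = B_k @ D_k @ B_k.T           # single-frame covariance of this direction
    return B_k, D_k, P_k
```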
Step 3: Construct the initial reconstructed video by piecewise linear estimation based on joint sparsity and Gaussian distributions.
For the direction represented by each artificial black-and-white image, place the H single-frame covariance matrices of that direction on the diagonal of a matrix to obtain, as a jointly sparse representation of the three-dimensional video data, the video covariance matrix of that direction, where the video covariance matrix of the k-th direction is
Σ_k = diag(P_k, P_k, ..., P_k),
i.e., a block-diagonal matrix with H copies of P_k on its diagonal, where Σ_k denotes the video covariance matrix of the k-th direction, P_k denotes the single-frame covariance matrix of the k-th direction, and k denotes the number of the direction represented by an artificial black-and-white image, k = 1, 2, ..., 18.
Initialize the M×N×H-dimensional reconstructed video as a zero matrix. Divide each frame of the initial reconstructed video into S image blocks of size n×n with step size p and keep the positions of the blocks; group the image blocks at the same position of every frame into a video block, obtaining the video block set {x_1, ..., x_l, ..., x_S}, where x_l = [x_{1,l}^T, ..., x_{t,l}^T, ..., x_{H,l}^T]^T denotes the l-th video block, x_{t,l} denotes the l-th image block of size n×n of the t-th frame of the reconstructed video, and T denotes the transpose operation. For the direction k represented by an artificial black-and-white image, the l-th video block x_l is modeled as obeying a Gaussian distribution with mean 0 and covariance matrix Σ_k, where Σ_k denotes the video covariance matrix of the k-th direction; M, N, H denote the sizes of the first, second and third dimensions of the reconstructed video; p and n are positive integers no larger than the smaller of M and N; k = 1, 2, ..., 18; l = 1, 2, ..., S; and S denotes the number of video blocks, i.e., the number of image blocks each frame is divided into.
The optimal direction of each video block among the directions represented by the artificial black-and-white images is computed by piecewise linear estimation as follows:
Compute the estimate of the video block in the direction represented by each artificial black-and-white image according to the following formula:
x̂_l^k = Σ_k Φ_l^T (Φ_l Σ_k Φ_l^T + σ² I_d)^{-1} y_l,
where x̂_l^k denotes the estimate of the l-th video block in the direction k represented by an artificial black-and-white image, Σ_k denotes the k-th video covariance matrix, Φ_l denotes the observation matrix of the l-th video block taken from the random mask observation matrix, y_l denotes the measurement data of the l-th video block taken from the measurement data, σ takes a value between 0 and 1, I_d denotes the d-dimensional identity matrix, T denotes the transpose operation, ^{-1} denotes matrix inversion, k = 1, 2, ..., 18, l denotes the number of the video block, l = 1, 2, ..., S, and S denotes the number of video blocks, i.e., the number of image blocks each frame is divided into.
Select the optimal direction among the directions represented by the artificial black-and-white images according to the following formula:
k̂_l = argmin_k ( ‖Φ_l x̂_l^k − y_l‖² / σ² + (x̂_l^k)^T Σ_k^{-1} x̂_l^k + log|Σ_k| ),
where k̂_l denotes the optimal direction of the l-th video block among the directions represented by the artificial black-and-white images, argmin_k returns the value of k that minimizes the objective function, Φ_l denotes the observation matrix of the l-th video block taken from the random mask observation matrix, x̂_l^k denotes the estimate of the l-th video block in the direction k represented by an artificial black-and-white image, y_l denotes the measurement data of the l-th video block taken from the measurement data, Σ_k denotes the k-th video covariance matrix, σ takes a value between 0 and 1, ‖·‖² denotes the squared norm, |·| denotes the determinant, T denotes the transpose operation, ^{-1} denotes matrix inversion, k = 1, 2, ..., 18, l = 1, 2, ..., S, and S denotes the number of video blocks, i.e., the number of image blocks each frame is divided into.
Combine the estimates of the video blocks, each taken in its optimal direction, into the initial reconstructed video according to the block positions kept when dividing the video blocks.
Set the iteration counter s and the maximum number of outer iterations U, let the current iteration number s be 1, and take the initial reconstructed video as the reconstructed video of the first iteration, where U is a positive integer.
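A compact sketch of the piecewise linear estimation of a single video block, written against the formulas above: P_list holds the 18 single-frame covariance matrices, Phi_l and y_l are the block's observation matrix and measurements, and the small ridge added to Sigma_k is an assumption for numerical stability (the truncated PCA covariance is rank-deficient); all names are illustrative.

```python
import numpy as np
from scipy.linalg import block_diag

def ple_estimate(y_l, Phi_l, P_list, H, sigma=0.1, eps=1e-6):
    """Return the estimate of one video block in its optimal direction."""
    best_x, best_cost = None, np.inf
    for P_k in P_list:                                  # one covariance per direction
        Sigma_k = block_diag(*([P_k] * H))              # jointly sparse video covariance
        Sigma_k += eps * np.eye(Sigma_k.shape[0])       # ridge (assumption, for stability)
        G = Phi_l @ Sigma_k @ Phi_l.T + sigma**2 * np.eye(len(y_l))
        x_hat = Sigma_k @ Phi_l.T @ np.linalg.solve(G, y_l)   # Wiener / MAP estimate
        cost = (np.sum((Phi_l @ x_hat - y_l) ** 2) / sigma**2
                + x_hat @ np.linalg.solve(Sigma_k, x_hat)
                + np.linalg.slogdet(Sigma_k)[1])        # log|Sigma_k|
        if cost < best_cost:
            best_x, best_cost = x_hat, cost
    return best_x
```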
Step 4: Search for the similar blocks of each image block using the inter-frame and intra-frame correlation of the video.
Divide each frame of the reconstructed video into n×n blocks with step size p and collect the blocks of all frames into the two-dimensional image block set G_1, where p and n are positive integers no larger than the smaller of M and N, and M and N denote the sizes of the first and second dimensions of the reconstructed video.
Divide each frame of the reconstructed video into n×n blocks with step size 1, collect the blocks of all frames into the two-dimensional image block set G_2, and record the index of each block in G_2, where n is a positive integer no larger than the smaller of M and N, and M and N denote the sizes of the first and second dimensions of the reconstructed video.
For each block in G_1, take from G_2 all blocks lying within the Z×Z×H window around that block and record them as the neighboring blocks of that block, where Z denotes the size of the first and second dimensions of the window, H denotes the size of the third dimension of the window, G_1 denotes the set of two-dimensional image blocks obtained with step size p, and G_2 denotes the set of two-dimensional image blocks obtained with step size 1.
Compute the Euclidean distance between each block in G_1 and each of its neighboring blocks, sort the neighbors in ascending order of Euclidean distance, select the first Q blocks as the similar blocks of the corresponding block, and record the indices in G_2 of the similar blocks of each block, where Q is a positive integer smaller than half the number of neighboring blocks.
The indices in G_2 of the similar blocks of each block are written I_{t,l} = {i_{t,l}^1, ..., i_{t,l}^q, ..., i_{t,l}^Q}, where I_{t,l} is called the similar-block index set of the l-th block of the t-th frame, i_{t,l}^q denotes the index value of the q-th block similar to that block, q = 1, 2, ..., Q, t denotes the number of the video frame, t = 1, 2, ..., H, l denotes the number of the video block, l = 1, 2, ..., S, H denotes the size of the third dimension of the reconstructed video, S denotes the number of video blocks, i.e., the number of image blocks each frame is divided into, and Q is a positive integer smaller than half the number of neighboring blocks.
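A brute-force sketch of the similar-block search for one reference block; the coordinate handling and names are assumptions, and no attempt is made to reproduce the exact indexing of G_1 and G_2.

```python
import numpy as np

def find_similar_blocks(video, i, j, t, n=8, Z=50, Q=100):
    """Return the (frame, row, col) positions of the Q blocks closest in
    Euclidean distance to the n x n block at (i, j) of frame t, searched
    inside a Z x Z x H window."""
    M, N, H = video.shape
    ref = video[i:i + n, j:j + n, t].ravel()
    cands, dists = [], []
    for f in range(H):                                   # inter-frame candidates
        for r in range(max(0, i - Z // 2), min(M - n, i + Z // 2) + 1):
            for c in range(max(0, j - Z // 2), min(N - n, j + Z // 2) + 1):
                blk = video[r:r + n, c:c + n, f].ravel()
                cands.append((f, r, c))
                dists.append(np.sum((blk - ref) ** 2))
    order = np.argsort(dists)[:Q]                        # Q nearest neighbours
    return [cands[q] for q in order]
```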
Step 5: Solve for the low-rank structure of the video according to the following formula:
L_{t,l}^{s+1} = argmin_L ( (1/2) ‖R_{t,l} X^s − L‖_F² + λ ‖L‖_* ),
where L_{t,l}^{s+1} denotes the low-rank structure of the l-th image block of the t-th frame at the (s+1)-th iteration, argmin_L returns the value of the low-rank structure L that minimizes the objective function, R_{t,l} denotes the extraction transform that extracts all similar blocks of the l-th image block of the t-th frame and arranges them as the columns of a matrix, X^s denotes the reconstructed video of the s-th iteration, ‖·‖_F denotes the Frobenius norm, ‖·‖_* denotes the nuclear norm (the sum of the singular values), and λ is a given parameter.
Perform the singular value decomposition (SVD) of R_{t,l} X^s: R_{t,l} X^s = U Λ V^T, where U denotes the left unitary matrix, V denotes the right unitary matrix, and Λ denotes the singular value matrix.
Apply the soft-threshold operation with threshold λ to the singular value matrix to obtain S_λ(Λ), i.e., replace each singular value σ_i by max(σ_i − λ, 0).
Compute the low-rank structure of the (s+1)-th iteration: L_{t,l}^{s+1} = U S_λ(Λ) V^T.
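A sketch of this singular-value soft-thresholding step; R_tl_X stands for the matrix whose columns are the similar blocks extracted from the current reconstruction, lam corresponds to the parameter λ, and the names are illustrative.

```python
import numpy as np

def low_rank_structure(R_tl_X, lam=0.75):
    """Solve argmin_L 0.5*||R_tl_X - L||_F^2 + lam*||L||_* by soft-thresholding
    the singular values of R_tl_X."""
    U, s, Vt = np.linalg.svd(R_tl_X, full_matrices=False)
    s_thr = np.maximum(s - lam, 0.0)   # soft threshold S_lam(Lambda)
    return (U * s_thr) @ Vt            # L = U S_lam(Lambda) V^T
```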
Step 6: Update the reconstructed video according to the following formula:
X^{s+1} = argmin_X ( ‖y − Φ X‖² + η Σ_{t,l} ‖R_{t,l} X − L_{t,l}^{s+1}‖_F² ),
where X^{s+1} denotes the reconstructed video of the (s+1)-th iteration, argmin_X returns the value of the reconstructed video X that minimizes the objective function, y denotes the one-dimensional vector formed from the measurement data, Φ denotes the video observation matrix generated from the random mask observation matrix, R_{t,l} denotes the extraction transform that extracts all similar blocks of the l-th image block of the t-th frame, X denotes the reconstructed video, L_{t,l}^{s+1} denotes the low-rank structure of the l-th image block of the t-th frame at the (s+1)-th iteration, and η is a given parameter.
The solution for the reconstructed video of the (s+1)-th iteration is
X^{s+1} = (Φ^T Φ + η Σ_{t,l} R_{t,l}^T R_{t,l})^{-1} (Φ^T y + η Σ_{t,l} R_{t,l}^T L_{t,l}^{s+1}),
where T denotes the transpose operation, ^{-1} denotes matrix inversion, and the other symbols are as defined above.
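An illustrative dense-matrix version of this closed-form update, practical only for toy sizes; in a full implementation Φ and the extraction operators R_{t,l} would be applied implicitly (for example with a conjugate-gradient solver). The names and the column-wise flattening of the low-rank groups are assumptions.

```python
import numpy as np

def update_video(y, Phi, R_list, L_list, eta=1.0):
    """Closed-form minimizer of ||y - Phi x||^2 + eta * sum_i ||R_i x - L_i||_F^2,
    with x the vectorized video, R_list the extraction matrices and L_list the
    corresponding low-rank structures."""
    A = Phi.T @ Phi
    b = Phi.T @ y
    for R, L in zip(R_list, L_list):
        A = A + eta * (R.T @ R)
        b = b + eta * (R.T @ L.ravel(order="F"))  # stack the low-rank group column-wise
    return np.linalg.solve(A, b)
```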
Step 7: Judge whether the current iteration number is greater than the maximum number of outer iterations; if so, go to Step 8; otherwise add 1 to the current iteration number and go to Step 4.
Step 8: Output the reconstructed video.
The present invention is described further below in conjunction with the simulation results.
1. Simulation conditions:
The simulation experiments of the present invention were run on an Intel(R) Core(TM) i5-3470 CPU at 3.20 GHz under the 32-bit Windows 7 operating system, using Matlab R2011b as the simulation software. The simulation parameters are set as follows.
The simulation uses 96 frames of 256×256 traffic vehicle images; every H = 8 frames of video data are observed once; each element A_{i,j,n} of the random mask observation matrix A is set to 1 at random with probability r = 0.5; the block size is n×n = 8×8; the step size is p = 4; the similar-block search window size is Z×Z×H = 50×50×8; the number of similar blocks is Q = 100; the maximum number of iterations is U = 10; and the given parameters are λ = 0.75 and η = 1.
2. Simulation content and result analysis:
Under the above simulation conditions, three comparison methods are used to reconstruct the 96 frames of traffic vehicle images:
Comparison method 1 is the prior-art Gaussian mixture model method, which reconstructs the 96 frames of traffic vehicle images;
Comparison method 2 constructs the initial reconstructed video with Steps 1, 2 and 3 of the present invention, reconstructs the 96 frames of traffic vehicle images, and takes the initial reconstructed video as the reconstruction result;
Comparison method 3 constructs the initial reconstructed video with the prior-art Gaussian mixture model method in place of Step 3 of the present invention, and reconstructs the 96 frames of traffic vehicle images.
The reconstructed visual results of the three comparison methods and of the method of the present invention are shown in Fig. 2, where Figs. 2(a)-2(d) are the original images of frames 1 to 4 of the 96-frame traffic vehicle video; Figs. 2(e)-2(h) are the reconstructions of frames 1 to 4 obtained with comparison method 1; Figs. 2(i)-2(l) are the reconstructions of frames 1 to 4 obtained with comparison method 2; Figs. 2(m)-2(p) are the reconstructions of frames 1 to 4 obtained with comparison method 3; and Figs. 2(q)-2(t) are the reconstructions of frames 1 to 4 obtained with the method of the present invention.
It can be seen from the reconstructed images that the reconstructions of the present invention have noticeably less noise near the edges, and their visual quality is better than that of the other three comparison algorithms.
The first eight frames of the traffic vehicle images are reconstructed, and the peak signal-to-noise ratio (PSNR) values of the present invention and of the three comparison methods are shown in Table 1.
It can be seen from Table 1 that the PSNR values of the video reconstructed by the present invention are higher than those of all three comparison methods, indicating that the quality of the reconstructed video is good.
Table 1. Reconstruction results of the first eight frames of the traffic vehicle images using the different methods
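For reference, the PSNR of an 8-bit frame can be computed as below; this is the standard definition, not code from the patent.

```python
import numpy as np

def psnr(ref, rec, peak=255.0):
    """Peak signal-to-noise ratio (dB) between a reference frame and its reconstruction."""
    mse = np.mean((ref.astype(np.float64) - rec.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak**2 / mse)
```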
Fig. 3 is a line chart of the PSNR values of the 96 frames reconstructed by the three comparison methods and by the method of the present invention. The horizontal axis in Fig. 3 represents the frame index of the traffic vehicle video and the vertical axis represents the PSNR value in dB; the dashed line with asterisks marks the PSNR curve of the video reconstructed by comparison method 1, the solid line with asterisks that of comparison method 2, the dashed line with circles that of comparison method 3, and the solid line with circles that of the video reconstructed by the present invention.
It can be seen from Fig. 3 that the PSNR of the reconstruction of every frame obtained by the method of the present invention is noticeably higher than that of the other methods.
In summary, the present invention obtains a clear reconstructed video and, compared with other existing reconstruction methods, improves the reconstruction quality of the video.
Claims (4)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510523631.6A CN105160664B (en) | 2015-08-24 | 2015-08-24 | Compressed sensing video reconstruction method based on low-rank model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105160664A true CN105160664A (en) | 2015-12-16 |
CN105160664B CN105160664B (en) | 2017-10-24 |
Family
ID=54801506
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510523631.6A Active CN105160664B (en) | 2015-08-24 | 2015-08-24 | Compressed sensing video reconstruction method based on low-rank model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105160664B (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8989465B2 (en) * | 2012-01-17 | 2015-03-24 | Mayo Foundation For Medical Education And Research | System and method for medical image reconstruction and image series denoising using local low rank promotion |
CN102722892A (en) * | 2012-06-13 | 2012-10-10 | 西安电子科技大学 | SAR (synthetic aperture radar) image change detection method based on low-rank matrix factorization |
CN102821228A (en) * | 2012-07-16 | 2012-12-12 | 西安电子科技大学 | Low-rank video background reconstructing method |
Non-Patent Citations (2)
Title |
---|
- Liu Fang et al.: "Research Advances on Structured Compressive Sensing", Acta Automatica Sinica (自动化学报) *
- Wang Rongfang et al.: "Adaptive Block Compressed Sensing of Images Using Texture Information", Acta Electronica Sinica (电子学报) *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108881911A (en) * | 2018-06-26 | 2018-11-23 | 电子科技大学 | A kind of contexts restoration methods for compressed sensing backsight frequency data stream |
CN108881911B (en) * | 2018-06-26 | 2020-07-10 | 电子科技大学 | Foreground and background recovery method for compressed sensing rear video data stream |
WO2021229320A1 (en) * | 2020-05-15 | 2021-11-18 | International Business Machines Corporation | Matrix sketching using analog crossbar architectures |
US11520855B2 (en) | 2020-05-15 | 2022-12-06 | International Business Machines Corportation | Matrix sketching using analog crossbar architectures |
GB2610758A (en) * | 2020-05-15 | 2023-03-15 | Ibm | Matrix Sketching using analog crossbar architectures |
Also Published As
Publication number | Publication date |
---|---|
CN105160664B (en) | 2017-10-24 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |