CN102881002A - Video background recovery method based on movement information and matrix completion - Google Patents


Info

Publication number
CN102881002A
CN102881002A (application number CN201210239349)
Authority
CN
China
Prior art keywords
matrix
background
frame
observation
original
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012102393491A
Other languages
Chinese (zh)
Other versions
CN102881002B (en)
Inventor
杨敬钰
孙洋
李坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Lingyun Shixun Technology Co ltd
Luster LightTech Co Ltd
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN201210239349.1A priority Critical patent/CN102881002B/en
Publication of CN102881002A publication Critical patent/CN102881002A/en
Application granted granted Critical
Publication of CN102881002B publication Critical patent/CN102881002B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the field of computer vision and provides a simple and practical background extraction method. The technical scheme adopted is a video background recovery method based on motion information and matrix completion: optical flow is used to detect the movement of moving targets between frames and extract the motion information; weight matrices are generated and arranged as column vectors to form the total weight matrix; all sampled frames are likewise arranged as column vectors to form the raw data array matrix; the raw data array matrix and the total weight matrix are multiplied element by element to obtain the observation matrix; matrix completion then yields the completed observation matrix; finally, each column of this matrix is restored to the size of the original sampled frames to obtain the background images. The invention is mainly applied to background extraction.

Description

Video background recovery method based on motion information and matrix completion
Technical field
The invention belongs to the field of computer vision. It relates to detecting motion information by optical flow and to matrix completion for realizing background extraction; specifically, it relates to a video background recovery method based on motion information and matrix completion.
Background technology
The range of uses for cameras has grown considerably over the past decade. This growth has caused an explosion of data, meaning that relying on manual storage or processing is no longer feasible. To automatically detect, store, and track moving targets in video, researchers have proposed a number of feasible methods. A simple moving-object detection algorithm compares each current frame of a video sequence against a stable background frame. Such methods are the mainstream background recovery algorithms: they establish a background model as the standard, compare the current frame against it, and report a detection wherever the difference in a region is large. The essence of a background recovery algorithm is to separate the moving targets (the foreground) from the stable or slowly moving scene (the background).
In an indoor environment, a stable background model may be adequate for analyzing a short video sequence. In most practical situations, however, such a model performs poorly and a more sophisticated model is needed. Motion detection is usually the first step of scene analysis. For instance, the detected motion regions may be filtered and characterized for gait recognition, face detection, people counting, traffic monitoring, and so on. The breadth and diversity of applications of background scenes explain why countless papers have taken background extraction as their main topic. The problem that background extraction technology needs to solve involves comparing an observed image against an estimated image that contains no targets of interest. The technique relies on a background model (or background image). The comparison process is called foreground detection, and it partitions the pixels covering the entire image into two parts: 1) the foreground, containing the targets of interest, and 2) the background, namely its complement.
Many background extraction techniques involve elaborate models and substantial storage. Some algorithms focus on the specific requirements of an idealized background extraction technique; others must adapt to slow or abrupt illumination changes (time of day, cloud cover, and the like), dynamic changes (camera shake), high-frequency background objects (for example, swaying branches and leaves), and changes in background geometry (for example, parked vehicles). Some applications require the algorithm to be embedded in the camera, so computational load becomes the main concern. Moreover, monitoring outdoor scenes also demands strong noise resistance and adaptability to illumination changes. As an alternative to background density models, some algorithms first keep, for each pixel, a buffer of a given number of observed background values; if a new value matches the majority of the values stored in the pixel model, it is classified as background. The hope is that this sample-based approach avoids the problems associated with fitting an assumed density model. However, because the buffered background values are replaced in first-in-first-out order, problems can still arise on close examination unless a large number of pixel samples are stored, for example with both fast and slow motion in the background. The authors of such a method mention a buffer of 20 samples as the minimum required for it to work, and they also note that going beyond 60 samples brings little further improvement. Consequently, the training period of these methods must comprise at least 20 frames. Finally, to cope with illumination changes and with targets appearing in or disappearing from the background, two extra mechanisms (one at the pixel level and one at the blob level) must be added to the algorithm to process whole targets.
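The per-pixel sample-buffer model discussed above can be sketched as follows. This is a hypothetical illustration of the prior-art idea, not the invention itself; the class name, buffer size, and matching tolerance are illustrative choices, not values taken from any cited method.

```python
import numpy as np

class SampleBufferModel:
    """Per-pixel buffer of observed background values with FIFO replacement.
    A new pixel value is background when it matches a majority of the
    stored samples (illustrative parameters)."""

    def __init__(self, first_frame, n_samples=20, match_tol=20):
        # initialise every buffer slot with the first frame (a training shortcut)
        self.buf = np.repeat(first_frame[np.newaxis], n_samples, axis=0).astype(np.int16)
        self.tol = match_tol
        self.head = 0  # FIFO write position

    def classify(self, frame):
        """Return a boolean mask: True where the pixel is background."""
        matches = (np.abs(self.buf - frame.astype(np.int16)) <= self.tol).sum(axis=0)
        return matches > self.buf.shape[0] // 2  # majority vote

    def update(self, frame, bg_mask):
        """FIFO replacement: overwrite the oldest sample at background pixels."""
        slot = self.buf[self.head]
        slot[bg_mask] = frame[bg_mask]
        self.head = (self.head + 1) % self.buf.shape[0]
```

As the background section notes, the FIFO replacement is exactly what causes trouble for fast versus slow background motion unless the buffer is large.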
Summary of the invention
The present invention is intended to overcome the deficiencies of the prior art and to provide a simple and practical background extraction method. The technical scheme adopted by the present invention is a video background recovery method based on motion information and matrix completion: optical flow is used to detect the movement of moving targets between frames and extract the motion information; weight matrices are generated from it and arranged as column vectors to form the total weight matrix; all sampled frames are likewise arranged as column vectors to form the raw data array matrix; the raw data array matrix and the total weight matrix are multiplied element by element to obtain the observation matrix; matrix completion then yields the completed observation matrix; finally, each column of this matrix is restored to the size of the original sampled frames to obtain the background images. Specifically, the method comprises the following steps:
1) Construct the original experimental data:
11) Uniformly sample k frame image matrices from the video as the original experimental data.
2) Construct the total weight matrix:
21) Detect, with an optical flow method, the displacement of the moving elements between every two frames, obtaining k−1 optical flow matrices; each element of such a matrix represents the displacement vector of the corresponding element of the original experimental data between the current two frames.
22) Retain the elements of each optical flow matrix whose displacement vector is smaller than 1, i.e., even motion of less than one pixel in the raw data is taken into account; transform the matrices into k−1 grayscale images in which moving elements have pixel value 0 and non-moving elements have pixel value 255. These grayscale images are called the weight matrices.
23) Arrange the k−1 weight matrices as column vectors to generate the total weight matrix.
3) Construct the raw data array:
31) Since the last sampled frame does not participate in motion and serves only as the reference for computing the weight matrix of frame k−1 by optical flow, discard the k-th frame of the original experimental data; arrange the remaining k−1 frames of raw data as column vectors to generate the raw data array.
4) Perform background extraction:
41) Construct the observation matrix: multiply the total weight matrix and the raw data array element by element to obtain the observation matrix.
42) Reconstruct the observation matrix by a matrix completion algorithm: construct an augmented Lagrangian function and solve the convex optimization problem.
41) Constructing the observation matrix is specifically as follows:
The weight matrices mark the positions of part of the foreground pixels; multiplying them element by element with the raw data array yields the observation matrix required by the matrix completion algorithm. The elements of the observation matrix fall into three classes: background pixels, calibrated foreground, and unrecognized foreground. The calibrated foreground entries are set to 0 and treated as missing entries during the iterations. The matrix is summarized by the following mathematical model:

P_Ω(D) = P_Ω(A) + E

where D ∈ R^{(m·n)×(k−1)} is the input observation matrix, A ∈ R^{(m·n)×(k−1)} is the reconstructed background matrix, and E ∈ R^{(m·n)×(k−1)} is the matrix formed by the unrecognized foreground elements; m·n indicates that each frame of the observation data has m·n pixels, and k−1 that the observation data contains k−1 frames of the video sequence. Ω is the set of calibrated foreground coordinates, and P_Ω(·) denotes the projection of a matrix onto the index set Ω.
42) Specifically, completing the observation matrix can be abstracted as the following convex optimization problem:

min ||A||_* + λ||E||_1   subject to   P_Ω(D) = P_Ω(A) + E

where λ > 0 balances the influence of the nuclear norm and the l1 norm on the optimization problem. Solving this formula requires constructing the augmented Lagrangian function, which is iterated step by step until it converges to the optimal solution:

L(A, E, Y, μ) = ||A||_* + λ||E||_1 + ⟨Y, P_Ω(D − A) − E⟩ + (μ/2)||P_Ω(D − A) − E||_F^2

where Y ∈ R^{(m·n)×(k−1)} is the Lagrange multiplier and μ > 0 is the penalty factor. After a finite number of iterations of this formula, the matrix completion is finished and the completed observation matrix is produced; splitting it apart yields k−1 background images of size m × n.
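As a non-authoritative sketch, the augmented Lagrangian iteration above can be implemented along the following lines. The update rules follow the standard inexact augmented-Lagrangian scheme for nuclear-norm-plus-l1 problems (singular value thresholding for A, soft-thresholding for E, a dual update for Y); the parameter choices (λ = 1/√max(m, n), the μ schedule) are common defaults, not values specified by the patent.

```python
import numpy as np

def shrink(X, tau):
    """Soft-thresholding: proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * shrink(s, tau)) @ Vt

def complete_background(D, mask, lam=None, max_iter=500, tol=1e-7):
    """Solve  min ||A||_* + lam*||E||_1  s.t.  P_Omega(D) = P_Omega(A) + E.
    D    : observation matrix with calibrated-foreground entries set to 0
    mask : 1 on observed entries, 0 on the calibrated-foreground (missing) entries
    """
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = 1.25 / max(np.linalg.norm(D, 2), 1e-12)  # spectral-norm-based start
    rho, mu_max = 1.2, 1e7
    A = np.zeros_like(D)
    E = np.zeros_like(D)
    Y = np.zeros_like(D)
    d_norm = max(np.linalg.norm(D, "fro"), 1.0)
    for _ in range(max_iter):
        # A-step: off-mask entries are unconstrained, so fill them with current A
        G = mask * (D - E + Y / mu) + (1.0 - mask) * A
        A = svt(G, 1.0 / mu)
        # E-step: the sparse residual lives only on the observed entries
        E = shrink(mask * (D - A + Y / mu), lam / mu)
        # dual update on the constraint residual
        R = mask * (D - A - E)
        Y = Y + mu * R
        mu = min(mu * rho, mu_max)
        if np.linalg.norm(R, "fro") / d_norm < tol:
            break
    return A, E
```

Each column of the returned A, reshaped to m × n, is one recovered background frame.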
Characteristics and effects of the method of the present invention:
The method can recover the background images well by matrix completion without requiring the moving targets to be fully detected or extracted. It has the following characteristics:
1. The program is simple and easy to implement.
2. Optical flow is applied to the raw data to detect the motion information and derive the weight matrices, which are arranged as column vectors to obtain the total weight matrix; the raw data, with its last frame discarded, is likewise arranged as column vectors. Multiplying these two matrices element by element produces a matrix with missing entries, which turns a part-wise problem into a whole one: a single larger matrix is processed globally, without having to treat the moving targets and background portions of each frame separately.
3. Using the result of optical flow analysis to detect the motion information before performing matrix completion markedly improves the background extraction.
Description of drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiment in conjunction with the accompanying drawings, in which:
Fig. 1 is the actual implementation flowchart;
Fig. 2 shows the raw data images obtained by sampling k frames;
Fig. 3 shows the k−1 weight matrix images;
Fig. 4 shows the k−1 background images.
Embodiment
The present invention is described in detail below in conjunction with the embodiment and the accompanying drawings.
The technical scheme adopted by the present invention is: 1) use optical flow to detect the movement of moving targets between frames, extract the motion information, generate the weight matrices, and arrange them as column vectors to form the total weight matrix; likewise arrange all sampled frames as column vectors to form the raw data array; 2) multiply the raw data array matrix and the total weight matrix element by element to obtain the observation matrix, then perform matrix completion to obtain the completed observation matrix; 3) finally, restore each column of this low-rank matrix to the size of the original sampled frames to obtain the background images. Specifically, the method comprises the following steps:
1) Construct the original experimental data:
11) Uniformly sample k frame image matrices of size m × n from the video as the original experimental data.
2) Construct the total weight matrix:
21) Detect, with an optical flow method, the displacement of the moving elements between every two frames, obtaining k−1 optical flow matrices of the same size as the sampled frames and of dimension 2; each element of such a matrix represents the displacement vector of the corresponding element of the original experimental data between the current two frames.
22) Transform each optical flow matrix into a computation matrix of m × n rows and 2 columns, in which the first column holds the scalar horizontal displacement of each element between the two frames and the second column holds the scalar vertical displacement.
23) Retain the elements of each optical flow matrix whose displacement vector is smaller than 1 (i.e., even motion of less than one pixel in the raw data is taken into account) and transform the matrices into k−1 grayscale images of size m × n, in which moving elements have pixel value 0 (black) and non-moving elements have pixel value 255 (white). These grayscale images are called the weight matrices; there are k−1 of them in total.
24) Arrange the k−1 weight matrices of size m × n as column vectors, generating the total weight matrix of size (m × n) × (k−1).
3) Construct the raw data array:
31) Since the last sampled frame does not participate in motion and serves only as the reference for computing the weight matrix of frame k−1 by optical flow, discard the k-th frame of the original experimental data and keep the first k−1 frames. Arrange the k−1 frames of raw data of size m × n as column vectors, generating the raw data array of size (m × n) × (k−1).
4) Perform background extraction:
41) Construct the observation matrix:
The weight matrices mark the positions of part of the foreground pixels; multiplying them element by element with the raw data array yields the observation matrix required by the matrix completion algorithm. The motion vectors produced by the optical flow analysis serve as the reference for calibrating part of the foreground elements in the full data set. The elements of the observation matrix fall into three classes: background pixels, calibrated foreground, and unrecognized foreground. The calibrated foreground entries are set to 0 and treated as missing entries during the iterations. In summary, the matrix can be described by the following mathematical model:

P_Ω(D) = P_Ω(A) + E

where D ∈ R^{(m·n)×(k−1)} is the input observation matrix, A ∈ R^{(m·n)×(k−1)} is the reconstructed background matrix, and E ∈ R^{(m·n)×(k−1)} is the matrix formed by the unrecognized foreground elements; m·n indicates that each frame of the observation data has m·n pixels, and k−1 that the observation data contains k−1 frames of the video sequence. Ω is the set of calibrated foreground coordinates, and P_Ω(·) denotes the projection of a matrix onto the index set Ω.
42) Mathematical model:
Completing the observation matrix can be abstracted as the following convex optimization problem:

min ||A||_* + λ||E||_1   subject to   P_Ω(D) = P_Ω(A) + E

where λ > 0 balances the influence of the nuclear norm and the l1 norm on the optimization problem. Solving this formula requires constructing the augmented Lagrangian function, which is iterated step by step until it converges to the optimal solution:

L(A, E, Y, μ) = ||A||_* + λ||E||_1 + ⟨Y, P_Ω(D − A) − E⟩ + (μ/2)||P_Ω(D − A) − E||_F^2

where Y ∈ R^{(m·n)×(k−1)} is the Lagrange multiplier and μ > 0 is the penalty factor. After a finite number of iterations of this formula, the matrix completion is finished and the completed observation matrix is produced; splitting it apart yields k−1 background images of size m × n.
The present invention proposes a background extraction method based on optical flow and matrix completion (shown in the flowchart of Fig. 1), described in detail with reference to the accompanying drawings and the embodiment as follows:
1) Construct the original experimental data:
11) Uniformly sample 25 frame image matrices of size 288 × 360 from the video as the original experimental data (as shown in Fig. 2).
2) Construct the total weight matrix:
21) Detect, with an optical flow method, the displacement of the moving elements between every two frames, obtaining 24 optical flow matrices of the same size as the sampled frames and of dimension 2; each element of such a matrix represents the displacement vector of the corresponding element of the original experimental data between the current two frames.
22) Transform each optical flow matrix into a computation matrix of 288 × 360 rows and 2 columns, in which the first column holds the scalar horizontal displacement of each element between the two frames and the second column holds the scalar vertical displacement.
23) Retain the elements of each optical flow matrix whose displacement vector is smaller than 1 (i.e., even motion of less than one pixel in the raw data is taken into account) and transform the matrices into 24 grayscale images of size 288 × 360, in which moving elements have pixel value 0 (black) and non-moving elements have pixel value 255 (white). These grayscale images are called the weight matrices; there are 24 in total (as shown in Fig. 3).
24) Arrange the 24 weight matrices of size 288 × 360 as column vectors, generating the total weight matrix of size (288 × 360) × 24.
3) Construct the raw data array:
31) Since the last sampled frame does not participate in motion and serves only as the reference for computing the weight matrix of frame 24 by optical flow, discard the 25th frame of the original experimental data and keep the first 24 frames. Arrange the 24 frames of raw data of size 288 × 360 as column vectors, generating the raw data array of size (288 × 360) × 24.
4) Perform background extraction:
41) Construct the observation matrix:
Multiply the total weight matrix and the raw data array element by element, generating the observation matrix of size (288 × 360) × 24, which contains missing entries.
42) Substitute into the matrix completion algorithm to produce the background matrix:
The observation matrix is fed into the solver as the input and iterated; the number of iterations differs from one video sequence to another. After the algorithm converges, the computation stops automatically, generating the completed observation matrix of size (288 × 360) × 24. Each column of the completed observation matrix is then split out and rearranged into 24 matrices of size 288 × 360, i.e., the 24 restored background images.
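The data plumbing of steps 2) through 41) above can be sketched in a few lines of numpy. This sketch assumes the per-pair optical flow fields have already been computed by some optical-flow routine; the function names and the noise floor `eps` are illustrative, and the 0/255 grayscale weight images of the text are represented as 0/1 masks so the element-wise product directly zeroes the calibrated foreground.

```python
import numpy as np

def weight_masks(flows, eps=1e-3):
    """Weight matrices from per-pair optical flow fields.
    flows: array of shape (k-1, m, n, 2) holding the (dx, dy) displacement of
    every pixel between consecutive frames. Any motion at all, sub-pixel
    included, marks the pixel as foreground (weight 0); static pixels get
    weight 1. `eps` is an illustrative noise floor."""
    mag = np.linalg.norm(flows, axis=-1)     # (k-1, m, n) displacement magnitude
    return (mag <= eps).astype(float)        # 1 = background, 0 = moving

def build_observation(frames, masks):
    """Stack the first k-1 frames column-wise into the raw data array and zero
    out the calibrated-foreground entries, yielding the observation matrix.
    frames: (k, m, n) grayscale frames; masks: (k-1, m, n) weight matrices."""
    k1, m, n = masks.shape
    data = frames[:k1].reshape(k1, m * n).T.astype(float)  # (m*n, k-1) raw data array
    w = masks.reshape(k1, m * n).T                         # total weight matrix
    return data * w, w       # element-wise product = observation matrix D
```

The resulting observation matrix and mask are exactly the inputs the matrix completion step expects; each recovered column is reshaped back to m × n to obtain one background image.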

Claims (4)

1. A video background recovery method based on motion information and matrix completion, characterized by comprising the following steps: using optical flow to detect the movement of moving targets between frames and extract the motion information; generating weight matrices and arranging them as column vectors to form the total weight matrix; arranging all sampled frames as column vectors to form the raw data array matrix; multiplying the raw data array matrix and the total weight matrix element by element to obtain the observation matrix; obtaining the completed observation matrix by matrix completion; and finally restoring each column of this matrix to the size of the original sampled frames to obtain the background images.

2. The video background recovery method based on motion information and matrix completion according to claim 1, characterized in that the steps are further refined as:
1) constructing the original experimental data: 11) uniformly sampling k frame image matrices from the video as the original experimental data;
2) constructing the total weight matrix: 21) detecting, with an optical flow method, the displacement of the moving elements between every two frames, obtaining k−1 optical flow matrices, each element of which represents the displacement vector of the corresponding element of the original experimental data between the current two frames; 22) retaining the elements of each optical flow matrix whose displacement vector is smaller than 1, i.e., taking into account even motion of less than one pixel in the raw data, and transforming the matrices into k−1 grayscale images in which moving elements have pixel value 0 and non-moving elements have pixel value 255, these grayscale images being called the weight matrices; 23) arranging the k−1 weight matrices as column vectors to generate the total weight matrix;
3) constructing the raw data array: 31) since the last sampled frame does not participate in motion and serves only as the reference for computing the weight matrix of frame k−1 by optical flow, discarding the k-th frame of the original experimental data, and arranging the remaining k−1 frames of raw data as column vectors to generate the raw data array;
4) performing background extraction: 41) constructing the observation matrix: multiplying the total weight matrix and the raw data array element by element to obtain the observation matrix; 42) reconstructing the observation matrix by a matrix completion algorithm: constructing an augmented Lagrangian function to solve the convex optimization problem.

3. The video background recovery method based on motion information and matrix completion according to claim 2, characterized in that 41) constructing the observation matrix is specifically: the weight matrices mark the positions of part of the foreground pixels, and multiplying them element by element with the raw data array yields the observation matrix required by the matrix completion algorithm; the elements of the observation matrix fall into three classes: background pixels, calibrated foreground, and unrecognized foreground; the calibrated foreground entries are set to 0 and treated as missing entries during the iterations; the matrix is summarized by the following mathematical model:

P_Ω(D) = P_Ω(A) + E

where D ∈ R^{(m·n)×(k−1)} is the input observation matrix, A ∈ R^{(m·n)×(k−1)} is the reconstructed background matrix, and E ∈ R^{(m·n)×(k−1)} is the matrix formed by the unrecognized foreground elements; m·n indicates that each frame of the observation data has m·n pixels, and k−1 that the observation data contains k−1 frames of the video sequence; Ω is the set of calibrated foreground coordinates, and P_Ω(·) denotes the projection of a matrix onto the index set Ω.

4. The video background recovery method based on motion information and matrix completion according to claim 2, characterized in that 42) is specifically: completing the observation matrix can be abstracted as the following convex optimization problem:

min ||A||_* + λ||E||_1   subject to   P_Ω(D) = P_Ω(A) + E

where λ > 0 balances the influence of the nuclear norm and the l1 norm on the optimization problem; solving this formula requires constructing the augmented Lagrangian function, which is iterated step by step until it converges to the optimal solution:

L(A, E, Y, μ) = ||A||_* + λ||E||_1 + ⟨Y, P_Ω(D − A) − E⟩ + (μ/2)||P_Ω(D − A) − E||_F^2

where Y ∈ R^{(m·n)×(k−1)} is the Lagrange multiplier and μ > 0 is the penalty factor; after a finite number of iterations of this formula, the matrix completion is finished and the completed observation matrix is produced; splitting it apart yields k−1 background images of size m × n.
CN201210239349.1A 2012-07-11 2012-07-11 Video background recovery method based on movement information and matrix completion Expired - Fee Related CN102881002B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210239349.1A CN102881002B (en) 2012-07-11 2012-07-11 Video background recovery method based on movement information and matrix completion


Publications (2)

Publication Number Publication Date
CN102881002A true CN102881002A (en) 2013-01-16
CN102881002B CN102881002B (en) 2014-12-17

Family

ID=47482315

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210239349.1A Expired - Fee Related CN102881002B (en) 2012-07-11 2012-07-11 Video background recovery method based on movement information and matrix completion

Country Status (1)

Country Link
CN (1) CN102881002B (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050105765A1 (en) * 2003-11-17 2005-05-19 Mei Han Video surveillance system with object detection and probability scoring based on object class
CN102063727A (en) * 2011-01-09 2011-05-18 北京理工大学 Covariance matching-based active contour tracking method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DONGXIANG ZHOU ET AL.: "Modified GMM Background Modeling and Optical Flow for Detection of Moving Objects", 2005 IEEE International Conference on Systems, Man and Cybernetics *
LI XILAI ET AL.: "Optical flow detection of moving vehicles in intelligent transportation systems", Electro-Optic Technology Application *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103136732A (en) * 2013-02-19 2013-06-05 北京工业大学 Image denoising method based on matrix filling
CN104573328B (en) * 2014-12-17 2017-06-13 天津大学 Rectangle fitting method based on slack variable constraint
CN104573328A (en) * 2014-12-17 2015-04-29 天津大学 Rectangle fitting method based on slack variable bound
CN105976292A (en) * 2016-04-27 2016-09-28 苏州市伏泰信息科技股份有限公司 City life garbage classification data collection method and system thereof
CN106204477B (en) * 2016-07-06 2019-05-31 天津大学 Video frequency sequence background restoration methods based on online low-rank background modeling
CN106204477A (en) * 2016-07-06 2016-12-07 天津大学 Video frequency sequence background restoration methods based on online low-rank background modeling
CN109983469A (en) * 2016-11-23 2019-07-05 Lg伊诺特有限公司 Use the image analysis method of vehicle drive information, device, the system and program and storage medium
CN109983469B (en) * 2016-11-23 2023-08-08 Lg伊诺特有限公司 Image analysis method, device, system, and program using vehicle driving information, and storage medium
CN108280445A (en) * 2018-02-26 2018-07-13 江苏裕兰信息科技有限公司 A kind of detection method of vehicle periphery moving object and raised barrier
CN108280445B (en) * 2018-02-26 2021-11-16 江苏裕兰信息科技有限公司 Method for detecting moving objects and raised obstacles around vehicle
CN109993089A (en) * 2019-03-22 2019-07-09 浙江工商大学 A method of video object removal and background restoration based on deep learning
CN109993089B (en) * 2019-03-22 2020-11-24 浙江工商大学 A method of video object removal and background restoration based on deep learning
CN110120026A (en) * 2019-05-23 2019-08-13 东北大学秦皇岛分校 Matrix complementing method based on Schatten Capped p norm
CN110120026B (en) * 2019-05-23 2022-04-05 东北大学秦皇岛分校 Data Recovery Method Based on Schatten Capped p-norm
CN111626942A (en) * 2020-03-06 2020-09-04 天津大学 Method for recovering dynamic video background based on space-time joint matrix

Also Published As

Publication number Publication date
CN102881002B (en) 2014-12-17

Similar Documents

Publication Publication Date Title
CN102881002A (en) Video background recovery method based on movement information and matrix completion
Sun et al. TIB-Net: Drone detection network with tiny iterative backbone
JP7206386B2 (en) Image processing model training method, image processing method, network device, and storage medium
CN111402237B (en) Video image anomaly detection method and system based on space-time cascade self-encoder
CN105243670A (en) Accurate video foreground object extraction method based on joint sparse and low-rank representation
CN107993208A (en) Non-local total variation image restoration method based on sparse overlapping-group prior constraints
CN102054270A (en) Method and device for extracting foreground from video image
Jiang et al. Event-based low-illumination image enhancement
CN109685045A (en) Moving target tracking method and system based on video streams
CN111626948B (en) A Low-Photon Poisson Image Restoration Method Based on Image Complementation
CN114581386B (en) Defect detection method and device based on spatiotemporal data
CN111696033A (en) Corner-guided cascaded hourglass network model and method for real image super-resolution
CN108133487A (en) Video cross-region single-person gesture target detection and extraction method
Mai et al. Back propagation neural network dehazing
CN109461122A (en) Compressed sensing image reconstruction method based on multi-view images
Wang et al. Weakly supervised single image dehazing
CN107423709A (en) Object detection method fusing visible light and far-infrared images
Gao et al. A novel dual-stage progressive enhancement network for single image deraining
Yang et al. Detail-aware near infrared and visible fusion with multi-order hyper-Laplacian priors
Pang et al. Infrared and visible image fusion based on double fluid pyramids and multi-scale gradient residual block
CN106296610A (en) Three-dimensional skeleton restoration method based on low-rank matrix analysis
Zhang et al. Multisensor infrared and visible image fusion via double joint edge preservation filter and nonglobally saliency gradient operator
Gu et al. Continuous bidirectional optical flow for video frame sequence interpolation
CN109002802B (en) Video foreground separation method and system based on adaptive robust principal component analysis
Zhu et al. HDRD-Net: High-resolution detail-recovering image deraining network

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20200709

Address after: Room 411, Block A, Zhongguancun Zhizao Street, No. 45 Chengfu Road, Haidian District, Beijing 100080

Patentee after: Beijing Youke Nuclear Power Technology Development Co.,Ltd.

Address before: No. 92 Weijin Road, Nankai District, Tianjin 300072

Patentee before: Tianjin University

TR01 Transfer of patent right

Effective date of registration: 20201012

Address after: Room 701, 7th floor, Building 7, Yard 13, Cuihu South Ring Road, Haidian District, Beijing 100094

Patentee after: Beijing lingyunguang Technology Group Co.,Ltd.

Address before: Room 411, Block A, Zhongguancun Zhizao Street, No. 45 Chengfu Road, Haidian District, Beijing 100080

Patentee before: Beijing Youke Nuclear Power Technology Development Co.,Ltd.

CP01 Change in the name or title of a patent holder

Address after: Room 701, 7th floor, Building 7, No. 13 Cuihu South Ring Road, Haidian District, Beijing 100094

Patentee after: Lingyunguang Technology Co.,Ltd.

Address before: Room 701, 7th floor, Building 7, No. 13 Cuihu South Ring Road, Haidian District, Beijing 100094

Patentee before: Beijing lingyunguang Technology Group Co.,Ltd.

TR01 Transfer of patent right

Effective date of registration: 20210114

Address after: Room 1101, 11th floor, Building 2, Zone C, Nanshan Zhiyuan, Nanshan District, Shenzhen, Guangdong 518000

Patentee after: Shenzhen Lingyun Shixun Technology Co.,Ltd.

Address before: 100094 701, 7 floor, 7 building, 13 Cui Hunan Ring Road, Haidian District, Beijing.

Patentee before: Lingyunguang Technology Co.,Ltd.

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20141217