CN108764177A - Moving target detection method based on low-rank decomposition and representation joint learning - Google Patents

Moving target detection method based on low-rank decomposition and representation joint learning Download PDF

Info

Publication number
CN108764177A
Authority
CN
China
Prior art keywords
pixel
super
frame
low
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810550978.3A
Other languages
Chinese (zh)
Other versions
CN108764177B (en)
Inventor
Chenglong Li
Ziwei Xiong
Jin Tang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui University
Original Assignee
Anhui University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui University filed Critical Anhui University
Priority to CN201810550978.3A priority Critical patent/CN108764177B/en
Publication of CN108764177A publication Critical patent/CN108764177A/en
Application granted granted Critical
Publication of CN108764177B publication Critical patent/CN108764177B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42 Higher-level, semantic clustering, classification or understanding of video scenes of sport video content
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/30 Noise filtering
    • G06V10/34 Smoothing or thinning of the pattern; Morphological operations; Skeletonisation
    • G06V10/40 Extraction of image or video features
    • G06V10/513 Sparse representations

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a moving object detection method based on low-rank decomposition and representation joint learning. Each frame image of the video sequence to be detected is obtained; every frame is segmented into superpixels, a feature vector is extracted per superpixel, and the vectors are merged into a matrix. Based on the prior assumptions that the background images in the video sequence are linearly correlated with one another and that the moving target is a relatively small contiguous region, and using the representation coefficients of a representation model to describe the global relationship among the superpixels within a frame, an algorithm model is obtained. Solving the model yields the label of each superpixel in each frame, from which the detection result of every frame image is obtained. Compared with existing methods that perform moving object detection in units of pixels, the present invention is more efficient and has lower memory overhead; and by using the global relationship among superpixels obtained from the representation model, it achieves higher detection accuracy than existing methods that use only a local structural continuity constraint.

Description

Moving target detection method based on low-rank decomposition and representation joint learning
Technical field
The present invention relates to a computer-vision learning technique for moving object detection, and more particularly to a moving object detection method based on low-rank decomposition and representation joint learning.
Background technology
Moving object detection is a fundamental problem in computer vision, with wide application value in fields such as video surveillance, driving navigation, and augmented reality. Moving object detection refers to locating and segmenting the moving targets in a video sequence, and is the basis for subsequent tasks such as target recognition, tracking, and behavior analysis. There are three traditional approaches to moving object detection:
(1) Optical flow methods ("Determining Optical Flow"): the optical-flow equation is used to compute a motion vector for each pixel, so as to find the moving pixels, which can then be tracked.
(2) Frame differencing: corresponding pixels of two consecutive frames are subtracted, and the grayscale differences form an inter-frame difference image; if the grayscale difference exceeds a set threshold, the pixel is judged to be a moving target, otherwise it is judged to be background.
(3) Background subtraction: a static background image containing no moving objects is first constructed, and the grayscale difference between the current image and the background image at each pixel is then used to judge moving targets; if the grayscale difference exceeds a set threshold, the pixel is judged to be a moving target, otherwise it is judged to be background.
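The per-pixel rules of frame differencing and background subtraction reduce to a single thresholding operation; the sketch below is illustrative only (grayscale frames and a threshold of 25 are assumptions, not values from the patent):

```python
import numpy as np

def frame_difference(prev, curr, thresh=25):
    """Frame differencing: label a pixel as moving (1) when the absolute
    grayscale difference between two consecutive frames exceeds thresh."""
    diff = np.abs(curr.astype(np.int32) - prev.astype(np.int32))
    return (diff > thresh).astype(np.uint8)

def background_subtraction(background, curr, thresh=25):
    """Background subtraction: label a pixel as moving (1) when it deviates
    from a static background image by more than thresh, else background (0)."""
    diff = np.abs(curr.astype(np.int32) - background.astype(np.int32))
    return (diff > thresh).astype(np.uint8)

# toy 2x2 frames: one pixel changes strongly between frames
prev = np.array([[10, 10], [10, 10]], dtype=np.uint8)
curr = np.array([[10, 200], [10, 10]], dtype=np.uint8)
mask = frame_difference(prev, curr)
```

Both rules share the weakness the paragraph below describes: the decision is purely per pixel, with no model of scene structure.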
Most of these detection methods operate at the pixel level and are based on statistical learning; they attempt to distinguish the moving target from the background on the basis of pixel values alone, treat the background scene too simplistically, and have difficulty handling real video scenes.
A recently popular background-subtraction approach is Robust Principal Component Analysis (RPCA) based on low-rank and sparse decomposition ("Robust principal component analysis: exact recovery of corrupted low-rank matrices by convex optimization"). Applied to moving object detection, the matrix formed by vectorizing each frame of the video sequence is decomposed into a low-rank background part and a sparse foreground part (i.e., the moving targets).
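For context, the low-rank plus sparse split that RPCA computes can be sketched with a generic ADMM loop. This is a standard illustration of Principal Component Pursuit, not the patent's own algorithm; the step size heuristic, iteration count, and toy data are assumptions:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

def soft(M, tau):
    """Entrywise soft thresholding: proximal operator of the L1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def pcp(X, lam=None, mu=None, iters=300):
    """Split X into low-rank B (background) and sparse S (foreground) by
    ADMM on  min ||B||_* + lam*||S||_1  s.t.  X = B + S."""
    m, n = X.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else X.size / (4.0 * np.abs(X).sum() + 1e-12)
    B = np.zeros_like(X); S = np.zeros_like(X); Y = np.zeros_like(X)
    for _ in range(iters):
        B = svt(X - S + Y / mu, 1.0 / mu)   # low-rank (background) update
        S = soft(X - B + Y / mu, lam / mu)  # sparse (foreground) update
        Y = Y + mu * (X - B - S)            # dual ascent on X = B + S
    return B, S

# toy data: rank-1 "background" plus one sparse "moving" entry
rng = np.random.default_rng(0)
L0 = rng.normal(size=(20, 1)) @ rng.normal(size=(1, 20))
S0 = np.zeros((20, 20)); S0[3, 7] = 10.0
X = L0 + S0
B, S = pcp(X)
```

On this toy input the recovered sparse part concentrates at the injected outlier, which is exactly how a moving target shows up as an "exceptional value" in the low-rank representation.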
The pioneering work "Robust Principal Component Analysis" showed that the low-rank model can be exactly recovered from unknown corruption by Principal Component Pursuit (PCP). Its extension, Stable Principal Component Pursuit (SPCP, "Stable principal component pursuit"), handles sparse gross errors together with small noise. However, these methods do not consider structural continuity constraints when modeling the foreground target, which easily leads to "holes" and spurious noise.
To address this, DECOLOR ("Moving object detection by detecting contiguous outliers in the low-rank representation") adds a structural continuity constraint on the foreground target, modeled as a Markov random field; however, this method processes the video sequence in batch and cannot handle arbitrarily long videos.
To solve this problem, COROLA ("COROLA: a sequential solution to moving object detection using low-rank approximation") extends DECOLOR so that it can process video sequences online, and adds a Gaussian mixture model to improve the accuracy of foreground target detection.
Another method ("Background subtraction via superpixel-based online matrix decomposition with structured foreground constraints") performs background subtraction by superpixel-based online matrix decomposition: it regularizes the background matrix with a maximum norm and imposes a structured sparsity constraint on the foreground target. Although this method designs a superpixel-level optimization framework, it remains at the pixel level when constraining the foreground target, and the final target detection still requires a decision for every pixel inside every superpixel.
These methods all assume that the underlying background images are linearly correlated and that the moving target occupies only a small fraction of the image, so the matrix formed by the vectorized video sequence can be approximated by a low-rank matrix, and the moving targets can be detected as outliers in this low-rank representation. Treating moving object detection as anomaly detection avoids many assumptions about foreground behavior, and the low-rank representation of the background can flexibly adapt to global changes in the background.
However, these methods still have shortcomings, mainly: 1. processing at the pixel level incurs large time and memory overhead; 2. the local structural continuity constraint used to improve detection results ignores the global relationship among the pixels or superpixels within a frame image.
Invention content
The technical problem to be solved by the present invention is the large time and memory overhead of existing methods and their neglect of the global relationship among pixels or superpixels; to this end, a moving object detection method based on low-rank decomposition and representation joint learning is provided.
The present invention solves the above technical problem by the following technical solution, which comprises the following steps:
(101) each frame image of the video sequence to be detected is obtained;
(102) each frame image is segmented into superpixels, a feature vector is extracted per superpixel, and the vectors of all frames in the video sequence are merged into a matrix;
(103) based on the assumptions that the background images in the video sequence are linearly correlated with one another and, as a prior, that the moving target is a relatively small contiguous region, and based on a representation model, in which each superpixel in an image can be expressed as a linear combination of the other superpixels in that image and the combination coefficients are the representation coefficients, the representation coefficients are used to describe the global relationship among the superpixels within a frame, and the algorithm model is obtained;
(104) the model is solved to obtain the label of each superpixel in each frame; each superpixel's label is assigned to every pixel inside it, which yields the detection result of each frame image.
In step (101), the video sequence to be detected is obtained first; the background in the video sequence is fixed, i.e., moving object detection under a surveillance scene is realized. If the image width is W and the height is H, each obtained frame image is W*H*3, where 3 represents the pixel values of the image in the three RGB channels.
In step (102), according to the superpixel segmentation result of each frame, the features of the superpixels are extracted per superpixel to form the feature matrix X_t = [x_1, x_2, ..., x_k], where t denotes the t-th frame, k denotes the number of superpixels within a frame, and the feature vector x_i of the i-th superpixel can be the Lab color mean, a histogram, or a combination of low-level features; the feature matrices of all frames in the video sequence are merged into the matrix X = [X_1, X_2, ..., X_n], where n is the total number of frames of the video sequence.
In step (103), since the background images in the video sequence are linearly correlated with one another, a low-rank constraint λ||B||_* is imposed on the background matrix B, where λ is a control parameter that controls the complexity of the background model.
The prior assumption is that the foreground object is a relatively small contiguous region, which yields a sparse-smoothness constraint on the foreground S,
in which the first term imposes a sparsity constraint on S using the L1 norm, expressing that the foreground target occupies only a small fraction of the image;
the second term imposes a smoothness constraint on the superpixels, i.e., the labels of neighboring superpixels should be as similar as possible;
β and η are control parameters: β controls the sparsity of the foreground target and η controls the correlation between two superpixels. s_t^i is the binary label of the i-th superpixel in frame t; it takes 1 if the superpixel is foreground, i.e., a moving target, and 0 otherwise.
The foreground refers to any object whose motion differs from the background; the intensity changes produced by foreground motion cannot be fitted by the low-rank model of the background, so they are detected as outliers in the low-rank representation.
For each frame image, the representation model is F_t = F_t Z_t + E_t (t ∈ {1, 2, ..., n}),
where F_t is the feature matrix of one frame image, Z_t is the representation-coefficient matrix, and E_t is noise. The representation model means that each superpixel in an image can be expressed as a linear combination of the other superpixels in that image, with the combination coefficients being the representation coefficients. Z_t reflects the similarity relationship between each superpixel of the current frame and the other superpixels, and is used to describe the global relationship among the superpixels within a frame; the smoothness constraint is accordingly improved to η Σ_t Σ_{i,j} |Z_t^{ij}| · |s_t^i − s_t^j|.
In the representation model, a low-rank constraint is imposed on Z_t via the nuclear norm and a sparsity constraint on E_t via the L1 norm, giving the constraint term Σ_t(||Z_t||_* + α||E_t||_1), where α is a control parameter that controls the sparsity of the noise.
The algorithm model combines the above terms, with the constraints S̄ ∘ X = S̄ ∘ B and F_t = F_t Z_t + E_t (t ∈ {1, 2, ..., n}), where S̄ is the complement of S: if a superpixel is foreground, namely a moving target, the corresponding entry of S̄ takes 0, and otherwise takes 1.
In the step (104), the label s of each super-pixel of each frame is obtained after solving modeli, the super-pixel Tag representation super-pixel belong to foreground or background, and assign the label of super-pixel to each pixel in the super-pixel, The testing result of each frame is finally determined according to the label of each pixel in each frame.
Compared with the prior art, the present invention has the following advantages: working in units of superpixels, the invention is more efficient and has lower memory overhead than existing methods that perform moving object detection in units of pixels; and by using the global relationship among superpixels obtained from the representation model, it achieves higher detection accuracy than existing methods that use only a local structural continuity constraint.
Description of the drawings
Fig. 1 is the flow diagram of the present invention.
Specific implementation mode
The embodiment of the present invention is described in detail below. This embodiment is implemented on the basis of the technical solution of the present invention, and a detailed implementation and specific operating process are given, but the protection scope of the present invention is not limited to the following embodiment.
As shown in Figure 1, the present embodiment includes the following steps:
Step (101):
The video sequence to be detected is obtained first; the background in the video sequence is fixed, i.e., moving object detection under a surveillance scene is realized. If the image width is W and the height is H, each obtained frame image is W*H*3, where 3 represents the pixel values of the image in the three RGB channels.
Step (102):
Each frame image is segmented into superpixels using the SLIC segmentation algorithm ("SLIC superpixels"). A superpixel is an irregular block of pixels with a certain visual meaning, composed of adjacent pixels with similar texture, color, brightness, and other characteristics. By exploiting the similarity of features between pixels to group them, a small number of superpixels can replace a large number of pixels in expressing image characteristics, which greatly reduces the complexity of subsequent image processing; superpixels are therefore widely used in computer vision.
According to the superpixel segmentation result of each frame, the features of the superpixels are extracted per superpixel to form the feature matrix X_t = [x_1, x_2, ..., x_k], where t denotes the t-th frame, k denotes the number of superpixels within a frame, and the feature vector x_i of the i-th superpixel can be the Lab color mean, a histogram, or a combination of low-level features; the feature matrices of all frames in the video sequence are merged into the matrix X = [X_1, X_2, ..., X_n], where n is the total number of frames of the video sequence.
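As a hedged illustration of building X = [X_1, ..., X_n], the sketch below replaces SLIC with a fixed grid of blocks standing in for superpixels and uses each block's mean color as the feature x_i; the grid partition, the feature choice, and all function names are simplifying assumptions, not the patent's implementation:

```python
import numpy as np

def grid_superpixel_features(frame, grid=4):
    """Stand-in for SLIC: split an H x W x 3 frame into grid x grid blocks
    ("superpixels") and use each block's mean color as its feature x_i.
    Returns a d x k matrix X_t (here d = 3, k = grid*grid), matching the
    X_t = [x_1, ..., x_k] layout described in the text."""
    H, W, _ = frame.shape
    feats = []
    for by in range(grid):
        for bx in range(grid):
            block = frame[by * H // grid:(by + 1) * H // grid,
                          bx * W // grid:(bx + 1) * W // grid]
            feats.append(block.reshape(-1, 3).mean(axis=0))
    return np.stack(feats, axis=1)

def video_feature_matrix(frames, grid=4):
    """Concatenate the per-frame matrices into X = [X_1, ..., X_n]."""
    return np.concatenate(
        [grid_superpixel_features(f, grid) for f in frames], axis=1)

# three constant 8x8 frames with values 0, 1, 2
frames = [np.full((8, 8, 3), t, dtype=np.float64) for t in range(3)]
X = video_feature_matrix(frames, grid=2)
```

With grid=2 each frame contributes k = 4 columns, so three frames give a 3 x 12 matrix.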
Step (103):
Under a surveillance scene the background of the video sequence is fixed; apart from dynamic texture changes caused by illumination variation or periodic motion, the background should remain unchanged throughout the video sequence. The background images are therefore linearly correlated with one another and form a low-rank matrix B. No additional assumption is made about the background scene beyond the low-rank property, so a low-rank constraint λ||B||_* is added on the background matrix B, where λ is a control parameter that controls the complexity of the background model;
the foreground refers to any object whose motion differs from the background; the intensity changes produced by foreground motion cannot be fitted by the low-rank model of the background, so they can be detected as outliers in the low-rank representation.
In general, the prior assumption is that the foreground object is a relatively small contiguous region. This yields a sparse-smoothness constraint on the foreground S, in which the first term imposes a sparsity constraint on S using the L1 norm, expressing that the foreground target occupies only a small fraction of the image, and the second term imposes a smoothness constraint on the superpixels, i.e., the labels of neighboring superpixels should be as similar as possible. β and η are control parameters: β controls the sparsity of the foreground target and η controls the correlation between two superpixels. s_t^i is the binary label of the i-th superpixel in frame t; it takes 1 if the superpixel is foreground, i.e., a moving target, and 0 otherwise;
since the above smoothness constraint only considers local structural continuity and ignores the global relationship among the superpixels within a frame image, a representation model is added to explore the structural constraint of the foreground target automatically through a low-rank representation. For each frame image the representation model is F_t = F_t Z_t + E_t (t ∈ {1, 2, ..., n}), where F_t is the feature matrix of one frame image, Z_t is the representation-coefficient matrix, and E_t is noise. The representation model means that each superpixel in an image can be expressed as a linear combination of the other superpixels in that image, with the combination coefficients being the representation coefficients. Z_t reflects the similarity relationship between each superpixel of the current frame and the other superpixels and describes the global relationship among the superpixels within a frame; the smoothness constraint is accordingly improved to η Σ_t Σ_{i,j} |Z_t^{ij}| · |s_t^i − s_t^j|.
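A minimal sketch of the self-representation idea F_t = F_t Z_t + E_t and the representation-weighted smoothness term. Here Z_t is obtained by plain ridge-regularized least squares rather than the patent's low-rank plus sparse formulation, so the function names and the ridge parameter are illustrative assumptions:

```python
import numpy as np

def representation_coefficients(F, ridge=1e-3):
    """Least-squares self-representation F ≈ F @ Z, a simplified stand-in
    for the patent's low-rank Z_t with sparse noise E_t.  The small ridge
    keeps the normal equations well posed; discouraging the trivial
    identity solution is omitted in this sketch."""
    k = F.shape[1]
    G = F.T @ F
    return np.linalg.solve(G + ridge * np.eye(k), G)

def weighted_smoothness(Z, s, eta=1.0):
    """Representation-weighted smoothness  eta * sum_ij |Z_ij| * |s_i - s_j|:
    superpixels that strongly represent each other are pushed toward
    carrying the same binary label."""
    s = np.asarray(s, dtype=float)
    return eta * np.sum(np.abs(Z) * np.abs(s[:, None] - s[None, :]))

# toy frame with 3 "superpixels": columns 0 and 1 are identical features
F = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
Z = representation_coefficients(F)
consistent = weighted_smoothness(Z, [1, 1, 0])   # 0 and 1 share a label
inconsistent = weighted_smoothness(Z, [1, 0, 1]) # 0 and 1 disagree
```

Because columns 0 and 1 represent each other strongly, labelings that split them pay a larger smoothness penalty, which is precisely the global relationship the term encodes.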
In the representation model, a low-rank constraint is imposed on Z_t via the nuclear norm and a sparsity constraint on E_t via the L1 norm, giving the constraint term Σ_t(||Z_t||_* + α||E_t||_1), where α is a control parameter that controls the sparsity of the noise.
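The penalty Σ_t(||Z_t||_* + α||E_t||_1) can be evaluated directly; a small helper (the value of α is illustrative):

```python
import numpy as np

def representation_penalty(Zs, Es, alpha=0.1):
    """sum_t ( ||Z_t||_* + alpha * ||E_t||_1 ): the nuclear norm pushes each
    coefficient matrix Z_t toward low rank, the L1 term pushes each noise
    matrix E_t toward sparsity, and alpha trades the two off."""
    total = 0.0
    for Z, E in zip(Zs, Es):
        total += np.linalg.svd(Z, compute_uv=False).sum()  # nuclear norm
        total += alpha * np.abs(E).sum()                   # L1 norm
    return total

Z = np.eye(2)                            # ||Z||_* = 2
E = np.array([[0.0, 3.0], [0.0, 0.0]])  # ||E||_1 = 3
val = representation_penalty([Z], [E], alpha=0.1)
```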
Combining the above terms jointly gives the final algorithm model, whose constraints are S̄ ∘ X = S̄ ∘ B and F_t = F_t Z_t + E_t (t ∈ {1, 2, ..., n}), where S̄ is the complement of S: if a superpixel is foreground, namely a moving target, the corresponding entry of S̄ takes 0, and otherwise takes 1.
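Collecting the terms named in the description, the joint objective plausibly has the following shape. This is a reconstruction from the surrounding text (the original formula images are not reproduced in this dump), so the exact operator placement may differ from the granted claims:

```latex
\min_{B,\;S,\;\{Z_t\},\;\{E_t\}}\;
\lambda \lVert B \rVert_{*}
+ \beta \sum_{t=1}^{n} \lVert s_t \rVert_{1}
+ \eta \sum_{t=1}^{n} \sum_{i,j} \lvert Z_t^{ij} \rvert \,
        \lvert s_t^{i} - s_t^{j} \rvert
+ \sum_{t=1}^{n} \bigl( \lVert Z_t \rVert_{*}
        + \alpha \lVert E_t \rVert_{1} \bigr)
\quad \text{s.t.} \quad
\bar{S} \circ X = \bar{S} \circ B, \qquad
F_t = F_t Z_t + E_t, \; t \in \{1, \dots, n\}.
```

Each term maps one-to-one onto the constraints introduced above: the nuclear norm on B for the low-rank background, the L1 and weighted-smoothness terms on the foreground labels, and the nuclear-norm plus L1 penalty on the representation model.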
Step (104):
After the model is solved, the label s_i of each superpixel of each frame is obtained; this label indicates whether the superpixel belongs to the foreground (namely a moving target) or to the background, and the superpixel's label is assigned to every pixel inside it. For example: if the label s_1 of the first superpixel is 0, the label of all pixels in the first superpixel is 0, i.e., they belong to the background; if the label s_2 of the second superpixel is 1, the label of all pixels in the second superpixel is 1, i.e., they belong to the foreground (moving target). Finally, the detection result of each frame is determined according to the label of every pixel in the frame.
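The label-propagation step from superpixels to pixels is a single indexing operation; a sketch (the segmentation map and label values are toy data, and the function name is an assumption):

```python
import numpy as np

def labels_to_mask(superpixel_map, superpixel_labels):
    """Assign each pixel the binary label of its superpixel:
    superpixel_map is an H x W array of superpixel indices, and
    superpixel_labels[i] is 0 (background) or 1 (foreground, i.e. moving
    target) for superpixel i.  Returns the per-pixel detection mask."""
    labels = np.asarray(superpixel_labels)
    return labels[superpixel_map]

seg = np.array([[0, 0, 1],
                [0, 2, 1]])
mask = labels_to_mask(seg, [0, 1, 0])  # only superpixel 1 is foreground
```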
The above is only a preferred embodiment of the present invention and is not intended to limit the invention; any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.

Claims (10)

1. A moving object detection method based on low-rank decomposition and representation joint learning, characterized by comprising the following steps:
(101) obtaining each frame image of the video sequence to be detected;
(102) segmenting each frame image into superpixels, extracting a feature vector per superpixel, and merging the vectors of all frames in the video sequence into a matrix;
(103) based on the assumptions that the background images in the video sequence are linearly correlated with one another and, as a prior, that the moving target is a relatively small contiguous region, and based on a representation model, in which each superpixel in an image can be expressed as a linear combination of the other superpixels in that image and the combination coefficients are the representation coefficients, using the representation coefficients to describe the global relationship among the superpixels within a frame, thereby obtaining the algorithm model;
(104) solving the model to obtain the label of each superpixel in each frame, and assigning each superpixel's label to every pixel inside it, thereby obtaining the detection result of each frame image.
2. The moving object detection method based on low-rank decomposition and representation joint learning according to claim 1, characterized in that in step (101), the video sequence to be detected is obtained first; the background in the video sequence is fixed, i.e., moving object detection under a surveillance scene is realized; if the image width is W and the height is H, each obtained frame image is W*H*3, where 3 represents the pixel values of the image in the three RGB channels.
3. The moving object detection method based on low-rank decomposition and representation joint learning according to claim 1, characterized in that in step (102), according to the superpixel segmentation result of each frame, the features of the superpixels are extracted per superpixel to form the feature matrix X_t = [x_1, x_2, ..., x_k], where t denotes the t-th frame, k denotes the number of superpixels within a frame, and the feature vector x_i of the i-th superpixel is the Lab color mean, a histogram, or a combination of low-level features; the feature matrices of all frames in the video sequence are merged into the matrix X = [X_1, X_2, ..., X_n], where n is the total number of frames of the video sequence.
4. The moving object detection method based on low-rank decomposition and representation joint learning according to claim 1, characterized in that in step (103), since the background images in the video sequence are linearly correlated with one another, a low-rank constraint λ||B||_* is imposed on the background matrix B, where λ is a control parameter that controls the complexity of the background model.
5. The moving object detection method based on low-rank decomposition and representation joint learning according to claim 4, characterized in that the prior assumption is that the foreground object is a relatively small contiguous region, which yields a sparse-smoothness constraint on the foreground S, in which the first term imposes a sparsity constraint on S using the L1 norm, expressing that the foreground target occupies only a small fraction of the image, and the second term imposes a smoothness constraint on the superpixels, i.e., the labels of neighboring superpixels should be as similar as possible; β and η are control parameters, β controls the sparsity of the foreground target, η controls the correlation between two superpixels, and s_t^i is the binary label of the i-th superpixel in frame t, taking 1 if the superpixel is foreground, i.e., a moving target, and 0 otherwise.
6. The moving object detection method based on low-rank decomposition and representation joint learning according to claim 5, characterized in that the foreground refers to any object whose motion differs from the background; the intensity changes produced by foreground motion cannot be fitted by the low-rank model of the background, so they are detected as outliers in the low-rank representation.
7. The moving object detection method based on low-rank decomposition and representation joint learning according to claim 6, characterized in that for each frame image the representation model is F_t = F_t Z_t + E_t (t ∈ {1, 2, ..., n}), where F_t is the feature matrix of one frame image, Z_t is the representation-coefficient matrix, and E_t is noise; the representation model means that each superpixel in an image can be expressed as a linear combination of the other superpixels in that image, with the combination coefficients being the representation coefficients; Z_t reflects the similarity relationship between each superpixel of the current frame and the other superpixels and describes the global relationship among the superpixels within a frame, and the smoothness constraint is accordingly improved to η Σ_t Σ_{i,j} |Z_t^{ij}| · |s_t^i − s_t^j|.
8. The moving object detection method based on low-rank decomposition and representation joint learning according to claim 7, characterized in that in the representation model, a low-rank constraint is imposed on Z_t via the nuclear norm and a sparsity constraint on E_t via the L1 norm, giving the constraint term Σ_t(||Z_t||_* + α||E_t||_1), where α is a control parameter that controls the sparsity of the noise.
9. The moving object detection method based on low-rank decomposition and representation joint learning according to claim 8, characterized in that the algorithm model combines the above terms, with the constraints S̄ ∘ X = S̄ ∘ B and F_t = F_t Z_t + E_t (t ∈ {1, 2, ..., n}), where S̄ is the complement of S: if a superpixel is foreground, namely a moving target, the corresponding entry of S̄ takes 0, and otherwise takes 1.
10. The moving object detection method based on low-rank decomposition and representation joint learning according to claim 9, characterized in that in step (104), after the model is solved, the label s_i of each superpixel of each frame is obtained; the superpixel's label indicates whether the superpixel belongs to the foreground or the background, and the label of the superpixel is assigned to every pixel inside it; finally, the detection result of each frame is determined according to the label of every pixel in the frame.
CN201810550978.3A 2018-05-31 2018-05-31 Moving target detection method based on low-rank decomposition and representation joint learning Active CN108764177B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810550978.3A CN108764177B (en) 2018-05-31 2018-05-31 Moving target detection method based on low-rank decomposition and representation joint learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810550978.3A CN108764177B (en) 2018-05-31 2018-05-31 Moving target detection method based on low-rank decomposition and representation joint learning

Publications (2)

Publication Number Publication Date
CN108764177A true CN108764177A (en) 2018-11-06
CN108764177B CN108764177B (en) 2021-08-27

Family

ID=64001212

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810550978.3A Active CN108764177B (en) 2018-05-31 2018-05-31 Moving target detection method based on low-rank decomposition and representation joint learning

Country Status (1)

Country Link
CN (1) CN108764177B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103810474A (en) * 2014-02-14 2014-05-21 西安电子科技大学 Car plate detection method based on multiple feature and low rank matrix representation
CN105868784A (en) * 2016-03-29 2016-08-17 安徽大学 Disease and insect pest detection system based on SAE-SVM
CN103700091B (en) * 2013-12-01 2016-08-31 北京航空航天大学 Based on the image significance object detection method that multiple dimensioned low-rank decomposition and structural information are sensitive
US20170116481A1 (en) * 2015-10-23 2017-04-27 Beihang University Method for video matting via sparse and low-rank representation
CN107358245A (en) * 2017-07-19 2017-11-17 安徽大学 A kind of detection method of image collaboration marking area


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
QIANG ZHANG et al.: "Salient object detection based on super-pixel clustering and unified low-rank representation", Computer Vision and Image Understanding *
XIAOWEI ZHOU et al.: "Moving Object Detection by Detecting Contiguous Outliers in the Low-Rank Representation", arXiv:1109.0882v2 *
LIU Ya et al.: "Mitosis detection in breast cancer pathology images based on low-rank representation", Application Research of Computers *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110610508A (en) * 2019-08-20 2019-12-24 全球能源互联网研究院有限公司 Static video analysis method and system
CN110610508B (en) * 2019-08-20 2021-11-09 全球能源互联网研究院有限公司 Static video analysis method and system
CN112561949A (en) * 2020-12-23 2021-03-26 江苏信息职业技术学院 Fast moving target detection algorithm based on RPCA and support vector machine
CN112561949B (en) * 2020-12-23 2023-08-22 江苏信息职业技术学院 Rapid moving object detection algorithm based on RPCA and support vector machine
CN113658227A (en) * 2021-08-26 2021-11-16 安徽大学 RGBT target tracking method and system based on collaborative low-rank graph model
CN113658227B (en) * 2021-08-26 2024-02-20 安徽大学 RGBT target tracking method and system based on collaborative low-rank graph model

Also Published As

Publication number Publication date
CN108764177B (en) 2021-08-27


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant