CN106503647A - Abnormal event detection method based on low-rank approximation structured sparse representation - Google Patents
Abnormal event detection method based on low-rank approximation structured sparse representation
- Publication number
- CN106503647A CN106503647A CN201610915766.1A CN201610915766A CN106503647A CN 106503647 A CN106503647 A CN 106503647A CN 201610915766 A CN201610915766 A CN 201610915766A CN 106503647 A CN106503647 A CN 106503647A
- Authority
- CN
- China
- Prior art keywords
- feature
- dictionary
- training
- test
- features
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
- G06V20/42—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/513—Sparse representations
Landscapes
- Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an abnormal event detection method based on low-rank approximation structured sparse representation, comprising three processes: feature extraction, training and testing. 1) Multi-scale three-dimensional gradient features of a video sequence are extracted. 2) The multi-scale three-dimensional gradient features are reduced in dimension to form a training feature set and a test feature set. 3) The remaining training features and related parameters are initialized. 4) Group sparse dictionaries are learned iteratively from the remaining training features to obtain a normal pattern dictionary set. 5) The group sparse dictionary set obtained in the training process is used to sparsely reconstruct the test features. 6) Whether a test feature is abnormal is judged according to its reconstruction error. The invention overcomes the shortcomings of existing anomaly detection techniques that the low-rank characteristic of video data is not fully exploited and that the detection rate is slow.
Description
Technical Field
The invention relates to the field of pattern recognition and video analysis, in particular to an abnormal event detection method based on low-rank approximation structured sparse representation.
Background
Anomalous event detection in video sequences is an active research topic in computer vision and is widely used in applications such as crowd monitoring, public-place surveillance, traffic safety and the detection of abnormal personal behavior. Faced with massive amounts of video data, the traditional approach of manually labeling abnormal events is time-consuming and inefficient. An automated and fast method for detecting anomalies in video sequences is therefore urgently needed.
Although research on abnormal event detection has made great progress in feature extraction, behavior modeling and anomaly measurement, detecting abnormal events in video sequences remains a very challenging task. First, there is no precise definition of an abnormal event in video. One common approach identifies abnormal behavior by clustering abnormal behavior patterns; the other treats samples with a low occurrence rate as anomalies. The difficulty with the first approach is that there is insufficient prior knowledge to describe abnormal behavior patterns; the second approach requires building a probabilistic model, and its anomaly detection depends on the definition of normal patterns and on multi-scale changes of the features. Second, anomaly detection in dense scenes requires a behavior model that can handle a high density of moving objects, which in turn requires accounting for occlusion and interaction among multiple objects.
From the viewpoint of feature extraction, abnormal event detection methods can be divided into methods based on target trajectories and methods based on low-level features. Trajectory-based methods first track moving targets and then detect abnormal events from the target trajectories. They can clearly express the spatial state of a target at each moment, but they are sensitive to noise, occlusion and tracking errors, and they cannot perform anomaly detection in dense scenes. Methods based on low-level features can overcome these drawbacks by extracting pixel-level motion and morphological features from the video sequence.
Currently, the mainstream methods for abnormal event detection include Dynamic Bayesian Networks (DBNs), Probabilistic Topic Models (PTMs) and sparse representation models. In DBNs such as Hidden Markov Models (HMMs) and Markov Random Fields (MRFs), the modeling cost grows geometrically with the number of detection targets, so these models cannot adequately handle dense scenes. In contrast, PTMs such as PLSA and LDA focus only on spatially co-occurring visual words and ignore the temporal information of features, so probabilistic topic models cannot localize anomalous events in space and time. In recent years, sparse representation models for anomaly detection have attracted attention. Most sparse representation models are trained to obtain an over-complete dictionary, but they do not fully exploit the low-rank characteristic and the inherent structural redundancy of video data.
Disclosure of Invention
The invention aims to provide an abnormal event detection method based on low-rank approximation structured sparse representation, addressing the defects that existing abnormal event detection techniques do not fully exploit the low-rank characteristic of video data and that their detection rate is slow.
The technical solution for realizing the purpose of the invention is as follows: an abnormal event detection method based on low-rank approximation structured sparse representation comprises three processes of feature extraction, training and testing:
the feature extraction process comprises the following steps:
1) extracting multi-scale three-dimensional gradient features of a video sequence;
2) reducing the dimension of the multi-scale three-dimensional gradient features to form a training feature set and a test feature set.
The training process comprises the following steps:
3) initializing residual training characteristics and related parameters;
4) performing iterative learning on the residual training features to form group sparse dictionaries, and obtaining a normal pattern dictionary set.
The test process comprises the following steps:
5) carrying out sparse reconstruction on the test features by utilizing a group sparse dictionary set obtained in the training process;
6) judging whether a test feature is an abnormal feature according to the reconstruction error.
In the above method, the step 1) comprises the following specific steps:
1.1) carrying out scaling of different scales on each frame of image of a video sequence to form a three-layer image pyramid.
1.2) carrying out space-time cube sampling on each layer of image, and extracting three-dimensional gradient characteristics of non-overlapped regions in space.
1.3) for each layer of video sequence, superposing three-dimensional gradient features of 5 continuous frames in the same spatial region together to form a space-time feature.
In the above method, the step 2) comprises the following specific steps:
2.1) reducing the dimension of each extracted space-time characteristic by utilizing Principal Component Analysis (PCA).
2.2) converting the training video sequence and the test video sequence into a training characteristic set and a test characteristic set by using the method.
In the above method, the step 3) includes the following specific steps:
3.1) initializing the training feature set obtained in the step 2.2) into a residual training feature set;
3.2) initializing a regularization parameter, an error threshold, iteration times and a normal mode dictionary set;
in the above method, the step 4) includes the following specific steps:
4.1) if the residual feature set is empty, ending the training process; and if the residual feature set is not empty, determining the clustering number, and carrying out K-means clustering on the residual feature set.
4.2) performing dictionary learning on each feature cluster respectively to obtain a group sparse dictionary.
4.3) choosing a suitable dictionary to represent the remaining features: if the dictionary can represent some of the remaining features, adding it to the normal pattern dictionary set and removing the features it represents from the remaining training feature set; if the dictionary cannot represent any of the remaining features, discarding it.
4.4) adding 1 to the iteration count and returning to step 4.1);
in the above method, the step 5) includes the following specific steps:
5.1) traversing a normal mode dictionary set obtained in the training process for each test feature, and estimating a sparse coefficient corresponding to the dictionary;
5.2) calculating the reconstruction error of the test feature with respect to a specific group sparse dictionary according to that dictionary and its corresponding sparse coefficients;
in the above method, the step 6) includes the following specific steps:
6.1) for a certain test feature, if a dictionary with a corresponding reconstruction error smaller than a reconstruction threshold is found, judging that the test feature is a normal feature;
6.2) for a certain test feature, if the reconstruction errors for all dictionaries in the dictionary set are larger than the reconstruction threshold, judging that the test feature is an abnormal feature.
Compared with the prior art, the invention has the following remarkable advantages: first, because each video feature cluster is low-rank, the method can learn the group sparse dictionaries of normal behavior patterns with an efficient low-rank solver such as the singular value thresholding (SVT) algorithm; second, the method adaptively determines the number of dictionary bases for each normal behavior pattern and can therefore understand dynamic scene semantics more accurately; third, unlike traditional sparse representation methods, the method selects a suitable dictionary from the dictionary set to represent each test sample, so the sparse reconstruction of video events is more accurate, the detection speed is significantly improved, and real-time performance is ensured.
Drawings
FIG. 1 is an overview of the adaptive anomaly detection method.
FIG. 2 is a flow chart of three-dimensional spatio-temporal gradient feature extraction.
Fig. 3 is a multi-scale video frame.
FIG. 4 is a three-dimensional gradient feature.
FIG. 5 is a spatio-temporal cube with temporally overlapping sampling.
FIG. 6 is a flow chart of group sparse dictionary learning.
FIG. 7 is a flow chart of abnormal event detection.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
The abnormal event detection method of the invention comprises three main processes of a characteristic extraction process, a training process and a testing process, as shown in figure 1.
The feature extraction process is shown in fig. 2, and comprises the following specific steps:
the video sequence image frames are converted into a three-dimensional image pyramid process 21. Converting each frame of image of the video sequence into a gray scale image, and scaling each frame of gray scale image into three different scales: 20 x 20,30 x 40 and 120 x 160, forming a three-level image pyramid. For each scale of the image pyramid, each frame is divided into non-overlapping regions of the same spatial size (10 × 10), as shown in fig. 3.
Extracting the three-dimensional gradient features of the video sequence (process 22). To capture both the morphological characteristics and the motion characteristics of targets in a dense scene, the three-dimensional spatio-temporal gradient is adopted as the extracted feature, as shown in fig. 4. Let x, y and t denote the horizontal, vertical and temporal directions of the video sequence, respectively; G is the three-dimensional gradient of a pixel, and its projections onto the three directions are G_x, G_y and G_t, respectively.
Forming spatio-temporal features by spatio-temporal cube sampling (process 23). The same spatial region over 5 consecutive frames constitutes a spatio-temporal cube; each spatio-temporal cube consists of all pixels in a cube of size 10 × 10 × 5, where l_x × l_y = 10 × 10 and l_t = 5 denote the spatial size and the temporal length of the spatio-temporal cube, respectively. The three-dimensional gradients of all pixels in a spatio-temporal cube constitute a three-dimensional spatio-temporal gradient feature. To preserve the timing information of scene events, spatio-temporal cubes that overlap in time are sampled on each spatial region, as shown in fig. 5. Assume that the center of a spatio-temporal cube is (S_x, S_y, S_t); then its temporally adjacent spatio-temporal cubes are centered at (S_x, S_y, S_t − l_t/2) and (S_x, S_y, S_t + l_t/2).
Forming the training feature set and the test feature set by dimensionality reduction (process 24). Because the dimension of the three-dimensional gradient feature (10 × 10 × 5 × 3 = 1500) is too high, the computational cost of the training and testing processes would be too large and the real-time performance of detection would suffer; the spatio-temporal features are therefore reduced to 100 dimensions using PCA. Finally, a training feature set and a test feature set are obtained.
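The sketch below illustrates processes 22–24 for a single pyramid layer: three-dimensional gradients, temporally overlapping spatio-temporal cubes over non-overlapping 10 × 10 regions, and PCA reduction to 100 dimensions. It is an assumption-laden illustration: np.gradient stands in for the gradient operator, the temporal stride is taken as l_t/2 rounded down, scikit-learn's PCA is used, and the random arrays merely stand in for real training and test videos.

```python
# Hypothetical sketch of processes 22-24 for one pyramid layer.
import numpy as np
from sklearn.decomposition import PCA

REGION, LT = 10, 5          # spatial region size and temporal cube length

def cube_features(volume):
    """volume: (T, H, W) gray-scale video. Returns 1500-D spatio-temporal gradient features."""
    gt, gy, gx = np.gradient(volume.astype(np.float64))   # gradients along t, y, x
    T, H, W = volume.shape
    feats = []
    for t in range(0, T - LT + 1, LT // 2):                # temporally overlapping cubes
        for y in range(0, H - REGION + 1, REGION):         # spatially non-overlapping regions
            for x in range(0, W - REGION + 1, REGION):
                sl = (slice(t, t + LT), slice(y, y + REGION), slice(x, x + REGION))
                feats.append(np.concatenate([gx[sl].ravel(), gy[sl].ravel(), gt[sl].ravel()]))
    return np.asarray(feats)                               # each row has 10*10*5*3 = 1500 entries

# PCA is fitted on the training features only, then applied to both sets.
train_volume = np.random.rand(50, 120, 160)                # placeholder for a normal-event video
test_volume = np.random.rand(50, 120, 160)                 # placeholder for a test video
pca = PCA(n_components=100).fit(cube_features(train_volume))
training_set = pca.transform(cube_features(train_volume))
test_set = pca.transform(cube_features(test_volume))
```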
It should be noted that the training video sequence only contains normal events; the test video sequence contains both normal and abnormal events.
The training process refers to a process of learning a normal behavior pattern through a known normal behavior sample set. The method comprises the following specific steps:
the method adopts an iteration mode to train all the features on each spatial position respectively to obtain a group sparse dictionary set on each spatial position for representing all the training features on the spatial position. And for all the features on each spatial position, obtaining similar feature clusters by using the K mean value, and then performing dictionary learning on each similar feature cluster by using a low-rank approximation algorithm. Each iteration removes features that can be represented by a dictionary, and then continues dictionary learning for the next iteration on the remaining features until all features can be trained into a dictionary representation.
Initializing the training parameters (process 61). The remaining training feature set is initialized to the training feature set obtained from the feature extraction process, i.e., X = [x_1, x_2, ..., x_n] ∈ R^{m×n}, where n is the number of features and m is the dimension of each feature; the regularization parameter τ = 0.015, the error threshold T = 0.01, the iteration number j = 1, and the normal pattern dictionary set D is initialized to the empty set.
Clustering the remaining training feature set with K-means (process 62). The number of clusters is chosen adaptively according to the number of remaining training features, and K-means clustering is then performed on the remaining feature set. Suppose the features of the c-th cluster in the j-th iteration are denoted X_j^c, where j = 1, 2, ..., N; n_j^c denotes the number of features in the c-th cluster, and the function f(·) maps a feature's index within the cluster to its index in the initial feature set.
Learning the group sparse dictionaries with a low-rank approximation algorithm (process 63), as shown in fig. 6. Unlike traditional sparse representation models, the method exploits the low-rank structure of the video feature clusters and uses a low-rank approximation algorithm to learn group sparse dictionaries whose dictionary bases are highly correlated, so that redundant information in the video data is discarded. The objective function for group sparse dictionary learning is formula (1), a weighted low-rank approximation of the feature cluster, in which X_j^c = UΛV^T is the singular value decomposition of the cluster, τ is a regularization parameter, and ω_i is the weight of the i-th singular value λ_i. The weighted low-rank optimization problem is solved with a singular value thresholding algorithm; its closed-form solution is formula (2), in which r is the rank estimated by the soft-threshold operation. In the soft-threshold operation on the singular values, each singular value smaller than τ·ω_i is set to 0, and each singular value larger than τ·ω_i keeps its original value.
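Since formulas (1) and (2) are not reproduced above, the following sketch only mirrors the verbal description of the singular value thresholding step: singular values below τ·ω_i are zeroed and the others are kept unchanged. The weight choice ω_i = 1/(λ_i + ε) and taking the retained left singular vectors as the dictionary are assumptions made for illustration, not the patent's exact formulation.

```python
# Hypothetical sketch of the weighted singular value thresholding in process 63.
import numpy as np

def learn_group_dictionary(X_c, tau=0.015, eps=1e-8):
    """X_c: (m, n_c) feature cluster with features as columns.
    Returns the dictionary bases and the retained singular values (or None, None)."""
    U, lam, _ = np.linalg.svd(X_c, full_matrices=False)
    omega = 1.0 / (lam + eps)          # assumed weights of the singular values
    keep = lam > tau * omega           # singular values <= tau*omega_i are set to 0
    r = int(keep.sum())                # rank estimated by the thresholding
    if r == 0:
        return None, None
    return U[:, :r], lam[:r]           # highly correlated dictionary bases + their singular values
```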
Selecting the group sparse dictionaries (process 64). Once the candidate dictionaries of the current iteration are determined, a suitable dictionary is chosen for each remaining feature according to the objective function in formula (3), where β denotes the sparse coefficient vector of the feature with respect to a dictionary, γ_{i,j} indicates whether the i-th feature can be represented by a given dictionary, the constraint terms Σ γ_{i,j} = 1 and γ_{i,j} ∈ {0, 1} ensure that only one dictionary is selected for the representation, D_j^c denotes the dictionary learned for feature cluster X_j^c, and T denotes the error threshold; the closed-form solution for γ is formula (4). If a dictionary can represent some of the remaining features, the dictionary is added to the dictionary set D, the singular value information corresponding to the dictionary is retained, the iteration number is updated to j = j + 1, and the features whose sparse coefficients select this dictionary are removed from the remaining feature set X_j; if a dictionary cannot represent any of the remaining features, the dictionary is discarded.
The processes 63 and 64 are iterated until the remaining feature set is an empty set.
Finally, a group sparse dictionary set of normal features is obtained, in which each dictionary represents one normal behavior pattern. Note that since the video sequences corresponding to the training feature set contain only normal events, the learned dictionaries all represent normal behavior patterns.
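Putting processes 62–64 together, the following sketch shows one possible form of the iterative training loop. The adaptive cluster count (roughly one cluster per 200 remaining features), the least-squares test against the error threshold T, the singular-value weights, and the use of thresholded left singular vectors as dictionary bases are all illustrative assumptions, since formulas (1)–(4) are not reproduced above.

```python
# Hypothetical sketch of the iterative group sparse dictionary training loop.
import numpy as np
from sklearn.cluster import KMeans

def svt_dictionary(X_c, tau=0.015, eps=1e-8):
    """Weighted singular value thresholding of one feature cluster (assumed weights 1/lambda_i)."""
    U, lam, _ = np.linalg.svd(X_c, full_matrices=False)
    keep = lam > tau * (1.0 / (lam + eps))
    return U[:, :int(keep.sum())] if keep.any() else None

def train_dictionary_set(X, tau=0.015, T=0.01, max_iter=20):
    """X: (m, n) training features as columns. Returns the normal pattern dictionary set."""
    D_set, remaining = [], X.copy()
    for _ in range(max_iter):
        if remaining.shape[1] == 0:                          # all features represented: stop
            break
        n_clusters = max(1, remaining.shape[1] // 200)       # adaptive cluster count (assumed heuristic)
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(remaining.T)
        represented = np.zeros(remaining.shape[1], dtype=bool)
        for c in range(n_clusters):
            D = svt_dictionary(remaining[:, labels == c], tau)
            if D is None:
                continue
            beta = np.linalg.lstsq(D, remaining, rcond=None)[0]      # least-squares coefficients
            err = np.linalg.norm(remaining - D @ beta, axis=0) ** 2  # per-feature reconstruction error
            covered = err < T
            if covered.any():
                D_set.append(D)                              # dictionary represents some features: keep it
                represented |= covered
            # otherwise the dictionary represents nothing and is discarded
        if not represented.any():
            break
        remaining = remaining[:, ~represented]               # continue on the leftover features
    return D_set
```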
As shown in fig. 7, the testing process detects whether a test sample is abnormal using the normal pattern dictionary set learned in the training process. The specific steps are as follows:
Initializing the test parameters (process 71). The normal behavior pattern dictionary set is initialized to the dictionary set obtained from the training process, i.e., D = [D_1, D_2, ..., D_N], and the reconstruction error threshold is set.
Calculating the reconstruction error of a test feature (process 72). For any test feature x and each dictionary D_k, the reconstruction error is computed as follows: the coefficient vector β_k is estimated by solving formula (5), its optimal solution is given by formula (6), and the reconstruction error of x with respect to D_k is then given by formula (7).
a determination is made as to whether the test feature is an abnormal feature process 73. The test sample is represented by searching a proper dictionary, and whether the test sample is an abnormal event is judged by utilizing the reconstruction error. If the reconstruction error is less than the reconstruction error threshold, it indicates that the test feature x can be stored in the dictionary DkIndicating that the test feature x belongs to a normal event; otherwise, the test feature x cannot be used by dictionary DkAnd (4) showing. If all dictionaries in the normal pattern dictionary set cannot represent the test feature x, then the test feature x belongs to an exception event.
It should be pointed out that, compared with current state-of-the-art algorithms, the iterative low-rank approximation method of the invention achieves at least a 2% improvement in detection accuracy. By searching the normal pattern dictionary set, the detection speed can be more than 20 times that of traditional anomaly detection methods. In addition, compared with traditional sparse representation methods, the singular value thresholding algorithm reduces training time by a factor of at least 10.
Claims (6)
1. An abnormal event detection method based on low-rank approximation structured sparse representation is characterized by comprising the following steps of: the method comprises three processes of feature extraction, training and testing:
the feature extraction process comprises the following steps:
1) extracting multi-scale three-dimensional gradient features of a video sequence;
2) reducing the dimension of the multi-scale three-dimensional gradient feature to form a training feature set and a testing feature set;
the training process comprises the following steps:
3) initializing residual training characteristics and related parameters;
4) performing iterative learning on the residual training features to form group sparse dictionaries, and obtaining a normal pattern dictionary set;
the test process comprises the following steps:
5) carrying out sparse reconstruction on the test features by utilizing a group sparse dictionary set obtained in the training process;
6) judging whether a test feature is an abnormal feature according to the reconstruction error.
2. The method for detecting the abnormal events based on the low-rank approximation structured sparse representation as claimed in claim 1, wherein: the step 1) comprises the following specific steps:
1.1) zooming each frame image of the video sequence in different scales to form a three-layer image pyramid;
1.2) performing space-time cube sampling on each layer of image, and extracting three-dimensional gradient features of non-overlapped regions in space;
1.3) for each layer of video sequence, superposing three-dimensional gradient features of 5 continuous frames in the same spatial region together to form a space-time feature.
3. The abnormal event detection method based on the low-rank approximation structured sparse representation as claimed in claim 1 or 2, wherein: the step 2) comprises the following specific steps:
2.1) carrying out dimensionality reduction on each extracted space-time feature by using Principal Component Analysis (PCA);
2.2) converting the training video sequence and the test video sequence into a training characteristic set and a test characteristic set by using the method.
4. The method for detecting abnormal events based on the low-rank approximation structured sparse representation according to claim 1 or 3, wherein: the step 3) comprises the following specific steps:
3.1) initializing the training feature set obtained in the step 2.2) into a residual training feature set;
3.2) initializing a regularization parameter, an error threshold, iteration times and a normal pattern dictionary set.
5. The method for detecting abnormal events based on the low-rank approximation structured sparse representation as claimed in claim 1, wherein: the step 4) comprises the following specific steps:
assuming that the iteration number j is 1, the remaining training features are denoted as X_j = [x_1, x_2, ..., x_n] ∈ R^{m×n}, where n is the number of features and m is the dimension of each feature; the regularization parameter is τ, and the normal pattern dictionary set D is initialized to the empty set;
4.1) determining the number of clusters C_j and performing K-means clustering on the remaining feature set X_j; the features of the c-th cluster in the j-th iteration are denoted X_j^c, where j = 1, 2, ..., N, n_j^c denotes the number of features in the c-th cluster, and the function f(·) maps a feature's index within the cluster to its index in the initial feature set;
4.2) performing dictionary learning on each cluster of the remaining features using formula (1) and formula (2) to obtain the dictionary set of the C_j feature clusters in the j-th iteration, wherein X_j^c = UΛV^T is the singular value decomposition of the cluster, τ is a regularization parameter, ω_i is the weight of the i-th singular value λ_i, and r is the rank estimated by the soft-threshold operation; in the soft-threshold operation on the singular values, each singular value smaller than τ·ω_i is set to 0, and each singular value larger than τ·ω_i keeps its original value;
4.3) once the candidate dictionaries are determined, selecting, for any remaining feature, a dictionary to represent it using formula (3) and formula (4), wherein β denotes the sparse coefficient vector of the feature with respect to a dictionary, γ_{i,j} indicates whether the i-th feature can be represented by a given dictionary, the constraint terms Σ γ_{i,j} = 1 and γ_{i,j} ∈ {0, 1} ensure that only one dictionary is selected for the representation, D_j^c denotes the dictionary learned for feature cluster X_j^c, and T denotes the error threshold;
if a dictionary can represent some of the remaining features, adding the dictionary to the dictionary set D and removing the features whose sparse coefficients select it from the remaining feature set X_j; if a dictionary cannot represent any of the remaining features, discarding the dictionary;
4.4) adding 1 to the iteration number: j = j + 1;
4.5) when the residual feature set is an empty set, the group sparse dictionary learning process is ended.
6. The method for detecting abnormal events based on the low-rank approximation structured sparse representation as claimed in claim 1, wherein: the step 5) comprises the following specific steps:
suppose the normal pattern dictionary set is D = [D_1, D_2, ..., D_N];
5.1) given a test feature x, calculating the reconstruction error with respect to dictionary D_k (k = 1, 2, ..., N) according to formula (5);
5.2) solving for the optimal solution of β_k in formula (5) according to formula (6);
5.3) calculating the reconstruction error of the test feature x using formula (7);
if the reconstruction error is smaller than the reconstruction error threshold, the test feature x can be represented by dictionary D_k, indicating that the test feature x belongs to a normal event; otherwise, the test feature x cannot be represented by dictionary D_k; let k = k + 1 and return to step 5.1);
5.4) if no dictionary in the normal pattern dictionary set can represent the test feature x, the test feature x belongs to an abnormal event.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610915766.1A CN106503647A (en) | 2016-10-21 | 2016-10-21 | Abnormal event detection method based on low-rank approximation structured sparse representation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610915766.1A CN106503647A (en) | 2016-10-21 | 2016-10-21 | Abnormal event detection method based on low-rank approximation structured sparse representation |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106503647A true CN106503647A (en) | 2017-03-15 |
Family
ID=58318284
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610915766.1A Pending CN106503647A (en) | 2016-10-21 | 2016-10-21 | Abnormal event detection method based on low-rank approximation structured sparse representation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106503647A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108460412A (en) * | 2018-02-11 | 2018-08-28 | 北京盛安同力科技开发有限公司 | A kind of image classification method based on subspace joint sparse low-rank Structure learning |
CN109117774A (en) * | 2018-08-01 | 2019-01-01 | 广东工业大学 | A kind of multi-angle video method for detecting abnormality based on sparse coding |
CN110580504A (en) * | 2019-08-27 | 2019-12-17 | 天津大学 | Video abnormal event detection method based on self-feedback mutual exclusion subclass mining |
CN111931682A (en) * | 2020-08-24 | 2020-11-13 | 珠海大横琴科技发展有限公司 | Abnormal behavior detection method and device |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104318261A (en) * | 2014-11-03 | 2015-01-28 | 河南大学 | Graph embedding low-rank sparse representation recovery sparse representation face recognition method |
CN105046717A (en) * | 2015-05-25 | 2015-11-11 | 浙江师范大学 | Robust video object tracking method |
CN105469359A (en) * | 2015-12-09 | 2016-04-06 | 武汉工程大学 | Locality-constrained and low-rank representation based human face super-resolution reconstruction method |
CN105513093A (en) * | 2015-12-10 | 2016-04-20 | 电子科技大学 | Object tracking method based on low-rank matrix representation |
CN105825477A (en) * | 2015-01-06 | 2016-08-03 | 南京理工大学 | Remote sensing image super-resolution reconstruction method based on multi-dictionary learning and non-local information fusion |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104318261A (en) * | 2014-11-03 | 2015-01-28 | 河南大学 | Graph embedding low-rank sparse representation recovery sparse representation face recognition method |
CN105825477A (en) * | 2015-01-06 | 2016-08-03 | 南京理工大学 | Remote sensing image super-resolution reconstruction method based on multi-dictionary learning and non-local information fusion |
CN105046717A (en) * | 2015-05-25 | 2015-11-11 | 浙江师范大学 | Robust video object tracking method |
CN105469359A (en) * | 2015-12-09 | 2016-04-06 | 武汉工程大学 | Locality-constrained and low-rank representation based human face super-resolution reconstruction method |
CN105513093A (en) * | 2015-12-10 | 2016-04-20 | 电子科技大学 | Object tracking method based on low-rank matrix representation |
Non-Patent Citations (1)
Title |
---|
BOSI YU: "Low-rank Approximation based Abnormal Detection in The Video Sequence", 2016 IEEE International Conference on Digital Signal Processing *
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108460412A (en) * | 2018-02-11 | 2018-08-28 | 北京盛安同力科技开发有限公司 | A kind of image classification method based on subspace joint sparse low-rank Structure learning |
CN108460412B (en) * | 2018-02-11 | 2020-09-04 | 北京盛安同力科技开发有限公司 | Image classification method based on subspace joint sparse low-rank structure learning |
CN109117774A (en) * | 2018-08-01 | 2019-01-01 | 广东工业大学 | A kind of multi-angle video method for detecting abnormality based on sparse coding |
CN109117774B (en) * | 2018-08-01 | 2021-09-28 | 广东工业大学 | Multi-view video anomaly detection method based on sparse coding |
CN110580504A (en) * | 2019-08-27 | 2019-12-17 | 天津大学 | Video abnormal event detection method based on self-feedback mutual exclusion subclass mining |
CN110580504B (en) * | 2019-08-27 | 2023-07-25 | 天津大学 | Video abnormal event detection method based on self-feedback mutual exclusion subclass mining |
CN111931682A (en) * | 2020-08-24 | 2020-11-13 | 珠海大横琴科技发展有限公司 | Abnormal behavior detection method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106503652A (en) | Abnormal event detection method based on low-rank adaptive sparse reconstruction | |
Mukhoti et al. | Evaluating bayesian deep learning methods for semantic segmentation | |
CN109344736B (en) | Static image crowd counting method based on joint learning | |
WO2020173226A1 (en) | Spatial-temporal behavior detection method | |
CN109146921B (en) | Pedestrian target tracking method based on deep learning | |
CN106778595B (en) | Method for detecting abnormal behaviors in crowd based on Gaussian mixture model | |
CN110188637A (en) | A kind of Activity recognition technical method based on deep learning | |
CN110084228A (en) | A kind of hazardous act automatic identifying method based on double-current convolutional neural networks | |
CN110334589B (en) | High-time-sequence 3D neural network action identification method based on hole convolution | |
CN111191667B (en) | Crowd counting method based on multiscale generation countermeasure network | |
CN110120064B (en) | Depth-related target tracking algorithm based on mutual reinforcement and multi-attention mechanism learning | |
CN108229338A (en) | A kind of video behavior recognition methods based on depth convolution feature | |
CN107862275A (en) | Human bodys' response model and its construction method and Human bodys' response method | |
CN107016689A (en) | A kind of correlation filtering of dimension self-adaption liquidates method for tracking target | |
CN104050685B (en) | Moving target detecting method based on particle filter visual attention model | |
CN111080675A (en) | Target tracking method based on space-time constraint correlation filtering | |
CN105095862A (en) | Human gesture recognizing method based on depth convolution condition random field | |
WO2023207742A1 (en) | Method and system for detecting anomalous traffic behavior | |
Pruteanu-Malinici et al. | Infinite hidden Markov models for unusual-event detection in video | |
US20130136298A1 (en) | System and method for tracking and recognizing people | |
CN106503647A (en) | Abnormal event detection method based on low-rank approximation structured sparse representation | |
Xiong et al. | Contextual Sa-attention convolutional LSTM for precipitation nowcasting: A spatiotemporal sequence forecasting view | |
CN108038515A (en) | Unsupervised multi-target detection tracking and its storage device and camera device | |
CN107194322B (en) | A kind of behavior analysis method in video monitoring scene | |
CN111144220B (en) | Personnel detection method, device, equipment and medium suitable for big data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20170315 |
RJ01 | Rejection of invention patent application after publication |