CN106056135B - Compressed-sensing-based human action classification method - Google Patents
Compressed-sensing-based human action classification method
- Publication number
- CN106056135B CN106056135B CN201610341943.XA CN201610341943A CN106056135B CN 106056135 B CN106056135 B CN 106056135B CN 201610341943 A CN201610341943 A CN 201610341943A CN 106056135 B CN106056135 B CN 106056135B
- Authority
- CN
- China
- Prior art keywords
- dictionary
- matrix
- video
- classification
- compressed sensing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/245—Classification techniques relating to the decision surface
- G06F18/2453—Classification techniques relating to the decision surface non-linear, e.g. polynomial classifier
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/23—Recognition of whole body movements, e.g. for sport training
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Biology (AREA)
- Nonlinear Science (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Mathematical Physics (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
The present invention relates to a compressed-sensing-based human action classification method comprising four steps: spatio-temporal interest point detection, bag-of-words video feature representation, visual dictionary construction, and a compressed-sensing-based action classification algorithm. Following step 1, the training sample features are solved to obtain the training sample matrix A = [A1, A2, …, AK] ∈ R^(m×n) for k classes, a test sample y ∈ R^m, and an optional tolerance ε > 0. Following step 2, the dictionary Z, the classifier parameters W, and the coefficient matrix A are solved. For a new video action sequence, classification is performed with the classifier W obtained in the previous step, finally producing the class estimate of the video action. The beneficial effects of the present invention are: spatio-temporal interest point detection, dictionary learning, and video feature representation are unified in a single learning framework, and a linear classifier is learned at the same time. The optimization method simultaneously learns a discriminative dictionary, discriminative coding coefficients, and the classifier; computation is simple, robustness is good, and the compressed-sensing approach enhances the ability to handle nonlinear data.
Description
Technical field
The present invention relates to human action classification methods, and specifically to a compressed-sensing-based human action classification method; it belongs to the field of video analysis.
Background art
It is well known that extracting a reasonable representation of motion from video data is especially important for action classification. Usually the motion representation method must be chosen according to the classification method. For example, trajectory-based methods suit medium- and long-range surveillance in open environments, while 3D models are often used in gesture recognition. Parameswaran et al. proposed assessing motion representation methods by four criteria: simplicity, completeness, continuity, and uniqueness.
Human silhouette shape is the most intuitive motion representation, so there are also many shape-based human action representation methods. Such methods must first segment the moving parts from the scene, i.e., perform background segmentation. L. Wang used subspace and image models to recognize actions from silhouette information, while Veeraraghavan et al. marked points on the silhouette and analyzed the point sets for action classification; these silhouette-based classification methods have all achieved success.
In recent years, compressed sensing has been successfully applied in fields such as speech signal processing, natural image feature extraction, image denoising, and face recognition. As a new method for processing high-dimensional data, compressed sensing has also been applied to the aggregation of local descriptors. In practical applications, however, the main problems faced by compressed sensing include the construction of the overcomplete dictionary and the sparse decomposition algorithm.
At present, most compressed-sensing-based human action classification methods still borrow ideas from image processing: the video is first represented as a feature vector, and a dictionary learning model then learns a dictionary, generates a sparse representation of the video, and performs classification. For example, Wang et al. first segment the video into consecutive temporal blocks and then represent the video as a feature vector with a multi-layer bag-of-words model. Jiang et al. use the features of mature Action Bank detectors as the video representation, which relies on pre-trained detectors and is not highly accurate.
How to provide a highly accurate compressed-sensing-based human action classification method is therefore the purpose of the present research.
Summary of the invention
In order to overcome the deficiencies of the prior art, the present invention provides a compressed-sensing-based human action classification method. Considering that low-level action features have good robustness, compressed sensing theory is applied to human action classification: the visual dictionary is combined with low-level action features, action feature descriptors are effectively extracted from a large number of samples, and the accuracy of action classification is improved.
To solve the problems of the prior art, the technical scheme adopted by the invention is: a compressed-sensing-based human action classification method that regards all action training samples as an overcomplete dictionary and designs a compressed-sensing-based action classification algorithm, characterized in that the method comprises four steps: spatio-temporal interest point detection, bag-of-words video feature representation, visual dictionary construction, and a compressed-sensing-based action classification algorithm, in which:
Step 1: spatio-temporal interest point detection. For a video sequence, an interest point is determined by three dimensions: the x and y axes marking spatial position and the t axis representing time. Gabor filtering is applied in the time domain and a Gaussian filter in the two-dimensional spatial domain, and spatio-temporal interest points are found using the filter response function. The one-dimensional Gabor filter is defined as the product of a sine wave and a Gaussian window:
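The filter formula appears only as an image in this text version; a standard even/odd quadrature form consistent with the surrounding definitions (the usual convention, not necessarily the patent's verbatim expression) is:

h_ev(t) = cos(2πω0·t)·exp(−t²/(2σ²)),  h_od(t) = sin(2πω0·t)·exp(−t²/(2σ²))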
where ω0 is the centre frequency at which the filter attains its peak response and σ determines the width of the Gaussian window. In the described interest point detection method, the response function is defined as follows:
R = (I*g*h_ev)² + (I*g*h_od)²
where I is the video sequence, g(x, y, σ) is a 2D Gaussian smoothing kernel applied over the two-dimensional spatial domain, and h_ev and h_od are a quadrature pair of 1D Gabor filters applied along the time axis.
The parameters σ and τ correspond to the spatial and temporal scales of detection, respectively; the parameters take σ = 2, τ = 3, ω = 6/τ;
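As a concrete illustration, the following is a minimal sketch of the response-function computation, assuming grayscale input of shape (T, H, W) and the standard Gabor pair above; the function name and filter support length are illustrative choices, not from the patent.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, convolve1d

def stip_response(video, sigma=2.0, tau=3.0):
    """Response R = (I*g*h_ev)^2 + (I*g*h_od)^2 over a video volume.

    video: grayscale array of shape (T, H, W); sigma, tau: spatial/temporal scales.
    """
    omega = 6.0 / tau                         # omega = 6/tau, as in the patent
    frames = video.astype(np.float64)
    # 2D Gaussian smoothing g(x, y, sigma), applied frame by frame
    smoothed = np.stack([gaussian_filter(f, sigma) for f in frames])
    # 1D Gabor quadrature pair along the time axis (support length is a choice)
    half = 3 * int(np.ceil(tau))
    t = np.arange(-half, half + 1)
    window = np.exp(-t**2 / (2.0 * tau**2))
    h_ev = np.cos(2 * np.pi * omega * t) * window
    h_od = np.sin(2 * np.pi * omega * t) * window
    ev = convolve1d(smoothed, h_ev, axis=0)
    od = convolve1d(smoothed, h_od, axis=0)
    return ev**2 + od**2    # interest points are local maxima of this response
```

Interest points are then taken at local maxima of R across time, space, and scale.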
Step 2: bag-of-words video feature representation. In the visual bag-of-words model, the two-dimensional image is mapped to a set of visual keywords, and the HOG descriptor is used to compute local features. The calculation uses the rectangular HOG method: first, the simple filter operators [-1, 0, 1] and [1, 0, -1] are applied to compute the image gradients in the x and y directions respectively, and the gradient direction of each pixel is then computed from the x and y directional gradients;
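The gradient step can be sketched as follows; the orientation binning (nine unsigned bins) is an assumption for illustration, since the patent does not specify it.

```python
import numpy as np
from scipy.ndimage import convolve1d

def hog_gradients(image, n_bins=9):
    """Per-pixel gradient magnitude and orientation bin for rectangular HOG.

    Uses the simple derivative operators named in the patent; sign conventions
    do not affect the unsigned orientation histogram.
    """
    img = image.astype(np.float64)
    gx = convolve1d(img, [-1, 0, 1], axis=1)    # gradient in the x direction
    gy = convolve1d(img, [-1, 0, 1], axis=0)    # gradient in the y direction
    magnitude = np.hypot(gx, gy)
    angle = np.arctan2(gy, gx) % np.pi          # unsigned orientation in [0, pi)
    bins = np.minimum((angle / np.pi * n_bins).astype(int), n_bins - 1)
    return magnitude, bins
```

Cell histograms are then accumulated from the (magnitude, bin) pairs and concatenated per block.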
Step 3: visual dictionary construction, using the action features extracted in step 2. Let X = [X1, X2, …, XN] be the feature matrix of all samples, where Xi ∈ R^(P×Ni) denotes the feature matrix formed by arranging all local features of the i-th video by columns, Ni denotes the number of local features contained in sample Xi, and Ai ∈ R^(K×Ni) is its corresponding coding coefficient matrix. Let x_i^j denote the j-th local feature and α_i^j its corresponding coding coefficient vector. The discriminative dictionary to be learned is defined as D = [d1, d2, …, dK] ∈ R^(P×K), and the objective function of the discriminative dictionary learning framework is defined as formula (1):
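The objective appears only as an image in this text version. A reconstruction consistent with the closed-form updates for A0 and W* given below (an inference from the surrounding definitions, not the patent's verbatim formula) is:

min_{Z,A,W} ||φ(X) − φ(D0)ZA||_F^2 + η||H − WB||_F^2 + λ||W||_F^2        (1)

where φ(·) is the kernel feature map with κ(U,V) = φ(U)^T φ(V), D0 is the base dictionary, and Z is the representation dictionary, so that the learned dictionary in feature space is φ(D0)Z.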
In formula (1), the first term is the reconstruction error term: the discriminative dictionary must first be able to reconstruct all local features well. The second term is the linear classification term, with W the classifier parameters, and the third is the regularization term; H is the class label vector, and λ and η are regularization parameters controlling the relative contribution of the corresponding terms. B = [β1, β2, …, βN] is the feature representation after pooling the video features, where βi is expressed as βi = Ai·e_i, with e_i denoting a vector of length Ni in which every element equals 1/Ni;
Formula (1) can be solved by alternating optimization, i.e., the objective function is minimized alternately over the dictionary Z, the coding coefficient matrix A, and the linear classifier parameters W until a stopping criterion is met. The process comprises the following steps:
1. Initialize the representation dictionary Z and the coding matrix A:
Given D0, the representation dictionary Z is initialized as the K-order identity matrix, and the coding matrix A is initialized by minimizing the reconstruction term. This is a second-order optimization problem; taking the derivative with respect to A and setting it to 0, the initial A0 is computed as
A0 = (Z^T κ(D0,D0) Z)^(-1) Z^T κ(D0,X)
2. Fix the representation dictionary Z and the coding matrix A, and compute the classifier parameters W:
Formula (1) then reduces to a regularized least-squares problem in W; setting its derivative to 0, the optimal W is computed as
W* = ηHB^T(λI_(K×K) + ηBB^T)^(-1)
where I_(K×K) denotes the identity matrix of size K × K.
3. Fix the classifier parameters W and the representation dictionary Z, and compute the coding matrix A:
Formula (1) is rewritten as a function g(Ai) of the coding matrix; differentiating it yields the gradient ∇g(Ai). Set t = 0, compute ∇g(Ai), search for a feasible step length ηt, and iterate
until t > T or the iterate change falls below the convergence threshold.
4. Fix the coding matrix A and the classifier parameters W, and compute the representation dictionary Z:
Formula (1) is expressed as a reconstruction problem in Z, and only one column of the representation dictionary is updated at a time. Let z_k denote the k-th column of Z; when updating z_k, all columns other than z_k are fixed. Define the intermediate variable φ(X̃) = φ(X) − φ(D0)·Z̄_k·Ā_k, where Z̄_k is defined as the matrix Z with its k-th column deleted and Ā_k as the coding matrix A with its k-th row deleted. Formula (2) is then expressed as a least-squares problem in z_k alone, where α^k is the k-th row of the coding matrix A; differentiating this formula and setting it equal to 0 yields the update for z_k.
Since the dictionary and the coding coefficients are interrelated, the corresponding coding coefficients need to be updated synchronously.
5. Execute steps 2-4 until a stopping criterion is met:
a. the maximum number of iterations is reached, or
b. the changes in the representation dictionary Z, the classifier parameters W, and the coefficient matrix A are all smaller than preset thresholds.
A compact sketch of the closed-form updates in this alternating procedure follows.
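The following sketch collects the two closed-form updates (coding and classifier) plus the average pooling; kernel matrices are assumed precomputed, and the gradient step for A and the column-wise update of Z are omitted for brevity. Names and shapes are illustrative.

```python
import numpy as np

def coding_update(Z, K_D0D0, K_D0X):
    """A = (Z^T k(D0,D0) Z)^{-1} Z^T k(D0,X), the patent's initialization of A."""
    G = Z.T @ K_D0D0 @ Z
    return np.linalg.solve(G, Z.T @ K_D0X)

def pool_features(A, video_sizes):
    """Average pooling beta_i = A_i e_i with e_i = (1/N_i, ..., 1/N_i)."""
    B, start = [], 0
    for n in video_sizes:                       # N_i local features per video
        B.append(A[:, start:start + n].mean(axis=1))
        start += n
    return np.stack(B, axis=1)                  # B = [beta_1, ..., beta_N]

def classifier_update(H, B, lam, eta):
    """W* = eta H B^T (lam I + eta B B^T)^{-1}."""
    K = B.shape[0]
    return eta * H @ B.T @ np.linalg.inv(lam * np.eye(K) + eta * B @ B.T)
```

One outer iteration then chains these: A from coding_update, B from pool_features, W from classifier_update, followed by the column-wise dictionary update of Z.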
Step 4: compressed-sensing-based action classification algorithm. Step 3 has trained a linear classifier W. Given a test video ν, its video coding αν is first calculated:
αν = (Z^T κ(D0,D0) Z)^(-1) Z^T κ(D0,xν)
where xν denotes the local features of video ν. Pooling the coding matrix αν yields the feature representation βν of video ν, from which the class yν of video ν is obtained by applying the classifier W.
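A minimal classification sketch under the same assumptions as above; the argmax decision rule is an assumption, since the patent's final formula appears only as an image.

```python
import numpy as np

def classify_video(K_D0_xnu, Z, K_D0D0, W):
    """Encode a test video's local features, pool, and apply the classifier W.

    K_D0_xnu: kernel matrix kappa(D0, x_nu) for the test video's local features.
    """
    alpha = np.linalg.solve(Z.T @ K_D0D0 @ Z, Z.T @ K_D0_xnu)   # video coding
    beta = alpha.mean(axis=1)                                    # pooled feature
    return int(np.argmax(W @ beta))                              # class estimate
```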
Further, in step 1, features based on temporal change are captured using spatio-temporal interest point detection.
Further, in step 2, in the rectangular HOG method, a HOG descriptor is computed on each block; each block may contain several uniformly and densely sampled cells and often overlaps with adjacent blocks, and the HOG of each block must be normalized individually.
The beneficial effects of the present invention are: spatio-temporal interest point detection, dictionary learning, and video feature representation are unified in one learning framework, and a linear classifier is learned at the same time. The optimization method simultaneously learns a discriminative dictionary, discriminative coding coefficients, and the classifier; computation is simple, robustness is good, and the compressed-sensing method enhances the ability to handle nonlinear data.
Specific embodiment
In order to enable those skilled in the art to better understand the technical solution of the present invention, the invention is further analyzed below with reference to specific embodiments.
A compressed-sensing-based human action classification method regards all action training samples as an overcomplete dictionary and designs a compressed-sensing-based action classification algorithm. The method comprises four steps: spatio-temporal interest point detection, bag-of-words video feature representation, visual dictionary construction, and a compressed-sensing-based action classification algorithm, in which:
Step 1: spatio-temporal interest point detection; the spatio-temporal interest point detection method captures features based on temporal change. For a video sequence, an interest point is determined by three dimensions: the x and y axes indicating spatial position and the t axis representing time. The method of the present invention is based on Gabor filtering: Gabor filtering is applied in the time domain and a Gaussian filter in the two-dimensional spatial domain, and spatio-temporal interest points are found using the filter response function. The one-dimensional Gabor filter is defined as the product of a sine wave and a Gaussian window,
where ω0 is the centre frequency at which the filter attains its peak response and σ determines the width of the Gaussian window. In the interest point detection method of the present invention, the response function is defined as follows:
R = (I*g*h_ev)² + (I*g*h_od)²
The response function is used to find spatio-temporal corners where strong action responses are predicted. In the response function, I is the video sequence, g(x, y, σ) is a 2D Gaussian smoothing kernel applied over the two-dimensional spatial domain, and h_ev and h_od are a quadrature pair of 1D Gabor filters applied along the time axis.
The parameters σ and τ correspond to the spatial and temporal scales of detection, respectively; they determine the scale at which spatio-temporal interest points are detected across the three dimensions. The parameters take σ = 2, τ = 3, ω = 6/τ.
Step 2: bag-of-words video feature representation. In the visual bag-of-words model, the present invention maps the two-dimensional image to a set of visual keywords and computes local features with the HOG descriptor, which preserves local image features while effectively compressing the image description.
Using the rectangular HOG method, the simple filter operators [-1, 0, 1] and [1, 0, -1] are first applied to compute the image gradients in the x and y directions, and the gradient direction of each pixel is then computed from the x and y directional gradients. In the rectangular HOG method, a HOG descriptor is computed on each block; each block may contain several uniformly and densely sampled cells and often overlaps with adjacent blocks. In addition, the HOG of each block is normalized individually, as in the sketch below.
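A per-block normalization sketch; the L2 norm is an assumption, since the patent does not name the normalization scheme.

```python
import numpy as np

def normalize_blocks(block_histograms, eps=1e-6):
    """L2-normalize each block's concatenated cell histograms individually.

    block_histograms: array of shape (n_blocks, block_descriptor_length).
    """
    norms = np.linalg.norm(block_histograms, axis=1, keepdims=True)
    return block_histograms / (norms + eps)
```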
Step 3: visual dictionary construction.
Based on the action features extracted in the previous step, let X = [X1, X2, …, XN] be the feature matrix of all samples, where Xi ∈ R^(P×Ni) denotes the feature matrix formed by arranging all local features of the i-th video by columns, Ni denotes the number of local features contained in sample Xi, and Ai ∈ R^(K×Ni) is its corresponding coding coefficient matrix. Let x_i^j denote the j-th local feature and α_i^j its corresponding coding coefficient vector. The discriminative dictionary to be learned is defined as D = [d1, d2, …, dK] ∈ R^(P×K), and the objective function of the discriminative dictionary learning framework is defined as in formula (1) above, in which the first term is the reconstruction error term (the discriminative dictionary must first be able to reconstruct all local features well), the second is the linear classification term with classifier parameters W, and the third is the regularization term; H is the class label vector, and λ and η are regularization parameters controlling the relative contribution of the corresponding terms. B = [β1, β2, …, βN] is the feature representation after pooling the video features, where βi may be expressed as βi = Ai·e_i, with e_i denoting a vector of length Ni in which every element equals 1/Ni.
Formula (1) can be solved by alternating optimization, i.e., the objective function is minimized alternately over the dictionary Z, the coding coefficient matrix A, and the linear classifier parameters W until a stopping criterion is met. The process steps are as follows:
1. Initialize the representation dictionary Z and the coding matrix A:
Given D0, the representation dictionary Z is initialized as the K-order identity matrix, and the coding matrix A is initialized by minimizing the reconstruction term. This is a second-order optimization problem; taking the derivative with respect to A and setting it to 0, the initial A0 is computed as
A0 = (Z^T κ(D0,D0) Z)^(-1) Z^T κ(D0,X)
2. Fix the representation dictionary Z and the coding matrix A, and compute the classifier parameters W:
Formula (1) then reduces to a regularized least-squares problem in W; setting its derivative to 0, the optimal W is computed as
W* = ηHB^T(λI_(K×K) + ηBB^T)^(-1)
where I_(K×K) denotes the identity matrix of size K × K.
3. Fix the classifier parameters W and the representation dictionary Z, and compute the coding matrix A:
Formula (1) is rewritten as a function g(Ai) of the coding matrix; differentiating it yields the gradient ∇g(Ai). Set t = 0, compute ∇g(Ai), search for a feasible step length ηt, and iterate
until t > T or the iterate change falls below the convergence threshold.
4. Fix the coding matrix A and the classifier parameters W, and compute the representation dictionary Z:
Formula (1) is expressed as a reconstruction problem in Z, and only one column of the representation dictionary is updated at a time. Let z_k denote the k-th column of Z; when updating z_k, all columns other than z_k are fixed. Define the intermediate variable φ(X̃) = φ(X) − φ(D0)·Z̄_k·Ā_k, where Z̄_k is defined as the matrix Z with its k-th column deleted and Ā_k as the coding matrix A with its k-th row deleted. Formula (2) is then expressed as a least-squares problem in z_k alone, where α^k is the k-th row of the coding matrix A; differentiating this formula and setting it equal to 0 yields the update for z_k.
Since the dictionary and the coding coefficients are interrelated, the corresponding coding coefficients need to be updated synchronously.
5. Execute steps 2-4 until one of the following stopping criteria is met:
a. the maximum number of iterations is reached, or
b. the changes in the representation dictionary Z, the classifier parameters W, and the coefficient matrix A are all smaller than preset thresholds.
Step 4: compressed-sensing-based action classification algorithm
Step 3 has trained a linear classifier W. Given a test video ν, its video coding αν is first calculated:
αν = (Z^T κ(D0,D0) Z)^(-1) Z^T κ(D0,xν)
where xν denotes the local features of video ν. Pooling the coding matrix αν yields the feature representation βν of video ν; the class yν of video ν is therefore estimated by applying the classifier W to βν.
In the method of the invention, the training sample features are solved according to step 1 to obtain the training sample matrix A = [A1, A2, …, AK] ∈ R^(m×n) for k classes, a test sample y ∈ R^m, and an optional tolerance ε > 0; the dictionary Z, the classifier parameters W, and the coefficient matrix A are solved according to step 2; and, for a new video action sequence, classification is performed with the classifier W obtained in the previous step, finally producing the class estimate of the video action.
The present invention proposes a compressed-sensing-based action classification method that unifies spatio-temporal interest point detection, dictionary learning, and video feature representation in one learning framework while simultaneously learning a linear classifier. The optimization method jointly learns a discriminative dictionary, discriminative coding coefficients, and the classifier. The features extracted by the present invention are simple to compute, robustness is good, and the compressed-sensing method enhances the ability to handle nonlinear data.
The technical solution provided by the present application has been described in detail above. The embodiments used herein expound the principle and implementation of the application, and the description of the above embodiments is only intended to help understand the method of the present application and its core idea. At the same time, those skilled in the art may, following the idea of the present application, make changes to the specific implementation and scope of application. In conclusion, the contents of this specification should not be construed as limiting the present application.
Claims (3)
1. A compressed-sensing-based human action classification method that regards all action training samples as an overcomplete dictionary and designs a compressed-sensing-based action classification algorithm, characterized by comprising four steps: spatio-temporal interest point detection, bag-of-words video feature representation, visual dictionary construction, and compressed-sensing-based action classification, in which:
Step 1: spatio-temporal interest point detection. For a video sequence, an interest point is determined by three dimensions: the x and y axes indicating spatial position and the t axis representing time. Gabor filtering is applied in the time domain and a Gaussian filter in the two-dimensional spatial domain, and spatio-temporal interest points are found using the filter response function. The one-dimensional Gabor filter is defined as the product of a sine wave and a Gaussian window, where ω0 is the centre frequency at which the filter attains its peak response and σ determines the width of the Gaussian window. In the described interest point detection method, the response function is defined as follows:
R = (I*g*h_ev)² + (I*g*h_od)²
where I is the video sequence, g(x, y, σ) is a 2D Gaussian smoothing kernel applied over the two-dimensional spatial domain, and h_ev and h_od are a quadrature pair of 1D Gabor filters applied along the time axis,
and where the parameters σ and τ correspond to the spatial and temporal scales of detection, respectively, the parameters taking σ = 2, τ = 3, ω = 6/τ;
Step 2: bag-of-words video feature representation. In the visual bag-of-words model, the two-dimensional image is mapped to a set of visual keywords, and the HOG descriptor is used to compute local features. The calculation uses the rectangular HOG method, which comprises: first applying the simple filter operators [-1, 0, 1] and [1, 0, -1] to compute the image gradients in the x and y directions, and then computing the gradient direction of each pixel from the x and y directional gradients;
Step 3: visual dictionary construction, using the action features extracted in step 2. Let X = [X1, X2, …, XN] be the feature matrix of all samples, where Xi denotes the feature matrix of P rows and Ni columns formed by arranging all local features of the i-th video by columns, Ni denotes the number of local features contained in sample Xi, and Ai is its corresponding coding coefficient matrix of K rows and Ni columns. Let x_i^j denote the j-th local feature and α_i^j its corresponding coding coefficient vector. The discriminative dictionary to be learned is defined as D = [d1, d2, …, dK], a matrix of P rows and K columns, and the objective function of the discriminative dictionary learning framework is defined as formula (1), in which the first term is the reconstruction error term (the discriminative dictionary must first be able to reconstruct all local features well), the second is the linear classification term with classifier parameters W, and the third is the regularization term; H is the class label vector, and λ and η are regularization parameters controlling the relative contribution of the corresponding terms; B = [β1, β2, …, βN] is the feature representation after pooling the video features, where βi is expressed as βi = Ai·e_i, with e_i denoting a vector of length Ni in which every element equals 1/Ni;
Formula (1) can be solved by alternating optimization, i.e., the objective function is minimized alternately over the dictionary Z, the coding coefficient matrix A, and the linear classifier parameters W until a stopping criterion is met; the process comprises the following steps:
1. Initialize the representation dictionary Z and the coding matrix A:
Given D0, the representation dictionary Z is initialized as the K-order identity matrix, and the coding matrix A is initialized by minimizing the reconstruction term. This is a second-order optimization problem; taking the derivative with respect to A and setting it to 0, the initial A0 is computed as
A0 = (Z^T κ(D0,D0) Z)^(-1) Z^T κ(D0,X)
2. Fix the representation dictionary Z and the coding matrix A, and compute the classifier parameters W:
Formula (1) then reduces to a regularized least-squares problem in W; setting its derivative to 0, the optimal W is computed as
W* = ηHB^T(λI_(K×K) + ηBB^T)^(-1)
where I_(K×K) denotes the identity matrix of size K × K.
3. Fix the classifier parameters W and the representation dictionary Z, and compute the coding matrix A:
Formula (1) is rewritten as a function g(Ai) of the coding matrix; differentiating it yields the gradient ∇g(Ai). Set t = 0, compute ∇g(Ai), search for a feasible step length ηt, and iterate
until t > T or the iterate change falls below the convergence threshold.
4. Fix the coding matrix A and the classifier parameters W, and compute the representation dictionary Z:
Formula (1) is expressed as a reconstruction problem in Z, and only one column of the representation dictionary is updated at a time. Let z_k denote the k-th column of Z; when updating z_k, all columns other than z_k are fixed. Define the intermediate variable φ(X̃) = φ(X) − φ(D0)·Z̄_k·Ā_k, where Z̄_k is defined as the matrix Z with its k-th column deleted and Ā_k as the coding matrix A with its k-th row deleted. Formula (2) is then expressed as a least-squares problem in z_k alone, where α^k is the k-th row of the coding matrix A; differentiating this formula and setting it equal to 0 yields the update for z_k.
Since the dictionary and the coding coefficients are interrelated, the corresponding coding coefficients need to be updated synchronously.
5. Execute steps 2-4 until a stopping criterion is met:
a. the maximum number of iterations is reached, or
b. the changes in the representation dictionary Z, the classifier parameters W, and the coefficient matrix A are all smaller than preset thresholds;
Step 4: compressed-sensing-based action classification. Step 3 has trained a linear classifier W. Given a test video ν, its video coding αν is first calculated:
αν = (Z^T κ(D0,D0) Z)^(-1) Z^T κ(D0,xν)
where xν denotes the local features of video ν; pooling the coding matrix αν yields the feature representation βν of video ν, from which the class yν of video ν is obtained by applying the classifier W.
2. The compressed-sensing-based human action classification method according to claim 1, characterized in that: in said step 1, features based on temporal change are captured using spatio-temporal interest point detection.
3. The compressed-sensing-based human action classification method according to claim 1, characterized in that: in said step 2, in the rectangular HOG method, a HOG descriptor is computed on each block; each block may contain several uniformly and densely sampled cells and often overlaps with adjacent blocks, and the HOG of each block must be normalized individually.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610341943.XA CN106056135B (en) | 2016-05-20 | 2016-05-20 | Compressed-sensing-based human action classification method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610341943.XA CN106056135B (en) | 2016-05-20 | 2016-05-20 | Compressed-sensing-based human action classification method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106056135A CN106056135A (en) | 2016-10-26 |
CN106056135B true CN106056135B (en) | 2019-04-12 |
Family
ID=57177401
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610341943.XA (CN106056135B, Expired - Fee Related) | Compressed-sensing-based human action classification method | 2016-05-20 | 2016-05-20 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106056135B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106934366B (en) * | 2017-03-10 | 2020-11-27 | 湖南科技大学 | Method for detecting human body action characteristics under disordered background |
CN108108652B (en) * | 2017-03-29 | 2021-11-26 | 广东工业大学 | Cross-view human behavior recognition method and device based on dictionary learning |
CN109711277B (en) * | 2018-12-07 | 2020-10-27 | 中国科学院自动化研究所 | Behavior feature extraction method, system and device based on time-space frequency domain hybrid learning |
CN109918994B (en) * | 2019-01-09 | 2023-09-15 | 天津大学 | Commercial Wi-Fi-based violent behavior detection method |
CN110974425B (en) * | 2019-12-20 | 2020-10-30 | 哈尔滨工业大学 | Method for training surgical instrument clamping force sensing model |
CN117890486B (en) * | 2024-03-15 | 2024-05-14 | 四川吉利学院 | Magnetic shoe internal defect detection method based on sparse cut space projection |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103514456B (en) * | 2013-06-30 | 2017-04-12 | 安科智慧城市技术(中国)有限公司 | Image classification method and device based on compressed sensing multi-core learning |
CN103605986A (en) * | 2013-11-27 | 2014-02-26 | 天津大学 | Human motion recognition method based on local features |
CN104376312B (en) * | 2014-12-08 | 2019-03-01 | 广西大学 | Face identification method based on bag of words compressed sensing feature extraction |
CN105095863B (en) * | 2015-07-14 | 2018-05-25 | 西安电子科技大学 | The Human bodys' response method of semi-supervised dictionary learning based on similitude weights |
CN105930792A (en) * | 2016-04-19 | 2016-09-07 | 武汉大学 | Human action classification method based on video local feature dictionary |
- 2016-05-20: CN CN201610341943.XA patent/CN106056135B/en, not active (Expired - Fee Related)
Also Published As
Publication number | Publication date |
---|---|
CN106056135A (en) | 2016-10-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106056135B (en) | Compressed-sensing-based human action classification method | |
CN104008370B (en) | A kind of video face identification method | |
Liang et al. | Counting crowd flow based on feature points | |
Liu et al. | Detecting and tracking people in real time with RGB-D camera | |
CN109146911B (en) | Target tracking method and device | |
CN105550634B (en) | Human face posture recognition methods based on Gabor characteristic and dictionary learning | |
CN103136516B (en) | The face identification method that visible ray and Near Infrared Information merge and system | |
CN104408405B (en) | Face representation and similarity calculating method | |
CN109461172A (en) | Manually with the united correlation filtering video adaptive tracking method of depth characteristic | |
CN105160310A (en) | 3D (three-dimensional) convolutional neural network based human body behavior recognition method | |
CN103902989B (en) | Human action video frequency identifying method based on Non-negative Matrix Factorization | |
CN109766838B (en) | Gait cycle detection method based on convolutional neural network | |
CN105976397B (en) | A kind of method for tracking target | |
CN110135344B (en) | Infrared dim target detection method based on weighted fixed rank representation | |
CN103617413B (en) | Method for identifying object in image | |
CN107067410A (en) | A kind of manifold regularization correlation filtering method for tracking target based on augmented sample | |
CN107808391B (en) | Video dynamic target extraction method based on feature selection and smooth representation clustering | |
CN111428555B (en) | Joint-divided hand posture estimation method | |
CN102509293A (en) | Method for detecting consistency of different-source images | |
CN108062523A (en) | A kind of infrared remote small target detecting method | |
Dong et al. | Fusing multilevel deep features for fabric defect detection based NTV-RPCA | |
CN111815620B (en) | Fabric defect detection method based on convolution characteristic and low-rank representation | |
CN107909049A (en) | Pedestrian's recognition methods again based on least square discriminant analysis distance study | |
Bourennane et al. | An enhanced visual object tracking approach based on combined features of neural networks, wavelet transforms, and histogram of oriented gradients | |
CN104034732B (en) | A kind of fabric defect detection method of view-based access control model task-driven |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | |
PB01 | Publication | |
C10 | Entry into substantive examination | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20190412 |