CN101430757A - Method for acquiring action classification by combining with spacing restriction information

Method for acquiring action classification by combining with spacing restriction information

Info

Publication number
CN101430757A
Authority
CN
China
Prior art keywords
action
acquiring
video
frame
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2008101375038A
Other languages
Chinese (zh)
Other versions
CN101430757B (en)
Inventor
姚鸿勋
刘天强
纪荣嵘
孙晓帅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology filed Critical Harbin Institute of Technology
Priority to CN2008101375038A priority Critical patent/CN101430757B/en
Publication of CN101430757A publication Critical patent/CN101430757A/en
Application granted granted Critical
Publication of CN101430757B publication Critical patent/CN101430757B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a method for acquiring an action category by combining spatial constraint information. It relates to the field of automatic monitoring and solves the problems of long training time and low classification accuracy in existing methods for acquiring action categories. The method comprises the following steps: read the video, track the target in the target contour region with a snake and a particle filter, and tightly frame the target region with a rectangle in each frame; obtain a target curve and a fitting function from the width and height of the target region in each frame; classify the extracted feature values with a support vector machine to obtain the prior probability that the current action a is classified as a class-k action; divide the video into m regions and obtain, for each region i, the probability that it is covered by all frames in the class-k action training set, together with the summed probability that actions of all classes occur in region i; finally, obtain the class number of the current action a from the ratio of the number of times action a covers region i to the total number of times all video regions are covered.

Description

A method for acquiring action classification by combining spatial constraint information
Technical field
The present invention relates to the field of automatic monitoring, and in particular to a method for obtaining an action category by combining spatial constraint information.
Background technology
Abnormal-behavior detection for family members has been a hot research field in recent years; it is of particular significance for the care of the elderly, children, and the disabled. However, most current smart-home monitoring systems are based on sensor networks and wireless communication equipment, whose high cost makes them unsuitable for ordinary households; smart-home monitoring techniques based on computer vision have therefore received wide attention in recent years. But methods of this kind are still very limited: most rely on ad hoc rules for detecting specific abnormal behaviors, and their poor generalization ability makes them hard to popularize. Existing methods for acquiring the action category suffer from long training time and low classification accuracy.
Summary of the invention
To solve the problems of long training time and low classification accuracy in existing methods for acquiring the action category, a method for acquiring action classification by combining spatial constraint information is proposed.
The steps of the method of the invention are:
Step 1: read the video, track the target in the target contour region using a snake (active contour) and a particle filter, and tightly frame the target region with a rectangle in each frame;
Step 2: obtain the target curve R(t) = w(t)/h(t) from the width w(t) and height h(t) of the target region in each frame;
Step 3: fit the target curve R(t) with a Fourier series of period 2π, f(t) = a_0 + Σ_{f=1}^{∞} (a_f cos ft + b_f sin ft), where a_n = (1/π) ∫_{-π}^{π} f(x) cos(nx) dx (n = 0, 1, 2, …) and b_n = (1/π) ∫_{-π}^{π} f(x) sin(nx) dx (n = 1, 2, 3, …); extract a_f for f = 0, 1, 2, 3, 4 and b_f for f = 1, 2, 3, 4, 5, and combine them into a feature vector;
Step 4: classify the extracted feature vector with a support vector machine to obtain the prior probability P(e_k) that the current action a is classified as a class-k action; divide the video into m regions, and obtain the probability p(i/e_k) that region i is covered by all frames of all videos in the class-k action training set, together with the summed probability that actions of all classes occur in region i, Σ_{j=1}^{n} p(i/e_j)P(e_j), where k ∈ {1, 2, …, n} and i ∈ {1, 2, …, m};
Step 5: from the ratio p(i/a) of the number of times the current action a covers region i to the total number of times all video regions are covered in the training set, obtain the class number C of the current action a:
C = argmax_k Σ_{i=1}^{m} p(i/a) p(e_k/i)
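The feature extraction in steps 2 and 3 above can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes the aspect-ratio curve has been resampled uniformly over one 2π period, approximates the integrals with a Riemann sum, and uses illustrative function names.

```python
import math

def fourier_features(R, n_a=5, n_b=5):
    """Fit a sampled target curve R (uniform samples over [-pi, pi])
    with a truncated Fourier series and return the feature vector
    [a_0..a_4, b_1..b_5], using the coefficient formulas
    a_n = (1/pi) * integral f(x)cos(nx) dx,  b_n = (1/pi) * integral f(x)sin(nx) dx."""
    N = len(R)
    xs = [-math.pi + 2 * math.pi * j / N for j in range(N)]
    dx = 2 * math.pi / N
    # a_0 .. a_{n_a-1}
    a = [sum(R[j] * math.cos(n * xs[j]) for j in range(N)) * dx / math.pi
         for n in range(n_a)]
    # b_1 .. b_{n_b}
    b = [sum(R[j] * math.sin(n * xs[j]) for j in range(N)) * dx / math.pi
         for n in range(1, n_b + 1)]
    return a + b

# Example: a synthetic aspect-ratio curve R(t) = 1.2 + 0.3*cos(t) over one period.
N = 1000
curve = [1.2 + 0.3 * math.cos(-math.pi + 2 * math.pi * j / N) for j in range(N)]
feat = fourier_features(curve)
# By the formulas above, a_0 is close to 2.4, a_1 close to 0.3, the rest near 0.
```

Note that with this convention a_0 equals twice the mean of R(t); as long as the same convention is used for training and testing, the feature vector is consistent.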
The advantages of the present invention are: (1) it incorporates the rule that specific actions are spatially constrained, overcoming the weakness of previous vision-based monitoring systems in multi-class action classification, where similar actions are easily confused; because the irregularity of abnormal-behavior scenes differs greatly from the high regularity of normal behavior, the present invention improves the accuracy of abnormal-behavior detection; (2) by embedding spatial constraint information, it solves the problem that previous action-detection systems required extensive sampling and training before detection, giving the classifier fast convergence and requiring few samples.
Embodiment
The steps of the method in the present embodiment are:
Step 1: read the video, track the target in the target contour region using a snake (active contour) and a particle filter, and tightly frame the target region with a rectangle in each frame;
Step 2: obtain the target curve R(t) = w(t)/h(t) from the width w(t) and height h(t) of the target region in each frame;
Step 3: fit the target curve R(t) with a Fourier series of period 2π, f(t) = a_0 + Σ_{f=1}^{∞} (a_f cos ft + b_f sin ft), where a_n = (1/π) ∫_{-π}^{π} f(x) cos(nx) dx (n = 0, 1, 2, …) and b_n = (1/π) ∫_{-π}^{π} f(x) sin(nx) dx (n = 1, 2, 3, …); extract a_f for f = 0, 1, 2, 3, 4 and b_f for f = 1, 2, 3, 4, 5, and combine them into a feature vector;
Step 4: classify the extracted feature vector with a support vector machine to obtain the prior probability P(e_k) that the current action a is classified as a class-k action; divide the video into m regions, and obtain the probability p(i/e_k) that region i is covered by all frames of all videos in the class-k action training set, together with the summed probability that actions of all classes occur in region i, Σ_{j=1}^{n} p(i/e_j)P(e_j), where k ∈ {1, 2, …, n} and i ∈ {1, 2, …, m};
Step 5: from the ratio p(i/a) of the number of times the current action a covers region i to the total number of times all video regions are covered in the training set, obtain the class number C of the current action a:
C = argmax_k Σ_{i=1}^{m} p(i/a) p(e_k/i)
where p(e_k/i) denotes the probability, in the training set, that class-k actions occurred in video region i; it is obtained by p(e_k/i) = p(i/e_k)P(e_k) / Σ_{j=1}^{n} p(i/e_j)P(e_j). The method of the invention for acquiring the action category thus fuses the SVM classifier with the spatial information corresponding to each action, using the spatial information of where specific actions occur to correct the SVM decision boundary.
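The Bayes correction described above (steps 4 and 5) can be sketched as follows. The function name and data layout (plain lists and a nested list for the coverage matrix) are illustrative assumptions, not from the patent; only the two formulas are taken from the text.

```python
def classify_with_spatial_prior(P_e, p_i_given_e, p_i_given_a):
    """Combine the SVM prior with spatial coverage statistics, following
        p(e_k/i) = p(i/e_k)P(e_k) / sum_j p(i/e_j)P(e_j)
        C = argmax_k sum_i p(i/a) p(e_k/i)
    P_e:         list of n class priors P(e_k) from the SVM
    p_i_given_e: n x m matrix with p_i_given_e[k][i] = p(i/e_k)
    p_i_given_a: list of m region-coverage ratios p(i/a) for the action a
    Returns the 0-based index of the winning class."""
    n, m = len(P_e), len(p_i_given_a)
    scores = []
    for k in range(n):
        s = 0.0
        for i in range(m):
            denom = sum(p_i_given_e[j][i] * P_e[j] for j in range(n))
            post = p_i_given_e[k][i] * P_e[k] / denom if denom > 0 else 0.0
            s += p_i_given_a[i] * post
        scores.append(s)
    return max(range(n), key=lambda k: scores[k])

# Toy example: two classes, three regions. Class 0 mostly covers region 0,
# class 1 mostly covers region 2; the current action mostly covers region 2.
P_e = [0.5, 0.5]
p_i_given_e = [[0.8, 0.1, 0.1],
               [0.1, 0.1, 0.8]]
p_i_given_a = [0.05, 0.05, 0.9]
C = classify_with_spatial_prior(P_e, p_i_given_e, p_i_given_a)  # -> 1
```

Even with equal SVM priors, the spatial term breaks the tie: the action is assigned to the class whose training-set coverage pattern best matches the regions it actually occupies.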

Claims (1)

1. A method for acquiring action classification by combining spatial constraint information, characterized in that its steps are:
Step 1: read the video, track the target in the target contour region using a snake (active contour) and a particle filter, and tightly frame the target region with a rectangle in each frame;
Step 2: obtain the target curve R(t) = w(t)/h(t) from the width w(t) and height h(t) of the target region in each frame;
Step 3: fit the target curve R(t) with a Fourier series of period 2π, f(t) = a_0 + Σ_{f=1}^{∞} (a_f cos ft + b_f sin ft), where a_n = (1/π) ∫_{-π}^{π} f(x) cos(nx) dx (n = 0, 1, 2, …) and b_n = (1/π) ∫_{-π}^{π} f(x) sin(nx) dx (n = 1, 2, 3, …); extract a_f for f = 0, 1, 2, 3, 4 and b_f for f = 1, 2, 3, 4, 5, and combine them into a feature vector;
Step 4: classify the extracted feature vector with a support vector machine to obtain the prior probability P(e_k) that the current action a is classified as a class-k action; divide the video into m regions, and obtain the probability p(i/e_k) that region i is covered by all frames of all videos in the class-k action training set, together with the summed probability that actions of all classes occur in region i, Σ_{j=1}^{n} p(i/e_j)P(e_j), where k ∈ {1, 2, …, n} and i ∈ {1, 2, …, m};
Step 5: from the ratio p(i/a) of the number of times the current action a covers region i to the total number of times all video regions are covered in the training set, obtain the class number C of the current action a:
C = argmax_k Σ_{i=1}^{m} p(i/a) p(e_k/i).
CN2008101375038A 2008-11-12 2008-11-12 Method for acquiring action classification by combining with spacing restriction information Expired - Fee Related CN101430757B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2008101375038A CN101430757B (en) 2008-11-12 2008-11-12 Method for acquiring action classification by combining with spacing restriction information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2008101375038A CN101430757B (en) 2008-11-12 2008-11-12 Method for acquiring action classification by combining with spacing restriction information

Publications (2)

Publication Number Publication Date
CN101430757A true CN101430757A (en) 2009-05-13
CN101430757B CN101430757B (en) 2010-12-01

Family

ID=40646142

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008101375038A Expired - Fee Related CN101430757B (en) 2008-11-12 2008-11-12 Method for acquiring action classification by combining with spacing restriction information

Country Status (1)

Country Link
CN (1) CN101430757B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9240053B2 (en) 2010-03-15 2016-01-19 Bae Systems Plc Target tracking
US9305244B2 (en) 2010-03-15 2016-04-05 Bae Systems Plc Target tracking

Also Published As

Publication number Publication date
CN101430757B (en) 2010-12-01

Similar Documents

Publication Publication Date Title
CN104881637B (en) Multimodal information system and its fusion method based on heat transfer agent and target tracking
CN111291699A (en) Substation personnel behavior identification method based on monitoring video time sequence action positioning and abnormity detection
CN105574506A (en) Intelligent face tracking system and method based on depth learning and large-scale clustering
CN205451095U (en) A face -identifying device
CN103854016B (en) Jointly there is human body behavior classifying identification method and the system of feature based on directivity
CN105303191A (en) Method and apparatus for counting pedestrians in foresight monitoring scene
CN102164270A (en) Intelligent video monitoring method and system capable of exploring abnormal events
CN109344765A (en) A kind of intelligent analysis method entering shop personnel analysis for chain shops
CN103235944A (en) Crowd flow division and crowd flow abnormal behavior identification method
CN102609720A (en) Pedestrian detection method based on position correction model
CN106778688A (en) The detection method of crowd's throat floater event in a kind of crowd scene monitor video
CN106295532A (en) A kind of human motion recognition method in video image
CN112084928A (en) Road traffic accident detection method based on visual attention mechanism and ConvLSTM network
CN109117774A (en) A kind of multi-angle video method for detecting abnormality based on sparse coding
Elbasi Reliable abnormal event detection from IoT surveillance systems
Liu et al. A binary-classification-tree based framework for distributed target classification in multimedia sensor networks
Jiang et al. A deep learning framework for detecting and localizing abnormal pedestrian behaviors at grade crossings
CN104299007A (en) Classifier training method for behavior recognition
CN101430757B (en) Method for acquiring action classification by combining with spacing restriction information
Vu et al. Traffic incident recognition using empirical deep convolutional neural networks model
Zhao et al. Hybrid generative/discriminative scene classification strategy based on latent Dirichlet allocation for high spatial resolution remote sensing imagery
CN104166831A (en) ALBP and SRC algorithm-based fatigue detection method and system
Lao et al. Human running detection: Benchmark and baseline
CN104899606A (en) Steganalysis method based on local learning
Singh et al. Optical flow-based weighted magnitude and direction histograms for the detection of abnormal visual events using combined classifier

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20101201

Termination date: 20121112