CN104063721A - Human behavior recognition method based on automatic semantic feature learning and screening

Human behavior recognition method based on automatic semantic feature learning and screening

Info

Publication number
CN104063721A
CN104063721A (application CN201410319126.5A; granted publication CN104063721B)
Authority
CN
China
Prior art keywords
feature
space
semantic feature
level
interest points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410319126.5A
Other languages
Chinese (zh)
Other versions
CN104063721B (en)
Inventor
胡卫明 (Hu Weiming)
王浩然 (Wang Haoran)
原春锋 (Yuan Chunfeng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Automation, Chinese Academy of Sciences
Original Assignee
Institute of Automation, Chinese Academy of Sciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation, Chinese Academy of Sciences
Priority to CN201410319126.5A priority Critical patent/CN104063721B/en
Publication of CN104063721A publication Critical patent/CN104063721A/en
Application granted granted Critical
Publication of CN104063721B publication Critical patent/CN104063721B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an efficient human behavior recognition method based on automatic semantic feature learning and screening. The method comprises: detecting spatio-temporal interest points in a motion video and extracting the motion and appearance information around them; designing, on the basis of the interest-point features, bottom-level features that encode spatio-temporal context, describing all interest-point features in a local region and recording the relative spatio-temporal positions of the interest points; automatically generating high-level semantic features from the bottom-level features with a non-negative matrix factorization algorithm based on a graph model; and selecting the representative and discriminative high-level semantics of each behavior category by building a group-sparse model based on the L2,1 norm, so that after optimizing the model only the most representative semantic features of each category are retained and the classifier is trained only with semantic features from the same behavior category. The method greatly improves the intelligence level of human behavior recognition.

Description

A human behavior recognition method based on automatic semantic feature learning and screening
Technical field
The present invention relates to the field of computer application technology, and in particular to a behavior recognition method based on automatic semantic feature learning and screening.
Background technology
Vision is an important channel through which humans observe and understand the world. As computer processing power keeps improving, we hope that computers can acquire part of the human visual capability, helping or even replacing the human eye and brain in observing and perceiving external things. With the improvement of computer hardware and the emergence of computer vision technology, this expectation is becoming a reality.
The goal of video-based human behavior analysis is to understand and identify individual actions, interactions between people, and interactions between people and their surroundings. Using computer technology, it realizes video-based human detection, human tracking, and the understanding of human behavior with little or no human intervention. Although this is a simple instinctive task for the human cognitive system, it is very challenging for a computer system: given the complexity of the surrounding environment and the differences in people's figures and movement habits, accurately understanding and analyzing human behavior in video remains difficult.
Traditional human behavior recognition methods mainly adopt low-level video features, such as appearance features, shape features, optical-flow features, and spatio-temporal interest point features. Among them, the combination of spatio-temporal interest point features with the bag-of-words model is the most popular: the model is simple yet achieves high recognition accuracy, is robust to noise, occlusion, and deformation, and does not require target tracking.
In "X. Burgos, P. Dollar, D. Lin, D. Anderson, P. Perona, Social behavior recognition in continuous video, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2012" (reference 1), normalized pixel intensity, gradient, and optical-flow features are used to describe the region around each spatio-temporal interest point and thereby the motion behavior, with good recognition results on several motion datasets; the gradient feature performs best. The method concatenates the feature vectors extracted from the sub-regions around each interest point into a single histogram feature, but is sensitive to changes in external factors such as illumination. "I. Laptev, T. Lindeberg, Local descriptors for spatio-temporal recognition, Spatial Coherence for Visual Motion Analysis, 2006" (reference 2) tries several partitions and feature combinations of the interest-point neighborhood to improve recognition accuracy; the combination of optical flow and gradient achieves the best result. "A. Klaser, M. Marszalek, C. Schmid, A spatio-temporal descriptor based on 3D-gradients, in: Proceedings of the British Machine Vision Conference" (reference 3) builds a stable and computationally simple three-dimensional spatio-temporal feature that quantizes the spatial gradient into 20 directions using a regular polyhedron; it still describes local interest-point features with histograms of gradient orientation.
In recent years it has been found that traditional low-level features have significant limitations in describing motion behavior: they cannot effectively describe the temporal and spatial information of the moving target. Researchers have therefore tried to build mid-level and high-level semantic features on top of low-level features to describe motion behavior more accurately. In "J. Liu, M. Shah, B. Kuipers, S. Savarese, Cross-view action recognition via view knowledge transfer, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3209-3216, 2011" (reference 4), mutual-information maximization is used to learn a compact mid-level dictionary: visual words with similar distributions are merged into one word, and spatio-temporal pyramid matching is used to mine temporal information. In "J. Liu, Y. Yang, I. Saleemi, M. Shah, Learning semantic features for action recognition via diffusion map, Computer Vision and Image Understanding, Vol. 116, No. 3, pp. 361-377, 2012" (reference 5), diffusion maps are used to automatically learn a high-level semantic vocabulary from a large number of mid-level features, each represented as a mutual-information vector. However, each vocabulary entry produced by the algorithm corresponds to multiple categories, so the vocabulary lacks generality, which limits the practical application of the algorithm.
Compared with traditional low-level features, high-level semantic features can describe the temporal and spatial attributes of motion behavior more accurately. They still have shortcomings, however: most learned high-level attributes are built on low-level features, and the low-level features extracted from video contain not only foreground features but also a large number of background features, which degrade the discriminability of the automatically learned high-level semantic attributes.
Summary of the invention
(1) Technical problem to be solved
The object of the invention is to overcome the deficiency of existing behavior recognition methods in high-level semantic learning, and to propose a behavior recognition method based on automatic semantic feature learning and screening.
(2) Technical solution
The present invention builds spatio-temporal context features on the basis of traditional interest-point features, generates high-level semantic features with a non-negative matrix factorization algorithm based on a graph model, and designs a group-sparse high-level feature screening algorithm to extract the high-level semantic features that are representative of each behavior category.
The human behavior recognition method based on automatic semantic feature learning and screening proposed by the invention comprises:
Step S1: detect spatio-temporal interest points in the video;
Step S2: extract low-level video features from the region around the spatio-temporal interest points;
Step S3: build spatio-temporal context features from the low-level video features;
Step S4: generate high-level semantic features from the low-level video features with a non-negative matrix factorization algorithm based on a graph model;
Step S5: on the basis of the high-level semantic features, screen out the representative and discriminative high-level semantics with a group-sparse model based on the L2,1 norm;
Step S6: train a classifier with the screened high-level semantic features and use the trained classifier to classify videos.
In one embodiment, step S2 comprises: extracting the appearance features of the region around each spatio-temporal interest point with a gradient histogram, and extracting the motion features of that region with an optical-flow histogram.
In one embodiment, step S3 comprises: taking a single spatio-temporal interest point as the center, finding its N nearest neighboring interest points; designing a spatio-temporal context feature that simultaneously describes the N+1 interest-point features in the local region and the relative positions among them; and constraining the neighboring interest-point features with a weight vector such that neighbors closer to the center point receive larger weights.
In one embodiment, step S4 comprises: decomposing each sample into a linear combination of a group of basis vectors with graph-regularized non-negative matrix factorization, all combination coefficients being positive; using this algorithm, human motion behavior is decomposed into a parts-based representation under which similar behaviors remain similar.
In one embodiment, step S5 comprises: adopting a group-sparse model that combines matrix and vector terms, encouraging motion behaviors of the same class to be reconstructed by similar semantic features; retaining the representative semantic features of each behavior category and suppressing features that appear in only a few samples of a class; and after optimization, reconstructing test samples only with semantic features from the same behavior category.
(3) Beneficial effects
The present invention builds stable low-level features by designing spatio-temporal context features, learns more descriptive high-level semantic features from them with a graph-regularized non-negative matrix factorization algorithm, then screens out the strongly representative and discriminative high-level semantics of each behavior category with a group-sparse method, and classifies with the screened semantic information. This high-level semantic approach better learns the essential attributes of different behavior categories and achieves better recognition results.
Brief description of the drawings
Fig. 1 is the flow chart of the human behavior recognition method of the present invention;
Fig. 2A and Fig. 2B are schematic diagrams of high-level semantic features in an embodiment of the present invention.
Embodiment
To make the object, technical solution, and advantages of the present invention clearer, the invention is described in further detail below with reference to specific embodiments and the accompanying drawings.
Fig. 1 is the flow chart of the human behavior recognition method of the present invention. As shown in Fig. 1, the method comprises the following steps:
Step S1: detect spatio-temporal interest points in the video.
Spatio-temporal interest points are key points in the video volume obtained by three-dimensional corner detection or filtering. In the present invention they are detected by applying Gaussian filtering in the spatial domain and Gabor filtering in the temporal domain, and taking the key points of the filtered video.
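The combination of spatial Gaussian smoothing and temporal Gabor filtering described above matches the well-known periodic-motion response function; a minimal NumPy sketch is given below, where the helper names, filter widths, and the carrier frequency are illustrative assumptions rather than values taken from the patent:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """1-D Gaussian kernel, normalized to sum to one."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def temporal_gabor(tau, radius):
    """Even/odd 1-D Gabor pair used as the temporal filter."""
    t = np.arange(-radius, radius + 1)
    omega = 4.0 / tau                      # common choice of carrier frequency
    env = np.exp(-t**2 / tau**2)
    return np.cos(2 * np.pi * omega * t) * env, np.sin(2 * np.pi * omega * t) * env

def stip_response(video, sigma=2.0, tau=3.0):
    """video: (T, H, W) array. Returns the per-voxel interest response
    R = (I*g*h_ev)^2 + (I*g*h_od)^2 (spatial Gaussian, temporal Gabor)."""
    conv = lambda arr, k, ax: np.apply_along_axis(
        lambda r: np.convolve(r, k, mode='same'), ax, arr)
    g = gaussian_kernel(sigma, int(3 * sigma))
    sm = conv(conv(video.astype(float), g, 1), g, 2)   # smooth along H, then W
    h_ev, h_od = temporal_gabor(tau, int(2 * tau))
    return conv(sm, h_ev, 0)**2 + conv(sm, h_od, 0)**2  # filter along T
```

Interest points would then be taken as local maxima of the returned response volume.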
Step S2: extract low-level video features from the region around the spatio-temporal interest points.
"Around" refers to a cuboid region centered at the position of the interest point. The low-level video features extracted in the present invention are the motion and appearance features that characterize this region.
In an embodiment, multi-scale spatio-temporal interest points can be extracted; for example, an optical-flow histogram and a gradient histogram are used to describe, respectively, the motion and appearance features of the region around each interest point.
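As a sketch of the appearance descriptor (the gradient histogram of an interest-point cuboid), assuming a simple magnitude-weighted orientation quantization; the optical-flow histogram would be built the same way from flow vectors instead of image gradients:

```python
import numpy as np

def gradient_orientation_hist(cuboid, n_bins=8):
    """Per-pixel spatial gradient orientation, quantized into n_bins and
    weighted by gradient magnitude, pooled over the whole (T, H, W) cuboid
    around an interest point. n_bins=8 is an illustrative choice."""
    gy, gx = np.gradient(cuboid.astype(float), axis=(1, 2))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)          # orientation in [0, 2pi)
    bins = np.minimum((ang / (2 * np.pi) * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    total = hist.sum()
    return hist / total if total > 0 else hist           # L1-normalized histogram
```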
Step S3: build spatio-temporal context features from the low-level video features.
A spatio-temporal context feature is a global feature formed jointly by several neighboring spatio-temporal interest points, and thus embodies more contextual information.
Step S3 comprises: taking a single spatio-temporal interest point as the center, computing its N nearest neighboring interest points, and then designing a spatio-temporal context feature that simultaneously describes the N+1 interest-point features in the local region and the relative positions among them.
Meanwhile, the neighboring interest-point features are constrained with a weight vector such that neighbors closer to the center point receive larger weights. In this way, for any spatio-temporal interest point extracted from a motion video, its feature and the spatial positions of its neighbors can be obtained.
According to the Euclidean distance in three-dimensional spatio-temporal coordinates, the local visual-word context feature of the region is computed as:

$$F_p = [h_1, h_2, \ldots, h_K]^T \qquad (1)$$

$$h_i = \begin{cases} 1, & \text{if } \mathrm{label}(p) = i \\ \displaystyle\sum_{j=1}^{N} \frac{\beta \cdot \delta(\mathrm{label}(q_j) - i)}{D_\sigma(p, q_j)}, & \text{otherwise} \end{cases} \qquad (2)$$

where label(p) denotes the visual-word label of interest point p, the q_j are its neighboring interest points, δ(·) is the indicator function, and D_σ(p, q_j) is the spatio-temporal distance between p and q_j. By rebuilding the bag-of-words model on these local visual-word context features, each behavior video is expressed as a vector based on low-level features.
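Equations (1)-(2) can be sketched as follows; the function name and the choice of a plain Euclidean distance for D_σ are illustrative assumptions:

```python
import numpy as np

def context_feature(p_idx, labels, coords, n_words, N=4, beta=1.0):
    """labels[i] is the visual-word label of interest point i, coords[i] its
    (x, y, t) position. Returns the n_words-dimensional context feature F_p
    of the point at p_idx, built from its N nearest neighbours."""
    d = np.linalg.norm(coords - coords[p_idx], axis=1)
    d[p_idx] = np.inf                       # exclude the centre point itself
    nbrs = np.argsort(d)[:N]                # N nearest neighbouring points
    F = np.zeros(n_words)
    for i in range(n_words):
        if labels[p_idx] == i:
            F[i] = 1.0                      # the centre point's own word
        else:                               # nearer neighbours weigh more
            F[i] = sum(beta / d[j] for j in nbrs if labels[j] == i)
    return F
```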
Step S4: generate high-level semantic features from the low-level video features with a non-negative matrix factorization algorithm based on a graph model.
A "high-level semantic feature" is a high-level feature that embodies semantic information, in contrast to traditional low-level features.
The graph-regularized non-negative matrix factorization algorithm decomposes each sample into a linear combination of a group of basis vectors, all combination coefficients being positive. Applying it to human motion behavior yields a parts-based representation under which similar behaviors remain similar.
Let y_j^i ∈ R^d, i = 1, …, C, j = 1, …, n_i, denote the d-dimensional low-level feature representation of the j-th video sample in behavior category i. All video feature vectors of category i form a matrix Y_i ∈ R^{d×n_i}. The graph-regularized non-negative matrix factorization minimizes the following objective function:

$$Q = \|Y_i - UV^T\|_F^2 + \lambda\,\mathrm{Trace}(V^T L V) \qquad (3)$$

where U ∈ R^{d×K} and V ∈ R^{n_i×K} are two non-negative matrices, L = D − W is the graph Laplacian, and W is a symmetric non-negative similarity matrix with heat-kernel weights. D is a diagonal matrix whose diagonal elements are the corresponding column sums of W (or row sums, since W is symmetric). Each column vector of matrix U is regarded as one behavior unit.
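A graph-regularized NMF of the kind used in equation (3) can be sketched with the standard multiplicative update rules (the update form follows the well-known GNMF algorithm; K, lam, and n_iter are placeholder hyper-parameters, not values from the patent):

```python
import numpy as np

def gnmf(Y, K, W, lam=0.1, n_iter=200, eps=1e-9, seed=0):
    """Minimize ||Y - U V^T||_F^2 + lam * Tr(V^T L V) with L = D - W,
    keeping U and V non-negative. Each column of U is one learned
    behaviour unit in the sense of the text above."""
    rng = np.random.default_rng(seed)
    d, n = Y.shape
    U = rng.random((d, K))
    V = rng.random((n, K))
    D = np.diag(W.sum(axis=1))                     # degree matrix of the graph
    for _ in range(n_iter):                        # multiplicative updates
        U *= (Y @ V) / (U @ (V.T @ V) + eps)
        V *= (Y.T @ U + lam * (W @ V)) / (V @ (U.T @ U) + lam * (D @ V) + eps)
    return U, V
```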
Fig. 2A and Fig. 2B are schematic diagrams of high-level semantic features in an embodiment of the present invention. As shown in Figs. 2A and 2B, the behavior "walking" is composed of the motion of the trunk and the motion of the limbs, while the behavior "waving" is composed of the motion of the two arms.
Step S5: on the basis of the high-level semantic features, screen out the representative and discriminative high-level semantics with a group-sparse model based on the L2,1 norm.
In one embodiment, a vector-based group-sparse algorithm is built on top of an existing matrix group-sparse algorithm. The combined matrix-and-vector group-sparse model encourages motion behaviors of the same class to be reconstructed by similar semantic features and suppresses features that appear in only a few samples of a class. By optimizing the model, the representative semantic features of each behavior category are retained, and after optimization test samples are reconstructed only with semantic features from the same behavior category.
For a vector b = [b_1, b_2, …, b_M]^T whose elements are divided into G groups, the g-th group containing m_g elements, i.e. b = [b_{11}, …, b_{1m_1}, …, b_{g1}, …, b_{gm_g}, …, b_{G1}, …, b_{Gm_G}]^T, the L2,1 norm of b is defined as:

$$\|b\|_{2,1} = \sum_{g=1}^{G} \Big( \sum_{k=1}^{m_g} b_{gk}^2 \Big)^{1/2} \qquad (4)$$
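Equation (4) is straightforward to compute; a small sketch with an assumed list-of-index-arrays grouping interface:

```python
import numpy as np

def l21_norm(b, groups):
    """groups is a list of index arrays partitioning b; the L2,1 norm is
    the sum of the Euclidean norms of the groups."""
    return sum(np.linalg.norm(b[g]) for g in groups)
```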
Suppose the i-th behavior category contains m_i behavior units. Initialize the dictionary B = [B_1, B_2, …, B_C], where b_{ij} denotes the j-th behavior unit of class i. The proposed behavior-unit selection sparse model is:

$$\min_{B, X^i} \sum_{i=1}^{C} \|Y_i - B X^i\|_F^2 + \gamma_1 \sum_k \|X^i_{\cdot k}\|_{2,1} + \gamma_2 \|X^i\|_{2,1} \qquad (5)$$

where ‖·‖_F denotes the Frobenius norm of a matrix and ‖·‖_{2,1} the L2,1 norm. The term Σ_k ‖X^i_{·k}‖_{2,1} penalizes each group of elements of every column vector X^i_{·k} as a whole, encouraging each motion behavior to be reconstructed by behavior units of its own class. The term ‖X^i‖_{2,1} penalizes each row of X^i as a whole, realizing row-wise sparsity and encouraging motion behaviors of the same class to be reconstructed by similar behavior units. γ_1 and γ_2 are regularization coefficients.
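The objective value of equation (5) can be evaluated as follows. This is a sketch only: it computes the objective for given B and X^i, while the actual optimization alternates between sparse coding and dictionary update, which is not shown here; the grouping interface is an assumption:

```python
import numpy as np

def row_l21(X):
    """Matrix L2,1 norm: sum of the Euclidean norms of the rows."""
    return np.linalg.norm(X, axis=1).sum()

def selection_objective(Ys, B, Xs, groups, g1=0.1, g2=0.1):
    """Ys[i] is the class-i feature matrix, Xs[i] its coefficient matrix over
    dictionary B, and groups is a list of index arrays mapping dictionary
    atoms (rows of X) to their behaviour-class groups. The first penalty
    treats each column's within-group entries as a whole; the second
    enforces row sparsity across atoms."""
    total = 0.0
    for Y, X in zip(Ys, Xs):
        total += np.linalg.norm(Y - B @ X) ** 2        # reconstruction term
        for k in range(X.shape[1]):                    # gamma_1 column-group term
            total += g1 * sum(np.linalg.norm(X[g, k]) for g in groups)
        total += g2 * row_l21(X)                       # gamma_2 row term
    return total
```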
Step S6: train a classifier with the screened high-level semantic features and use the trained classifier to classify video samples.
In a specific implementation, we train a classification model in which φ(y_t, B) is the sparse model, B is the dictionary of the model, f(x_t) = f(φ(y_t, B)) is the prediction model, l_t is the class label of behavior video y_t, L(·,·) is the classification loss function, and P is the number of training samples:

$$\min_B E = \min_B \sum_{t=1}^{P} L\big(l_t, f(\varphi(y_t, B))\big) \qquad (6)$$

The dictionary is optimized iteratively in two alternating steps: with the dictionary B known, solve for the sparse representation of the samples; with the sparse representation of the samples known, update the dictionary.
This training process learns the representative and highly discriminative high-level semantic features of each behavior category. These features are used to train a classifier (e.g., an SVM) and obtain its parameters; the trained SVM model then classifies the test videos and outputs the classification results.
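The patent trains an SVM on the screened semantic features; to keep the example self-contained, the sketch below substitutes a one-vs-rest ridge-regression classifier, which is an assumption and not the patented classifier:

```python
import numpy as np

def train_ovr_ridge(F, y, n_classes, reg=1e-3):
    """Stand-in for the SVM in the text: one-vs-rest ridge regression on
    the screened semantic features F (n_samples x n_features)."""
    T = np.eye(n_classes)[np.asarray(y)]          # one-hot class targets
    A = F.T @ F + reg * np.eye(F.shape[1])        # regularized normal equations
    return np.linalg.solve(A, F.T @ T)            # weight matrix

def predict(F, Wgt):
    """Assign each sample to the class with the highest score."""
    return np.argmax(F @ Wgt, axis=1)
```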
A specific embodiment is described below.
As shown in Fig. 2A, for the motion behavior "walking", traditional low-level feature extraction can only accumulate a single histogram of features such as gradients and optical flow to represent the behavior. This ignores the local motions that the whole behavior comprises, so its ability to distinguish different motion behaviors is weak. The proposed high-level semantic features analyze the complete motion behavior through its different local motions, such as "motion of the trunk" and "motion of the limbs", and thus have stronger descriptive power than traditional low-level features.
The specific embodiments described above further explain the object, technical solution, and beneficial effects of the present invention. It should be understood that they are only specific embodiments of the invention and do not limit it; any modification, equivalent substitution, or improvement made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (5)

1. A human behavior recognition method based on automatic semantic feature learning and screening, characterized in that the method comprises:
Step S1: detect spatio-temporal interest points in the video;
Step S2: extract low-level video features from the region around the spatio-temporal interest points;
Step S3: build spatio-temporal context features from the low-level video features;
Step S4: generate high-level semantic features from the low-level video features with a non-negative matrix factorization algorithm based on a graph model;
Step S5: on the basis of the high-level semantic features, screen out the representative and discriminative high-level semantics with a group-sparse model based on the L2,1 norm;
Step S6: train a classifier with the screened high-level semantic features and use the trained classifier to classify videos.
2. The human behavior recognition method based on automatic semantic feature learning and screening according to claim 1, characterized in that step S2 comprises:
extracting the appearance features of the region around each spatio-temporal interest point with a gradient histogram, and extracting the motion features of that region with an optical-flow histogram.
3. The human behavior recognition method based on automatic semantic feature learning and screening according to claim 1, characterized in that step S3 comprises:
taking a single spatio-temporal interest point as the center, finding its N nearest neighboring interest points; designing a spatio-temporal context feature that simultaneously describes the N+1 interest-point features in the local region and the relative positions among them; and constraining the neighboring interest-point features with a weight vector such that neighbors closer to the center point receive larger weights.
4. The human behavior recognition method based on automatic semantic feature learning and screening according to claim 1, characterized in that step S4 comprises:
decomposing each sample into a linear combination of a group of basis vectors with graph-regularized non-negative matrix factorization, all combination coefficients being positive; using this algorithm to decompose human motion behavior into a parts-based representation under which similar behaviors remain similar.
5. The human behavior recognition method based on automatic semantic feature learning and screening according to claim 1, characterized in that step S5 comprises:
adopting a group-sparse model that combines matrix and vector terms, encouraging motion behaviors of the same class to be reconstructed by similar semantic features; retaining the representative semantic features of each behavior category and suppressing features that appear in only a few samples of a class; and after optimization, reconstructing test samples only with semantic features from the same behavior category.
CN201410319126.5A 2014-07-04 2014-07-04 Human behavior recognition method based on automatic semantic feature learning and screening Active CN104063721B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410319126.5A CN104063721B (en) 2014-07-04 2014-07-04 Human behavior recognition method based on automatic semantic feature learning and screening


Publications (2)

Publication Number Publication Date
CN104063721A true CN104063721A (en) 2014-09-24
CN104063721B CN104063721B (en) 2017-06-16

Family

ID=51551423

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410319126.5A Active CN104063721B (en) Human behavior recognition method based on automatic semantic feature learning and screening

Country Status (1)

Country Link
CN (1) CN104063721B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1850271B1 (en) * 2003-01-29 2009-09-09 Sony Deutschland Gmbh Method for video mode classification
CN102324031A (en) * 2011-09-07 2012-01-18 江西财经大学 Latent semantic feature extraction method in aged user multi-biometric identity authentication
CN102393910A (en) * 2011-06-29 2012-03-28 浙江工业大学 Human behavior identification method based on non-negative matrix decomposition and hidden Markov model
CN103077535A (en) * 2012-12-31 2013-05-01 中国科学院自动化研究所 Target tracking method on basis of multitask combined sparse representation
CN103150579A (en) * 2013-02-25 2013-06-12 东华大学 Abnormal human behavior detecting method based on video sequence


Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104881655A (en) * 2015-06-03 2015-09-02 东南大学 Human behavior recognition method based on multi-feature time-space relationship fusion
CN104881655B (en) * 2015-06-03 2018-08-28 东南大学 A kind of human behavior recognition methods based on the fusion of multiple features time-space relationship
CN106529467A (en) * 2016-11-07 2017-03-22 南京邮电大学 Group behavior identification method based on multi-feature fusion
CN109508698B (en) * 2018-12-19 2023-01-10 中山大学 Human behavior recognition method based on binary tree
CN109508698A (en) * 2018-12-19 2019-03-22 中山大学 A kind of Human bodys' response method based on binary tree
CN111861275A (en) * 2020-08-03 2020-10-30 河北冀联人力资源服务集团有限公司 Method and device for identifying household working mode
CN111861275B (en) * 2020-08-03 2024-04-02 河北冀联人力资源服务集团有限公司 Household work mode identification method and device
CN112347879A (en) * 2020-10-27 2021-02-09 中国搜索信息科技股份有限公司 Theme mining and behavior analysis method for video moving target
CN112347879B (en) * 2020-10-27 2021-06-29 中国搜索信息科技股份有限公司 Theme mining and behavior analysis method for video moving target
CN112560817A (en) * 2021-02-22 2021-03-26 西南交通大学 Human body action recognition method and device, electronic equipment and storage medium
CN113590971A (en) * 2021-08-13 2021-11-02 浙江大学 Interest point recommendation method and system based on brain-like space-time perception characterization
CN113590971B (en) * 2021-08-13 2023-11-07 浙江大学 Interest point recommendation method and system based on brain-like space-time perception characterization
CN117676187A (en) * 2023-04-18 2024-03-08 德联易控科技(北京)有限公司 Video data processing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN104063721B (en) 2017-06-16

Similar Documents

Publication Publication Date Title
CN108875674B (en) Driver behavior identification method based on multi-column fusion convolutional neural network
CN104063721A (en) Human behavior recognition method based on automatic semantic feature study and screening
CN107679526B (en) Human face micro-expression recognition method
Hariharan et al. Object instance segmentation and fine-grained localization using hypercolumns
CN101894276B (en) Training method of human action recognition and recognition method
CN110399821B (en) Customer satisfaction acquisition method based on facial expression recognition
CN105574510A (en) Gait identification method and device
CN105160310A (en) 3D (three-dimensional) convolutional neural network based human body behavior recognition method
CN109815826A (en) The generation method and device of face character model
Wan et al. Action recognition based on two-stream convolutional networks with long-short-term spatiotemporal features
CN104298974A (en) Human body behavior recognition method based on depth video sequence
CN104616316A (en) Method for recognizing human behavior based on threshold matrix and characteristics-fused visual word
CN112307995A (en) Semi-supervised pedestrian re-identification method based on feature decoupling learning
CN104021384B (en) A kind of face identification method and device
CN110503000B (en) Teaching head-up rate measuring method based on face recognition technology
Bu Human motion gesture recognition algorithm in video based on convolutional neural features of training images
CN111738355A (en) Image classification method and device with attention fused with mutual information and storage medium
CN104966052A (en) Attributive characteristic representation-based group behavior identification method
CN104598889A (en) Human action recognition method and device
CN105868711B (en) Sparse low-rank-based human behavior identification method
CN103577804B (en) Based on SIFT stream and crowd's Deviant Behavior recognition methods of hidden conditional random fields
Sheeba et al. Hybrid features-enabled dragon deep belief neural network for activity recognition
CN103745242A (en) Cross-equipment biometric feature recognition method
CN114782979A (en) Training method and device for pedestrian re-recognition model, storage medium and terminal
Sheng et al. Action recognition using direction-dependent feature pairs and non-negative low rank sparse model

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant