CN105574494B - Multi-classifier gesture recognition method and device - Google Patents


Info

Publication number
CN105574494B
CN105574494B (application CN201510920778.9A)
Authority
CN
China
Prior art keywords
classifier
classifiers
similarity
histogram
recognized
Prior art date
Legal status
Active
Application number
CN201510920778.9A
Other languages
Chinese (zh)
Other versions
CN105574494A (en)
Inventor
Wang Guijin (王贵锦)
He Li (何礼)
Chen Xinghao (陈醒濠)
Current Assignee
Tsinghua University
Original Assignee
Tsinghua University
Priority date
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN201510920778.9A
Publication of CN105574494A
Application granted
Publication of CN105574494B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V 40/103: Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/243: Classification techniques relating to the number of classes
    • G06F 18/24317: Piecewise classification, i.e. whereby each classification requires several discriminant rules

Abstract

The invention discloses a multi-classifier gesture recognition method and device. The method comprises the following steps: obtaining the distribution centers of all sample contour-point features with a K-means clustering algorithm and projecting them to obtain a first histogram, then obtaining a second histogram from the first histogram and the contour-point features of the image to be recognized; calculating the similarity between the second histogram and the histogram corresponding to each classifier in the sample library, and selecting N classifiers according to a similarity threshold; and obtaining a posture detection function from the posture models of the N classifiers and the weight of each of the N classifiers, the posture detection function being the function corresponding to the posture of the image to be recognized. The invention effectively controls the complexity of each sub-model and aggregates samples of similar appearance, thereby improving the effectiveness of model learning, supporting training on massive data sets, and improving the performance of the gesture recognition method.

Description

Multi-classifier gesture recognition method and device
Technical Field
The invention relates to the field of human-computer interaction, in particular to a multi-classifier gesture recognition method and device.
Background
Gesture recognition is one of the key technologies of human-computer interaction. Current approaches are mainly based on machine learning; for example, a component recognition method first recognizes the various parts of a human body, such as the limbs and the head, and then connects these components to form a human posture. A large number of training samples is usually required to ensure the performance of such machine learning methods.
Currently, a single classifier is used for large-scale training, which not only requires a large amount of training resources (such as memory and training time), but also makes it difficult to guarantee the performance of the trained classifier.
Disclosure of Invention
Because the current single-classifier gesture recognition method requires a large amount of training resources and cannot easily guarantee the performance of the trained classifier, the invention provides a multi-classifier gesture recognition method and device.
In a first aspect, the present invention provides a multi-classifier gesture recognition method, including:
s1, obtaining distribution centers of all sample contour point features according to a K-means clustering algorithm, projecting the distribution centers to obtain a first histogram, and obtaining a second histogram corresponding to the image to be recognized according to the first histogram and the contour point features of the image to be recognized;
s2, calculating the similarity between the second histogram and the corresponding histogram of each classifier in the sample library, sorting all the classifiers according to the similarity from large to small, and acquiring the first N classifiers in the sorted classifiers according to a similarity threshold, wherein N is an integer greater than 0;
s3, obtaining a posture detection function according to the posture models of the first N classifiers and the weight of each classifier in the first N classifiers; the gesture detection function is a function corresponding to the gesture of the image to be recognized.
Preferably, step S1 is preceded by:
S0, clustering all the images in the sample library according to their contour-point features to obtain a plurality of classifiers, and processing the histograms corresponding to all the images in each classifier to obtain the histogram corresponding to each classifier.
Preferably, step S1 includes: performing soft projection on the distribution centers.
Preferably, step S2 includes: normalizing the similarities of the histograms corresponding to all the classifiers to obtain the similarity weight of each classifier's histogram.
Preferably, step S3 includes: the gesture detection function is:

p(X | I) = Σ_{k=1}^{N} q(X | c_k, I) · p(c_k | I)    (1)

where c_k represents the kth classifier among the sorted classifiers, I represents the image to be recognized, X represents the posture model, q(X | c_k, I) denotes the pose function of the kth classifier, and p(c_k | I) represents the similarity weight of the kth classifier.
In a second aspect, the present invention further provides a multi-classifier gesture recognition apparatus, including:
the characteristic alignment module is used for obtaining distribution centers of all sample contour point characteristics according to a K-means clustering algorithm, projecting the distribution centers to obtain a first histogram, and obtaining a second histogram corresponding to the image to be recognized according to the first histogram and the contour point characteristics of the image to be recognized;
the similarity calculation module is used for calculating the similarity between the second histogram and the histogram corresponding to each classifier in the sample library, sorting all the classifiers according to the similarity from large to small, and acquiring the first N classifiers in the sorted classifiers according to a similarity threshold, wherein N is an integer greater than 0;
the gesture recognition module is used for obtaining a gesture detection function according to the gesture models of the first N classifiers and the weight of each classifier in the first N classifiers; the gesture detection function is a function corresponding to the gesture of the image to be recognized.
Preferably, the apparatus further includes:
a classifier histogram acquisition module, configured to cluster all the images in the sample library according to their contour-point features to obtain a plurality of classifiers, and to process the histograms corresponding to all the images in each classifier to obtain the histogram corresponding to each classifier.
Preferably, the feature alignment module is further configured to soft project the distribution center.
Preferably, the similarity calculation module is further configured to perform normalization processing on the similarities of the histograms corresponding to all the classifiers to obtain similarity weights of the histograms corresponding to all the classifiers.
Preferably, the gesture detection function in the gesture recognition module is:

p(X | I) = Σ_{k=1}^{N} q(X | c_k, I) · p(c_k | I)    (1)

where c_k represents the kth classifier among the sorted classifiers, I represents the image to be recognized, X represents the posture model, q(X | c_k, I) denotes the pose function of the kth classifier, and p(c_k | I) represents the similarity weight of the kth classifier.
According to the technical scheme, training samples are clustered using aligned features that describe the human posture contour. This effectively controls the complexity of each sub-model and aggregates samples of similar appearance, thereby improving the effectiveness of model learning, supporting training on massive data sets, and effectively improving the performance of the posture recognition method.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart illustrating a multi-classifier gesture recognition method according to an embodiment of the present invention;
FIG. 2 is a feature alignment method of a multi-classifier gesture recognition method according to an embodiment of the present invention;
FIG. 3 is a gesture inference method of the multi-classifier gesture recognition method according to an embodiment of the present invention;
FIG. 4 is a flowchart of a multi-classifier gesture recognition method according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a multi-classifier gesture recognition apparatus according to an embodiment of the present invention.
Detailed Description
The following further describes embodiments of the invention with reference to the drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
Fig. 1 shows a flowchart of a multi-classifier gesture recognition method provided in this embodiment, which includes:
s1, obtaining distribution centers of all sample contour point features according to a K-means clustering algorithm, projecting the distribution centers to obtain a first histogram, and obtaining a second histogram corresponding to the image to be recognized according to the first histogram and the contour point features of the image to be recognized;
s2, calculating the similarity between the second histogram and the corresponding histogram of each classifier in the sample library, sorting all the classifiers according to the similarity from large to small, and acquiring the first N classifiers in the sorted classifiers according to a similarity threshold, wherein N is an integer greater than 0;
s3, obtaining a posture detection function according to the posture models of the first N classifiers and the weight of each classifier in the first N classifiers; the gesture detection function is a function corresponding to the gesture of the image to be recognized.
In this embodiment, adopting a sample library of multiple classifiers effectively controls the complexity of each sub-model and aggregates samples of similar appearance, thereby improving the effectiveness of model learning, supporting training on massive data sets, and effectively improving the performance of the gesture recognition method.
As a preferable solution of this embodiment, step S1 is preceded by:
S0, clustering all the images in the sample library according to their contour-point features to obtain a plurality of classifiers, and processing the histograms corresponding to all the images in each classifier to obtain the histogram corresponding to each classifier.
By adopting a multi-classifier mode, the complexity of the sub-model can be effectively controlled, and the aggregation of the appearance similarity samples can be realized, so that the effectiveness of model learning is improved; by establishing the histogram of each classifier, the image to be detected can be conveniently and quickly detected to be the most similar classifier.
Further, step S1 includes: performing soft projection on the distribution centers.
Soft projection is a common noise-reduction technique, widely used in the histogram statistics of features such as HOG and SIFT.
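For illustration only, the soft projection described above can be sketched in plain NumPy as follows. The function and variable names are illustrative, not taken from the patent; distances to the codebook bins are assumed to be Euclidean.

```python
import numpy as np

def soft_project(features, codebook, eps=1e-8):
    """Build an aligned feature histogram (BH) by soft-projecting each
    contour-point feature onto its two nearest codebook bins, with
    weights inversely proportional to the feature-to-bin distance."""
    features = np.asarray(features, dtype=float)
    codebook = np.asarray(codebook, dtype=float)
    hist = np.zeros(len(codebook))
    for f in features:
        d = np.linalg.norm(codebook - f, axis=1)  # distance to every bin
        nearest = np.argsort(d)[:2]               # two closest bins
        w = 1.0 / (d[nearest] + eps)              # inverse-distance weights
        hist[nearest] += w / w.sum()              # each point contributes 1
    return hist / max(len(features), 1)           # normalized histogram BH
```

A feature lying exactly on a bin center puts almost all of its weight on that bin; a feature equidistant between two bins splits its weight evenly, which is the noise-reduction effect the description refers to.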
Specifically, step S0 includes: processing the histograms of all the images in each classifier using an averaging method.
Averaging is a simple and effective choice: the histograms of all the images in a classifier are accumulated and averaged to represent the average contour-point features of the images in that classifier. Other processing methods may also be used.
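The clustering of step S0 together with the averaging step can be sketched as follows, using a plain NumPy K-means over per-image aligned histograms. This is an illustrative sketch, not the patent's implementation; names and the distance metric are assumptions.

```python
import numpy as np

def build_classifier_histograms(image_hists, num_classifiers, iters=20, seed=0):
    """Cluster per-image aligned histograms into classifiers (step S0)
    and represent each classifier by the mean histogram of its images."""
    H = np.asarray(image_hists, dtype=float)
    rng = np.random.default_rng(seed)
    centers = H[rng.choice(len(H), num_classifiers, replace=False)].copy()
    for _ in range(iters):
        # assign every image to its nearest cluster center
        d = np.linalg.norm(H[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(num_classifiers):
            members = H[labels == k]
            if len(members):                      # averaging method
                centers[k] = members.mean(axis=0)
    return centers, labels
```

Each returned center is the accumulated-and-averaged histogram representing one classifier, as the paragraph above describes.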
Further, step S2 includes: normalizing the similarities of the histograms corresponding to all the classifiers to obtain the similarity weight of each classifier's histogram.
Normalizing the similarities makes it convenient to set a unified threshold later; this threshold is used to select the classifiers entering the gesture detection function.
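The similarity computation and normalization can be sketched as below, taking similarity inversely proportional to the histogram distance as the description states. The Euclidean distance and the names are illustrative assumptions.

```python
import numpy as np

def similarity_weights(image_hist, classifier_hists, eps=1e-8):
    """Compute normalized weights p(c_k | I): similarity is inversely
    proportional to the histogram distance dst(H(I), H(c_k)), then
    normalized to sum to one so a single threshold T can be applied."""
    d = np.linalg.norm(np.asarray(classifier_hists, dtype=float)
                       - np.asarray(image_hist, dtype=float), axis=1)
    sim = 1.0 / (d + eps)
    return sim / sim.sum()
```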
Further, step S3 includes: the gesture detection function is:

p(X | I) = Σ_{k=1}^{N} q(X | c_k, I) · p(c_k | I)    (1)

where c_k represents the kth classifier among the sorted classifiers, I represents the image to be recognized, X represents the posture model, q(X | c_k, I) denotes the pose function of the kth classifier, and p(c_k | I) represents the similarity weight of the kth classifier.
The gesture detection function jointly considers the pose function of each of the sorted classifiers and its corresponding similarity weight, multiplies the two, and thus selects the pose closest to the image to be recognized more objectively.
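The weighted combination just described can be sketched as follows. The pose-score matrix is a toy stand-in for the sub-models' pose functions q(X | c_k, I); the names are illustrative assumptions.

```python
import numpy as np

def detect_pose(pose_scores, weights):
    """Combine sub-model pose scores q(X | c_k, I) with similarity
    weights p(c_k | I), as in equation (1), and return the best pose."""
    S = np.asarray(pose_scores, dtype=float)   # S[k, x] = q(x | c_k, I)
    w = np.asarray(weights, dtype=float)       # w[k]    = p(c_k | I)
    combined = w @ S                           # sum over classifiers k
    return int(combined.argmax()), combined
```

A sub-model that is very similar to the input image (large weight) thus dominates the final pose decision, which is the intended behavior of the mixture.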
Fig. 2 illustrates the feature alignment method of the multi-classifier gesture recognition method provided in this embodiment. Each contour-point feature describes only the local information near that point; for the whole posture, the feature description is the set of all contour points forming the human body contour. Because the points in this set are unordered, they cannot be used directly to compare the similarity of two human postures. For this purpose, the embodiment builds a codebook of the feature distribution and uses it to align features and compare the similarity of two human postures. B distribution centers (bins) are learned as the codebook by K-means clustering over the contour-point features of all training samples. The learned codebook is then used to align the human contour features. Specifically, each contour point is soft-projected onto the two distribution centers (bins) closest to it, with a projection weight inversely proportional to the distance between the point's feature and the distribution center. Soft projection is a common noise-reduction technique, widely used in the histogram statistics of features such as HOG and SIFT. After alignment, the human contour can be represented by a feature histogram of length B, denoted BH, so the similarity of two contours can be compared directly via the distance between the two histograms.
FIG. 3 illustrates the pose inference method of the multi-classifier gesture recognition method provided by this embodiment. For an image I to be detected, its contour feature H is first computed, feature alignment is completed with the feature alignment method above, and a one-dimensional aligned feature histogram BH is extracted. This histogram is compared for similarity with the distribution center BH(c_k) of each sub-class model to obtain the similarity value p(c_k | I) between the image I and the sub-class model, computed as:

p(c_k | I) ∝ 1 / dst(H(I), H(c_k))

where c_k represents the kth sub-class model. The sub-class models are then sorted by their similarity values, and according to the following similarity accumulation threshold formula:

Σ_{i=1}^{N} p(c_i | I) ≥ T

the top N sub-models are chosen for pose estimation. The final detection function is equation (1).
FIG. 4 is a flow chart of the multi-classifier gesture recognition method provided by this embodiment. For all images in the sample library, three-dimensional shape features are first extracted; aligned features are then extracted with the feature alignment method above; the samples are clustered according to the aligned features; and a model of each sub-class is learned with a machine learning method. During detection, the sample to be detected is compared with each sub-model, and pose detection is performed using the similarity accumulation threshold method, so that the number of sub-models used is adjusted dynamically during detection.
Fig. 5 is a schematic structural diagram of a multi-classifier gesture recognition device provided in this embodiment, including:
the feature alignment module 11 is configured to obtain distribution centers of all sample contour point features according to a K-means clustering algorithm, project the distribution centers to obtain a first histogram, and obtain a second histogram corresponding to an image to be recognized according to the first histogram and the contour point features of the image to be recognized;
the similarity calculation module 12 is configured to calculate a similarity between the second histogram and a histogram corresponding to each classifier in the sample library, sort all the classifiers according to a similarity value from large to small, and obtain the top N classifiers in the sorted classifiers according to a similarity threshold, where N is an integer greater than 0;
a gesture recognition module 13, configured to obtain a gesture detection function according to the gesture models of the first N classifiers and the weight of each classifier in the first N classifiers; the gesture detection function is a function corresponding to the gesture of the image to be recognized.
As a preferable aspect of this embodiment, the apparatus further includes:
a classifier histogram acquisition module, configured to cluster all the images in the sample library according to their contour-point features to obtain a plurality of classifiers, and to process the histograms corresponding to all the images in each classifier to obtain the histogram corresponding to each classifier.
By adopting a multi-classifier mode, the complexity of the sub-model can be effectively controlled, and the aggregation of the appearance similarity samples can be realized, so that the effectiveness of model learning is improved; by establishing the histogram of each classifier, the image to be detected can be conveniently and quickly detected to be the most similar classifier.
Further, the feature alignment module is also configured to soft-project the distribution centers.
Soft projection is a common noise-reduction technique, widely used in the histogram statistics of features such as HOG and SIFT.
Specifically, the classifier histogram acquisition module is further configured to process the histograms of all the images in each classifier using an averaging method.
Averaging is a simple and effective choice: the histograms of all the images in a classifier are accumulated and averaged to represent the average contour-point features of the images in that classifier. Other processing methods may also be used.
Further, the similarity calculation module is further configured to perform normalization processing on the similarities of the histograms corresponding to all the classifiers to obtain similarity weights of the histograms corresponding to all the classifiers.
And carrying out normalization processing on the similarity, so as to conveniently set a unified threshold value in the follow-up process, wherein the threshold value is used for selecting a classifier in the gesture detection function.
Still further, the gesture detection function in the gesture recognition module is formula (1).
The gesture detection function jointly considers the pose function of each of the sorted classifiers and its corresponding similarity weight, multiplies the two, and thus selects the pose closest to the image to be recognized more objectively.
In the description of the present invention, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.

Claims (10)

1. A multi-classifier gesture recognition method, comprising:
s1, obtaining distribution centers of all sample contour point features according to a K-means clustering algorithm, projecting the distribution centers to obtain a first histogram, and obtaining a second histogram corresponding to the image to be recognized according to the first histogram and the contour point features of the image to be recognized;
s2, calculating the similarity between the second histogram and the corresponding histogram of each classifier in the sample library, sorting all the classifiers according to the similarity from large to small, and acquiring the first N classifiers in the sorted classifiers according to a similarity threshold, wherein N is an integer greater than 0;
s3, obtaining a posture detection function according to the posture models of the first N classifiers and the similarity weight of each classifier in the first N classifiers; the gesture detection function is a function corresponding to the gesture of the image to be recognized;
in the gesture detection process, images to be recognized are respectively compared with the sorted classifiers, and gesture detection is carried out according to a similarity accumulation threshold formula so as to dynamically adjust the number of the classifiers in the detection process;
the similarity accumulation threshold formula is as follows:
Σ_{i=1}^{N} p(c_i | I) ≥ T
where p(c_i | I) is the similarity value between the image I to be recognized and the classifier c_i, and T is the similarity threshold used for selecting the classifiers in the gesture detection function.
2. The method according to claim 1, wherein step S1 is preceded by:
and S0, clustering all the images in the sample library according to the contour point characteristics to obtain a plurality of classifiers, and processing the histograms corresponding to all the images in each classifier to obtain the histogram corresponding to each classifier.
3. The method according to claim 2, wherein step S1 includes: and performing soft projection on the distribution center.
4. The method according to claim 3, wherein step S2 includes: performing normalization processing on the similarities of the histograms corresponding to all the classifiers to obtain the similarity weights of the histograms corresponding to all the classifiers.
5. The method of claim 4, wherein the gesture detection function is:
p(X | I) = Σ_{k=1}^{N} q(X | c_k, I) · p(c_k | I)    (1)

wherein c_k represents the kth classifier among the sorted classifiers, I represents the image to be recognized, X represents the pose model, q(X | c_k, I) denotes the pose function of the kth classifier, and p(c_k | I), the similarity between the image I to be recognized and the classifier c_k, represents the similarity weight of the kth classifier.
6. A multi-classifier gesture recognition apparatus, comprising:
the characteristic alignment module is used for obtaining distribution centers of all sample contour point characteristics according to a K-means clustering algorithm, projecting the distribution centers to obtain a first histogram, and obtaining a second histogram corresponding to the image to be recognized according to the first histogram and the contour point characteristics of the image to be recognized;
the similarity calculation module is used for calculating the similarity between the second histogram and the histogram corresponding to each classifier in the sample library, sorting all the classifiers according to the similarity from large to small, and acquiring the first N classifiers in the sorted classifiers according to a similarity threshold, wherein N is an integer greater than 0;
the gesture recognition module is used for obtaining a gesture detection function according to the gesture models of the first N classifiers and the similarity weight of each classifier in the first N classifiers; the gesture detection function is a function corresponding to the gesture of the image to be recognized;
in the gesture detection process, images to be recognized are respectively compared with the sorted classifiers, and gesture detection is carried out according to a similarity accumulation threshold formula so as to dynamically adjust the number of the classifiers in the detection process;
the similarity accumulation threshold formula is as follows:
Σ_{i=1}^{N} p(c_i | I) ≥ T
where p(c_i | I) is the similarity value between the image I to be recognized and the classifier c_i, and T is the similarity threshold used for selecting the classifiers in the gesture detection function.
7. The apparatus of claim 6, further comprising:
and the classifier histogram acquisition module is used for clustering all the images in the sample base according to the contour point characteristics to obtain a plurality of classifiers, and processing the histograms corresponding to all the images in each classifier to obtain the histogram corresponding to each classifier.
8. The apparatus of claim 7, wherein the feature alignment module is further configured to soft project the distribution center.
9. The apparatus according to claim 8, wherein the similarity calculation module is further configured to perform normalization processing on the similarities of the histograms corresponding to all the classifiers to obtain similarity weights of the histograms corresponding to all the classifiers.
10. The apparatus of claim 9, wherein the gesture detection function in the gesture recognition module is:

p(X | I) = Σ_{k=1}^{N} q(X | c_k, I) · p(c_k | I)    (1)

wherein c_k represents the kth classifier among the sorted classifiers, I represents the image to be recognized, X represents the pose model, q(X | c_k, I) denotes the pose function of the kth classifier, and p(c_k | I), the similarity between the image I to be recognized and the classifier c_k, represents the similarity weight of the kth classifier.
CN201510920778.9A 2015-12-11 2015-12-11 Multi-classifier gesture recognition method and device Active CN105574494B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510920778.9A CN105574494B (en) 2015-12-11 2015-12-11 Multi-classifier gesture recognition method and device


Publications (2)

Publication Number Publication Date
CN105574494A CN105574494A (en) 2016-05-11
CN105574494B true CN105574494B (en) 2020-01-17

Family

ID=55884602

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510920778.9A Active CN105574494B (en) 2015-12-11 2015-12-11 Multi-classifier gesture recognition method and device

Country Status (1)

Country Link
CN (1) CN105574494B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108133224B (en) * 2016-12-01 2021-11-16 富士通株式会社 Method for evaluating complexity of classification task
CN107766822A (en) * 2017-10-23 2018-03-06 平安科技(深圳)有限公司 Electronic installation, facial image cluster seeking method and computer-readable recording medium
CN110765954A (en) * 2019-10-24 2020-02-07 浙江大华技术股份有限公司 Vehicle weight recognition method, equipment and storage device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103246884A (en) * 2013-05-22 2013-08-14 清华大学 Real-time human body action recognizing method and device based on depth image sequence
CN103745218A (en) * 2014-01-26 2014-04-23 清华大学 Gesture identification method and device in depth image


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Recovering 3D Human Pose from Monocular Images; Ankur Agarwal et al.; IEEE Transactions on Pattern Analysis and Machine Intelligence; January 2006; vol. 28, no. 1; pp. 44-58 *
Human Pose Estimation Based on Pose Clustering and Candidate Recombination (基于姿态聚类和候选重组的人体姿态估计); Xiao Yi; China Master's Theses Full-text Database, Information Science and Technology; 15 September 2013; no. 09; pp. I138-418 *

Also Published As

Publication number Publication date
CN105574494A (en) 2016-05-11


Legal Events

C06 / PB01: Publication
C10 / SE01: Entry into substantive examination (entry into force of request for substantive examination)
GR01: Patent grant