CN101989326A - Human posture recognition method and device - Google Patents

Human posture recognition method and device

Info

Publication number
CN101989326A
CN101989326A (application numbers CN2009101614527A, CN200910161452A)
Authority
CN
China
Prior art keywords
posture
human body
module
body posture
template database
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2009101614527A
Other languages
Chinese (zh)
Other versions
CN101989326B (en)
Inventor
楚汝峰
陈茂林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Samsung Telecommunications Technology Research Co Ltd
Samsung Electronics Co Ltd
Original Assignee
Beijing Samsung Telecommunications Technology Research Co Ltd
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Samsung Telecommunications Technology Research Co Ltd, Samsung Electronics Co Ltd filed Critical Beijing Samsung Telecommunications Technology Research Co Ltd
Priority to CN200910161452.7A priority Critical patent/CN101989326B/en
Priority to KR1020100036589A priority patent/KR20110013200A/en
Priority to US12/805,457 priority patent/US20110025834A1/en
Publication of CN101989326A publication Critical patent/CN101989326A/en
Application granted granted Critical
Publication of CN101989326B publication Critical patent/CN101989326B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/35Determination of transform parameters for the alignment of images, i.e. image registration using statistical methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/262Analysis of motion using transform domain methods, e.g. Fourier domain methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/37Determination of transform parameters for the alignment of images, i.e. image registration using transform domain methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features

Abstract

The invention provides a human posture recognition method and a human posture recognition device. The device comprises an input module, a preprocessing module, a training module, a feature extraction module, a template database building module, a search module and an output module. The input module captures a human posture and forms an input image; the preprocessing module normalizes the input image to a fixed size and generates shape-independent samples; the training module reduces the dimensionality of the sample data by a statistical learning method in a training phase to obtain a projection transformation matrix and builds a nearest neighbor classifier; the feature extraction module extracts discriminative posture features from the sample data according to the projection transformation matrix in the training phase and the human posture recognition phase, respectively; the template database building module builds a posture template database from the discriminative posture features extracted in the training phase; the search module compares the discriminative posture features extracted in the human posture recognition phase with the posture templates in the posture template database using the nearest neighbor classifier to perform human posture matching; and the output module outputs the best-matching posture and repositions a virtual human model.

Description

Human body posture recognition method and device
Technical field
The present invention relates generally to computer vision, and more particularly, to real-time human body posture recognition and motion analysis.
Background art
Human motion analysis and human body posture recognition are very important techniques. They use meaningful human postures to enable human-computer interaction, virtual three-dimensional (3D) interactive games, 3D gesture recognition, and the like. In recent years, human motion capture research has received increasing attention because of its promising academic and commercial value.
There are currently various schemes for human motion analysis. Some schemes require specific marker blocks to be attached to the subject or require special motion capture equipment; in general environments (such as home entertainment, 3D interactive games, etc.), these requirements are inconvenient for users and limit the application of such schemes. For practical applications, great effort has been made to use fewer markers for human motion analysis. Existing methods fall mainly into two classes: methods based on human body analysis and methods based on samples. From another perspective, existing methods can also be divided into color-image-based methods and methods assisted by 3D laser-scanned human models.
As is well known, a color image can only provide two-dimensional (2D) information, such as color, texture, and shape. Therefore, 2D information inevitably leads to posture ambiguity. For example, if some parts of the human body are self-occluded (self-occlusion), color-image-based methods cannot perform correct human posture recognition because of the ambiguity of the human posture in the color image. Even with more advanced posture inference methods, the ambiguous color information reduces processing speed and yields inaccurate inference results. Furthermore, color information is unstable (not robust) across seasons, clothing, and ambient lighting; therefore, in complex environments, color-based human posture recognition methods cannot meet the requirements. For this reason, some researchers and engineers use laser-scanned 3D models to obtain more accurate results. However, because of the high cost and large volume of the acquisition equipment, laser scanners are impractical in real environments (such as home entertainment, 3D interactive games, etc.). To solve this problem, a method and apparatus is needed that performs real-time human posture recognition in cluttered environments.
Summary of the invention
The present invention still focuses on human posture recognition, or human motion analysis, without marker blocks. However, the present invention solves the problems of the prior art in a new way. First, the present invention adopts a combination of a TOF depth camera (which simultaneously provides a depth image and an intensity image) and a color camera (which provides a color image). Second, the present invention provides a method and apparatus for recognizing human postures in complex environments, which can effectively utilize depth information and color information for human posture recognition.
According to an aspect of the present invention, a human body posture recognition device is provided. The device comprises: an input module, comprising a depth camera and a color camera, for simultaneously capturing a human posture to form an input image; a preprocessing module, for preprocessing the input image into a suitable form, normalizing the image to a fixed size, and producing shape-independent posture samples to form sample data; a training module, for reducing the dimensionality of the sample data by a statistical learning method in a training phase to obtain a projection transformation matrix from the original image space to a feature space, and for building a nearest neighbor classifier; a feature extraction module, for extracting discriminative posture features from the sample data according to the projection transformation matrix in the training phase and a human posture recognition phase, respectively; a template database building module, for building a posture template database from the discriminative posture features extracted by the feature extraction module in the training phase; a search module, for comparing, using the nearest neighbor classifier, the discriminative posture features extracted by the feature extraction module in the human posture recognition phase with the posture templates in the posture template database so as to perform human posture matching; and an output module, for outputting the best-matching posture and repositioning a virtual human model based on the best-matching posture.
According to another aspect of the present invention, a human body posture recognition method is provided. The method comprises: (a) simultaneously capturing a human posture with a depth camera and a color camera to form an input image; (b) preprocessing the input image into a suitable form, normalizing the image to a fixed size, and producing shape-independent posture samples to form sample data; (c) reducing the dimensionality of the sample data by a statistical learning method in a training phase to obtain a projection transformation matrix from the original image space to a feature space, and building a nearest neighbor classifier; (d) extracting discriminative posture features from the sample data according to the projection transformation matrix in the training phase and a human posture recognition phase, respectively; (e) building a posture template database from the discriminative posture features extracted in the training phase; (f) comparing, using the nearest neighbor classifier, the discriminative posture features extracted in the human posture recognition phase with the posture templates in the posture template database so as to perform human posture matching; and (g) outputting the best-matching posture and repositioning a virtual human model based on the best-matching posture.
Brief description of the drawings
These and/or other aspects and advantages of the present invention will become apparent and more readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a block diagram of a human posture recognition device according to an embodiment of the present invention;
Fig. 2 shows sample images captured by the input module according to an embodiment of the present invention;
Fig. 3 is a flowchart of a human posture recognition method according to an embodiment of the present invention;
Fig. 4 shows the image preprocessing process of the preprocessing module according to an embodiment of the present invention;
Fig. 5 shows an example of locating the shoulder points according to an embodiment of the present invention;
Fig. 6 shows the classifier training process of the training module according to an embodiment of the present invention;
Fig. 7 shows the template database building process of the template database building module according to an embodiment of the present invention;
Fig. 8 shows the feature extraction process of the feature extraction module according to an embodiment of the present invention;
Fig. 9 shows the feature matching process of the search module and the human posture output process of the output module according to an embodiment of the present invention;
Figs. 10 to 13 show Experiment 1 and Experiment 2 carried out according to the present invention.
Detailed description of the embodiments
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
Fig. 1 is a block diagram of a human posture recognition device according to an embodiment of the present invention. As shown in Fig. 1, the human posture recognition device comprises an input module 101, a preprocessing module 102, a training module 103, a template database (DB) building module 104, a feature extraction module 105, a search module 106, and an output module 107.
The input module 101 comprises two cameras: a depth camera and a color camera. The depth camera may be, for example, a TOF (Time of Flight) depth camera. The TOF depth camera and the color camera simultaneously capture the human posture to form the input image. The preprocessing module 102 preprocesses the input image into a suitable form, normalizes the image to a fixed size, and produces shape-independent samples. The raw data of the normalized samples is high-dimensional. After preprocessing, the training module 103 uses a statistical learning method (such as PCA (principal component analysis) or LLE (locally linear embedding)) in the training phase (i.e., the learning phase) to reduce the dimensionality of the sample data, obtains the projection transformation matrix from the original image space to the feature space (i.e., the feature selection mechanism used for feature extraction), and builds a nearest neighbor classifier. To recognize human postures, the template DB building module 104 builds the posture template database offline in advance. In the template DB building module 104, different human postures are manually labeled. Then, the feature extraction module 105 extracts discriminative posture features from the sample data according to the projection transformation matrix in the training phase, so that the template DB building module 104 finally establishes the correspondence between the posture features and the associated postures.
In the online posture recognition phase, the feature extraction module 105 only extracts discriminative posture features according to the projection transformation matrix. The search module 106 receives the discriminative posture features and, using the nearest neighbor classifier, compares the discriminative posture features extracted by the feature extraction module 105 in the human posture recognition phase with the posture templates in the posture template database so as to perform human posture matching. Afterwards, the output module 107 provides the best-matching posture and repositions the virtual human model. This completes the entire human posture recognition process.
In the present invention, two cameras are used to capture the same scene simultaneously. One camera is a TOF depth camera, and the other is a color camera. The color camera may be a conventional CCD/CMOS camera and provides a color image. The TOF depth camera provides a depth image and an intensity image. The depth image represents the distance between the captured object and the TOF depth camera. The intensity image represents the light intensity energy received by the TOF depth camera.
Fig. 2 shows sample images captured by the input module 101 according to an embodiment of the present invention. As can be seen from Fig. 2, the intensity image provides a clear background, which is well suited for foreground extraction and silhouette segmentation. Intuitively, the clear-background intensity image can easily be used to locate the head and torso of the human body. However, if the eye positions are to be located and the person wears glasses with strong reflections, the intensity image may not be the best choice. In that case, the color image can be used to locate the eye positions; various methods exist for locating eye positions in a color image. Furthermore, in some cases the color image and the silhouette image are ambiguous for human posture analysis, so the depth image can be fully exploited to reduce the ambiguity of the human posture.
After the three types of input image (color image, depth image, and intensity image) have been obtained, they need to be preprocessed into a suitable form. The image preprocessing uses all three types of input image.
Fig. 3 is a flowchart of a human posture recognition method according to an embodiment of the present invention.
Referring to Fig. 3, in operation 301, the depth camera and the color camera in the input module 101 simultaneously capture the human posture to form the input image. In operation 302, the preprocessing module 102 preprocesses the input image into a suitable form, normalizes the image to a fixed size, and produces shape-independent samples. In operation 303, the training module 103 uses a statistical learning method in the training phase to reduce the dimensionality of the sample data, obtains the projection transformation matrix from the original image space to the feature space, and builds a nearest neighbor classifier. In operation 304, the feature extraction module 105 extracts discriminative posture features from the sample data according to the projection transformation matrix in the training phase and the human posture recognition phase, respectively. In operation 305, the template database (DB) building module builds the posture template database from the discriminative posture features of the training phase. In operation 306, the search module 106, using the nearest neighbor classifier, compares the discriminative posture features extracted by the feature extraction module 105 in the human posture recognition phase with the posture templates in the posture template database so as to perform human posture matching. In operation 307, the output module 107 outputs the best-matching posture and repositions the virtual human model based on the best-matching posture.
The image preprocessing according to the present invention is described below with reference to Fig. 4 and Fig. 5. Fig. 4 shows the image preprocessing process of the preprocessing module 102 according to an embodiment of the present invention.
Referring to Fig. 4, in operation 401, the preprocessing module 102 uses the intensity image to segment the human region and extract the silhouette. In this process, a threshold segmentation method can be used. In operation 402, the preprocessing module 102 uses the segmented human region as a mask on the color image in order to detect the head and torso. For head and torso detection, the preprocessing module 102 can use a detector trained with the existing AdaBoost algorithm and local features. Since the preprocessing module 102 normalizes the image to a fixed size, some reference points are needed. In operation 403, the preprocessing module 102 selects the eye positions and the shoulder positions as the reference points because, for a frontal view of the human body, the eye positions are strong reference points in the head region and the shoulder positions are strong reference points in the torso region. To locate the eye positions robustly, the preprocessing module 102 can use an existing trained eye detector, which can also be trained using the AdaBoost algorithm and local features. To locate the shoulder positions robustly (including the left shoulder point P_LS and the right shoulder point P_RS), the preprocessing module 102 adopts a simple method that takes advantage of the masked depth image shown in Fig. 4: it detects the bending points in the horizontal and vertical projections of the torso region as the shoulder points.
After the eye positions and the shoulder positions have been located, the preprocessing module 102 performs shape normalization in operation 404. The purpose of shape normalization is to produce shape-independent samples. Let P1 denote the center between the left eye and the right eye, P2 the center between the left shoulder point P_LS and the right shoulder point P_RS, D1 the distance between P1 and P2, and D2 the distance between P_LS and P_RS. D1 is used as the reference length for the sample height h, and D2 as the reference length for the sample width w. The shape normalization unit 1024 crops the samples and normalizes them to a size of 80 × 48 using the following equations: D2/D1 = 5:2 (the ratio used for shape normalization), and w = 4 × D2, h = 6 × D1 (the sample region size). For punching actions, the preprocessing module 102 crops the samples and normalizes them to a size of 80 × 80, setting w = h = 6 × D1, because the captured images do not contain complex punching actions.
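The normalization above reduces to a little geometry. The following Python sketch is one illustrative reading of the text, not the patented implementation; the function name, the NumPy/OpenCV usage, and the placement of the crop window are assumptions:

```python
import numpy as np
import cv2  # assumed available; any image-resize routine would do


def normalize_shape(image, eye_l, eye_r, shoulder_l, shoulder_r):
    """Crop and rescale a sample to a fixed, shape-independent size.

    Geometry from the description: P1 = eye center, P2 = shoulder
    center, D1 = |P1 - P2| sets the height reference (h = 6 * D1),
    D2 = |P_LS - P_RS| sets the width reference (w = 4 * D2); with
    D2/D1 = 5:2 the region has a 5:3 aspect, rescaled to 80 x 48.
    """
    eye_l, eye_r = np.asarray(eye_l, float), np.asarray(eye_r, float)
    sh_l, sh_r = np.asarray(shoulder_l, float), np.asarray(shoulder_r, float)
    p1, p2 = (eye_l + eye_r) / 2, (sh_l + sh_r) / 2
    d1 = np.linalg.norm(p1 - p2)
    d2 = np.linalg.norm(sh_l - sh_r)
    w, h = 4 * d2, 6 * d1                       # sample region size
    # The text does not spell out where the crop window is anchored;
    # centering it horizontally on P2 is an assumption.
    x0 = max(int(p2[0] - w / 2), 0)
    y0 = max(int(p1[1] - h / 4), 0)             # assumed vertical offset
    crop = image[y0:y0 + int(h), x0:x0 + int(w)]
    return cv2.resize(crop, (80, 48))           # (width, height) target
```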
Fig. 5 shows an example of locating the shoulder points according to an embodiment of the present invention. Specifically, (a) in Fig. 5 is the silhouette of the human foreground region. (b) in Fig. 5 is the histogram of this image (this silhouette) in the vertical direction: the abscissa represents the horizontal position in the image (i.e., the column coordinate, ranging from 0 to the image width), and the ordinate is, for a given column coordinate, the accumulated value of all pixel values in that column (i.e., the vertical projection value of that column). (c) in Fig. 5 is the histogram of the image in the horizontal direction: the abscissa represents the vertical position in the image (i.e., the row coordinate, ranging from 0 to the image height), and the ordinate is, for a given row coordinate, the accumulated value of all pixel values in that row (i.e., the horizontal projection value of that row). (d) in Fig. 5 is the result of locating the human shoulder points (region detection).
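A rough sketch of this projection-based shoulder location follows. Approximating the "bending points" of Fig. 5 by the steepest changes of the projection profiles is an assumption, as are the helper name and the simple thresholding:

```python
import numpy as np


def locate_shoulder_points(intensity, threshold=0):
    """Locate left/right shoulder points from projection histograms.

    Following Fig. 5: threshold the intensity image into a binary
    silhouette, build the vertical (per-column) and horizontal
    (per-row) projections, and take the 'bending points', approximated
    here as the steepest changes of each profile.
    """
    sil = (intensity > threshold).astype(float)   # operation 401: silhouette
    col_proj = sil.sum(axis=0)                    # Fig. 5(b): vertical projection
    row_proj = sil.sum(axis=1)                    # Fig. 5(c): horizontal projection

    peak = int(np.argmax(col_proj))               # torso center column
    col_grad = np.gradient(col_proj)
    left_col = int(np.argmax(col_grad[:peak])) if peak > 0 else 0
    right_col = peak + int(np.argmin(col_grad[peak:]))
    shoulder_row = int(np.argmax(np.gradient(row_proj)))  # silhouette widens here

    return (left_col, shoulder_row), (right_col, shoulder_row)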
The classifier training according to the present invention is described below with reference to Fig. 6. Fig. 6 shows the classifier training process of the training module 103 according to an embodiment of the present invention.
The training module 103 uses the PCA (principal component analysis) and LLE (locally linear embedding) learning methods to obtain the projection transformation matrix from the original image space to the feature space.
Referring to Fig. 6, in operation 601, the training module 103 creates the training data set. The criterion for selecting the training data set is to make the training samples (i.e., the posture samples of the training phase) diverse and representative, so that the training data set covers as many human actions as possible. The training module 103 mainly selects diverse training samples according to different punching actions and makes the training samples evenly distributed in the image space. Then, in operation 602, the training module 103 transforms the training sample data into suitable input vectors for learning; that is, it directly unrolls the 2D data into one-dimensional (1D) vectors. Then, in operation 603, the training module 103 applies statistical learning methods such as PCA (principal component analysis) and LLE (locally linear embedding) to reduce the dimensionality and obtain the projection transformation matrix. Detailed introductions to PCA and LLE are available in the prior art and are therefore not described in further detail here. After that, in operation 604, the training module 103 builds a NN (nearest neighbor) classifier with the L1 distance (similarity measure), where L1 is defined below.
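As a concrete reading of operations 602-603, a minimal PCA variant could look as follows. This is a sketch only: the patent leaves the PCA/LLE implementation open, and the SVD route and the mean-centering are standard PCA choices assumed here, as is the function name.

```python
import numpy as np


def train_pca_projection(samples, m):
    """Learn the N x M projection matrix W from training samples.

    samples: array of shape (num_samples, h, w), the normalized poses.
    m:       target feature dimension M (M << N, where N = h * w).
    """
    # Operation 602: unroll each 2D sample into a 1D vector.
    X = samples.reshape(len(samples), -1).astype(np.float64)
    mean = X.mean(axis=0)
    # Operation 603: principal components of the (mean-centered) data.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    W = vt[:m].T            # shape (N, M): image space -> feature space
    return W, mean
```

With W and the mean in hand, a sample x is projected as V = W^T (x − mean); whether the patented method mean-centers the data is not stated, but it is the standard PCA convention.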
The template DB building according to the present invention is described below with reference to Fig. 7. Fig. 7 shows the template DB building process of the template DB building module 104 according to an embodiment of the present invention. Template DB building is an essential part of sample-based motion analysis.
Referring to Fig. 7, in operation 701, the template DB building module 104 selects different posture samples. Then, in operation 702, the template DB building module 104 manually labels the posture sample images. Preferably, the template DB building module 104 uses a marker-based motion capture system or suitable computer graphics software to produce the labeled data set. Owing to limitations of the current equipment and setup, eight types of punching-action postures were collected in the present invention, and the labeling process was omitted. The feature extraction module 105 extracts low-dimensional discriminative features from the samples according to the projection transformation matrix obtained by the training module 103. Then, in operation 703, the template DB building module 104 establishes, based on the extracted discriminative features, the correspondence between the discriminative features and the postures (skeletons). In the present invention, the correspondence between the discriminative features and the indices of the eight types of punching actions is established. After that, in operation 704, the template DB building module 104 produces, based on the established correspondence, templates comprising the feature vectors and the associated skeleton (or action) indices.
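Conceptually, operations 701-704 reduce to storing one (feature vector, skeleton index) pair per labeled sample. A hedged sketch, reusing the hypothetical train_pca_projection helper above:

```python
import numpy as np


def build_template_db(posture_samples, skeleton_indices, W, mean):
    """Build the posture template DB (operations 701-704): one
    low-dimensional feature vector per manually labeled posture
    sample, paired with the index of its associated skeleton/action.
    """
    templates = []
    for sample, skel_idx in zip(posture_samples, skeleton_indices):
        x = sample.reshape(-1).astype(np.float64)
        v = W.T @ (x - mean)    # low-dimensional discriminative feature
        templates.append((v, skel_idx))
    return templates
```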
The online posture recognition according to the present invention is described below with reference to Fig. 8 and Fig. 9. After the classifier has been trained and a suitable template DB has been built, online posture recognition can be performed. As in the training phase, the input image is first preprocessed. The subsequent process comprises feature extraction, feature matching, and human posture output.
Fig. 8 shows the feature extraction process of the feature extraction module 105 according to an embodiment of the present invention, and Fig. 9 shows the feature matching and human posture output process of the search module 106 and the output module 107 according to an embodiment of the present invention.
The purpose of feature extraction is to extract discriminative features for matching. Referring to Fig. 8, in operation 801, the feature extraction module 105 transforms the depth data of the input image into a suitable image vector, i.e., it directly unrolls the 2D data into a 1D vector. Then, in operation 802, the feature extraction module 105 uses the projection transformation matrix obtained in the training phase to project the data from the image space into the feature space. In the present invention, the trained PCA and LLE projection transformation matrices can be used.
Let X = {x1, x2, ..., xN} denote the input 1D image data (where N = w × h, w is the sample width and h is the sample height), and let W denote the trained PCA/LLE projection transformation matrix (the dimension of W is N × M, with M << N). Then, in operation 803, the feature extraction module 105 obtains the feature vector V = W^T X, whose dimension is M.
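In code this is a single unroll-and-project step (a sketch; the mean-centering carries over from the assumed PCA training above, and the function name is hypothetical):

```python
def extract_feature(depth_sample, W, mean):
    """Operations 801-803: unroll the 2D depth data into a 1D vector X
    and project it into the feature space, V = W^T X (dimension M)."""
    x = depth_sample.reshape(-1).astype(float)
    return W.T @ (x - mean)   # mean-centering assumed, as in the PCA sketch
```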
After feature extraction, the NN (nearest neighbor) classifier is used to retrieve the top-n best-matching postures from the template database. That is, the search module 106, using the nearest neighbor classifier, compares the discriminative posture features extracted in the human posture recognition phase with the posture templates in the posture template database so as to perform human posture matching. Specifically, referring to Fig. 9, in operation 901, the search module 106 uses the nearest neighbor classifier to compute the distances between the current feature vector and the feature vectors in the template database. Let V0 be the current (input) feature vector, Vi the feature vectors in the template DB (i = 1, ..., N, where N here denotes the number of templates), and Si the associated skeleton (posture) indices (i = 1, ..., N). Using the distance measure L1 = |V0 − Vi| (i = 1, ..., N), the input feature vector V0 is matched against all N templates Vi in the template DB, yielding a series of L1 similarity values. In operation 902, the search module 106 obtains the indices of the top-n best matches in the template database based on the L1 distances. In operation 903, the output module 107 retrieves the best-matching posture (skeleton) from the template database according to the best-match index. Then, in operation 904, the output module 107 repositions the virtual human model based on the best-matching posture (skeleton).
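A minimal top-n search over such a template DB might read as follows (a sketch; |V0 − Vi| is interpreted here as the sum of absolute coordinate differences, the usual L1 metric):

```python
import numpy as np


def top_n_matches(v0, templates, n=1):
    """Operations 901-902: rank all templates by L1 distance to the
    input feature vector v0 and return the n best skeleton indices."""
    dists = [np.abs(v0 - v_i).sum() for v_i, _ in templates]
    order = np.argsort(dists)[:n]
    return [templates[i][1] for i in order]
```

With n = 1 this returns the single most similar action, as in the example below.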
For example, a posture template database is built in the offline learning phase; it contains an action library for a set of taijiquan routines, with images of 500 actions. When the posture template database is built, the feature vector of each human action is extracted, and the position of each joint point is labeled (so that the output module 107 can conveniently drive the display of a virtual character). In the actual online action recognition phase, the user performs an action, the image of this action is captured and preprocessed by the preprocessing module 102, the feature extraction module 105 extracts the discriminative posture features, and the feature vector of this action is obtained. The search module 106 then uses the nearest neighbor classifier to compare this feature vector with the 500 feature vectors in the posture template database, computes the similarities, and finds the n actions with the greatest similarity; this is precisely the top-n nearest neighbor classification. If n = 1, the single most similar action is found. The output module 107 outputs the human joint point information corresponding to this action to drive or display the virtual character.
Experiment 1 and Experiment 2 carried out according to the present invention are described below with reference to Figs. 10 to 13.
Referring to Fig. 10, Experiment 1 targets specific people: the posture data of the people under test is included in the training data. In the training phase, 4 people were involved, with punching actions of 8 posture types and 1079 samples (each sample of size 80 × 80), and the human model was repositioned according to 100 dimensions. In the test phase, the same 4 people as in the training phase were involved, with punching actions of 8 posture types, and 1079 samples were tested.
Fig. 11 shows the results of Experiment 1. (a) in Fig. 11 shows the search results obtained with the LLE method, and (b) in Fig. 11 shows the search results obtained with the PCA method. In (a) and (b) of Fig. 11, the image in the upper-left corner is the input query, and the other images are the returned outputs.
Referring to Fig. 12, Experiment 2 targets unspecific people: the posture data of the people under test is not included in the training data. In the training phase, 4 people were involved, with punching actions of 8 posture types and 1079 samples, and the human model was repositioned according to 100 dimensions. In the test phase, 2 people different from those of the training phase were involved, with punching actions of 8 posture types, and 494 samples were tested.
Fig. 13 shows the results of Experiment 2. (a) in Fig. 13 shows the search results obtained with the LLE method, and (b) in Fig. 13 shows the search results obtained with the PCA method. In (a) and (b) of Fig. 13, the image in the upper-left corner is the input query, and the other images are the returned outputs.
Therefore, compared with conventional color-image-based methods, the present invention can resolve the ambiguity in the silhouette because it uses depth data. The present invention utilizes both depth information and color information and provides a shape normalization method that achieves shape-independent posture recognition. In addition, the present invention adopts statistical learning methods and a fast search method, making the human posture recognition device simple in structure and more effective.
While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the present invention as defined by the claims.

Claims (18)

1. A human body posture recognition device, comprising:
an input module, comprising a depth camera and a color camera, for simultaneously capturing a human posture to form an input image;
a preprocessing module, for preprocessing the input image into a suitable form, normalizing the image to a fixed size, and producing shape-independent posture samples to form sample data;
a training module, for reducing the dimensionality of the sample data by a statistical learning method in a training phase to obtain a projection transformation matrix from the original image space to a feature space, and for building a nearest neighbor classifier;
a feature extraction module, for extracting discriminative posture features from the sample data according to the projection transformation matrix in the training phase and a human posture recognition phase, respectively;
a template database building module, for building a posture template database from the discriminative posture features extracted by the feature extraction module in the training phase;
a search module, for comparing, using the nearest neighbor classifier, the discriminative posture features extracted by the feature extraction module in the human posture recognition phase with the posture templates in the posture template database so as to perform human posture matching; and
an output module, for outputting the best-matching posture and repositioning a virtual human model based on the best-matching posture.
2. The human body posture recognition device according to claim 1, wherein the depth camera forms a depth image and an intensity image of the human posture, and the color camera forms a color image of the human posture.
3. The human body posture recognition device according to claim 2, wherein the preprocessing module uses the intensity image to segment the human region and extract the silhouette, uses the segmented human region to detect the head and torso, selects the eye positions and the shoulder positions as reference points for performing shape normalization, and produces the shape-independent posture samples.
4. The human body posture recognition device according to claim 3, wherein the training module creates a training data set such that the posture samples are evenly distributed in the image space, transforms the sample data into input vectors, and reduces the dimensionality of the sample data by the statistical learning method to obtain the projection transformation matrix.
5. The human body posture recognition device according to claim 4, wherein the statistical learning method comprises a principal component analysis method or a locally linear embedding method.
6. The human body posture recognition device according to claim 5, wherein the template database building module selects different posture samples and manually labels the posture sample images; the feature extraction module extracts low-dimensional discriminative features from the posture samples according to the projection transformation matrix; and the template database building module establishes, based on the extracted discriminative features, the correspondence between the discriminative features and the postures, and produces, based on the established correspondence, templates comprising the feature vectors and the associated posture indices so as to build the template database.
7. The human body posture recognition device according to claim 6, wherein the feature extraction module transforms the depth data of the input image into a one-dimensional data vector, and projects the data from the image space into the feature space using the projection transformation matrix obtained in the training phase so as to obtain a feature vector.
8. The human body posture recognition device according to claim 7, wherein the search module computes, using the nearest neighbor classifier, the distances between the current feature vector and the feature vectors in the template database, and obtains the index of the best match in the template database based on the distances.
9. The human body posture recognition device according to claim 8, wherein the output module obtains the best-matching posture from the template database according to the index of the best match, and repositions the virtual human model based on the best-matching posture.
10. A human body posture recognition method, comprising the steps of:
(a) simultaneously capturing a human posture with a depth camera and a color camera to form an input image;
(b) preprocessing the input image into a suitable form, normalizing the image to a fixed size, and producing shape-independent posture samples to form sample data;
(c) reducing the dimensionality of the sample data by a statistical learning method in a training phase to obtain a projection transformation matrix from the original image space to a feature space, and building a nearest neighbor classifier;
(d) extracting discriminative posture features from the sample data according to the projection transformation matrix in the training phase and a human posture recognition phase, respectively;
(e) building a posture template database from the discriminative posture features extracted in the training phase;
(f) comparing, using the nearest neighbor classifier, the discriminative posture features extracted in the human posture recognition phase with the posture templates in the posture template database so as to perform human posture matching; and
(g) outputting the best-matching posture and repositioning a virtual human model based on the best-matching posture.
11. The human body posture recognition method according to claim 10, wherein the depth camera forms a depth image and an intensity image of the human posture, and the color camera forms a color image of the human posture.
12. The human body posture recognition method according to claim 11, wherein step (b) comprises:
using the intensity image to segment the human region and extract the silhouette;
using the segmented human region to detect the head and torso; and
selecting the eye positions and the shoulder positions as reference points for performing shape normalization, and producing the shape-independent posture samples.
13. The human body posture recognition method according to claim 12, wherein step (c) comprises:
creating a training data set such that the posture samples are evenly distributed in the image space;
transforming the sample data into input vectors; and
reducing the dimensionality of the sample data by the statistical learning method to obtain the projection transformation matrix.
14. The human body posture recognition method according to claim 13, wherein the statistical learning method comprises a principal component analysis method or a locally linear embedding method.
15. The human body posture recognition method according to claim 14, wherein step (e) comprises:
selecting different posture samples, and manually labeling the posture sample images;
establishing, based on the discriminative features extracted in the training phase, the correspondence between the discriminative features and the postures; and
producing, based on the established correspondence, templates comprising the feature vectors and the associated posture indices so as to build the template database.
16. The human body posture recognition method according to claim 15, wherein step (d) comprises:
transforming the depth data of the input image into a one-dimensional data vector; and
projecting the data from the image space into the feature space using the projection transformation matrix obtained in the training phase so as to obtain a feature vector.
17. The human body posture recognition method according to claim 16, wherein step (f) comprises:
computing, using the nearest neighbor classifier, the distances between the current feature vector and the feature vectors in the template database; and
obtaining the index of the best match in the template database based on the distances.
18. The human body posture recognition method according to claim 17, wherein step (g) comprises:
obtaining the best-matching posture from the template database according to the index of the best match; and
repositioning the virtual human model based on the best-matching posture.
CN200910161452.7A 2009-07-31 2009-07-31 Human posture recognition method and device Expired - Fee Related CN101989326B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN200910161452.7A CN101989326B (en) 2009-07-31 2009-07-31 Human posture recognition method and device
KR1020100036589A KR20110013200A (en) 2009-07-31 2010-04-20 Identifying method of human attitude and apparatus of the same
US12/805,457 US20110025834A1 (en) 2009-07-31 2010-07-30 Method and apparatus of identifying human body posture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN200910161452.7A CN101989326B (en) 2009-07-31 2009-07-31 Human posture recognition method and device

Publications (2)

Publication Number Publication Date
CN101989326A (en) 2011-03-23
CN101989326B CN101989326B (en) 2015-04-01

Family

ID=43745858

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200910161452.7A Expired - Fee Related CN101989326B (en) 2009-07-31 2009-07-31 Human posture recognition method and device

Country Status (2)

Country Link
KR (1) KR20110013200A (en)
CN (1) CN101989326B (en)

Families Citing this family (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8908995B2 (en) 2009-01-12 2014-12-09 Intermec Ip Corp. Semi-automatic dimensioning with imager on a portable device
US9208571B2 (en) * 2011-06-06 2015-12-08 Microsoft Technology Licensing, Llc Object digitization
KR101908284B1 (en) 2012-01-13 2018-10-16 삼성전자주식회사 Apparatus and method for analysising body parts association
US9779546B2 (en) 2012-05-04 2017-10-03 Intermec Ip Corp. Volume dimensioning systems and methods
US9007368B2 (en) 2012-05-07 2015-04-14 Intermec Ip Corp. Dimensioning system calibration systems and methods
US10007858B2 (en) 2012-05-15 2018-06-26 Honeywell International Inc. Terminals and methods for dimensioning objects
US10321127B2 (en) 2012-08-20 2019-06-11 Intermec Ip Corp. Volume dimensioning system calibration systems and methods
US9939259B2 (en) 2012-10-04 2018-04-10 Hand Held Products, Inc. Measuring object dimensions using mobile computer
US9841311B2 (en) 2012-10-16 2017-12-12 Hand Held Products, Inc. Dimensioning system
US9080856B2 (en) 2013-03-13 2015-07-14 Intermec Ip Corp. Systems and methods for enhancing dimensioning, for example volume dimensioning
US10228452B2 (en) 2013-06-07 2019-03-12 Hand Held Products, Inc. Method of error correction for 3D imaging device
US9239950B2 (en) 2013-07-01 2016-01-19 Hand Held Products, Inc. Dimensioning system
US9464885B2 (en) 2013-08-30 2016-10-11 Hand Held Products, Inc. System and method for package dimensioning
US9823059B2 (en) 2014-08-06 2017-11-21 Hand Held Products, Inc. Dimensioning system with guided alignment
US10775165B2 (en) 2014-10-10 2020-09-15 Hand Held Products, Inc. Methods for improving the accuracy of dimensioning-system measurements
US10810715B2 (en) 2014-10-10 2020-10-20 Hand Held Products, Inc System and method for picking validation
US9779276B2 (en) 2014-10-10 2017-10-03 Hand Held Products, Inc. Depth sensor based auto-focus system for an indicia scanner
US9557166B2 (en) 2014-10-21 2017-01-31 Hand Held Products, Inc. Dimensioning system with multipath interference mitigation
US9897434B2 (en) 2014-10-21 2018-02-20 Hand Held Products, Inc. Handheld dimensioning system with measurement-conformance feedback
US9752864B2 (en) 2014-10-21 2017-09-05 Hand Held Products, Inc. Handheld dimensioning system with feedback
US10060729B2 (en) 2014-10-21 2018-08-28 Hand Held Products, Inc. Handheld dimensioner with data-quality indication
US9762793B2 (en) 2014-10-21 2017-09-12 Hand Held Products, Inc. System and method for dimensioning
US9786101B2 (en) 2015-05-19 2017-10-10 Hand Held Products, Inc. Evaluating image values
US10066982B2 (en) 2015-06-16 2018-09-04 Hand Held Products, Inc. Calibrating a volume dimensioner
US20160377414A1 (en) 2015-06-23 2016-12-29 Hand Held Products, Inc. Optical pattern projector
US9857167B2 (en) 2015-06-23 2018-01-02 Hand Held Products, Inc. Dual-projector three-dimensional scanner
US9835486B2 (en) 2015-07-07 2017-12-05 Hand Held Products, Inc. Mobile dimensioner apparatus for use in commerce
EP3396313B1 (en) 2015-07-15 2020-10-21 Hand Held Products, Inc. Mobile dimensioning method and device with dynamic accuracy compatible with nist standard
US10094650B2 (en) 2015-07-16 2018-10-09 Hand Held Products, Inc. Dimensioning and imaging items
US20170017301A1 (en) 2015-07-16 2017-01-19 Hand Held Products, Inc. Adjusting dimensioning results using augmented reality
US10249030B2 (en) 2015-10-30 2019-04-02 Hand Held Products, Inc. Image transformation for indicia reading
US10225544B2 (en) 2015-11-19 2019-03-05 Hand Held Products, Inc. High resolution dot pattern
US10025314B2 (en) 2016-01-27 2018-07-17 Hand Held Products, Inc. Vehicle positioning and object avoidance
US10339352B2 (en) 2016-06-03 2019-07-02 Hand Held Products, Inc. Wearable metrological apparatus
US9940721B2 (en) 2016-06-10 2018-04-10 Hand Held Products, Inc. Scene change detection in a dimensioner
US10163216B2 (en) 2016-06-15 2018-12-25 Hand Held Products, Inc. Automatic mode switching in a volume dimensioner
US10909708B2 (en) 2016-12-09 2021-02-02 Hand Held Products, Inc. Calibrating a dimensioner using ratios of measurable parameters of optic ally-perceptible geometric elements
US11047672B2 (en) 2017-03-28 2021-06-29 Hand Held Products, Inc. System for optically dimensioning
US10733748B2 (en) 2017-07-24 2020-08-04 Hand Held Products, Inc. Dual-pattern optical 3D dimensioning
US10584962B2 (en) 2018-05-01 2020-03-10 Hand Held Products, Inc System and method for validating physical-item security
CN111797791A (en) * 2018-12-25 2020-10-20 上海智臻智能网络科技股份有限公司 Human body posture recognition method and device
US11639846B2 (en) 2019-09-27 2023-05-02 Honeywell International Inc. Dual-pattern optical 3D dimensioning

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060269145A1 (en) * 2003-04-17 2006-11-30 The University Of Dundee Method and system for determining object pose from images
CN101079103A (en) * 2007-06-14 2007-11-28 上海交通大学 Human face posture identification method based on sparse Bayesian regression
CN101332362A (en) * 2008-08-05 2008-12-31 北京中星微电子有限公司 Interactive delight system based on human posture recognition and implement method thereof

Cited By (50)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102184008A (en) * 2011-05-03 2011-09-14 北京天盛世纪科技发展有限公司 Interactive projection system and method
CN103718175A (en) * 2011-07-27 2014-04-09 三星电子株式会社 Apparatus, method, and medium detecting object pose
CN103718175B (en) * 2011-07-27 2018-10-12 三星电子株式会社 Detect equipment, method and the medium of subject poses
CN102436301A (en) * 2011-08-20 2012-05-02 Tcl集团股份有限公司 Human-machine interaction method and system based on reference region and time domain information
CN102436301B (en) * 2011-08-20 2015-04-15 Tcl集团股份有限公司 Human-machine interaction method and system based on reference region and time domain information
CN102324041A (en) * 2011-09-09 2012-01-18 深圳泰山在线科技有限公司 Pixel classification method, joint body gesture recognition method and mouse instruction generating method
CN102324041B (en) * 2011-09-09 2014-12-03 深圳泰山在线科技有限公司 Pixel classification method, joint body gesture recognition method and mouse instruction generating method
CN102509074A (en) * 2011-10-18 2012-06-20 Tcl集团股份有限公司 Target identification method and device
US9508152B2 (en) 2012-01-11 2016-11-29 Samsung Electronics Co., Ltd. Object learning and recognition method and system
US10163215B2 (en) 2012-01-11 2018-12-25 Samsung Electronics Co., Ltd. Object learning and recognition method and system
CN103890752A (en) * 2012-01-11 2014-06-25 三星电子株式会社 Apparatus for recognizing objects, apparatus for learning classification trees, and method for operating same
US10867405B2 (en) 2012-01-11 2020-12-15 Samsung Electronics Co., Ltd. Object learning and recognition method and system
CN103390174A (en) * 2012-05-07 2013-11-13 深圳泰山在线科技有限公司 Physical education assisting system and method based on human body posture recognition
CN103390150A (en) * 2012-05-08 2013-11-13 北京三星通信技术研究有限公司 Human body part detection method and device
CN103390150B (en) * 2012-05-08 2019-01-08 北京三星通信技术研究有限公司 human body part detection method and device
CN103310193A (en) * 2013-06-06 2013-09-18 温州聚创电气科技有限公司 Method for recording important skill movement moments of athletes in gymnastics video
CN103310193B (en) * 2013-06-06 2016-05-25 温州聚创电气科技有限公司 A kind of method that records sportsman's important technology action moment in gymnastics video
CN103366160A (en) * 2013-06-28 2013-10-23 西安交通大学 Objectionable image distinguishing method integrating skin color, face and sensitive position detection
CN104573612A (en) * 2013-10-16 2015-04-29 北京三星通信技术研究有限公司 Equipment and method for estimating postures of multiple overlapped human body objects in range image
CN104573612B (en) * 2013-10-16 2019-10-22 北京三星通信技术研究有限公司 The device and method of the posture for the multiple human objects being overlapped in estimating depth image
CN104463089A (en) * 2013-11-25 2015-03-25 安徽寰智信息科技股份有限公司 Human body posture recognizing device
WO2015096508A1 (en) * 2013-12-28 2015-07-02 华中科技大学 Attitude estimation method and system for on-orbit three-dimensional space object under model constraint
CN103778436B (en) * 2014-01-20 2017-04-05 电子科技大学 A kind of pedestrian's attitude detecting method based on image procossing
CN103778436A (en) * 2014-01-20 2014-05-07 电子科技大学 Pedestrian gesture inspecting method based on image processing
CN104125396A (en) * 2014-06-24 2014-10-29 小米科技有限责任公司 Image shooting method and device
CN104952221B (en) * 2015-07-09 2017-06-13 深圳大学 Myopia-proof intelligent desk lamp
CN104952221A (en) * 2015-07-09 2015-09-30 李乔亮 Intelligent table lamp with myopia prevention function
CN105631861B (en) * 2015-12-21 2019-10-01 浙江大学 Restore the method for 3 D human body posture from unmarked monocular image in conjunction with height map
CN105631861A (en) * 2015-12-21 2016-06-01 浙江大学 Method of restoring three-dimensional human body posture from unmarked monocular image in combination with height map
CN105997094A (en) * 2016-05-09 2016-10-12 北京科技大学 A posture identification device and method
CN105997094B (en) * 2016-05-09 2019-03-29 北京科技大学 A kind of gesture recognition device and method
CN106780569A (en) * 2016-11-18 2017-05-31 深圳市唯特视科技有限公司 A kind of human body attitude estimates behavior analysis method
CN108154161A (en) * 2016-12-05 2018-06-12 上海西门子医疗器械有限公司 The method of training grader, the method and medical instrument for determining detected object position
WO2018107872A1 (en) * 2016-12-15 2018-06-21 广州视源电子科技股份有限公司 Method and device for predicting body type
CN108629345A (en) * 2017-03-17 2018-10-09 北京京东尚科信息技术有限公司 Dimensional images feature matching method and device
US11210555B2 (en) 2017-03-17 2021-12-28 Beijing Jingdong Shangke Information Technology Co., Ltd. High-dimensional image feature matching method and device
CN107358149A (en) * 2017-05-27 2017-11-17 深圳市深网视界科技有限公司 A kind of human body attitude detection method and device
CN108121963A (en) * 2017-12-21 2018-06-05 北京奇虎科技有限公司 Processing method, device and the computing device of video data
CN108345869A (en) * 2018-03-09 2018-07-31 南京理工大学 Driver's gesture recognition method based on depth image and virtual data
CN109101866B (en) * 2018-06-05 2020-12-15 中国科学院自动化研究所 Pedestrian re-identification method and system based on segmentation silhouette
CN109101866A (en) * 2018-06-05 2018-12-28 中国科学院自动化研究所 Pedestrian recognition methods and system again based on segmentation outline
CN110362843A (en) * 2018-11-20 2019-10-22 莆田学院 A kind of visual human's entirety posture approximation generation method based on typical posture
CN110008998A (en) * 2018-11-27 2019-07-12 美律电子(深圳)有限公司 Label data generating system and method
CN110008998B (en) * 2018-11-27 2021-07-13 美律电子(深圳)有限公司 Label data generating system and method
CN110020630A (en) * 2019-04-11 2019-07-16 成都乐动信息技术有限公司 Method, apparatus, storage medium and the electronic equipment of assessment movement completeness
CN112288798A (en) * 2019-07-24 2021-01-29 鲁班嫡系机器人(深圳)有限公司 Posture recognition and training method, device and system
US11564651B2 (en) 2020-01-14 2023-01-31 GE Precision Healthcare LLC Method and systems for anatomy/view classification in x-ray imaging
CN111353543A (en) * 2020-03-04 2020-06-30 镇江傲游网络科技有限公司 Motion capture data similarity measurement method, device and system
CN115294375A (en) * 2022-10-10 2022-11-04 南昌虚拟现实研究院股份有限公司 Speckle depth estimation method and system, electronic device and storage medium
CN115294375B (en) * 2022-10-10 2022-12-13 南昌虚拟现实研究院股份有限公司 Speckle depth estimation method and system, electronic device and storage medium

Also Published As

Publication number Publication date
CN101989326B (en) 2015-04-01
KR20110013200A (en) 2011-02-09

Similar Documents

Publication Publication Date Title
CN101989326B (en) Human posture recognition method and device
Mahmood et al. WHITE STAG model: Wise human interaction tracking and estimation (WHITE) using spatio-temporal and angular-geometric (STAG) descriptors
CN105930767B (en) A kind of action identification method based on human skeleton
CN108776773B (en) Three-dimensional gesture recognition method and interaction system based on depth image
Liu et al. Hand gesture recognition using depth data
US9111147B2 (en) Assisted video surveillance of persons-of-interest
US7308112B2 (en) Sign based human-machine interaction
Correa et al. Human detection and identification by robots using thermal and visual information in domestic environments
US20110025834A1 (en) Method and apparatus of identifying human body posture
CN102004899B (en) Human face identifying system and method
CN103839040A (en) Gesture identification method and device based on depth images
Xu et al. Real-time dynamic gesture recognition system based on depth perception for robot navigation
CN108256421A (en) A kind of dynamic gesture sequence real-time identification method, system and device
CN108898063A (en) A kind of human body attitude identification device and method based on full convolutional neural networks
US20080285807A1 (en) Apparatus for Recognizing Three-Dimensional Motion Using Linear Discriminant Analysis
Choi et al. Human body orientation estimation using convolutional neural network
Zhang et al. A survey on human pose estimation
CN108171133A (en) A kind of dynamic gesture identification method of feature based covariance matrix
CN110796101A (en) Face recognition method and system of embedded platform
Li et al. Robust multiperson detection and tracking for mobile service and social robots
CN109325408A (en) A kind of gesture judging method and storage medium
Sapp et al. A Fast Data Collection and Augmentation Procedure for Object Recognition.
CN108830222A (en) A kind of micro- expression recognition method based on informedness and representative Active Learning
Vidhate et al. Virtual paint application by hand gesture recognition system
CN104731324A (en) Gesture inner plane rotating detecting model generating method based on HOG+SVM framework

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150401

Termination date: 20170731

CF01 Termination of patent right due to non-payment of annual fee