CN104090663A - Gesture interaction method based on visual attention model - Google Patents

Gesture interaction method based on visual attention model

Info

Publication number
CN104090663A
CN104090663A (application number CN201410334996.XA; granted as CN104090663B)
Authority
CN
China
Prior art keywords
visual attention
attention location
model
gesture
distance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410334996.XA
Other languages
Chinese (zh)
Other versions
CN104090663B (en)
Inventor
冯志全 (Feng Zhiquan)
何娜娜 (He Nana)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Jinan
Original Assignee
University of Jinan
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Jinan filed Critical University of Jinan
Priority to CN201410334996.XA priority Critical patent/CN104090663B/en
Publication of CN104090663A publication Critical patent/CN104090663A/en
Application granted granted Critical
Publication of CN104090663B publication Critical patent/CN104090663B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention relates to a gesture interaction method based on a visual attention model, and is the first study of how a user's visual attention is distributed over a gesture interaction interface. First, the changing gaze of the human eye is tracked precisely with an eye tracker; second, the data output by the eye tracker are analyzed to reveal the general pattern of visual attention, and a visual attention model of the operator is built; third, the model is described by a five-term Gaussian formula; finally, the visual attention model is applied to the design of a gesture tracking algorithm. Because the model remains strong only within a small region, it markedly improves the efficiency of gesture interaction: it raises both speed and accuracy, and faithfully simulates the human visual trait of close observation within a limited area.

Description

Gesture interaction method based on a visual attention model
Technical field
The present invention relates to a gesture interaction method based on a visual attention model.
Background technology
In current human-computer interaction systems, accurate real-time tracking of gestures plays an important role in the subsequent processing. Commonly used gesture tracking algorithms include the following. The MeanShift algorithm is an image feature analysis method based on kernel density estimation; it is widely used because it is computationally simple and runs in real time. The CamShift algorithm is an improvement on MeanShift that uses a color probability model, and is known as the continuously adaptive MeanShift algorithm. The Kalman filter predicts the position of the target at the next moment and makes an optimal estimate of the next state, which improves the stability and accuracy of the system. In 2007, Raskin combined a Gaussian dynamic model with annealed particle filtering, which considerably improved tracking performance. In 2008, Feng Zhiquan used the particle filter (PF) method to study the tracking of a naturally moving human hand. In 2013, Morshid proposed the Gaussian-process annealed particle filter (GPAPF) algorithm, which integrates a Gaussian-process dynamical model into annealed-particle-filter body tracking: the dynamical model reduces the dimension of the state vector and increases the stability of body-part tracking, and at the same time a latent space is constructed to preserve rotation and translation invariance, giving GPAPF good overall performance. Also in 2013, Feng Zhiquan proposed feedback tracking based on a behavior model (FTBM), which builds a behavior model over the select-translate-release process; experiments show that FTBM reduces dimensionality and achieves real-time tracking. Although research on gesture tracking at home and abroad has made great progress, none of the above work studies human-computer interaction from the perspective of vision. Incorporating the rapid screening ability of the human visual attention mechanism into human-computer interaction can effectively highlight target features, shield interfering information, and give the computer a human-like attentional intelligence.
In the real world, the eyes are the largest information source for human beings: more than 80% of the information we acquire each day comes through visual perception. Yet human vision does not attend to every region of interest at every moment; rather, it acquires the information of interest at a particular moment.
Visual attention is a very complicated process; because of its great complexity and uncertainty, it spans several disciplines, including cognitive science, neurobiology, and psychology. Extensive research in biology and psychology shows that two different kinds of visual attention operate in the brain, studied as bottom-up attention and top-down attention. In 1980, Treisman and Gelade proposed the influential feature integration theory, dividing attentional selection into two stages; in 1985, Koch and Ullman extended Treisman's feature-integration model and were the first to propose the concept of the saliency map; in 1998, L. Itti and C. Koch, building on the work of Treisman, Koch, and others, proposed a bottom-up computational model of visual attention that is computationally light and fast, but sensitive to noise and poor in robustness. The Itti model remains the most influential visual attention model and is often used as the baseline against which the performance of other models is measured. In 2007, Gao et al. used the steps of the Itti model to obtain feature maps and proposed the discriminant center-surround hypothesis for computing saliency. Up to now, visual attention research has mainly concentrated on extracting multiple features from an input image, such as color, orientation, brightness, and motion, forming a saliency map in each feature dimension, and then analyzing these saliency maps to obtain the attended target. This work instead analyzes eye movement data acquired in real time, finds the pattern of human visual attention, establishes a visual attention model, and finally applies this model to a gesture interaction system. The algorithm selects a small amount of useful information from the large volume of visual information on the human-computer interaction platform to carry out gesture tracking, allowing the computer to genuinely simulate human gesture interaction behavior and improving the speed of interaction. Experiments show that the algorithm improves speed and can faithfully reproduce the human interaction process.
Summary of the invention
To remedy the above technical deficiencies, the invention provides a gesture interaction method based on a visual attention model that is fast, highly accurate, and allows the computer to simulate human visual attention behavior more faithfully.
The present invention is achieved by the following measures:
The gesture interaction method based on a visual attention model of the present invention comprises the following steps:
Step 1: an eye tracker is used to run human-computer interaction experiments with subjects; the gaze position and fixation time of each subject during the interaction are collected and recorded, and the visual attention intensity P is calculated from an analysis of the gaze position information.
Step 2: human-computer interaction experiments based on the eye tracker are run with different groups of people to obtain experimental statistics; a visual attention fitting chart is drawn with the distance from the subject's hand to the object on the horizontal axis and the subject's visual attention P on the vertical axis, and a visual attention model is established, represented by a five-term Gaussian sum M.
Step 3: based on the visual attention model from Step 2, a virtual experimental scene is built on a computer together with a three-dimensional gesture human-computer interaction platform, and the visual attention model is applied in this interactive virtual scene.
First, the initial distance from the three-dimensional hand to the currently selected object is calculated in the virtual scene, and a mapping formula is derived between this initial distance and the maximum hand-to-object distance on the horizontal axis of the visual attention fitting chart.
Then, for every frame, the distance from the three-dimensional hand to the currently selected object is computed, the mapping formula converts it into the corresponding model distance parameter, and that parameter is substituted into the five-term Gaussian sum M to obtain the corresponding visual attention intensity P.
Finally, according to the visual attention intensity P, either the particle filter method or the animation method is selected for gesture tracking.
In Step 1, the time period A during which the three-dimensional gesture in the translation stage coincides with the visual attention point is extracted first; next, the corresponding time period B on the computer is found; then the frame numbers C corresponding to this event are obtained from B; finally, from the frame numbers C, the corresponding attention time and the distance Distance from the hand to the target position are obtained, yielding the visual attention intensity P corresponding to Distance, that is:
P = Attention_time / Frame_time    (1)
where Attention_time is the visual attention time within each frame and Frame_time is the duration of each frame.
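Formula (1) is simple enough to state directly in code. The sketch below (Python, with illustrative parameter names not taken from the patent) computes the per-frame attention intensity:

```python
def attention_intensity(attention_time: float, frame_time: float) -> float:
    """Formula (1): P = Attention_time / Frame_time.

    attention_time -- time within the frame during which the gaze point
                      coincided with the three-dimensional gesture (seconds)
    frame_time     -- total duration of the frame (seconds)
    """
    if frame_time <= 0:
        raise ValueError("frame_time must be positive")
    return attention_time / frame_time

# Example: the gaze overlapped the hand for 25 ms of a 33 ms frame.
p = attention_intensity(0.025, 0.033)  # P is about 0.76
```

Since the attention time cannot exceed the frame duration, P lies in [0, 1] and can be read as an attention probability.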
In Steps 2 and 3, the visual attention model is represented by the five-term Gaussian sum M, as follows:
f(x) = a1·exp(-((x-b1)/c1)^2) + a2·exp(-((x-b2)/c2)^2) + a3·exp(-((x-b3)/c3)^2) + a4·exp(-((x-b4)/c4)^2) + a5·exp(-((x-b5)/c5)^2)    (2)
where a1=0.7783, b1=-8.575, c1=20.64; a2=-0.009063, b2=37.51, c2=1.105; a3=-0.4649, b3=60.18, c3=34.45; a4=0.6527, b4=19.05, c4=22.51; a5=1.308, b5=55.04, c5=29.67.
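Because the five coefficient triples are given numerically, formula (2) can be evaluated directly. The following sketch (Python, assuming the distance x is already expressed on the model's own axis) is one way to do so; it is an illustration, not the patent's implementation:

```python
import math

# Coefficients (a_i, b_i, c_i) of formula (2), as listed above.
PARAMS = [
    (0.7783,    -8.575, 20.64),
    (-0.009063, 37.51,   1.105),
    (-0.4649,   60.18,  34.45),
    (0.6527,    19.05,  22.51),
    (1.308,     55.04,  29.67),
]

def attention_model(x: float) -> float:
    """f(x) = sum over the five terms of a_i * exp(-((x - b_i)/c_i)^2):
    visual attention intensity as a function of hand-to-object distance x."""
    return sum(a * math.exp(-((x - b) / c) ** 2) for a, b, c in PARAMS)

# At zero distance the fitted intensity is close to 1, consistent with the
# observation that attention strengthens as the hand approaches the object.
p_near = attention_model(0.0)
```

Note that f is a least-squares fit rather than a probability density, so it should not be extrapolated far beyond the distance range covered by the experiments.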
In Step 3, a threshold on the attention intensity P is set in the visual attention model M. If the visual attention intensity P corresponding to the hand-to-selected-object distance is greater than this threshold, the particle filter method is used for gesture tracking; if it is less than the threshold, the animation method is used.
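The patent does not spell out the particle filter it invokes. For orientation only, a minimal generic particle filter for one-dimensional position tracking is sketched below; all names, noise parameters, and the toy observation sequence are illustrative, and the patent's actual state vector and observation likelihood for hand tracking are not specified here.

```python
import math
import random

def particle_filter_step(particles, observation, motion_std=1.0, obs_std=1.0):
    """One predict-weight-resample cycle of a basic 1-D particle filter."""
    # Predict: diffuse every particle under a random-walk motion model.
    particles = [p + random.gauss(0.0, motion_std) for p in particles]
    # Weight: Gaussian likelihood of the observation for each particle.
    weights = [math.exp(-0.5 * ((observation - p) / obs_std) ** 2)
               for p in particles]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw a new particle set proportionally to the weights.
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(0)
parts = [random.uniform(-10.0, 10.0) for _ in range(500)]
for obs in (2.0, 2.1, 2.0, 1.9, 2.0):   # noisy observations near x = 2
    parts = particle_filter_step(parts, obs)
estimate = sum(parts) / len(parts)       # posterior mean, close to 2
```

The per-frame cost of such a filter scales with the particle count, which is exactly why the method's threshold switches it off (in favor of animation) when attention is low.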
The beneficial effects of the invention are: 1. In the three-dimensional gesture interaction interface, the distribution of the user's visual attention over the three-dimensional gesture model is studied for the first time, providing a new cognitive basis for research on three-dimensional gesture tracking algorithms. 2. The model remains strong only within a small region, so instead of "tracking frame by frame" outside the focus of attention, animation is used for simulation, which markedly improves the efficiency of gesture interaction and raises its speed. 3. PF is used in regions of strong visual attention, which faithfully simulates the human visual trait of close observation within a certain area; accuracy is also improved.
Brief description of the drawings
Fig. 1 shows the visual attention fitting result of the present invention.
Embodiment
The present invention is described in further detail below with reference to the accompanying drawing:
The gesture interaction method based on a visual attention model of the present invention studies, for the first time, the distribution of the user's visual attention over a gesture interaction interface. First, an eye tracker is used to precisely follow the changes of a person's gaze; second, the data output by the eye tracker are analyzed to reveal the general pattern of visual attention, and a visual attention model of the operator is established; third, this model is described by a five-term Gaussian formula; finally, the visual attention model is used in the design of the gesture tracking algorithm. The innovation here is that, in a three-dimensional gesture interaction interface, the distribution of the user's visual attention over the three-dimensional gesture model is studied for the first time, providing a new cognitive basis for research on three-dimensional gesture tracking algorithms. Comparative experiments against several related algorithms show that this algorithm can effectively improve gesture interaction speed and tracking accuracy by 30%.
1. Obtaining the gaze-change data;
An eye tracker is a professional instrument dedicated to measuring and recording eye movements, and thereby tracking changes of gaze. The instrument adopted here is Tobii Studio; the procedure for acquiring gaze data can be found in the Tobii Studio reference manual. Tobii Studio records parameters such as the fixation time of each gazed position, the first fixation point, fixation durations, the total number of fixations, the X and Y coordinates of each fixation point, and the fixation sequence. Eye movements therefore hint at how the human brain collects and filters information, and after the experiments these data allow us to determine the subject's gaze choices, search strategy, and related information while in motion.
To make the gaze tracked by Tobii Studio authentic and reliable, we selected people of different ages, sexes, and backgrounds to carry out a large number of human-computer interaction experiments in different scenes (the subjects were not told the purpose of the experiment beforehand). Throughout the experiment, each subject performed all operations according to his or her own habits.
2. Analyzing the data obtained by the eye tracker;
The experimental results show the gaze position of each subject over the whole interaction process. Because most operations in the process are translation events, this work takes the translation process as its focus of study. First, we extract the time period A during which the three-dimensional gesture in the translation stage coincides with the visual attention point; second, we find the corresponding time period B on the computer; third, from B we obtain the frame numbers C corresponding to this event; finally, from the frame numbers C we obtain the corresponding attention time and the distance Distance from the hand to the target position, which yields the attention intensity P (the probability of visual attention) corresponding to Distance:
P = Attention_time / Frame_time    (1)
where Attention_time is the visual attention time within each frame and Frame_time is the duration of each frame.
The next focus of this work is to study the attention intensity, find the pattern of visual attention, obtain the visual attention model, and finally apply this model to the gesture interaction platform.
3. Establishing the visual attention model;
To find the distribution pattern of human visual attention, we had different groups of people experiment on different interaction platforms; throughout the interaction, the subjects followed their own natural habits entirely. Following the statistics procedure of Step 2, a large number of experimental statistics were collected, and MATLAB was used to obtain the visual attention fitting result for this event, shown in Fig. 1, where the abscissa is the hand-to-object distance and the ordinate is the visual attention intensity. As can be seen, the visual attention intensity increases as the distance decreases, which agrees well with human visual characteristics.
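The MATLAB fit mentioned above is a standard nonlinear least-squares fit. An equivalent procedure can be sketched in Python with SciPy; to keep the example well-conditioned it fits a single Gaussian term to synthetic data (the five-term fit works the same way but needs good initial guesses for all fifteen coefficients). Everything below, including the data, is illustrative rather than taken from the patent's experiments:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, a, b, c):
    # One term of the attention model: a * exp(-((x - b)/c)^2)
    return a * np.exp(-((x - b) / c) ** 2)

# Synthetic "attention vs. distance" samples with a little noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 60.0, 200)
y = gauss(x, 1.0, 10.0, 15.0) + rng.normal(0.0, 0.01, x.size)

# Nonlinear least squares, analogous to fitting a 'gauss' model in MATLAB.
(a_fit, b_fit, c_fit), _ = curve_fit(gauss, x, y, p0=[0.5, 5.0, 10.0])
```

The initial guess p0 matters: with five terms and random starting points, the optimizer can land in a poor local minimum, which is presumably why the patent reports a specific fitted coefficient set rather than a fitting recipe.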
Accordingly, the visual attention model of this work is expressed by the five-term Gaussian sum M:
f(x) = a1·exp(-((x-b1)/c1)^2) + a2·exp(-((x-b2)/c2)^2) + a3·exp(-((x-b3)/c3)^2) + a4·exp(-((x-b4)/c4)^2) + a5·exp(-((x-b5)/c5)^2)    (2)
where a1=0.7783, b1=-8.575, c1=20.64; a2=-0.009063, b2=37.51, c2=1.105; a3=-0.4649, b3=60.18, c3=34.45; a4=0.6527, b4=19.05, c4=22.51; a5=1.308, b5=55.04, c5=29.67.
4. Building the three-dimensional gesture interaction scene;
Based on the visual attention model set forth above, a 3D virtual interaction platform is needed to realize the visual attention mechanism. The scene designed here places interior decorations (small vases) on a desk: the objects in the scene are mainly a desk, two small vases, and a three-dimensional hand, and each vase can be placed on the desk. In this experiment the natural hand controls the three-dimensional virtual hand, and the main events in the 3D scene are grasping a vase, translating it, and putting it down. The visual attention model is applied to this human-computer interaction experiment scene.
(1) Hardware environment
A generic USB camera and an ordinary computer (Intel(R) Core(TM) 2 CPU, 2.66 GHz clock speed, 4 GB RAM).
(2) Software environment
The programming language is Visual C++ 6.0, with OpenGL used to build the three-dimensional interaction platform; MATLAB and 3ds Max are also used.
(3) Algorithm steps
For gesture tracking, PF is relatively mature and is applicable to all kinds of nonlinear, non-Gaussian problems. Therefore this work adopts the PF method in regions of high visual attention intensity, and elsewhere replaces "frame-by-frame tracking" with animation. The specific algorithm is as follows:
Algorithm 1. Gesture interaction algorithm.
Input: the current frame number, the model distance, and the distance from the three-dimensional hand to the object
Output: the attention probability relative to the visual attention model
Step 1. Initialization: the method described in this work obtains the initial state of the gesture.
Step 2. Select the object:
a) The object of interest is selected here by means of a token ring.
b) Calculate the initial distance dis_initialization from the three-dimensional hand to the currently selected object, and let distance denote the distance extent of the visual attention model of Fig. 1.
c) Calculate Value = dis_initialization / distance    (3)
Step 3. Map the hand-to-selected-object distance in the three-dimensional scene onto the distance axis of the visual attention model M, and obtain the corresponding visual attention probability.
a) Calculate the distance distance_new from the three-dimensional hand to the currently selected object for every frame.
b) Obtain the model distance Model_distance corresponding to distance_new through the mapping formula:
Model_distance = distance_new · (1 / Value)    (4)
c) Having obtained the distance Model_distance relative to the visual attention model, the corresponding visual attention intensity P follows from the five-term Gaussian sum of formula (2):
P=f(Model_distance) (5)
where P is the visual attention intensity, relative to the visual attention model M, of the distance corresponding to each frame.
Step 4. Apply the visual attention model M to the gesture interaction platform.
a) Choose a particular attention-intensity value Yu in the visual attention model M such that the visual attention intensity is not less than Yu when the model distance shrinks, and not greater than Yu when the model distance grows.
b) Obtain the visual attention intensity P corresponding to the hand-to-selected-object distance distance_new. If P > Yu, execute the PF method; otherwise execute the animation. Check whether collision detection succeeds; if so, terminate, otherwise advance to the next frame.
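Steps 2 to 4 above (formulas (3)-(5) plus the threshold test) can be condensed into a few lines. In this sketch the model's maximum distance, the threshold Yu, and the example distances are illustrative stand-ins; only the coefficients of formula (2) come from the patent:

```python
import math

# Coefficients (a_i, b_i, c_i) of formula (2).
PARAMS = [(0.7783, -8.575, 20.64), (-0.009063, 37.51, 1.105),
          (-0.4649, 60.18, 34.45), (0.6527, 19.05, 22.51),
          (1.308, 55.04, 29.67)]

def attention_model(x):
    # Formula (2): five-term Gaussian sum.
    return sum(a * math.exp(-((x - b) / c) ** 2) for a, b, c in PARAMS)

MODEL_MAX_DISTANCE = 60.0   # assumed extent of the model's distance axis
YU = 0.9                    # assumed attention threshold Yu

def choose_tracker(dis_initialization, distance_new):
    # Formula (3): ratio of the initial scene distance to the model axis.
    value = dis_initialization / MODEL_MAX_DISTANCE
    # Formula (4): map the current scene distance onto the model axis.
    model_distance = distance_new * (1.0 / value)
    # Formula (5): attention intensity for this frame.
    p = attention_model(model_distance)
    # P > Yu -> particle-filter tracking; otherwise animation.
    return "particle_filter" if p > YU else "animation"
```

For example, with an initial distance of 30 units, a hand only 2 units from the object maps to a small model distance and high attention (particle filter), while a hand 45 units away maps far out on the axis and low attention (animation).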
The above is only the preferred embodiment of this patent. It should be pointed out that those skilled in the art can make improvements and substitutions without departing from the principle of this patent, and such improvements and substitutions should also be regarded as falling within its scope of protection.

Claims (4)

1. A gesture interaction method based on a visual attention model, characterized in that it comprises the following steps:
Step 1: an eye tracker is used to run human-computer interaction experiments with subjects; the gaze position information and fixation time information of each subject during the interaction are collected and recorded, and the visual attention intensity P is calculated from an analysis of this information;
Step 2: human-computer interaction experiments based on the eye tracker are run with different groups of people to obtain experimental statistics; a visual attention fitting chart is drawn with the distance from the subject's hand to the object on the horizontal axis and the subject's visual attention P on the vertical axis, and a visual attention model is established, represented by a five-term Gaussian sum M;
Step 3: based on the visual attention model from Step 2, a virtual experimental scene is built on a computer together with a three-dimensional gesture human-computer interaction platform, and the visual attention model is applied in this interactive virtual scene;
first, the initial distance from the three-dimensional hand to the currently selected object is calculated in the virtual scene, and a mapping formula is derived between this initial distance and the maximum hand-to-object distance on the horizontal axis of the visual attention fitting chart;
then, for every frame, the distance from the three-dimensional hand to the currently selected object is computed, the mapping formula yields the corresponding model distance parameter, and that parameter is substituted into the five-term Gaussian sum M to obtain the corresponding visual attention intensity P;
finally, according to the visual attention intensity P, either the particle filter method or the animation method is selected for gesture tracking.
2. The gesture interaction method based on a visual attention model according to claim 1, characterized in that in Step 1, the time period A during which the three-dimensional gesture in the translation stage coincides with the visual attention point is extracted first; next, the corresponding time period B on the computer is found; then the frame numbers C corresponding to this event are obtained from B; finally, from the frame numbers C, the corresponding attention time and the distance Distance from the hand to the target position are obtained, yielding the visual attention intensity P corresponding to Distance, that is:
P = Attention_time / Frame_time    (1)
where Attention_time is the visual attention time within each frame and Frame_time is the duration of each frame.
3. The gesture interaction method based on a visual attention model according to claim 1, characterized in that in Steps 2 and 3 the visual attention model is represented by the five-term Gaussian sum M, as follows:
f(x) = a1·exp(-((x-b1)/c1)^2) + a2·exp(-((x-b2)/c2)^2) + a3·exp(-((x-b3)/c3)^2) + a4·exp(-((x-b4)/c4)^2) + a5·exp(-((x-b5)/c5)^2)    (2)
where a1=0.7783, b1=-8.575, c1=20.64; a2=-0.009063, b2=37.51, c2=1.105; a3=-0.4649, b3=60.18, c3=34.45; a4=0.6527, b4=19.05, c4=22.51; a5=1.308, b5=55.04, c5=29.67.
4. The gesture interaction method based on a visual attention model according to claim 1, characterized in that:
in Step 3, a threshold on the attention intensity P is set in the visual attention model M; if the visual attention intensity P corresponding to the hand-to-selected-object distance is greater than this threshold, the particle filter method is used for gesture tracking; if it is less than the threshold, the animation method is used.
CN201410334996.XA 2014-07-14 2014-07-14 Gesture interaction method based on visual attention model Active CN104090663B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410334996.XA CN104090663B (en) 2014-07-14 2014-07-14 Gesture interaction method based on visual attention model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410334996.XA CN104090663B (en) 2014-07-14 2014-07-14 Gesture interaction method based on visual attention model

Publications (2)

Publication Number Publication Date
CN104090663A true CN104090663A (en) 2014-10-08
CN104090663B CN104090663B (en) 2016-03-23

Family

ID=51638384

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410334996.XA Active CN104090663B (en) 2014-07-14 2014-07-14 Gesture interaction method based on visual attention model

Country Status (1)

Country Link
CN (1) CN104090663B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104766054A (en) * 2015-03-26 2015-07-08 济南大学 Vision-attention-model-based gesture tracking method in human-computer interaction interface
CN105045373A (en) * 2015-03-26 2015-11-11 济南大学 Three-dimensional gesture interacting method used for expressing user mental model

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101344816A (en) * 2008-08-15 2009-01-14 华南理工大学 Human-machine interaction method and device based on sight tracing and gesture discriminating
CN103279191A (en) * 2013-06-18 2013-09-04 北京科技大学 3D (three dimensional) virtual interaction method and system based on gesture recognition technology
WO2013144807A1 (en) * 2012-03-26 2013-10-03 Primesense Ltd. Enhanced virtual touchpad and touchscreen

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101344816A (en) * 2008-08-15 2009-01-14 华南理工大学 Human-machine interaction method and device based on sight tracing and gesture discriminating
WO2013144807A1 (en) * 2012-03-26 2013-10-03 Primesense Ltd. Enhanced virtual touchpad and touchscreen
CN103279191A (en) * 2013-06-18 2013-09-04 北京科技大学 3D (three dimensional) virtual interaction method and system based on gesture recognition technology

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104766054A (en) * 2015-03-26 2015-07-08 济南大学 Vision-attention-model-based gesture tracking method in human-computer interaction interface
CN105045373A (en) * 2015-03-26 2015-11-11 济南大学 Three-dimensional gesture interacting method used for expressing user mental model
CN105045373B (en) * 2015-03-26 2018-01-09 济南大学 A kind of three-dimension gesture exchange method of user oriented mental model expression

Also Published As

Publication number Publication date
CN104090663B (en) 2016-03-23

Similar Documents

Publication Publication Date Title
Orchard et al. Converting static image datasets to spiking neuromorphic datasets using saccades
CN107766842B (en) Gesture recognition method and application thereof
CN108509910A (en) Deep learning gesture identification method based on fmcw radar signal
CN106599770A (en) Skiing scene display method based on body feeling motion identification and image matting
CN103336967B (en) A kind of hand motion trail detection and device
CN104484890A (en) Video target tracking method based on compound sparse model
CN103902989A (en) Human body motion video recognition method based on non-negative matrix factorization
CN103020614A (en) Human movement identification method based on spatio-temporal interest point detection
CN105809713A (en) Object tracing method based on online Fisher discrimination mechanism to enhance characteristic selection
CN104267380A (en) Associated display method for full-pulse signal multidimensional parameters
Zhao et al. Annealed particle filter algorithm used for lane detection and tracking
CN103077383B (en) Based on the human motion identification method of the Divisional of spatio-temporal gradient feature
CN104090663B (en) A kind of view-based access control model pays close attention to the gesture interaction method of model
Pang et al. Dance video motion recognition based on computer vision and image processing
Jetley et al. 3D activity recognition using motion history and binary shape templates
CN107229330B (en) A kind of character input method and device based on Steady State Visual Evoked Potential
Sharifi et al. Marker-based human pose tracking using adaptive annealed particle swarm optimization with search space partitioning
CN107451578A (en) Deaf-mute's sign language machine translation method based on somatosensory device
Zhao et al. Simulation of sports training recognition system based on internet of things video behavior analysis
Saravanakumar et al. Eye Tracking and blink detection for human computer interface
He et al. Human behavior feature representation and recognition based on depth video
Zanca et al. A unified computational framework for visual attention dynamics
Zhou et al. A Novel Algorithm of Edge Location Based on Omni-directional and Multi-scale MM.
Korakakis et al. A short survey on modern virtual environments that utilize AI and synthetic data
Bousaaid et al. Hand gesture detection and recognition in cyber presence interactive system for E-learning

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160323