CN104766054A - Vision-attention-model-based gesture tracking method in human-computer interaction interface - Google Patents

Vision-attention-model-based gesture tracking method in human-computer interaction interface

Info

Publication number
CN104766054A
CN104766054A (Application CN201510137223.7A)
Authority
CN
China
Prior art keywords
model
visual attention
distance
gesture
attention location
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510137223.7A
Other languages
Chinese (zh)
Inventor
冯志全
何娜娜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Jinan
Original Assignee
University of Jinan
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Jinan filed Critical University of Jinan
Priority to CN201510137223.7A priority Critical patent/CN104766054A/en
Publication of CN104766054A publication Critical patent/CN104766054A/en
Pending legal-status Critical Current

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to a gesture tracking method based on a visual attention model in a human-computer interaction interface. First, the change of the operator's gaze is accurately tracked by an eye tracker; then the data output by the eye tracker are analyzed and a visual attention model of the operator for each relevant event is built; finally, a three-dimensional gesture tracking algorithm is designed on the basis of the visual attention model. The key of the method is obtaining the operator's visual attention model in the three-dimensional scene. The method has the advantages that visual attention is combined with human-computer interaction, human visual regularities are reflected, and a natural interaction style is pursued; in addition, the current bottleneck in gesture tracking speed is overcome by exploiting the distribution of the user's visual attention during interaction. By introducing the visual attention mechanism, the average running time is shortened, human-computer interaction efficiency and speed are improved, and gesture tracking accuracy is increased.

Description

Gesture tracking method based on a visual attention model in a human-computer interaction interface
Technical field
The present invention relates to a gesture tracking method based on a visual attention model in a human-computer interaction interface.
Background technology
With the rapid development of computer technology, interaction between people and computers is gradually becoming an important part of daily life, and human-computer interaction technology that matches interpersonal communication habits has become the current trend. Human-computer interaction is currently shifting from "machine-centered" to "human-centered" and increasingly pursues a natural interaction style.
Gesture is a natural, comfortable interaction modality that matches human habits. Many existing gesture interaction algorithms attempt to simulate human interaction mechanisms, but none of them truly reproduces human interactive behavior. Research on the human visual system has shown that, facing a complex scene, it can quickly concentrate attention on a few salient visual objects; this process is visual attention. Vision-based gesture interaction, with its natural and friendly style, best matches human interaction habits and has therefore become a focus of human-computer interaction research. Interaction based on eye tracking and gesture recognition is natural, direct and concise, saves resources, and truly realizes a harmonious and natural interaction style.
There has been much research on gesture interaction algorithms that improve tracking performance, system stability and accuracy. The MeanShift algorithm is an image feature analysis method based on kernel density estimation; it is widely used because it is simple to compute and runs in real time, but it was mainly designed for static probability density distributions. Yang Jie proposed an improved Camshift tracking algorithm that first performs target detection, then determines a segmentation threshold, and finally applies Camshift for target tracking; experiments show that it achieves good tracking results. Hu Peng proposed a semi-automatic target segmentation method that can effectively segment a moving target and extract its features. Liang Juan proposed a tracking algorithm combining a Kalman filter with Camshift that can still track steadily when the target is severely occluded or disturbed by similar colors. Tan Wen-jun proposed a gesture tracking algorithm combining a Kalman filter with a skin color model, using an elliptical skin color model in YCbCr space; experiments show that it can effectively track gestures under hand deformation and trajectory turning during gesture motion. Raskin combined a Gaussian dynamic model with annealed particle filtering to reduce the dimensionality of the state vector. Feng Zhi-Quan took behavior analysis and modeling as the entry point and proposed a gesture tracking algorithm based on a state micro-mechanism, which obtains more accurate tracking results; a particle filter (PF) method based on Gaussian sampling has also been used to track the naturally moving hand, and the PF algorithm can improve the tracking accuracy of three-dimensional gestures. Although gesture tracking research at home and abroad has made great progress, all these algorithms track frame by frame, so the amount of computation is very large and interaction efficiency is low.
Visual attention is a complex process; because of its extreme complexity and uncertainty, it spans multiple disciplines such as cognitive science, neurobiology and psychology. In recent years visual attention models have made great progress. Research on these models mainly concentrates on image processing: obtaining the hue, saturation and brightness of an image, transforming each feature map, evaluating the saliency of targets, removing redundant background information, and obtaining the corresponding saliency map; finally, the original image is segmented according to the saliency map to obtain the target.
Summary of the invention
To remedy the above technical deficiencies, the invention provides a gesture tracking method based on a visual attention model in a human-computer interaction interface, which improves human-computer interaction speed and tracking accuracy.
The present invention is achieved by the following measures:
A gesture tracking method based on a visual attention model in a human-computer interaction interface according to the present invention comprises the following steps:
Step 1: the human-computer interaction task performed by the subject is divided into four stages: a gesture translation stage, an object grasping stage, a stage of translating the grasped object, and an object release stage; an eye tracker is used to collect the subject's data in each stage of the interaction task.
Step 2: the visual attention intensity P of each stage is derived by analyzing the data collected in step 1, and the correspondence between the distance Distance and the visual attention intensity P is summarized, where Distance is the distance from the gesture model to the target location Goals; in the three-dimensional scene, the visual attention intensity P of the target location Goals is defined as P = T_1 / T, where T_1 is the attention time on the target location Goals within the total time T.
Step 3: according to the corresponding data of Distance and visual attention intensity P in each stage, the variation law of P is found, and the visual attention intensity model of each stage is obtained with data analysis software.
Step 4: the visual attention intensity models obtained in step 3 are combined with the particle filter (PF) algorithm used in gesture tracking; the relation between gesture tracking accuracy A and the particle number Num of the PF algorithm is found, and from it the linear relation between Num and the visual attention intensity P is derived; when the PF algorithm runs, the value of Num is updated as the visual attention intensity P changes.
In the above step 3, the visual attention intensity model T of the gesture translation stage and the stage of translating the grasped object is t(x) = \sum_{i=1}^{4} a_i e^{-\left(\frac{x-b_i}{c_i}\right)^2}, where x is the distance Distance from the gesture model to Goals, t(x) is the visual attention intensity corresponding to Distance, and a_i, b_i and c_i (1 <= i <= 4) are parameters, with a_1 = 0.1579, b_1 = 114.6, c_1 = 21.43, a_2 = 1.388, b_2 = -25.56, c_2 = 37.29, a_3 = 0.7465, b_3 = 30.12, c_3 = 24.67, a_4 = 0.6378, b_4 = 57.04, c_4 = 17.71.
In the above step 3, the visual attention intensity model G of the object grasping stage is g(x) = p_1 x^5 + p_2 x^4 + p_3 x^3 + p_4 x^2 + p_5 x + p_6, where x is the distance Distance from the gesture model to Goals, g(x) is the visual attention intensity corresponding to Distance, and p_i (1 <= i <= 6) are parameters with p_1 = 0.0008501, p_2 = -0.0125, p_3 = 0.06148, p_4 = -0.1083, p_5 = 0.03885, p_6 = 0.9664.
In the above step 3, the visual attention intensity model R of the object release stage is r(x) = r_1 x^5 + r_2 x^4 + r_3 x^3 + r_4 x^2 + r_5 x + r_6, where x is the distance Distance from the gesture model to Goals, r(x) is the visual attention intensity corresponding to Distance, and r_i (1 <= i <= 6) are parameters with r_1 = 0.000799, r_2 = -0.01286, r_3 = 0.07247, r_4 = -0.1658, r_5 = 0.1307, r_6 = 0.9641.
In the above step 4, the relational expression between gesture tracking accuracy A and particle number Num is A = f(Num), and the linear relation between Num and P is P = (1/80) · Num.
The beneficial effects of the invention are: visual attention is combined with human-computer interaction, which reflects human visual regularities and pursues a natural interaction style, and the current bottleneck in gesture tracking speed is broken through by exploiting the distribution of the user's visual attention during interaction. By introducing the visual attention mechanism, the average running time is reduced, and human-computer interaction efficiency, interaction speed and gesture tracking accuracy are improved.
Description of the drawings
Fig. 1 is a schematic curve of the visual attention intensity model of the translation event in the present invention.
Fig. 2 is a schematic curve of the visual attention intensity model of the grasping event in the present invention.
Fig. 3 is a schematic curve of the visual attention intensity model of the placing event in the present invention.
Fig. 4 is a schematic diagram of the relation curve between tracking accuracy and particle number.
Embodiment
The present invention is described below in further detail with reference to the accompanying drawings.
Aiming at the slow speed of the 3D gesture model in existing gesture-input three-dimensional human-computer interaction interfaces, the invention introduces the visual attention mechanism into the interaction interface and proposes a gesture tracking method based on a visual attention distribution model. First, the change of the operator's gaze is accurately tracked with an eye tracker; then the data output by the eye tracker are analyzed and the operator's visual attention model for each relevant event is established; finally, a three-dimensional gesture tracking algorithm is designed on the basis of the visual attention model. The core work is obtaining the operator's visual attention model in the three-dimensional scene. Experimental results show that the extracted model not only conforms to the operator's visual attention mechanism but can also effectively improve interaction speed and tracking accuracy.
First, the human-computer interaction task performed by the subject is divided into four stages: a gesture translation stage, an object grasping stage, a stage of translating the grasped object, and an object release stage; an eye tracker is used to collect the subject's data in each stage of the interaction task.
The eyes are the windows of the human soul; through this window we can probe the rules of many human psychological activities. Human information processing depends on vision to a great extent: research finds that more than 80% of the external information humans obtain comes through the eyes, so eye movement research is regarded as the most effective means in the study of visual information processing. Scientific research on how people see things has never stopped, and in recent years precise instruments for measuring eye movement have appeared one after another, providing new and effective tools for experimental research in psychology and pushing visual attention research an important step forward. Because of its importance in human-computer interaction, eye tracking technology has become one of the current research hotspots at home and abroad. With an eye tracker we can record the region the eyes gaze at, the fixation time, the order in which the parts of an object are viewed, and so on; by analyzing these data we can determine the operator's focus of attention, regions of interest and browsing habits when viewing an object.
The Tobii eye tracker is mainly used to study the operator's visual attention. By acquiring the rotation of the eyeball it obtains the position the user is fixating, yielding parameters such as the fixation time, the position of visual fixation, the duration of the first fixation, the total number of fixations, the X and Y coordinates of the fixation points, and the fixation sequence. Statistical analysis of these data allows the user's visual attention regularities to be inferred; the gesture interaction experiments below were therefore carried out with a Tobii eye tracker.
According to modern cognitive theory, the interaction process proceeds in stages, and the features of these stages reflect common human psychological processes of the operator such as thinking and attention. Experimental analysis shows that the flow of an interactive task can be reduced to four stages: 1. the gesture translation stage; 2. the object grasping stage; 3. the stage of translating the grasped object; 4. the object release stage. In each interaction stage the points of interest of human vision are different.
Then the visual attention intensity P of each stage is derived by analyzing the collected data, and the correspondence between the distance Distance and the visual attention intensity P is summarized, where Distance is the distance from the gesture model to the target location Goals.
An interactive task can thus be reduced to four stages, and we define each stage as an elementary event, called an event for short. In the three-dimensional scene, the visual attention intensity P of the target location Goals is defined as:
P = T_1 / T    (1)
where T_1 is the attention time on the target location Goals within the total time T; a gaze that dwells for more than 100 ms is counted as attention.
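As a minimal illustrative sketch (our addition, not the patent's code), the attention intensity of formula (1) can be computed from per-frame gaze hits, counting only dwell runs of at least 100 ms as attention; the sampling interval and the sample format are assumptions.

```python
# Minimal sketch (assumed data format): compute visual attention intensity
# P = T1 / T from gaze samples, where T1 is the time the gaze dwells on the
# target (Goals) in runs lasting at least 100 ms, and T is the total time.

def attention_intensity(samples, dt_ms=16.7, min_dwell_ms=100.0):
    """samples: list of booleans, True if the gaze hits the target that frame.
    dt_ms: sampling interval of the eye tracker (assumed)."""
    total_ms = len(samples) * dt_ms
    attended_ms = 0.0
    run = 0
    for on_target in samples + [False]:   # sentinel flushes the last run
        if on_target:
            run += 1
        else:
            run_ms = run * dt_ms
            if run_ms >= min_dwell_ms:    # only runs >= 100 ms count as attention
                attended_ms += run_ms
            run = 0
    return attended_ms / total_ms if total_ms > 0 else 0.0

# Example: 60 frames on target (~1 s), 60 off -> P = 0.5
print(attention_intensity([True] * 60 + [False] * 60))
```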
In order to discover the intrinsic visual attention regularity of each event, a large number of gesture interaction experiments based on the Tobii eye tracker were carried out. To give the experimental data statistical significance, 200 participants of different genders, academic backgrounds and industries were selected; each subject performed 5 experiments and completed the same interactive task in his or her own natural, most familiar way. The Tobii output files mainly comprise the eye tracker data, the screen coordinates of the hand, the times of the elementary events, and the times during which the gaze coincides with the gesture model; the hand coordinate file records the screen coordinates of the gesture model from start to end, and the eye tracker data file records the screen coordinates of all attention points from start to end. After the 200 subjects completed the interactive task of assembling an electric oven, the Tobii output files were statistically analyzed as follows:
1. process the data output by the eye tracker device;
2. obtain the time span of each event and the time spans during which the hand coincides with the visual attention point;
3. obtain, for each event, the frame numbers at which the hand coincides with the visual attention point and the corresponding visual attention intensity P;
4. set the attention intensity to 0 for the frames of each event in which the hand and the visual attention point do not coincide;
5. obtain, for each event, the correspondence between the frame number and the distance from the hand to the object;
6. from the results of the preceding steps, obtain the relation between the hand-to-object Distance and the visual attention intensity P.
Following the above steps yields the visual attention intensity data P corresponding to the translation, grasping and placing events; analyzing these data reveals the regularity of visual attention.
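Purely as an illustration of this pipeline, the following sketch pairs each frame's hand-to-target distance with a windowed attention intensity, producing the (Distance, P) samples to which the models below are fitted; all field names and the sliding-window simplification are our assumptions, not the patent's format.

```python
# Illustrative sketch (assumed log format): build (Distance, P) pairs for one
# event from per-frame records of hand position, target position and gaze hits.
import math

def build_distance_attention_pairs(frames, window=30):
    """frames: list of dicts with hypothetical keys 'hand' (x, y, z),
    'goal' (x, y, z) and 'gaze_on_hand' (bool).
    Returns a list of (distance, P) samples, P measured over a sliding window."""
    pairs = []
    for i, f in enumerate(frames):
        hx, hy, hz = f['hand']
        gx, gy, gz = f['goal']
        dist = math.sqrt((hx - gx) ** 2 + (hy - gy) ** 2 + (hz - gz) ** 2)
        # attention intensity over the surrounding window, cf. formula (1)
        lo, hi = max(0, i - window // 2), min(len(frames), i + window // 2)
        hits = sum(1 for g in frames[lo:hi] if g['gaze_on_hand'])
        pairs.append((dist, hits / (hi - lo)))
    return pairs
```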
Subsequently, according to the corresponding data of Distance and visual attention intensity P in each stage, the variation law of P is found, and the visual attention intensity model of each stage is obtained with data analysis software.
Once the visual attention intensity data have been obtained by experiment, data analysis can begin. Analysis shows that when the gesture model is near the object the visual attention intensity is especially large, while when it is far away the intensity is relatively weak, even as low as 0. With the MATLAB tool, the variation of visual attention intensity for the translation, grasping and placing events is obtained, as shown in Figs. 1, 2 and 3 respectively. In Figs. 1-3 the abscissa is the distance Distance from the gesture model to the target location Goals, and the ordinate is the visual attention intensity P corresponding to Distance. In the process of going to grasp a part, Goals is the electric oven part; in the process of returning to assemble it, Goals is the position where the part is placed.
As can be seen from Fig. 1, the visual attention intensity P in the translation stage follows a definite rule. In the translation event the maximum distance from the gesture model to Goals is 276, denoted model. Fig. 1 clearly shows that when Distance <= (3/14)*model the visual attention intensity P stays above 0.89, whereas when Distance > (3/14)*model it decreases gradually, even to 0. From the corresponding Distance and P data of the translation event and using the MATLAB tool, the visual attention intensity model T of the translation event is obtained as:
t(x) = \sum_{i=1}^{4} a_i e^{-\left(\frac{x-b_i}{c_i}\right)^2}    (2)
where x is the distance Distance from the gesture model to Goals, t(x) is the visual attention intensity corresponding to Distance, and a_i, b_i and c_i (1 <= i <= 4) are parameters, with a_1 = 0.1579, b_1 = 114.6, c_1 = 21.43, a_2 = 1.388, b_2 = -25.56, c_2 = 37.29, a_3 = 0.7465, b_3 = 30.12, c_3 = 24.67, a_4 = 0.6378, b_4 = 57.04, c_4 = 17.71.
As shown in Fig. 2, in the part-grasping stage the visual attention intensity P stays high throughout, indicating that the operator's visual attention is concentrated when grasping a part, which fully matches the human visual attention mechanism. By analyzing the grasping event data, the visual attention intensity model G corresponding to the grasping event is obtained:
g(x) = p_1 x^5 + p_2 x^4 + p_3 x^3 + p_4 x^2 + p_5 x + p_6    (3)
where x is the distance Distance from the gesture model to Goals, g(x) is the visual attention intensity corresponding to Distance, and p_i (1 <= i <= 6) are parameters with p_1 = 0.0008501, p_2 = -0.0125, p_3 = 0.06148, p_4 = -0.1083, p_5 = 0.03885, p_6 = 0.9664.
As shown in Fig. 3, in the process of placing an assembly part the operator's visual attention intensity P fluctuates, but P remains above 0.96. Compared with the visual attention intensity of the translation event, that of the placing event is higher, which shows that the operator's visual attention is more concentrated when placing an object.
By analyzing the placing event data, the visual attention intensity model R corresponding to the placing event is obtained:
r(x) = r_1 x^5 + r_2 x^4 + r_3 x^3 + r_4 x^2 + r_5 x + r_6    (4)
where x is the distance Distance from the gesture model to Goals, r(x) is the visual attention intensity corresponding to Distance, and r_i (1 <= i <= 6) are parameters with r_1 = 0.000799, r_2 = -0.01286, r_3 = 0.07247, r_4 = -0.1658, r_5 = 0.1307, r_6 = 0.9641.
Formulas (2), (3) and (4) can all simulate the human visual attention mechanism well.
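For illustration only (not part of the patent text), the fitted models (2)-(4) can be evaluated directly from the coefficients given above; the Python sketch below hard-codes them. The valid distance range of each model is whatever the experiments covered and is not restated here.

```python
# Sketch: evaluate the fitted attention-intensity models (2)-(4) at a distance x.
import math

# Coefficients as published in formulas (2)-(4).
A = [0.1579, 1.388, 0.7465, 0.6378]
B = [114.6, -25.56, 30.12, 57.04]
C = [21.43, 37.29, 24.67, 17.71]
P_GRASP = [0.0008501, -0.0125, 0.06148, -0.1083, 0.03885, 0.9664]
R_PLACE = [0.000799, -0.01286, 0.07247, -0.1658, 0.1307, 0.9641]

def t_translate(x):
    """Formula (2): sum of four Gaussians for the translation stages."""
    return sum(a * math.exp(-((x - b) / c) ** 2) for a, b, c in zip(A, B, C))

def poly5(coeffs, x):
    """Degree-5 polynomial, coefficients ordered from x^5 down to x^0."""
    y = 0.0
    for c in coeffs:        # Horner's rule
        y = y * x + c
    return y

def g_grasp(x):
    """Formula (3): grasping-stage attention intensity."""
    return poly5(P_GRASP, x)

def r_place(x):
    """Formula (4): placing-stage attention intensity."""
    return poly5(R_PLACE, x)

# Example: attention intensity predicted at sample distances
print(t_translate(30.0), g_grasp(1.0), r_place(1.0))
```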
The above visual attention intensity models are combined with the particle filter (PF) algorithm used in gesture tracking: the relation between gesture tracking accuracy A and the particle number Num of the PF algorithm is found, from which the linear relation between Num and the visual attention intensity P is derived; when the PF algorithm runs, the value of Num is updated as P changes.
Experiments show that in the regions the operator is interested in, the visual attention intensity is higher, indicating that the operator's visual attention is more concentrated; conversely, lower visual attention intensity shows that the operator's visual attention is dispersed. To solve the tracking speed bottleneck, the PF algorithm is used in regions of high visual attention intensity, while animation is used in the other regions. Since the PF algorithm is known to achieve good gesture tracking, we must consider whether a definite relation exists between the tracking accuracy A of the PF algorithm and the particle number Num.
To find the relation between tracking accuracy A and particle number Num, a large number of experiments were performed. First, the operator completed the electric oven assembly with Num = 5, and the mean gesture tracking accuracy A was obtained; then the value of Num was increased by 10 at a time and the assembly was repeated, obtaining the mean of A each time, until Num reached 110. The experiments show that A improves as Num increases, but once Num grows to a certain value, A approaches 1.
As is obvious from Fig. 4, the tracking accuracy A increases with the particle number Num, but once Num reaches a certain value (80), A remains unchanged and close to 1.
From the above analysis, the following relational expression can be drawn:
A = f(Num)    (5)
When the particle number Num increases, the tracking accuracy A also improves; this indicates that the visual attention is more concentrated and hence the visual attention intensity P is larger. In summary: when Num = 0, A = 0 and P = 0; when Num = 80, A = 1 and P = 1. For simplicity, this patent assumes that Num and P have a linear relationship, which gives the following relation:
P = (1/80) · Num    (6)
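A minimal sketch of how formula (6) can drive the tracker (our illustration, with an assumed attention threshold and an assumed PF interface): invert (6) to set the particle count from the model-predicted attention intensity, and fall back to animation where attention is low.

```python
# Sketch: adapt the particle count of a PF tracker from the attention model.
# Inverting formula (6): Num = 80 * P. Threshold and PF internals are assumed.

def particle_count(p, num_max=80):
    """Particle number from attention intensity P via Num = 80 * P (formula 6)."""
    return max(1, round(num_max * p))

def track_step(distance, stage_model, pf_step, animate_step, p_min=0.1):
    """stage_model: t_translate / g_grasp / r_place depending on the event.
    pf_step(num): run one particle-filter update with `num` particles (assumed).
    animate_step(): advance the hand by animation, without filtering (assumed)."""
    p = min(1.0, max(0.0, stage_model(distance)))  # clamp model output to [0, 1]
    if p < p_min:
        return animate_step()          # low attention: cheap animation
    return pf_step(particle_count(p))  # high attention: PF with adaptive Num
```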
Based on the visual attention model set forth above, a 3D virtual interaction platform is designed to realize the visual attention mechanism. The designed scene places interior decorations (small vases) onto a desk: the objects in the scene are mainly a desk, two small vases and a three-dimensional hand, and each small vase can be placed on the desk. In this experiment the natural hand manipulates the three-dimensional hand, and the main events in the 3D scene are grasping a small vase, translating it and placing it. The visual attention model is thereby applied to an experimental human-computer interaction scene.
The above is only the preferred embodiment of this patent. It should be pointed out that those skilled in the art can make improvements and substitutions without departing from the principle of this patent, and such improvements and substitutions should also be regarded as falling within the protection scope of this patent.

Claims (5)

1. A gesture tracking method based on a visual attention model in a human-computer interaction interface, characterized by comprising the following steps:
Step 1: the human-computer interaction task performed by the subject is divided into four stages: a gesture translation stage, an object grasping stage, a stage of translating the grasped object, and an object release stage; an eye tracker is used to collect the subject's data in each stage of the interaction task;
Step 2: the visual attention intensity P of each stage is derived by analyzing the data collected in step 1, and the correspondence between the distance Distance and the visual attention intensity P is summarized, where Distance is the distance from the gesture model to the target location Goals; in the three-dimensional scene, the visual attention intensity P of the target location Goals is defined as P = T_1 / T, where T_1 is the attention time on the target location Goals within the total time T;
Step 3: according to the corresponding data of Distance and visual attention intensity P in each stage, the variation law of P is found, and the visual attention intensity model of each stage is obtained with data analysis software;
Step 4: the visual attention intensity models obtained in step 3 are combined with the particle filter (PF) algorithm used in gesture tracking; the relation between gesture tracking accuracy A and the particle number Num of the PF algorithm is found, and from it the linear relation between Num and the visual attention intensity P is derived; when the PF algorithm runs, the value of Num is updated as the visual attention intensity P changes.
2. The gesture tracking method based on a visual attention model in a human-computer interaction interface according to claim 1, characterized in that: in step 3, the visual attention intensity model T of the gesture translation stage and the stage of translating the grasped object is t(x) = \sum_{i=1}^{4} a_i e^{-\left(\frac{x-b_i}{c_i}\right)^2};
where x is the distance Distance from the gesture model to the target location Goals, t(x) is the visual attention intensity corresponding to Distance, and a_i, b_i and c_i (1 <= i <= 4) are parameters, with a_1 = 0.1579, b_1 = 114.6, c_1 = 21.43, a_2 = 1.388, b_2 = -25.56, c_2 = 37.29, a_3 = 0.7465, b_3 = 30.12, c_3 = 24.67, a_4 = 0.6378, b_4 = 57.04, c_4 = 17.71.
3. The gesture tracking method based on a visual attention model in a human-computer interaction interface according to claim 1, characterized in that: in step 3, the visual attention intensity model G of the object grasping stage is g(x) = p_1 x^5 + p_2 x^4 + p_3 x^3 + p_4 x^2 + p_5 x + p_6, where x is the distance Distance from the gesture model to the target location Goals, g(x) is the visual attention intensity corresponding to Distance, and p_i (1 <= i <= 6) are parameters with p_1 = 0.0008501, p_2 = -0.0125, p_3 = 0.06148, p_4 = -0.1083, p_5 = 0.03885, p_6 = 0.9664.
4. The gesture tracking method based on a visual attention model in a human-computer interaction interface according to claim 1, characterized in that: in step 3, the visual attention intensity model R of the object release stage is r(x) = r_1 x^5 + r_2 x^4 + r_3 x^3 + r_4 x^2 + r_5 x + r_6, where x is the distance Distance from the gesture model to the target location Goals, r(x) is the visual attention intensity corresponding to Distance, and r_i (1 <= i <= 6) are parameters with r_1 = 0.000799, r_2 = -0.01286, r_3 = 0.07247, r_4 = -0.1658, r_5 = 0.1307, r_6 = 0.9641.
5. The gesture tracking method based on a visual attention model in a human-computer interaction interface according to claim 1, characterized in that: in step 4, the relational expression between gesture tracking accuracy A and particle number Num is A = f(Num), and the linear relation between Num and P is P = (1/80) · Num.
CN201510137223.7A 2015-03-26 2015-03-26 Vision-attention-model-based gesture tracking method in human-computer interaction interface Pending CN104766054A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510137223.7A CN104766054A (en) 2015-03-26 2015-03-26 Vision-attention-model-based gesture tracking method in human-computer interaction interface


Publications (1)

Publication Number Publication Date
CN104766054A true CN104766054A (en) 2015-07-08

Family

ID=53647868

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510137223.7A Pending CN104766054A (en) 2015-03-26 2015-03-26 Vision-attention-model-based gesture tracking method in human-computer interaction interface

Country Status (1)

Country Link
CN (1) CN104766054A (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101567093A (en) * 2009-05-25 2009-10-28 济南大学 Method for initializing three-dimension gesture model
CN102184551A (en) * 2011-05-10 2011-09-14 东北大学 Automatic target tracking method and system by combining multi-characteristic matching and particle filtering
CN103473801A (en) * 2013-09-27 2013-12-25 中国科学院自动化研究所 Facial expression editing method based on single camera and motion capturing data
CN104090663A (en) * 2014-07-14 2014-10-08 济南大学 Gesture interaction method based on visual attention model
CN104408760A (en) * 2014-10-28 2015-03-11 燕山大学 Binocular-vision-based high-precision virtual assembling system algorithm

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110362210A (en) * 2019-07-24 2019-10-22 济南大学 The man-machine interaction method and device of eye-tracking and gesture identification are merged in Virtual assemble
CN110362210B (en) * 2019-07-24 2022-10-11 济南大学 Human-computer interaction method and device integrating eye movement tracking and gesture recognition in virtual assembly

Similar Documents

Publication Publication Date Title
CN108829245B A virtual sand table interaction control program based on multi-modal brain-computer interaction technology
Li et al. Identifying emotions from non-contact gaits information based on microsoft kinects
Hu et al. Fixationnet: Forecasting eye fixations in task-oriented virtual environments
Chen et al. Phase space reconstruction for improving the classification of single trial EEG
CN107656613A A human-computer interaction system based on eye movement tracking and its working method
CN102402289B (en) Mouse recognition method for gesture based on machine vision
CN103699216A (en) Email communication system and method based on motor imagery and visual attention mixed brain-computer interface
CN103699217A (en) Two-dimensional cursor motion control system and method based on motor imagery and steady-state visual evoked potential
CN107239137B A character input method and device based on a virtual keyboard
CN109480834A An EEG signal classification method based on fast multi-scale empirical mode decomposition
CN105212949A A method for emotion recognition in cultural experiences using galvanic skin response signals
Chalasani et al. Egocentric gesture recognition for head-mounted ar devices
Zafar et al. Initial-dip-based classification for fNIRS-BCI
Saha et al. Common spatial pattern in frequency domain for feature extraction and classification of multichannel EEG signals
Fang et al. Recent advances of P300 speller paradigms and algorithms
CN107272905B An interaction method based on EOG and EMG
CN107229330B A character input method and device based on steady-state visual evoked potentials
CN113082448A Virtual immersive therapy system for autistic children based on EEG signals and an eye tracker
Wang Simulation of sports movement training based on machine learning and brain-computer interface
CN104766054A (en) Vision-attention-model-based gesture tracking method in human-computer interaction interface
Buvaneswari et al. A review of EEG based human facial expression recognition systems in cognitive sciences
Wagner et al. A sensing architecture for empathetic data systems
Glowinski et al. Towards real-time affect detection based on sample entropy analysis of expressive gesture
Zhang Virtual reality games based on brain computer interface
Limbaga et al. Development of an EEG-based Brain-Controlled System for a Virtual Prosthetic Hand

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20150708