CN104090663B - Gesture interaction method based on a visual attention model - Google Patents

Gesture interaction method based on a visual attention model Download PDF

Info

Publication number
CN104090663B
CN104090663B (application CN201410334996.XA)
Authority
CN
China
Prior art keywords
visual attention
model
attention location
distance
gesture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410334996.XA
Other languages
Chinese (zh)
Other versions
CN104090663A (en)
Inventor
冯志全
何娜娜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Jinan
Original Assignee
University of Jinan
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Jinan
Priority to CN201410334996.XA
Publication of CN104090663A
Application granted
Publication of CN104090663B


Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The present invention provides a gesture interaction method based on a visual attention model, and is the first to study the distribution of users' visual attention over a gesture interaction interface. First, an eye tracker is used to accurately track the change of the operator's gaze; second, the data output by the eye tracker are analyzed to reveal the general law of visual attention and to establish a visual attention model of the operator; third, the model is described with a five-term Gaussian formula; finally, the visual attention model is applied to the design of a gesture tracking algorithm. The beneficial effects of the invention are: attention remains strong only in a small region, so the method significantly improves the efficiency of gesture interaction, raising both speed and precision, and faithfully simulates the human visual trait of scrutinizing only a certain area.

Description

Gesture interaction method based on a visual attention model
Technical field
The present invention relates to a gesture interaction method based on a visual attention model.
Background technology
In current human-computer interaction systems, accurate real-time tracking of gestures plays an important role in subsequent processing. Commonly used gesture tracking algorithms include the following. MeanShift is an image feature analysis method based on kernel density estimation; it is widely used because it is simple to compute and runs in real time. CamShift, often called the continuously adaptive MeanShift, improves on MeanShift by using a color probability model. The Kalman filter mainly predicts and estimates the target's position at the next time step, making an optimal estimate of the next state and thereby enhancing the stability and accuracy of the system. In 2007, Raskin combined a Gaussian dynamic model with annealed particle filtering, which considerably improved tracking performance. In 2008, Feng Zhiquan used the particle filter (PF) method to study tracking of the naturally moving hand. In 2013, Morshid proposed the Gaussian process annealed particle filter (GPAPF), which combines the body-part tracking of annealed particle filtering with a Gaussian process dynamic model, reducing the dimension of the state vector to increase tracking stability while creating a latent space that preserves rotation and translation invariance, giving GPAPF good performance. Also in 2013, Feng Zhiquan proposed feedback tracking based on a behavior model (FTBM), which builds a behavior model of the select-translate-release process; experiments show that FTBM reduces dimensionality and achieves real-time tracking. Although research on gesture tracking at home and abroad has made great progress, none of the above work studies human-computer interaction from the perspective of vision. This paper introduces the rapid screening ability of the human visual attention mechanism into human-computer interaction, which effectively highlights target features, shields interfering information, and gives the computer a human-like attentional intelligence.
In the real world, the eyes are the largest information source window of human beings: more than 80% of the information we obtain every day comes through visual perception. Human vision does not attend to all regions of interest at every moment; rather, at any given moment it acquires only the information of interest.
Visual attention is a very complicated process; because of its unusual complexity and uncertainty, it spans multiple disciplines such as cognitive science, neurobiology, and psychology. Research by many biologists and psychologists shows that two different kinds of visual attention exist in the brain, studied through two approaches: bottom-up attention and top-down attention. In 1980, Treisman and Gelade proposed the influential feature integration theory, which divides attentional selection into two stages. In 1985, Koch and Ullman extended Treisman's feature integration model and first proposed the concept of the saliency map. In 1998, L. Itti and C. Koch proposed a bottom-up computational model of visual attention built on the work of Treisman, Koch, and others; the model is computationally light and fast, but it is sensitive to noise and its robustness is poor. The Itti model remains the most influential visual attention model and is often used as the baseline against which other models are measured. In 2007, Gao et al. used the stages of the Itti model to obtain feature maps and proposed computing saliency with the discriminant center-surround hypothesis. Up to now, research on visual attention has mainly concentrated on extracting multi-faceted features from an input image, such as color, orientation, brightness, and motion, forming a saliency map in each feature dimension and then analyzing these maps to obtain the attended target. This paper instead analyzes real-time eye movement data, finds the law governing human visual attention, establishes a visual attention model, and finally applies the model to a gesture interaction system. The algorithm selects a small amount of useful information from the large volume of visual information on the human-computer interaction platform to carry out gesture tracking, allowing the computer to genuinely simulate human gesture interaction behavior and improve the speed of human-computer interaction. Experiments prove that the algorithm improves speed and faithfully reproduces the human interaction process.
Summary of the invention
To remedy the above technical deficiencies, the invention provides a fast, high-precision gesture interaction method based on a visual attention model that can simulate human visual attention behavior more faithfully in a computer.
The present invention is achieved by the following measures:
The gesture interaction method based on a visual attention model of the present invention comprises the following steps:
Step 1: use an eye tracker to conduct human-computer interaction experiments with subjects, collect and record each subject's fixation position and fixation time during the interaction, and compute the visual attention intensity P from the fixation position and fixation time data.
Step 2: using the method of step 1, conduct eye-tracker-based human-computer interaction tests on different groups of people; plot a visual attention fitting chart with the distance from the subject's hand to the object on the horizontal axis and the subject's visual attention intensity P on the vertical axis; and establish a visual attention model represented by a five-term Gaussian sum M, as follows:
f(x) = a1*e^(-((x-b1)/c1)^2) + a2*e^(-((x-b2)/c2)^2) + a3*e^(-((x-b3)/c3)^2) + a4*e^(-((x-b4)/c4)^2) + a5*e^(-((x-b5)/c5)^2)  (2)
Wherein: a1=0.7783, b1=-8.575, c1=20.64, a2=-0.009063, b2=37.51, c2=1.105, a3=-0.4649, b3=60.18, c3=34.45, a4=0.6527, b4=19.05, c4=22.51, a5=1.308, b5=55.04, c5=29.67;
Step 3: based on the visual attention model of step 2, construct a virtual experimental scene on the computer, build a three-dimensional gesture human-computer interaction platform in it, and apply the visual attention model to this human-computer interaction scene.
First, compute the initial distance from the three-dimensional hand to the currently selected object in the virtual scene, and derive the mapping formula between this initial distance and the maximum hand-to-object distance on the horizontal axis of the visual attention fitting chart.
Then, for every frame, compute the distance from the three-dimensional hand to the currently selected object, use the mapping formula to obtain the corresponding model distance parameter, substitute it into the five-term Gaussian sum M of step 2, and compute the corresponding visual attention intensity P.
Finally, according to the visual attention intensity P, select either the particle filter method or the animation method to perform gesture tracking.
In step 1, the time period A during which the three-dimensional gesture in the translation stage coincides with the visual attention point is first extracted; second, the corresponding time period B on the computer is found; third, the frame count C corresponding to the gesture translation is obtained from B; finally, the attended time and the corresponding hand-to-target distance Distance are obtained from C, giving the visual attention intensity P corresponding to Distance, namely:
P = Attention_time / Frame_time  (1)
wherein Attention_time refers to the attended time within a frame and Frame_time refers to the duration of a frame.
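As a minimal sketch of equation (1) (the function and variable names are illustrative, not from the patent), the attention intensity of a frame is the fraction of the frame's duration during which gaze rested on the gesture:

```python
def attention_intensity(attention_time: float, frame_time: float) -> float:
    """Equation (1): P = Attention_time / Frame_time, the fraction of a
    frame during which the subject's gaze coincided with the gesture."""
    if frame_time <= 0:
        raise ValueError("frame_time must be positive")
    return attention_time / frame_time

# Example: gaze rested on the hand for 24 ms of a 33 ms frame
p = attention_intensity(0.024, 0.033)
```

P is therefore a ratio in [0, 1] per frame; averaging it over frames at the same hand-to-object distance yields the points that are later fitted.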
In step 3, a threshold on the attention intensity P is set in visual attention model N. If the visual attention intensity P corresponding to the current hand-to-object distance is greater than this threshold, the particle filter method of gesture tracking is executed; if it is less than the threshold, the animation method is executed.
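This selection rule can be sketched as a single branch (a hedged illustration; the helper name is not from the patent):

```python
def choose_tracker(p: float, threshold: float) -> str:
    """Per-frame tracker selection: above the attention threshold the hand
    is being watched closely, so the costly particle filter runs; below it,
    animation playback replaces frame-by-frame tracking."""
    return "particle_filter" if p > threshold else "animation"
```

The design intent is that expensive tracking is spent only where human attention is actually concentrated.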
The beneficial effects of the invention are: 1. In a three-dimensional gesture interaction interface, the distribution of the user's visual attention over the three-dimensional gesture model is studied for the first time, providing a new cognitive basis for research on three-dimensional gesture tracking algorithms. 2. Attention remains strong only in a small region, so detail outside the focus of gaze need not be tracked frame by frame and is emulated with animation instead, significantly improving the efficiency, and hence the speed, of gesture interaction. 3. PF is used in the region where visual attention is strong, faithfully simulating the human visual trait of scrutinizing only a certain area, so precision is also improved.
Accompanying drawing explanation
Fig. 1 is the visual attention fitting result of the present invention.
Embodiment
The present invention is described in further detail below with reference to the accompanying drawing:
The gesture interaction method of the present invention, based on a visual attention model, is the first to study the distribution of users' visual attention over a gesture interaction interface. First, an eye tracker is used to accurately track the change of the operator's gaze; second, the data output by the eye tracker are analyzed to reveal the general law of visual attention and to establish a visual attention model of the operator; third, the model is described with a five-term Gaussian formula; finally, the visual attention model is applied to the design of a gesture tracking algorithm. The innovation lies in studying, for the first time, the distribution of the user's visual attention over the three-dimensional gesture model in a three-dimensional gesture interaction interface, providing a new cognitive basis for research on three-dimensional gesture tracking algorithms. Comparative experiments with several related algorithms show that the algorithm can effectively improve gesture interaction speed and improve tracking accuracy by 30%.
1. Obtaining gaze-change data
An eye tracker is an instrument specially designed to test and record changes of the eyes and thereby track changes of gaze. The instrument adopted here is Tobii Studio; the procedure for acquiring gaze data can be found in the Tobii Studio reference manual. Tobii Studio records parameters such as the fixation position, the time of the first fixation, the fixation duration, the total number of fixations, the X and Y coordinates of the fixation point, and the fixation sequence. Eye movements may reveal how the brain collects or filters information, so after the experiment these data allow us to judge the operator's fixation selection and search strategy during motion.
To make the gaze tracked by Tobii Studio authentic and reliable, we chose people of different ages, sexes, and identities in different scenes and carried out a large number of human-computer interaction experiments (without telling the subjects the purpose of the experiment beforehand). Throughout the experiments, subjects performed all operations according to their own habits.
2. Analyzing the data obtained by the eye tracker
The experimental results give the fixation position of each subject throughout the interaction. Because most operations in the whole process are translation events, translation is taken as the focus of this study. First, we extract the time period A during which the three-dimensional gesture in the translation stage coincides with the visual attention point; second, we find the corresponding time period B on the computer; third, we obtain from B the frame count C corresponding to this event; finally, from C we obtain the attended time and the corresponding hand-to-target distance Distance, giving the attention intensity P (the probability of visual attention) corresponding to Distance, namely:
P = Attention_time / Frame_time  (1)
wherein Attention_time refers to the attended time within a frame and Frame_time refers to the duration of a frame.
The subsequent focus of this paper is to study attention intensity, find the law of visual attention, obtain a visual attention model, and finally apply the model to a gesture interaction platform.
3. Establishing the visual attention model
To find the distribution law of human visual attention, we had different groups of people experiment on different interaction platforms; throughout the interaction, subjects acted entirely according to their own habits. Following the data statistics procedure of step 2, a large number of experimental statistics were processed with MATLAB to obtain the visual attention fitting result for this event, shown in Fig. 1, where the abscissa is the hand-to-object distance and the ordinate is the visual attention intensity. Visibly, the visual attention intensity increases as the distance decreases, which matches human visual characteristics well.
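The patent performs this fit in MATLAB. As an illustrative sketch only (not the patent's code; the sampled data here are synthetic), note that the model is linear in its amplitudes once the Gaussian centers and widths are fixed, so the amplitudes can be recovered from sampled intensity data by ordinary least squares:

```python
import numpy as np

# Centers b_i and widths c_i fixed at the patent's fitted values; the
# amplitudes a_i are re-estimated from synthetic samples by least squares.
B = np.array([-8.575, 37.51, 60.18, 19.05, 55.04])
C = np.array([20.64, 1.105, 34.45, 22.51, 29.67])
A_TRUE = np.array([0.7783, -0.009063, -0.4649, 0.6527, 1.308])

def basis(x):
    """Design matrix whose columns are the five Gaussian basis functions."""
    return np.exp(-((x[:, None] - B) / C) ** 2)

x = np.linspace(0.0, 100.0, 400)
y = basis(x) @ A_TRUE                  # noise-free synthetic intensities
a_fit, *_ = np.linalg.lstsq(basis(x), y, rcond=None)
```

Fitting centers and widths as well, as MATLAB's Gaussian fit does, requires a nonlinear solver; the linear sub-problem above is shown only to make the five-term structure of the model concrete.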
Accordingly, the visual attention model of this paper is expressed by the five-term Gaussian sum M:
f(x) = a1*e^(-((x-b1)/c1)^2) + a2*e^(-((x-b2)/c2)^2) + a3*e^(-((x-b3)/c3)^2) + a4*e^(-((x-b4)/c4)^2) + a5*e^(-((x-b5)/c5)^2)  (2)
wherein a1=0.7783, b1=-8.575, c1=20.64, a2=-0.009063, b2=37.51, c2=1.105, a3=-0.4649, b3=60.18, c3=34.45, a4=0.6527, b4=19.05, c4=22.51, a5=1.308, b5=55.04, c5=29.67.
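To make the model concrete, a small sketch evaluating equation (2) with the coefficients above (pure Python; the function name is illustrative):

```python
import math

# (a_i, b_i, c_i) triples of the five-term Gaussian fit given above
COEFFS = [
    (0.7783, -8.575, 20.64),
    (-0.009063, 37.51, 1.105),
    (-0.4649, 60.18, 34.45),
    (0.6527, 19.05, 22.51),
    (1.308, 55.04, 29.67),
]

def attention_model(x: float) -> float:
    """Equation (2): f(x) = sum_i a_i * exp(-((x - b_i) / c_i)**2)."""
    return sum(a * math.exp(-((x - b) / c) ** 2) for a, b, c in COEFFS)

# Attention is high when the hand is close to the object and low far away
near, far = attention_model(0.0), attention_model(100.0)
```

Evaluating the fit at the two ends of the distance axis reproduces the trend described for Fig. 1: intensity near 1 at zero distance and near 0 at the far end.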
4. Building the three-dimensional gesture interaction scene
Based on the visual attention model set forth above, a 3D virtual interaction platform is designed to realize the visual attention mechanism. The designed scene places interior decorations (small vases) onto a desk: the objects in the scene are mainly a desk, two small vases, and a three-dimensional hand, and each vase can be placed on the desk. The experiment uses the natural hand to manipulate the three-dimensional hand; the main events in the 3D scene are grasping a vase, translating it, and putting it down. The visual attention model is applied to this human-computer interaction scene.
(1) Hardware environment
A generic USB camera and an ordinary computer (Intel(R) Core(TM) 2 CPU, 2.66 GHz main frequency, 4 GB RAM).
(2) Software environment
The programming language is VC 6.0, with OpenGL used to build the three-dimensional interaction platform; MATLAB and 3ds Max are also used.
(3) Algorithm implementation steps
PF is comparatively mature for gesture tracking and is suited to all kinds of nonlinear, non-Gaussian problems. Therefore this paper adopts the PF method in regions where the visual attention intensity is high; elsewhere, frame-by-frame tracking is replaced with animation. The specific algorithm is as follows:
Algorithm 1. Gesture interaction algorithm.
Input: the current frame number, the model, and the distance from the three-dimensional hand to the object.
Output: the attention probability relative to the visual attention model.
Step 1. Initialization: obtain the initial state of the gesture by the method of this paper.
Step 2. Select the object:
a) An object of interest is chosen by means of a token ring.
b) Compute the initial distance dis_initialization from the three-dimensional hand to the currently selected object, with respect to the distance axis of the visual attention model of Fig. 1.
c) Calculate:
Step 3. In the three-dimensional scene, map the hand-to-object distance onto the distance axis of visual attention model N and obtain the corresponding visual attention probability:
a) For every frame, compute the distance distance_new from the three-dimensional hand to the currently selected object.
b) Obtain the mapping formula from this distance distance_new to the distance Model_distance of visual attention model N:
c) With the distance Model_distance relative to the visual attention model, obtain the corresponding visual attention intensity P from the five-term Gaussian sum of formula (2):
P = f(Model_distance)  (5)
where P is the visual attention intensity at the model distance corresponding to the current frame.
Step 4. Apply visual attention model N to the gesture interaction platform:
a) Determine a particular attention-intensity value Yu in visual attention model N such that when the model distance decreases the visual attention intensity is not less than Yu, and when the model distance increases it is not greater than Yu.
b) Obtain the visual attention intensity P corresponding to the hand-to-object distance distance_new. If P > Yu, execute the PF method; otherwise play the animation. Then test whether collision detection succeeds; if it succeeds, terminate, otherwise advance to the next frame.
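Putting Step 3 and Step 4 together, one frame of Algorithm 1 can be sketched as follows. The patent does not spell out the mapping formula in this text, so a linear scaling of the scene distance onto the model's distance axis is assumed here; all names are illustrative:

```python
def model_distance(distance_new: float, dis_initialization: float,
                   model_max: float = 100.0) -> float:
    """Map the current hand-to-object distance onto the distance axis of
    visual attention model N. A linear scaling against the initial distance
    is an assumption; the patent only states that a mapping is derived."""
    if dis_initialization <= 0.0:
        return 0.0
    return min(distance_new / dis_initialization, 1.0) * model_max

def track_frame(distance_new, dis_initialization, yu, f):
    """One frame of Algorithm 1: map the distance, evaluate the intensity
    P = f(Model_distance), then choose PF tracking (P > Yu) or animation."""
    p = f(model_distance(distance_new, dis_initialization))
    return "particle_filter" if p > yu else "animation"
```

With f bound to the five-term Gaussian of equation (2) and Yu chosen as in Step 4 a), this reproduces the per-frame switch between particle filtering and animation.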
The above is only the preferred embodiment of this patent. It should be pointed out that those skilled in the art can make further improvements and substitutions without departing from the principle of this patent, and such improvements and substitutions should also be regarded as falling within the protection scope of this patent.

Claims (3)

1. A gesture interaction method based on a visual attention model, characterized in that it comprises the following steps:
Step 1: use an eye tracker to conduct human-computer interaction experiments with subjects, collect and record each subject's fixation position and fixation time during the interaction, and compute the visual attention intensity P from the fixation position and fixation time data;
Step 2: using the method of step 1, conduct eye-tracker-based human-computer interaction tests on different groups of people; plot a visual attention fitting chart with the distance from the subject's hand to the object on the horizontal axis and the subject's visual attention intensity P on the vertical axis; and establish a visual attention model represented by a five-term Gaussian sum M, as follows:
f(x) = a1*e^(-((x-b1)/c1)^2) + a2*e^(-((x-b2)/c2)^2) + a3*e^(-((x-b3)/c3)^2) + a4*e^(-((x-b4)/c4)^2) + a5*e^(-((x-b5)/c5)^2)  (2)
Wherein: a1=0.7783, b1=-8.575, c1=20.64, a2=-0.009063, b2=37.51, c2=1.105, a3=-0.4649, b3=60.18, c3=34.45, a4=0.6527, b4=19.05, c4=22.51, a5=1.308, b5=55.04, c5=29.67;
Step 3: based on the visual attention model of step 2, construct a virtual experimental scene on the computer, build a three-dimensional gesture human-computer interaction platform in it, and apply the visual attention model to this human-computer interaction scene;
first, compute the initial distance from the three-dimensional hand to the currently selected object in the virtual scene, and derive the mapping formula between this initial distance and the maximum hand-to-object distance on the horizontal axis of the visual attention fitting chart;
then, for every frame, compute the distance from the three-dimensional hand to the currently selected object, use the mapping formula to obtain the corresponding model distance parameter, substitute it into the five-term Gaussian sum M of step 2, and compute the corresponding visual attention intensity P;
finally, according to the visual attention intensity P, select either the particle filter method or the animation method to perform gesture tracking.
2. The gesture interaction method based on a visual attention model according to claim 1, characterized in that in step 1, the time period A during which the three-dimensional gesture in the translation stage coincides with the visual attention point is first extracted; second, the corresponding time period B on the computer is found; third, the frame count C corresponding to the gesture translation is obtained from B; finally, the attended time and the corresponding hand-to-target distance Distance are obtained from C, giving the visual attention intensity P corresponding to Distance, namely:
P = Attention_time / Frame_time  (1)
wherein Attention_time refers to the attended time within a frame and Frame_time refers to the duration of a frame.
3. The gesture interaction method based on a visual attention model according to claim 1, characterized in that in step 3, a threshold on the attention intensity P is set in visual attention model N; if the visual attention intensity P corresponding to the current hand-to-object distance is greater than this threshold, the particle filter method of gesture tracking is executed; if it is less than this threshold, the animation method is executed.
CN201410334996.XA 2014-07-14 2014-07-14 Gesture interaction method based on a visual attention model Active CN104090663B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410334996.XA CN104090663B (en) 2014-07-14 2014-07-14 Gesture interaction method based on a visual attention model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410334996.XA CN104090663B (en) 2014-07-14 2014-07-14 Gesture interaction method based on a visual attention model

Publications (2)

Publication Number Publication Date
CN104090663A CN104090663A (en) 2014-10-08
CN104090663B true CN104090663B (en) 2016-03-23

Family

ID=51638384

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410334996.XA Active CN104090663B (en) 2014-07-14 2014-07-14 Gesture interaction method based on a visual attention model

Country Status (1)

Country Link
CN (1) CN104090663B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105045373B * 2015-03-26 2018-01-09 济南大学 Three-dimensional gesture interaction method oriented to the expression of the user's mental model
CN104766054A (en) * 2015-03-26 2015-07-08 济南大学 Vision-attention-model-based gesture tracking method in human-computer interaction interface

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101344816A * 2008-08-15 2009-01-14 华南理工大学 Human-computer interaction method and device based on gaze tracking and gesture recognition
CN103279191A (en) * 2013-06-18 2013-09-04 北京科技大学 3D (three dimensional) virtual interaction method and system based on gesture recognition technology
WO2013144807A1 (en) * 2012-03-26 2013-10-03 Primesense Ltd. Enhanced virtual touchpad and touchscreen


Also Published As

Publication number Publication date
CN104090663A (en) 2014-10-08

Similar Documents

Publication Publication Date Title
CN107766842B (en) Gesture recognition method and application thereof
CN106557774B Real-time tracking method based on multi-channel kernel correlation filtering
CN102547123B (en) Self-adapting sightline tracking system and method based on face recognition technology
CN102789568B (en) Gesture identification method based on depth information
CN105550678A (en) Human body motion feature extraction method based on global remarkable edge area
CN103336967B Hand motion trajectory detection method and device
CN103218832B Visual saliency algorithm based on global color contrast and spatial distribution in images
WO2018028102A1 (en) Memory mimicry guided pattern recognition method
CN103902989A (en) Human body motion video recognition method based on non-negative matrix factorization
CN103020614A (en) Human movement identification method based on spatio-temporal interest point detection
CN105809713A Object tracking method based on an online Fisher discriminant mechanism to enhance feature selection
CN105426882A (en) Method for rapidly positioning human eyes in human face image
CN109271840A (en) A kind of video gesture classification method
CN104090663B (en) A kind of view-based access control model pays close attention to the gesture interaction method of model
CN104267380A (en) Associated display method for full-pulse signal multidimensional parameters
CN103077383B Human motion recognition method based on partitioned spatio-temporal gradient features
Zhao et al. Annealed particle filter algorithm used for lane detection and tracking
CN110110765A (en) A kind of multisource data fusion target identification method based on deep learning
CN107340863B Interaction method based on EMG
Pang et al. Dance video motion recognition based on computer vision and image processing
CN103810480B (en) Method for detecting gesture based on RGB-D image
CN107272905B Interaction method based on EOG and EMG
You et al. Multi-stream I3D network for fine-grained action recognition
Haselhoff et al. An evolutionary optimized vehicle tracker in collaboration with a detection system
Song et al. Real-time single camera natural user interface engine development

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160323