CN105404449B - Hierarchically expandable multi-pie somatosensory menu and grammar-guided recognition method therefor - Google Patents

Hierarchically expandable multi-pie somatosensory menu and grammar-guided recognition method therefor

Info

Publication number
CN105404449B
CN105404449B CN201510430265.XA CN201510430265A CN105404449B
Authority
CN
China
Prior art keywords
pie
menu
sensing
menus
grammar
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201510430265.XA
Other languages
Chinese (zh)
Other versions
CN105404449A (en)
Inventor
金哲凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Media and Communications
Original Assignee
Zhejiang University of Media and Communications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Media and Communications filed Critical Zhejiang University of Media and Communications
Priority to CN201510430265.XA priority Critical patent/CN105404449B/en
Publication of CN105404449A publication Critical patent/CN105404449A/en
Application granted granted Critical
Publication of CN105404449B publication Critical patent/CN105404449B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Abstract

The present invention provides a hierarchically expandable multi-pie somatosensory menu for a television display, a smart appliance, or another motion-sensing control device. The menu is composed of multiple pie submenus, each of which contains four options arranged up, down, left and right; when activated, each option either displays a new pie submenu or executes a predetermined function. The present invention also provides a grammar-guided recognition method for the above multi-pie somatosensory menu, which selects and activates menu items according to human body motion. By dynamically generating and positioning multiple pies, the multi-pie somatosensory menu of the invention achieves dynamic expansion of the menu and presents the user with a clear, level-by-level unfolding effect. The grammar-guided recognition method achieves stable start-up by having the hand touch sensitive points on the body, realizes selection on the above somatosensory menu through grammar-guided recognition, and allows new menus to be built quickly.

Description

Hierarchically expandable multi-pie somatosensory menu and grammar-guided recognition method therefor
Technical field
The present invention relates to the field of smart televisions, supporting the operation of TV menus with body motion. It can also be used in smart-appliance systems, controlling appliances such as air conditioners with body motion, or in other settings where multi-level commands are selected with a somatosensory device.
Background art
A menu is one of the most common interface elements of the WIMP (window, icon, menu, pointer) interface model. The value of a menu lies in unfolding, level by level and driven by user actions, a group of concepts that follow an inheritance relationship organized as a tree. This matches the way humans recognize and memorize things, so menus are both easy to use and easy to remember.
Using a menu involves three steps: starting, positioning and selecting.
1. "Starting" means showing the menu to the user in response to some signal; it is indispensable when a somatosensory menu is used.
2. "Positioning" means the user finds the required option. In conventional systems this is realized by "move + dwell" of the mouse, accompanied by the level-by-level unfolding of the menu's tree structure.
3. "Selecting" means the user chooses the required item, commonly realized by a mouse click.
With the development of human-computer interaction technology, somatosensory menus that do away with the traditional mouse have gradually appeared. Their interaction designs follow three lines of thinking: pointing, action recognition, and body-part recognition. The pointing approach preserves the user's experience of a mouse and extends it to the somatosensory device: a virtual ray is assumed to be emitted from a body part of the user toward the screen, producing a cursor, and the user moves this cursor to select menu items. The action-recognition approach no longer clings to the feel of a mouse; it recognizes user actions through somatosensory devices such as the Kinect, acceleration sensors or gyroscopes, and interprets them as menu selections. The body-part-recognition approach is more unusual: it virtually places menu items on parts of the human body, and menu items are selected by touching these body parts with the hand or a controller.
The pie interface was first proposed by Callahan: like slices of a cake, the options are arranged on a circle. Nokia mobile phones and Apple's iPod used this kind of interface early on. Kurtenbach applied it to handheld PDAs, where the user makes selections with strokes of a stylus; Zhao et al. compared stroking and tapping on pie menus in terms of accuracy and other aspects and concluded that on a PDA the tapping style is better. Lenman implemented a tap-style pie menu by means of still-image recognition and summarized its advantage as having the shortest average movement distance.
Nokia holds the patent "Multi-mode unified pie menu" (publication No. 101622593), applied to its phone keypads. The Institute of Software, Chinese Academy of Sciences holds the patent "A pie-menu selection method based on pen tilt information" (publication No. 101286111), applied to stylus devices such as mobile phones and PDAs. These two patents share a common characteristic: their application setting (a phone or PDA) has a small display area that can accommodate only a single pie (circle), and the so-called "menu" is in fact a combination of command buttons without the hierarchical unfolding effect of the conventional menus described above. The setting addressed by the present patent, namely devices such as television sets, has a sufficiently large display to show multiple pies at the same time, and with the method of this patent the dynamic effect of hierarchical expansion can be shown.
Televisions are becoming increasingly intelligent and, because of their special historical role, will become the main intelligent terminal of the home; an accurate and easy-to-use somatosensory menu is therefore needed. The pie interface has the advantage of a short average movement distance of the controller (or hand), which reduces the difficulty of recognizing the positioning and selecting actions, so it is feasible as a somatosensory menu for TVs or other devices. However, the following problems must be solved: 1) a stable start is needed; 2) the effect of unfolding multiple menus level by level is needed; as noted above, current pie-interface applications do not support this effect, and a single-layer menu in fact degenerates into buttons and does not embody the mnemonic and classifying value of a menu interface; 3) a recognition method is needed that converts the user's body motion into pie-menu selections. Starting from the requirements of the TV application, this method should also support easy replacement of menus.
Summary of the invention
In view of the above technical problems, the present invention provides a hierarchically expandable pie somatosensory menu for a television display, a smart appliance or another somatosensory device, together with a grammar-guided recognition method for it; the menu starts stably, can be unfolded level by level, is convenient for the user to control, and is easy to replace with a new menu.
To achieve the above technical purposes, the present invention adopts the following technical solutions:
A hierarchically expandable multi-pie somatosensory menu, for a television display, a smart appliance or another motion-sensing control device, the menu being composed of multiple pie submenus, wherein each pie submenu contains four options arranged up, down, left and right, and each option, when activated, either displays a new pie submenu or executes a predetermined function;
The submenus activated in succession appear in the display area at the same time, presenting the hierarchical unfolding effect of a conventional linear menu.
Further, the multiple pie submenus are constituted by a set P = {p0, p1, p2, ..., pt} of circles of radius r, where
pi = (xi, yi),
and (xi, yi) is the center coordinate of pi in the display area.
Further, if s circles are already shown in the display area and a new pie submenu appears in the display area after a menu option is activated, the position of each existing circle pi ∈ {p0, p1, p2, ..., ps} is recomputed, and the newly appearing ps+1 is placed at a position determined by the activation vector, i.e., the activation vector used to activate the new pie submenu.
The grammar-guided recognition method for the multi-pie somatosensory menu provided by the invention selects and activates items of the multi-pie somatosensory menu according to human body motion and comprises the following steps:
Step 1: acquire human skeleton data with a somatosensory device;
Step 2: process the skeleton data and extract feature vectors, the feature vectors containing the real-time positions of the hand and of the body sensitive points;
Step 3: define a number of atomic events, each representing a specific change in the positional relationship between the hand and a sensitive point, the movement speed, the movement stroke, the movement direction, or the state of an internal timer; assign each atomic event an English letter, establish the alphabet, and formulate the corresponding rules for extracting letters from the feature vectors;
Step 4: using a letter extractor, extract letters from the feature-vector set according to the alphabet and the extraction rules, producing a letter stream;
Step 5: define the elementary actions of the menu operations and assign each action a terminal symbol; formulate regular expressions describing each elementary action as a specific combination of the aforementioned letters; feed the letter stream into a lexical analyzer, which filters the letter stream according to the regular expressions and generates a stream of terminal symbols;
Step 6: feed the terminal-symbol stream into a syntax analyzer, which recognizes the human action from the terminal-symbol stream and the corresponding rules and judges whether it constitutes an executable menu command;
Step 7: if it is not an executable menu command, repeat steps 4 to 6 until an executable menu command is recognized; the executable menu commands include starting the multi-pie menu and selecting and activating menu items.
Further, in step 2 the body sensitive points include the forehead, the shoulder and the side of the waist. These sensitive points satisfy the following requirements: first, they are not easily confused, i.e., a person does not naturally tend to rest the hand (or controller) on them for long periods; second, they are recognizable by image-based somatosensory devices (including infrared imaging), so they generally do not lie inside the body but mostly at the edge of the body contour; third, they are reachable by the hand.
Further, in step 2, processing the skeleton data and extracting the feature vectors specifically comprises the following steps: 1) removing data spikes; 2) coordinate transformation: converting device coordinates into body coordinates; 3) extracting the feature vectors.
Further, after the device coordinates are converted into body coordinates, in order to eliminate the influence of individual differences in height on recognition accuracy, a length-adjustment coefficient δ is defined; the adjusted coordinate of a point (x, y, z) is (x/δ, y/δ, z/δ), where δ is the sum of the vertical distance from the throat to the abdomen and the horizontal distance between the two shoulders.
Further, in step 6 the executable menu commands include starting the multi-pie menu, hovering, activation, and the up, down, left and right selections.
By dynamically generating and positioning multiple pies, the multi-pie somatosensory menu of the invention achieves dynamic expansion of the menu and presents the user with a clear, level-by-level unfolding effect.
The grammar-guided recognition method for the multi-pie somatosensory menu of the invention achieves stable start-up by having the hand touch sensitive points on the body, realizes selection on the above somatosensory menu through grammar-guided recognition, and allows new menus to be built quickly.
Compared with the prior art, the beneficial effects of the present invention are as follows:
1) The somatosensory menu is started by touching a sensitive point; the start is stable and not easily confused with other actions.
2) The hierarchical unfolding of the pie menus is realized by a set of generation and positioning rules.
3) A grammar-guided recognition method is used: continuous human motion is converted by a letter extractor into a series of predefined letters, and the letter stream is then filtered and analyzed by a dedicated lexical analyzer and syntax analyzer, recognizing the operations on the expanded pie somatosensory menu. Because the descriptions used for lexical and syntactic analysis follow established conventions, they can easily be modified to accommodate changes to the menu or extensions of the application.
Description of the drawings
Fig. 1 is the input/output flow diagram of the grammar-guided recognition method for the multi-pie somatosensory menu of the invention;
Fig. 2 is an example diagram of the grammar-guided recognition method for the multi-pie somatosensory menu of the invention applied to digital-TV control;
Fig. 3 is a diagram of the implementation framework of the grammar-guided recognition method for the multi-pie somatosensory menu of the invention on a Kinect device.
Specific embodiment
For a further understanding of the present invention, preferred embodiments of the invention are described below with reference to examples. It should be understood that these descriptions merely further illustrate the features and advantages of the present invention and do not limit the claims of the invention.
The present invention provides a hierarchically expandable multi-pie somatosensory menu that can be used on a television display, a smart appliance or another motion-sensing control device. The menu is composed of multiple pie submenus; each pie submenu contains four options arranged up, down, left and right, and each option, when activated, either displays a new pie submenu or executes a predetermined function.
After a new submenu is activated, the submenus activated in succession appear in the display area at the same time and, positioned according to the specific method below, present the hierarchical unfolding effect of a conventional linear menu.
Specifically, to realize this effect, the multiple pie submenus are constituted by a set P = {p0, p1, p2, ..., pt} of circles of radius r, where pi = (xi, yi) and (xi, yi) is the center coordinate of pi in the display area.
When a new submenu is activated: if s circles are already shown in the display area and a new pie submenu appears in the display area after the menu option is activated, then the position of each existing circle pi ∈ {p0, p1, p2, ..., ps} is recomputed, and the newly appearing ps+1 is placed at a position determined by the activation vector, i.e., the activation vector of the option that activates the new pie submenu.
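By way of illustration only (this sketch is not part of the claimed subject matter), the following Python code shows one possible way to compute such a layout. Since the exact positioning formulas are not reproduced in this text, the placement rule used here (the new submenu appears one diameter, 2r, away from the last circle along the activation vector, after which all circles are shifted together so that they stay inside the display area) is an assumption, not the formula of the patent.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PieLayout:
    radius: float                       # r, radius of every pie submenu
    display: Tuple[float, float]        # (width, height) of the display area
    centers: List[Tuple[float, float]]  # p_0 .. p_s, centers of the shown circles

    def activate(self, direction: Tuple[float, float]) -> None:
        """Append p_{s+1} one diameter away along the activation vector, then re-fit."""
        xs, ys = self.centers[-1]
        dx, dy = direction
        self.centers.append((xs + 2 * self.radius * dx, ys + 2 * self.radius * dy))
        self._shift_into_display()

    def _shift_into_display(self) -> None:
        # Shift every center by one common offset so that all circles stay visible.
        w, h = self.display
        r = self.radius
        xs = [x for x, _ in self.centers]
        ys = [y for _, y in self.centers]
        ox = max(0.0, r - min(xs)) - max(0.0, max(xs) - (w - r))
        oy = max(0.0, r - min(ys)) - max(0.0, max(ys) - (h - r))
        self.centers = [(x + ox, y + oy) for x, y in self.centers]

# Example: root pie at the centre of a 1920x1080 screen, then a "wave right".
layout = PieLayout(radius=150, display=(1920, 1080), centers=[(960, 540)])
layout.activate((1, 0))
print(layout.centers)   # [(960, 540), (1260, 540)]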
For the above multi-pie somatosensory menu, the present invention also provides a grammar-guided recognition method, which selects and activates items of the multi-pie somatosensory menu according to human body motion and comprises the following steps:
Step 1: acquire human skeleton data with a somatosensory device;
Step 2: process the skeleton data and extract feature vectors, the feature vectors containing the real-time positions of the hand and of the body sensitive points;
Step 3: define a number of atomic events, each representing a specific change in the positional relationship between the hand and a sensitive point, the movement speed, the movement stroke, the movement direction, or the state of an internal timer; assign each atomic event an English letter, establish the alphabet, and formulate the corresponding rules for extracting letters from the feature vectors;
Step 4: using a letter extractor, extract letters from the feature-vector set according to the alphabet and the extraction rules, producing a letter stream;
Step 5: define the elementary actions of the menu operations and assign each action a terminal symbol; formulate regular expressions describing each elementary action as a specific combination of the aforementioned letters; feed the letter stream into a lexical analyzer, which filters the letter stream according to the regular expressions and generates a stream of terminal symbols;
Step 6: feed the terminal-symbol stream into a syntax analyzer, which recognizes the human action from the terminal-symbol stream and the corresponding rules and judges whether it constitutes an executable menu command;
Step 7: if it is not an executable menu command, repeat steps 4 to 6 until an executable menu command is recognized; the executable menu commands include starting the multi-pie menu and selecting and activating menu items.
The body sensitive points include the forehead, the shoulder and the side of the waist. These sensitive points satisfy the following requirements: first, they are not easily confused, i.e., a person does not naturally tend to rest the hand (or controller) on them for long periods; second, they are recognizable by image-based somatosensory devices (including infrared imaging), so they generally do not lie inside the body but mostly at the edge of the body contour; third, they are reachable by the hand.
In particular, in order to eliminate the influence of individual differences in height on recognition accuracy, after the device coordinates are converted into body coordinates a length-adjustment coefficient δ is defined; the adjusted coordinate of a point (x, y, z) is (x/δ, y/δ, z/δ), where δ is the sum of the vertical distance from the throat to the abdomen and the horizontal distance between the two shoulders.
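By way of illustration, a minimal sketch of this height normalization follows; taking the y axis as the vertical axis and measuring the shoulder distance in the x-z plane are assumptions made only for the example.

import numpy as np

def length_adjustment(throat, abdomen, left_shoulder, right_shoulder):
    """delta = vertical throat-to-abdomen distance + horizontal shoulder distance."""
    throat, abdomen = np.asarray(throat, float), np.asarray(abdomen, float)
    ls, rs = np.asarray(left_shoulder, float), np.asarray(right_shoulder, float)
    vertical = abs(throat[1] - abdomen[1])           # y is taken as the vertical axis
    horizontal = np.linalg.norm((ls - rs)[[0, 2]])   # shoulder distance in the x-z plane
    return vertical + horizontal

def normalize(point, delta):
    """Scale a body-coordinate point (x, y, z) to (x/delta, y/delta, z/delta)."""
    return np.asarray(point, float) / delta

# Example with made-up body coordinates in metres: delta = 0.4 + 0.4 = 0.8.
delta = length_adjustment((0, 1.45, 0), (0, 1.05, 0), (-0.2, 1.4, 0), (0.2, 1.4, 0))
print(normalize((0.35, 1.20, 0.10), delta))          # [0.4375 1.5    0.125 ]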
Specifically, in a preferred embodiment, the grammar-guided recognition method used in the present invention is a 6-tuple
Gram=(N, T, P, S, F, A)
N: set of nonterminal symbols
T: set of terminal symbols
P: set of productions
S: start symbol
F: transfer functions from feature vectors to letters
A: set of actions
As shown in Fig. 1, in the grammar-guided recognition method of this embodiment, the real-time position data of the hand and of the sensitive points transmitted by the somatosensory device are fed into the letter extractor, which continuously generates a character string c. The string c is fed into the lexical analyzer, which recognizes lexical tokens according to the regular expressions and may change the internal state of the letter extractor. The tokens are fed into the syntax analyzer, which recognizes the menu operations through reductions over the production set P and outputs them to the subsequent feedback (display) module, emitting the pie somatosensory menu selection signals (start, hover, up, down, left, right, and so on).
When the menu changes, the method supports building the new menu easily and quickly.
In the above steps, the letter extractor uses the function set F to give a discrete description of the various pieces of information currently provided by the somatosensory device, such as position, speed, timer state, distance and movement direction, and expresses this description as individual letters. Let the alphabet be Σ = {a1, a2, ..., an} and the character string c = b1b2...bm, where bi ∈ Σ. The input feature-vector set is V = {v1, v2, ..., vi, ..., vk}. F is a set of subfunctions, i.e., F = {f1, f2, ..., fi, ..., fm}, with bi = fi(V), where i = 1..m.
Specifically, in this embodiment the string length is m = 5; the alphabet Σ and its interpretation are given in the table below:
Table 1: Alphabet
The letter sequence c1, c2, ..., ci, ... is fed into the lexical analyzer, which recognizes lexical tokens. Each token ∈ T; the terminal-symbol set T is given in Table 2.
Table 2: Terminal symbols
Each token corresponds to a regular expression; the main regular expressions are as follows:
1)EnterHeadBox→0.000h.000
(EnterShoulderBox, EnterWaistBox are similar)
2)SelectHead→EnterHeadBox(h.000)*(h.b00)+(0.b00)
(SelectShoulder, SelectWaist are similar)
3)FreezeOccur→0f000
4)FreezeFail→FreezeOccur(00100)
5)FreezeSignal→FreezeOccur(0f100)+(0fb00)
6)RunLength→(0.110)+(0.1a0)
7)Up→RunLength(0.1au)
(Down, Right, Left are similar)
8)TimeOutError→(h|s|w|0)(f|0)e(a|1|0)(u|d|r|l|)
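By way of illustration, the sketch below turns two of these patterns into a simple longest-match scanner over the letter stream. The '.' used in the patterns above happens to coincide with the regular-expression wildcard; since Table 1 is not reproduced in this text, the example input string is only indicative.

import re

TOKEN_PATTERNS = [
    # EnterHeadBox -> 0.000 h.000 : the hand enters the bounding box at the head.
    ("EnterHeadBox", re.compile(r"0.000h.000")),
    # SelectHead -> EnterHeadBox (h.000)* (h.b00)+ (0.b00) : dwell in the box, then leave.
    ("SelectHead", re.compile(r"0.000h.000(h.000)*(h.b00)+(0.b00)")),
]

def tokenize(letter_stream: str):
    """Greedy longest-match scan of the letter stream; yields (token, matched text)."""
    pos = 0
    while pos < len(letter_stream):
        best = None
        for name, pattern in TOKEN_PATTERNS:
            m = pattern.match(letter_stream, pos)
            if m and (best is None or m.end() > best[1].end()):
                best = (name, m)
        if best:
            yield best[0], best[1].group(0)
            pos = best[1].end()
        else:
            pos += 5   # no token starts here; skip one 5-letter frame

# One frame entering the head box, one dwell frame, two timer-full frames, then leaving:
print(list(tokenize("0.000h.000h.000h.b00h.b000.b00")))   # [('SelectHead', ...)]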
The lexical analyzer finds substrings matching specific patterns in the input stream and makes the corresponding changes to the internal state of the letter-extraction module. The last terminal symbol in Table 2, LeafSelected, is a special token. The system internally maintains a menu object M,
M={ Tree, Function, Selected }
M contains the menu's information: Tree is the menu's tree structure, Function holds the TV control commands associated with the menu items, and Selected is the current state of M, i.e., which menu item is currently selected. When M.Selected is a leaf node, the system notifies the lexical analyzer to generate the LeafSelected terminal symbol. The M object is easy to build; when the menu needs to be replaced, only a new M object has to be generated.
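By way of illustration, a minimal sketch of such a menu object follows. The tree contents are loosely based on the Fig. 2 walk-through, and the Function component is folded into a per-leaf command field for brevity; both are assumptions made for the example.

from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class MenuNode:
    name: str
    children: Dict[str, "MenuNode"] = field(default_factory=dict)   # keyed by up/down/left/right
    command: Optional[str] = None          # TV control command of a leaf item (stands in for Function)

    @property
    def is_leaf(self) -> bool:
        return not self.children

@dataclass
class Menu:                                # M = {Tree, Function, Selected}
    tree: MenuNode                         # M.Tree (Function folded into the leaves above)
    selected: Optional[MenuNode] = None    # M.Selected, the currently selected item

    def __post_init__(self):
        self.selected = self.selected or self.tree

    def move(self, direction: str) -> bool:
        """Follow one up/down/left/right selection; True means a leaf was reached."""
        nxt = self.selected.children.get(direction)
        if nxt is not None:
            self.selected = nxt
        return self.selected.is_leaf       # this is what triggers the LeafSelected token

# Replacing the menu only requires building a new Menu object:
us_series = MenuNode("American series", command="SHOW_US_SERIES_THUMBNAILS")
tv_series = MenuNode("TV series", children={"right": us_series})
films = MenuNode("Film & TV", children={"down": tv_series})
root = MenuNode("On demand", children={"right": films})
menu = Menu(tree=root)
for step in ("right", "down", "right"):
    reached_leaf = menu.move(step)
print(menu.selected.name, reached_leaf)    # American series True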
The tokens produced by the lexical analysis are fed into the syntax analyzer, which recognizes the user's menu operations through reductions of the productions and triggers the corresponding actions. The set of nonterminal symbols is
N = {MenuItem, BodyMenuSelect, PieMenuSelect, Waves, OneWave, PieMenuError},
the terminal symbols are listed in Table 2, and the production set P is as follows:
1)S→S MenuItem|MenuItem
2)MenuItem→BodyMenuSelect PieMenuSelect
3)BodyMenuSelect→SelectHead|SelectShoulder|SelectWaist
4)PieMenuSelect→FreezeSignal Waves LeafSelected
5)Waves→Waves OneWave|OneWave
6)OneWave→Up|Down|Right|Left
7)PieMenuError→FreezeSignal TimeOutError|Waves TimeOutError
P describes the grammatical rules of the system. A menu item is defined as a body-part menu selection followed by a pie-menu selection; a body-part menu selection is defined as selecting the head, the shoulder, or the waist; a pie-menu selection is defined as a hover (Freeze) signal followed by waves until a leaf menu item is selected; a timeout while hovering or waving produces an error. The system executes the relevant action when one of the above productions is reduced successfully: for production 2), the system clears its variables and executes the menu command; for production 3), the system generates the pie-menu feedback and initializes the internal state of menu recognition; for production 6), the system generates the new pie-menu feedback and changes the internal state; for production 7), the system clears the internal control variables and returns to the initial state.
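By way of illustration, the production set P can be recognized with a very small parser such as the sketch below. The recursive-descent style and the callback names are choices made for the example; the patent does not prescribe a particular parsing technique.

BODY = {"SelectHead", "SelectShoulder", "SelectWaist"}
WAVE = {"Up", "Down", "Right", "Left"}

def parse(tokens, on_menu_item=print, on_menu_start=print, on_wave=print, on_error=print):
    """Recognize S -> MenuItem+ over a finite token stream."""
    i = 0
    while i < len(tokens):
        if tokens[i] not in BODY:          # MenuItem -> BodyMenuSelect PieMenuSelect
            i += 1
            continue
        body = tokens[i]; i += 1
        if i < len(tokens) and tokens[i] == "FreezeSignal":
            on_menu_start(body)            # production 3): pop up the first pie layer
            i += 1
            waves = []
            while i < len(tokens) and tokens[i] in WAVE:
                on_wave(tokens[i])         # production 6): unfold the next pie
                waves.append(tokens[i]); i += 1
            if i < len(tokens) and tokens[i] == "LeafSelected":
                on_menu_item(body, waves)  # production 2): issue the leaf's command
                i += 1
            elif i < len(tokens) and tokens[i] == "TimeOutError":
                on_error("timeout")        # production 7): clear state, start over
                i += 1

# Token stream corresponding to the Fig. 2 walk-through:
parse(["SelectHead", "FreezeSignal", "Right", "Down", "Right", "LeafSelected"])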
As shown in Fig. 2, in a specific embodiment, in (a) the user touches the forehead to activate the "on demand" item, which then expands into the pie menu shown in (b) containing the four items "news" and so on. The user hovers for a moment to indicate the start of waving, then waves to the right to activate the "film & TV" item, which expands into the submenu containing "film" and so on shown in (c); the two levels of pie menus are positioned according to the expansion rule described above, showing the effect of hierarchical expansion. In (d) the user continues by waving downward to select "TV series", causing the menu to unfold further; in (e) the user waves to the right and selects the leaf item "American series". The menu operation is then complete, and the system issues the operation command and enters the interface of popular American series thumbnails.
A specific embodiment is given below.
The grammar-guided recognition method for the multi-pie somatosensory menu of the invention can be implemented with the following equipment and technical parameters:
Somatosensory device: Kinect 1.0
Driver: Kinect SDK version 1.8
Operating system: Windows 7
Menu depth: 3
Activation start time threshold: 3 s
Pie-menu first-hover time threshold: 1.5 s
Timeout error threshold: 10 s
The implementation framework is shown in Fig. 3. The human skeleton data acquired by the Kinect are preprocessed and feature vectors are extracted (containing the real-time positions of the hand and of the sensitive points); these are written into a circular buffer, from which the grammar-guided recognition method reads its input, generates the positioning information for the pie expansion, and controls the TV display.
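By way of illustration, the overall data flow can be sketched as follows. The stage functions are placeholders to be supplied by the caller (they are not Kinect SDK calls), and the constants restate the parameters listed above.

from collections import deque

RING_CAPACITY = 600        # circular-buffer size, about 20 s of frames
MENU_DEPTH = 3             # menu depth used in this embodiment
ACTIVATION_START_S = 3.0   # activation start time threshold
FIRST_HOVER_S = 1.5        # first-hover threshold of the pie menu
TIMEOUT_ERROR_S = 10.0     # timeout error threshold

def run(frames, preprocess, extract_letters, tokenize, parse, show):
    """Drive the recognition pipeline over an iterable of skeleton frames.

    The stage functions are supplied by the caller; the thresholds above are meant
    to parameterize the timers inside extract_letters. The stream is processed in
    one batch here for brevity, whereas the described method runs online, frame by
    frame.
    """
    ring = deque(maxlen=RING_CAPACITY)     # older frames are overwritten
    letters = ""
    for frame in frames:
        ring.append(preprocess(frame))     # de-spike, body coordinates, feature vector
        letters += extract_letters(ring)   # one 5-letter string per frame
    for command in parse(list(tokenize(letters))) or []:
        show(command)                      # feed the recognized command back to the TV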
Preprocessing includes the following functions:
1) removing data spikes;
2) coordinate transformation: converting device coordinates into body coordinates;
3) extracting the feature vector. The feature vector vi is defined as:
vi = (i, phead, pshoulder, pwaist, phand, uhand, t)
where i is the frame serial number, phead, pshoulder, pwaist and phand are the positions of the forehead, the shoulder (on the side of the controlling hand), the waist and the hand, respectively, uhand is the motion vector of the controlling hand from frame i-1 to frame i, and t is the system timestamp.
vi is stored in the circular buffer shown in the upper right of Fig. 3. Data are added to the buffer in the clockwise direction, and computations such as stroke and movement direction examine it backwards (counter-clockwise). A suitable buffer capacity is 600 entries, i.e., about 20 seconds of data; earlier data are overwritten.
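By way of illustration, a minimal sketch of the feature vector vi and of such a circular buffer follows; using a fixed-capacity deque is an implementation choice made for the example.

from collections import deque
from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class FeatureVector:
    i: int            # frame serial number
    p_head: Vec3      # forehead position
    p_shoulder: Vec3  # shoulder on the side of the controlling hand
    p_waist: Vec3     # waist position
    p_hand: Vec3      # hand position
    u_hand: Vec3      # motion vector of the hand from frame i-1 to frame i
    t: float          # system timestamp

buffer: deque = deque(maxlen=600)          # ring buffer; the oldest entries are overwritten

def stroke_length(buf, pos):
    """Accumulated hand travel from history index pos up to the current frame."""
    return sum((vx * vx + vy * vy + vz * vz) ** 0.5
               for vx, vy, vz in (f.u_hand for f in list(buf)[pos:]))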
Let vc denote the current frame in the feature-vector buffer, and let the variable pos record a historical position in the buffer. The implementation of the aforementioned letter extractor is described as follows:
a) Judging timers, strokes and the movement direction requires examining several frames backwards, i.e., the data (vpos .. vc).
b) The letters of Table 1 can be regarded as samples of the essential information about the user's motion.
c) Contact between the hand and a target body part is determined by an intersection test with a bounding box centered on the coordinate point of each body part. Since the Kinect skeleton data do not account for body thickness, the positional error in the z direction is much larger than in the x and y directions, so the bounding box is larger in z than in x and y; a size of (20 cm, 20 cm, 30 cm) can be used. When the box test is positive, the corresponding letter 'h', 's' or 'w' is activated.
d) After selecting a body sensitive point and before operating the pie menu, the user must hover for a moment as a start signal.
e) The letter 'f' is activated when the movement speed of the hand falls below a threshold.
f) The system has several internal timers; when a timer expires, the corresponding letter is activated.
g) The letter 'a' is activated when the hand's movement stroke over (vpos .. vc) reaches a threshold.
h) The movement direction of the hand over (vpos .. vc) is judged from the inclination angle of the summed motion vector.
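By way of illustration, a sketch of such a letter extractor follows; it consumes the FeatureVector sketched earlier. Because Table 1 is not reproduced in this text, the meaning assigned to each of the five string positions, as well as the speed and stroke thresholds, are inferred from the regular expressions in this description and should be read as assumptions.

import math

BOX_HALF = (0.10, 0.10, 0.15)   # half-extents of the (20 cm, 20 cm, 30 cm) bounding box, in metres
SPEED_FREEZE = 0.05             # m per frame; below this the hand counts as hovering -> 'f'
STROKE_THRESHOLD = 0.30         # metres of accumulated travel -> 'a'

def in_box(hand, part, half=BOX_HALF):
    """Bounding-box contact test between the hand and a body part."""
    return all(abs(h - p) <= e for h, p, e in zip(hand, part, half))

def direction_letter(u_sum):
    """'u'/'d'/'r'/'l' from the inclination of the summed motion vector."""
    ux, uy, _ = u_sum
    if abs(ux) >= abs(uy):
        return "r" if ux > 0 else "l"
    return "u" if uy > 0 else "d"

def extract(fv, u_sum, stroke, timer_running):
    """Produce one 5-letter string for the current feature vector fv."""
    contact = ("h" if in_box(fv.p_hand, fv.p_head) else
               "s" if in_box(fv.p_hand, fv.p_shoulder) else
               "w" if in_box(fv.p_hand, fv.p_waist) else "0")
    freeze = "f" if math.dist(fv.u_hand, (0, 0, 0)) < SPEED_FREEZE else "0"
    timer = "1" if timer_running else "0"            # inferred timer slot
    run = "a" if stroke >= STROKE_THRESHOLD else "0"
    direction = direction_letter(u_sum) if run == "a" else "0"
    return contact + freeze + timer + run + direction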
The lexical analysis can trigger a series of actions that modify the internal state of the letter extractor; the actions are as follows:
1)EnterHeadBox→0.000h.000
Action: pos = c; start the body-menu timer.
(EnterShoulderBox, EnterWaistBox are similar)
2)SelectHead→EnterHeadBox(h.000)*(h.b00)+(0.b00)
Action: pos = -1 (letters of types 3, 4 and 5 have no effect); stop the body-menu timer.
(SelectShoulder, SelectWaist are similar)
3)FreezeOccur→0f000
Action: pos = c; start the pie-menu timer.
4)FreezeFail→FreezeOccur(00100)
Action: pos = -1; stop the pie-menu timer.
5)FreezeSignal→FreezeOccur(0f100)+(0fb00)
Action: pos = c; start the stroke computation; stop the pie-menu timer.
6)RunLength→(0.110)+(0.1a0)
Action: restart the error timer; start the direction computation.
7)Up→RunLength(0.1au)
Action: pos = c; stop the direction computation.
(Down, Right, Left are similar)
8)TimeOutError→(h|s|w|0)(f|0)e(a|1|0)(u|d|r|l|)
Action: clear all control variables.
In the syntactic-analysis part, the reductions of the production set P trigger a series of actions, described as follows:
Production 2): issue the command associated with the leaf menu option.
Production 3): pop up the first layer of the pie menu.
Production 6): unfold the pie menu according to the method described above.
Production 7): system error; clear the internal variables and return to the initial state.
The above description of the embodiments is only intended to help understand the method of the present invention and its core idea. It should be pointed out that those skilled in the art can make several improvements and modifications to the present invention without departing from its principle, and these improvements and modifications also fall within the scope of protection of the claims of the present invention.

Claims (7)

1. A device configured with a hierarchically expandable multi-pie somatosensory menu, characterized in that: the menu is composed of multiple pie submenus, wherein each pie submenu contains four options arranged up, down, left and right, and each option, when activated, either displays a new pie submenu or executes a predetermined function;
the submenus activated in succession appear in the display area at the same time, presenting the hierarchical unfolding effect of a conventional linear menu; wherein the multiple pie submenus are constituted by a set P = {p0, p1, p2, ..., pt} of circles of radius r, where pi = (xi, yi), 0 ≤ i ≤ t, (xi, yi) is the center coordinate of pi in the display area, and t is the number of menu expansions; and wherein, if s circles are already shown in the display area, s ≥ 1, and a new pie submenu appears in the display area after a menu option is activated, then the position of each existing circle pi ∈ {p0, p1, p2, ..., ps} is recomputed and the newly appearing ps+1 is placed at a position determined by the activation vector, i.e., the activation vector used to activate the new pie submenu; the pie submenus shown in the display area after the activation are {p0′, p1′, p2′, ..., ps′, ps+1}.
2. A grammar-guided recognition method for the multi-pie somatosensory menu, executed by the device configured with a hierarchically expandable multi-pie somatosensory menu according to claim 1, the method selecting and activating items of the multi-pie somatosensory menu according to human body motion, characterized by comprising the following steps:
Step 1: acquire human skeleton data with a somatosensory device;
Step 2: process the skeleton data and extract feature vectors, the feature vectors containing the real-time positions of the hand and of the body sensitive points;
Step 3: define a number of atomic events, each representing a specific change in the positional relationship between the hand and a sensitive point, the movement speed, the movement stroke, the movement direction, or the state of an internal timer; assign each atomic event an English letter, establish the alphabet, and formulate the corresponding rules for extracting letters from the feature vectors;
Step 4: using a letter extractor, extract letters from the feature-vector set according to the alphabet and the extraction rules, producing a letter stream;
Step 5: define the elementary actions of the menu operations and assign each action a terminal symbol; formulate regular expressions describing each elementary action as a specific combination of the aforementioned letters; feed the letter stream into a lexical analyzer, which filters the letter stream according to the regular expressions and generates a stream of terminal symbols;
Step 6: feed the terminal-symbol stream into a syntax analyzer, which recognizes the human action from the terminal-symbol stream and the corresponding rules and judges whether it constitutes an executable menu command;
Step 7: if it is not an executable menu command, repeat steps 4 to 6 until an executable menu command is recognized.
3. The grammar-guided recognition method for the multi-pie somatosensory menu according to claim 2, characterized in that: in step 2, the body sensitive points include the forehead, the shoulder and the side of the waist.
4. The grammar-guided recognition method for the multi-pie somatosensory menu according to claim 2, characterized in that: the feature vectors are stored in a circular buffer.
5. The grammar-guided recognition method for the multi-pie somatosensory menu according to claim 2, characterized in that in step 2, processing the skeleton data and extracting the feature vectors specifically comprises the following steps:
1) removing data spikes;
2) coordinate transformation: converting device coordinates into body coordinates;
3) extracting the feature vectors.
6. The grammar-guided recognition method for the multi-pie somatosensory menu according to claim 5, characterized in that: after the device coordinates are converted into body coordinates, in order to eliminate the influence of individual differences in height on recognition accuracy, a length-adjustment coefficient δ is defined; the adjusted coordinate of a point (x, y, z) is (x/δ, y/δ, z/δ), where δ is the sum of the vertical distance from the throat to the abdomen and the horizontal distance between the two shoulders.
7. The grammar-guided recognition method for the multi-pie somatosensory menu according to any one of claims 2-6, characterized in that: in step 6, the executable menu commands include starting the multi-pie menu, hovering, activation, and the up, down, left and right selections.
CN201510430265.XA 2015-07-21 2015-07-21 Hierarchically expandable multi-pie somatosensory menu and grammar-guided recognition method therefor Expired - Fee Related CN105404449B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510430265.XA CN105404449B (en) 2015-07-21 2015-07-21 Hierarchically expandable multi-pie somatosensory menu and grammar-guided recognition method therefor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510430265.XA CN105404449B (en) 2015-07-21 2015-07-21 Hierarchically expandable multi-pie somatosensory menu and grammar-guided recognition method therefor

Publications (2)

Publication Number Publication Date
CN105404449A CN105404449A (en) 2016-03-16
CN105404449B true CN105404449B (en) 2019-04-16

Family

ID=55469962

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510430265.XA Expired - Fee Related CN105404449B (en) 2015-07-21 2015-07-21 Hierarchically expandable multi-pie somatosensory menu and grammar-guided recognition method therefor

Country Status (1)

Country Link
CN (1) CN105404449B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106951072A * 2017-03-06 2017-07-14 南京航空航天大学 On-screen menu somatosensory interaction method based on Kinect
CN117369649B (en) * 2023-12-05 2024-03-26 山东大学 Virtual reality interaction system and method based on proprioception

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070271528A1 (en) * 2006-05-22 2007-11-22 Lg Electronics Inc. Mobile terminal and menu display method thereof
CN103649897A (en) * 2011-07-14 2014-03-19 微软公司 Submenus for context based menu system
CN103324400A (en) * 2013-07-15 2013-09-25 天脉聚源(北京)传媒科技有限公司 Method and device for displaying menus in 3D model
CN104765454A (en) * 2015-04-02 2015-07-08 吉林大学 Human muscle movement perception based menu selection method for human-computer interaction interface
CN104881213A (en) * 2015-06-12 2015-09-02 合肥市徽腾网络科技有限公司 Interaction control method based on gestures and actions

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
The Limits of Expert Performance Using Hierarchic Marking Menus; Gordon Kurtenbach et al.; INTERACT '93; 1993-04-29; pp. 482-487 *
Design and implementation of a Kinect-based content presentation system (基于Kinect的内容展示系统设计与实现); 马源驵; China Master's Theses Full-text Database, Information Science and Technology; 2015-03-15 (No. 03); p. I138-738 *
Research on the operation interface of somatosensory games based on emotional experience theory (基于情感体验理论的体感游戏操作界面研究); 薛蛟; China Master's Theses Full-text Database, Information Science and Technology; 2015-05-15 (No. 05); p. I138-705 *

Also Published As

Publication number Publication date
CN105404449A (en) 2016-03-16

Similar Documents

Publication Publication Date Title
JP6431120B2 (en) System and method for input assist control by sliding operation in portable terminal equipment
CN103038728B (en) Such as use the multi-mode text input system of touch-screen on a cellular telephone
EP2933709A2 (en) Haptic information management method and electronic device supporting the same
WO2015113503A1 (en) Ring-type wireless finger controller, control method and control system
JP2014535110A (en) Gesture-based search
WO2013139181A1 (en) User interaction system and method
CN104090652A (en) Voice input method and device
WO2013189290A1 (en) Touch screen keyboard and input method thereof
CN106796789A (en) Interacted with the speech that cooperates with of speech reference point
CN107980110A (en) Head-mounted display apparatus and its content input method
KR20160101605A (en) Gesture input processing method and electronic device supporting the same
WO2016095640A1 (en) Method for controlling mobile terminal, and mobile terminal
CN104866097A (en) Hand-held signal output apparatus and method for outputting signals from hand-held apparatus
CN104503576A (en) Computer operation method based on gesture recognition
CN105404449B (en) Hierarchically expandable multi-pie somatosensory menu and grammar-guided recognition method therefor
TWI721317B (en) Control instruction input method and input device
CN102147706A (en) Method for inputting full spellings of Chinese character in touching and sliding manner
CN104765475A (en) Wearable virtual keyboard and implementation method thereof
CN106095081A (en) Man-machine interaction method and device
CN108646910A (en) A kind of Three-Dimensional Dynamic finger text input system and method based on depth image
KR102094751B1 (en) Method and apparatus for providing user interface
Amma et al. Airwriting: Bringing text entry to wearable computers
CN101561716B (en) Method for inputting Chinese character
WO2009116079A2 (en) Character based input using pre-defined human body gestures
CN204740560U (en) Handheld signal output device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190416

Termination date: 20210721

CF01 Termination of patent right due to non-payment of annual fee