CN112181133B - Model evaluation method and system based on static and dynamic gesture interaction tasks - Google Patents

Model evaluation method and system based on static and dynamic gesture interaction tasks

Info

Publication number
CN112181133B
CN112181133B (application CN202010857334.6A)
Authority
CN
China
Prior art keywords
time
motion
action
interaction
gesture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010857334.6A
Other languages
Chinese (zh)
Other versions
CN112181133A (en)
Inventor
周小舟
贾乐松
肖玮烨
李佳芮
苗馨月
薛澄岐
牛亚峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN202010857334.6A priority Critical patent/CN112181133B/en
Publication of CN112181133A publication Critical patent/CN112181133A/en
Application granted granted Critical
Publication of CN112181133B publication Critical patent/CN112181133B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a model evaluation method and system based on static and dynamic gesture interaction tasks, relates to the technical field of interaction task model evaluation, and solves the technical problem that existing static and dynamic gesture interaction tasks cannot be evaluated quantitatively. The method configures interaction rules for the user's basic motion elements, analyzes and measures the interaction time and system action time of those basic motion elements, estimates the total time needed to complete the user interaction task from the interaction time, the interaction rules and the system action time, and finally evaluates the interaction task according to the total time. Quantitative evaluation of interaction tasks based on static and dynamic gestures is thereby realized, which helps practitioners design interaction models more scientifically and provides a smoother, more comfortable experience for users performing static and dynamic gesture interaction tasks in a natural human-computer interaction system.

Description

Model evaluation method and system based on static and dynamic gesture interaction tasks
Technical Field
The disclosure relates to the technical field of interactive task model evaluation, in particular to a model evaluation method and system based on static and dynamic gesture interactive tasks.
Background
A human-computer interaction system is the science and technology of studying people, computers and the mechanisms of communication between them. Research on human-computer interaction helps people and computers realize their greatest value in the interaction process, making the computer a powerful assistant for learning, work and entertainment and helping people complete information processing, management, service and other functions with maximum efficiency. The development of human-computer interaction systems has successively gone through a first stage, with the keyboard and character display as interaction devices, and a second stage, with the keyboard, mouse and graphical user interface as interaction devices. In recent years a fourth stage of natural human-computer interaction has begun, which uses multimodal information as input and output: inputs such as keyboard, mouse, text, voice, gestures and expressions, and outputs such as graphics, text and voice. Fig. 1 shows a natural human-computer interaction system that uses gestures as input and visual, auditory and tactile feedback as output.
Natural human-computer interaction systems that use gesture interaction as input are currently divided into two types: bare-hand gesture interaction and wearable-device gesture interaction. Bare-hand gesture interaction refers to gesture interaction without any medium and is mainly realized with depth cameras based on optical tracking technology; common depth cameras fall roughly into three types, namely RGB binocular cameras, TOF cameras and structured-light cameras. Wearable-device gesture interaction refers to gesture interaction that requires a medium and is often implemented with a contact-based hand tracking kit, including but not limited to data gloves, data bracelets and data wristbands.
In evaluation research on human-computer interaction systems, existing usability research on the interaction process has mainly been carried out in environments based on mouse and keyboard input and screen output, and has mainly concerned usability evaluation methods such as usability testing, heuristic evaluation, cognitive walkthrough and task analysis. In this field, quantitative evaluation methods based on the GOMS model (Goals, Operators, Methods and Selection rules) have been developed for keyboard and mouse input. With the development of virtual-environment technologies, research on human-computer interaction is no longer limited to traditional systems based on keyboard and mouse input but also covers natural human-computer interaction systems based on gesture input. Because the user's interaction behaviors in a gesture-input natural human-computer interaction system are generally difficult to quantify and are more complicated than keyboard-and-mouse interaction, research on evaluation methods for gesture-based human-computer interaction is still at an early stage: no natural interaction paradigm based on gesture input has yet been constructed, and no mature evaluation framework for gesture-based human-computer interaction exists.
Disclosure of Invention
The invention provides a model evaluation method and system based on static and dynamic gesture interaction tasks, whose technical purpose is to realize quantitative evaluation of static and dynamic gesture interaction tasks.
The technical aim of the disclosure is achieved by the following technical scheme:
A model evaluation method based on static and dynamic gesture interaction tasks comprises the following steps:
Analyzing the user interaction task to obtain user interaction behavior;
Analyzing the user interaction behavior to obtain a user basic motion element;
Configuring interaction rules for the user basic motion element;
Acquiring the interaction time and the system action time of the user basic motion element;
Estimating the total time for completing the user interaction task according to the interaction time, the interaction rules and the system action time, and evaluating the user interaction task according to the total time;
The user basic motion element comprises a perception action and a hand action; the perception action comprises a task thinking action and a perceived reaction action, and the hand action comprises a static gesture maintenance action, a movement action, a conversion action, a gesture execution position preparation action and a homing action. The interaction time corresponding to the user basic motion element comprises the task thinking action time, the perceived reaction action time, the static gesture maintenance action time, the movement action time, the conversion action time, the gesture execution position preparation action time and the homing action time; the system action time comprises the system operation time and the system feedback time.
Further, any one of the user interaction tasks includes at least one set of gesture commands, any one of the gesture commands includes at least one perception action and one hand action, and two adjacent sets of gesture commands in the same one of the user interaction tasks are divided by homing actions.
Further, the interaction rule includes:
A first rule that inserts the task thinking action and the perceived reaction action in sequence before the static gesture maintenance action, the movement action, the conversion action and the gesture execution position preparation action, and inserts the perceived reaction action after each group of gesture commands;
A second rule that deletes the task thinking action between adjacent hand actions and the following perceived reaction action if the user's previous hand action can fully anticipate the next hand action, and likewise deletes them if the adjacent hand actions are the same;
A third rule that inserts the static gesture maintenance action after the conversion action;
And a fourth rule that inserts the system actions after each group of gesture commands, the system actions including system operation and system feedback.
Further, the interaction rules further include a fifth rule: if the system feedback time is smaller than the sum of the perceived reaction action time and the homing action time, the system feedback time is ignored; otherwise, only the system feedback time is counted.
Further, the task thinking action time is 1.2s, the perceived reaction action time is 0.24s, the static gesture maintenance action time is 0.15s, the movement action time is (aD+bN) s, the conversion action time is 0.15s, the gesture execution position preparation action time is 0.5s, the homing action time is 0.5s, the system operation time is 0.1s and the system feedback time is 0.1s, where a and b are constants measured by a behavioural experiment, a=0.5, b=0.11, D is the total distance the movement action needs to cover in space, and N is the number of direction changes required by the movement action.
A model evaluation system based on static and dynamic gesture interaction tasks, comprising:
The analysis module is used for analyzing the user interaction task to obtain user interaction behavior;
the decomposition module is used for analyzing the user interaction behavior to obtain a user basic motion element;
the configuration module is used for configuring interaction rules for the user basic motion element;
the acquisition module acquires the interaction time and the system action time of the user basic motion element;
The evaluation module predicts the total time for completing the user interaction task according to the interaction time, the interaction rule and the system action time, and evaluates the user interaction task according to the total time;
The user basic motion element comprises a perception action and a hand action; the perception action comprises a task thinking action and a perceived reaction action, and the hand action comprises a static gesture maintenance action, a movement action, a conversion action, a gesture execution position preparation action and a homing action. The interaction time corresponding to the user basic motion element comprises the task thinking action time, the perceived reaction action time, the static gesture maintenance action time, the movement action time, the conversion action time, the gesture execution position preparation action time and the homing action time; the system action time comprises the system operation time and the system feedback time.
Further, the configuration module includes:
A first configuration unit configured to configure a first rule, the first rule including: inserting the task thinking action and the perceived reaction action in sequence before the static gesture maintenance action, the movement action, the conversion action and the gesture execution position preparation action, and inserting the perceived reaction action after each group of gesture commands;
A second configuration unit configured to configure a second rule, the second rule including: if the user's previous hand action can fully anticipate the next hand action, deleting the task thinking action between the adjacent hand actions and the following perceived reaction action; if the adjacent hand actions are the same, likewise deleting the task thinking action between the adjacent hand actions and the following perceived reaction action;
A third configuration unit configured to configure a third rule, the third rule including: inserting the static gesture maintenance action after the conversion action;
A fourth configuration unit configured to configure a fourth rule, the fourth rule including: inserting the system actions after each group of gesture commands, the system actions including system operation and system feedback.
Further, the configuration module further includes a fifth configuration unit, where the fifth configuration unit configures a fifth rule, the fifth rule including: if the system feedback time is smaller than the sum of the perceived reaction action time and the homing action time, the system feedback time is ignored; otherwise, only the system feedback time is counted.
The beneficial effects of the present disclosure are as follows. In the model evaluation method and system based on static and dynamic gesture interaction tasks, the user interaction task is first analyzed to obtain the user interaction behavior, and the user interaction behavior is then decomposed to obtain the user basic motion elements. Interaction rules are configured for the basic motion elements, the interaction time and system action time of the basic motion elements are acquired, the total time for completing the user interaction task is estimated from the interaction time, the interaction rules and the system action time, and the user interaction task is finally evaluated according to the total time. The method and system realize quantitative evaluation of models based on static and dynamic gesture interaction tasks, extend the traditional model, previously limited to keyboard-and-mouse input, to natural gesture-based interaction, explore a usability evaluation method suited to models of static and dynamic gesture interaction tasks, provide technical guidance for quantifying such models in a natural human-computer interaction system, and fill the technical gap in usability evaluation research on models of static and dynamic gesture interaction tasks.
At the same time, the method and system can carry out a scientific task analysis of the user of a natural human-computer interaction system and model the user's behavior in that system to predict performance for static and dynamic gesture interaction scenes and interfaces. This helps practitioners design the whole process of static and dynamic gesture interaction more scientifically, improves the user's efficiency during static and dynamic gesture interaction, and provides a smoother and more comfortable experience for users performing static and dynamic gesture interaction in a natural human-computer interaction system.
Drawings
FIG. 1 is a schematic diagram of a natural human-computer interaction system based on gesture input;
FIG. 2 is a flow chart of the method of the present disclosure;
FIG. 3 is a schematic diagram of a system of the present disclosure;
FIG. 4 is a schematic diagram of a static gesture and a dynamic gesture;
FIG. 5 is an example schematic diagram of a static and dynamic gesture-based interaction task;
FIG. 6 is a flow chart of an embodiment of the present disclosure;
FIG. 7 is a flow chart of another embodiment of the present disclosure.
Detailed Description
The technical scheme of the present disclosure will be described in detail below with reference to the accompanying drawings. In the description of the present disclosure, it should be understood that the terms "first," "second," "third," "fourth," "fifth" are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated, but are merely used to distinguish between different components.
Fig. 2 is a flowchart of the method of the disclosure. As shown in fig. 2, the user interaction task under the interaction model is first analyzed to obtain the user interaction behavior; the user interaction behavior is then decomposed to obtain the user basic motion elements, and interaction rules are configured for the basic motion elements. The interaction time and system action time of the basic motion elements are then obtained, the total time for completing the user interaction task is estimated from the interaction time, the interaction rules and the system action time, and the user interaction task is evaluated according to that total time.
The user basic motion element comprises a perception action and a hand action; the perception action comprises a task thinking action and a perceived reaction action, and the hand action comprises a static gesture maintenance action, a movement action, a conversion action, a gesture execution position preparation action and a homing action. The interaction time corresponding to the user basic motion element comprises the task thinking action time, the perceived reaction action time, the static gesture maintenance action time, the movement action time, the conversion action time, the gesture execution position preparation action time and the homing action time; the system action time comprises the system operation time and the system feedback time.
The present application can use the CPM-GOMS modeling technique, where C is the Cognition stage, P is the Perception stage and M is the Motor stage: the system actions and the basic motion elements of the motor, cognition and perception stages that occur while the user completes the interaction task are all recorded in sequence. In the present application, the cognition stage and the perception stage correspond to the task thinking action time and the perceived reaction action time, and the motor stage comprises the movement action time, conversion action time, homing action time, static gesture maintenance action time and gesture execution position preparation action time.
In the following description, the task thinking action is represented by A, the perceived reaction action by E, the static gesture maintenance action by G, the movement action by M, the conversion action by S, the homing action by H, the gesture execution position preparation action by P, the system action by R, the system operation by R1 and the system feedback by R2, with R = R1 + R2. The interaction times corresponding to the different user basic motion elements and to the system actions are shown in Table 1:
TABLE 1
User basic motion element / system action | Symbol | Time
Task thinking action | A | 1.2s
Perceived reaction action | E | 0.24s
Static gesture maintenance action | G | 0.15s
Movement action | M | (aD+bN) s
Conversion action | S | 0.15s
Gesture execution position preparation action | P | 0.5s
Homing action | H | 0.5s
System operation | R1 | 0.1s
System feedback | R2 | 0.1s
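The times in Table 1 can also be collected programmatically. The following is a minimal sketch under stated assumptions: the dictionary name, the helper function and the example distance value are invented for this illustration and are not part of the patent.

```python
# Operator times from Table 1 (seconds); names are illustrative, not from the patent.
OPERATOR_TIMES_S = {
    "A": 1.2,    # task thinking action
    "E": 0.24,   # perceived reaction action
    "G": 0.15,   # static gesture maintenance action
    "S": 0.15,   # conversion action (trackless dynamic gesture)
    "P": 0.5,    # gesture execution position preparation action
    "H": 0.5,    # homing action
    "R1": 0.1,   # system operation (default)
    "R2": 0.1,   # system feedback (default)
}

def movement_time_s(distance: float, direction_changes: int,
                    a: float = 0.5, b: float = 0.11) -> float:
    """Movement action time M = aD + bN, with a and b measured behaviourally."""
    return a * distance + b * direction_changes

# Example: the waving gesture in the later map task covers D = 0.2 with no direction change.
print(movement_time_s(0.2, 0))  # 0.1 s
```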
A static gesture action considers only the posture of the hand, including the shape and orientation of the hand, the degree of finger bending, and the relative position of the fingers and the body. A dynamic gesture considers not only the hand posture but also its change, and is divided into tracked dynamic gestures (i.e., the movement action) and trackless dynamic gestures (i.e., the conversion action). A tracked dynamic gesture conveys the user's intent through the trajectory of the hand moving in space, for example handwriting the letter A in the air. A trackless dynamic gesture conveys the intent through a change in hand posture and position rather than through a spatial trajectory, for example the dynamic stretching action from a clenched fist to an open hand. Fig. 4 is a schematic diagram of static and dynamic gesture actions.
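As a purely illustrative aid (the class and member names below are assumptions for this sketch, not terminology from the patent), the mapping between these gesture categories and the hand-action operators introduced above can be written as:

```python
from enum import Enum

class GestureKind(Enum):
    """Illustrative mapping from gesture category to the model's hand-action operator."""
    STATIC = "G"             # hand posture only: static gesture maintenance action
    TRACKED_DYNAMIC = "M"    # intent carried by the spatial trajectory: movement action
    TRACKLESS_DYNAMIC = "S"  # intent carried by a posture change: conversion action

# Example: air-writing the letter A is a tracked dynamic gesture; fist-to-open is trackless.
print(GestureKind.TRACKED_DYNAMIC.value, GestureKind.TRACKLESS_DYNAMIC.value)  # M S
```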
FIG. 5 is a schematic diagram of an example of a task based on static and dynamic gesture interaction. As shown in fig. 5 (a), the user interaction task is a map viewing task in a two-dimensional interface: the user first needs to open the map display interface with a trackless dynamic gesture of making a fist and then opening the hand; fig. 5 (b) shows the displayed map being switched by a tracked dynamic gesture of waving to the left; finally, as shown in fig. 5 (c), the user turns on the map movement function with a right-hand pinch gesture, after which the position of the map can be controlled by moving the right hand. Taking fig. 5 as an example, the operation sequence set of the interaction task is determined in order to estimate the time the user necessarily spends completing it and thereby guide the usability of static and dynamic gesture interaction. Table 2 shows the quantitative decomposition process of interaction based on static and dynamic gesture input:
Target interaction task operation flow decomposition | Model action decomposition
1. Move the hand to the gesture execution start position | P
2. Perform the trackless dynamic gesture conversion from open hand to fist | PS
3. Perform the trackless dynamic gesture conversion from fist to open hand | PSS
4. Homing action | PSSH
5. Move the hand to the gesture execution start position | PSSHP
6. Perform the tracked dynamic gesture of waving the hand | PSSHPM
7. Homing action | PSSHPMH
8. Move the hand to the gesture execution start position | PSSHPMHP
9. Perform the pinch gesture | PSSHPMHPS
TABLE 2
The user interaction task illustrated in Table 2 includes three groups of gesture commands: the first group is the PSSH of rows 1 to 4, the second group is the PMH of rows 5 to 7, and the third group is the PS of rows 8 and 9; together they make up the user interaction task. Two adjacent groups of gesture commands in the same user interaction task are generally separated by a homing action, so every group of gesture commands except the last one in a user interaction task is followed by a homing action.
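To illustrate how a raw hand-action sequence splits into gesture-command groups at homing actions, a minimal sketch follows; the helper name is an assumption made for this illustration and is not part of the patent.

```python
from typing import List

def split_commands(ops: str) -> List[str]:
    """Split a raw hand-action string into gesture-command groups at homing actions H;
    the last group carries no homing action."""
    groups, current = [], ""
    for op in ops:
        current += op
        if op == "H":
            groups.append(current)
            current = ""
    if current:
        groups.append(current)
    return groups

print(split_commands("PSSHPMHPS"))  # ['PSSH', 'PMH', 'PS']
```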
The interaction rules comprise a first rule, a second rule, a third rule, a fourth rule and a fifth rule. The first rule is to insert the task thinking action and the perceived reaction action in sequence before the static gesture maintenance action, the movement action, the conversion action and the gesture execution position preparation action, and to insert the perceived reaction action after each group of gesture commands. The second rule is that if the user's previous hand action can fully anticipate the next hand action, the task thinking action between the adjacent hand actions and the following perceived reaction action are deleted; if the adjacent hand actions are the same, the task thinking action between them and the following perceived reaction action are likewise deleted. The third rule is to insert the static gesture maintenance action after the conversion action. The fourth rule is to insert the system actions after each group of gesture commands, the system actions including system operation and system feedback. The fifth rule is that if the system feedback time is less than the sum of the perceived reaction action time and the homing action time, the system feedback time is ignored; otherwise, only the system feedback time is counted.
According to the first rule, the model action decomposition of Table 2 becomes AEPAESAESEHAEPAEMEHAEPAESE. Further, according to the second rule, since the gesture execution position preparation action P fully anticipates the conversion action S and likewise anticipates the movement action M, and the task thinking action A and the perceived reaction action E between adjacent identical hand actions can also be deleted, the model action decomposition becomes AEPSSEHAEPMEHAEPSE.
According to the third rule, the static gesture maintenance action G is inserted after each conversion action S, and the model action decomposition becomes AEPSGSGEHAEPMEHAEPSGE.
According to the fourth rule, a system action R is inserted after each group of gesture commands, the system action R comprising the system operation R1 and the system feedback R2. The model action decomposition becomes AEPSGSGR1R2'EHAEPMR1R2''EHAEPSGR1R2'''E, where R2', R2'' and R2''' all represent system feedback; they are written separately because their times may differ from the default system feedback time.
According to the fifth rule, if the system feedback time is less than the sum of the perceived reaction action time and the homing action time, the system feedback time is ignored; otherwise, only the system feedback time is counted. In this example the first two system feedback times R2' and R2'' are greater than the sum of the perceived reaction action time and the homing action time, so the perceived reaction action E and the homing action H are ignored and only the system feedback times are counted, while the third system feedback time R2''' is less than that sum and is therefore ignored. Finally, the model action decomposition is AEPSGSGR1R2'AEPMR1R2''AEPSGR1E.
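The rewrite just walked through can also be expressed programmatically. The sketch below is a minimal, simplified encoding of rules 1 to 5 under stated assumptions: the function and parameter names are invented for this illustration, which hand actions anticipate which is supplied as a predicate, and the first two feedback times are hypothetical values chosen only to exceed the 0.74s threshold. It is not the patent's reference implementation.

```python
from typing import Callable, List

# Hand-action operators that receive an A, E prefix under rule 1 (assumed constant name).
HAND_ACTIONS = {"G", "M", "S", "P"}

def decompose(commands: List[List[str]],
              anticipates: Callable[[str, str], bool],
              feedback_times: List[float],
              e_time: float = 0.24, h_time: float = 0.5) -> List[str]:
    """Apply a simplified form of rules 1-5 to gesture-command groups.

    Each group is a list of hand-action symbols, with 'H' closing every group
    except the last; feedback_times gives the system feedback time per group."""
    out: List[str] = []
    for i, cmd in enumerate(commands):
        hand = [op for op in cmd if op in HAND_ACTIONS]
        seq: List[str] = []
        prev = None
        for op in hand:
            # Rule 1: insert A, E before every hand action ...
            # Rule 2: ... unless the previous action anticipates it or repeats it.
            if prev is None or not (anticipates(prev, op) or prev == op):
                seq += ["A", "E"]
            seq.append(op)
            # Rule 3: a conversion action S is followed by a static gesture maintenance G.
            if op == "S":
                seq.append("G")
            prev = op
        # Rule 4: system operation and system feedback follow every gesture command.
        seq += ["R1", "R2"]
        # Rule 1 also appends a perceived reaction E; groups are separated by homing H.
        tail = ["E"] + (["H"] if "H" in cmd else [])
        # Rule 5: long feedback replaces E (and H); short feedback is ignored instead.
        if feedback_times[i] < e_time + h_time:
            seq.remove("R2")
            seq += tail
        out += seq
    return out

# The map-viewing task of Table 2: PSSH | PMH | PS, where P anticipates S and M.
# The first two feedback times are hypothetical; only "greater than 0.74s" is known.
cmds = [["P", "S", "S", "H"], ["P", "M", "H"], ["P", "S"]]
seq = decompose(cmds, anticipates=lambda prev, nxt: prev == "P",
                feedback_times=[1.0, 1.0, 0.1])
print("".join(seq))  # AEPSGSGR1R2AEPMR1R2AEPSGR1E
```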
The total time of the model actions is thus:
T = 3T_A + 4T_E + 3T_P + 3T_S + 3T_G + T_M + 3T_R1 + T_R2' + T_R2'' = 10.41s,
where a=0.5, b=0.11, D=0.2 and N=0, so that T_M = aD + bN = 0.1s, and T_R2' and T_R2'' are the measured system feedback times of the first two gesture commands.
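As an arithmetic check on this prediction, the following minimal sketch (all names are assumptions made for illustration) sums the default-valued operators of the final sequence and shows that the two measured feedback times R2' and R2'' account for the remainder of the stated 10.41s total:

```python
# Illustrative check of the arithmetic for the final sequence AEPSGSGR1R2'AEPMR1R2''AEPSGR1E.
TIMES_S = {"A": 1.2, "E": 0.24, "P": 0.5, "S": 0.15, "G": 0.15, "R1": 0.1}
M_TIME_S = 0.5 * 0.2 + 0.11 * 0   # movement action aD + bN with D = 0.2, N = 0

fixed_ops = (["A", "E", "P", "S", "G", "S", "G", "R1"]      # first gesture command
             + ["A", "E", "P", "R1"]                        # second command (M added below)
             + ["A", "E", "P", "S", "G", "R1", "E"])        # third command
fixed = sum(TIMES_S[op] for op in fixed_ops) + M_TIME_S
print(round(fixed, 2))             # 7.36 s of default-valued operators

# The stated prediction is 10.41 s, so the two measured feedback times R2' and R2''
# (not given individually in the example) together account for the remainder.
print(round(10.41 - fixed, 2))     # 3.05 s
```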
It can be seen that when the method described in the present disclosure is used to evaluate the user interaction task of Table 2, the predicted total time for executing the task is 10.41s, while the average total time measured when skilled users actually perform the task is 10.45s. The two values are very close, which supports the accuracy of the present application in evaluating static and dynamic gesture interaction models.
Fig. 3 is a schematic diagram of the system of the present disclosure. The system comprises an analysis module, a decomposition module, a configuration module, an acquisition module and an evaluation module, and the configuration module comprises a first configuration unit, a second configuration unit, a third configuration unit, a fourth configuration unit and a fifth configuration unit; the functions of the modules and units follow the method of the present disclosure and are not repeated here. In fig. 3, (a), (b) and (c) are three different modes of the evaluation system: after the decomposition module decomposes the user interaction behavior to obtain the user basic motion elements, the configuration module and the acquisition module may work simultaneously, the acquisition module may acquire after configuration is completed, or the configuration module may configure after acquisition is completed; none of these affects the evaluation of the overall model.
Fig. 6 is a flowchart of an embodiment of the present disclosure in which, after the user interaction behavior is decomposed to obtain the user basic motion elements, the interaction time and system action time of the basic motion elements are obtained first and the interaction rules are then configured. Fig. 7 is a flowchart of another embodiment of the present disclosure in which, after the user interaction behavior is decomposed to obtain the user basic motion elements, the interaction time and the system action time are obtained and the interaction rules are configured at the same time.
The foregoing is an exemplary embodiment of the disclosure, the scope of which is defined by the claims and their equivalents.

Claims (6)

1. A model evaluation method based on static and dynamic gesture interaction tasks is characterized by comprising the following steps:
Analyzing the user interaction task to obtain user interaction behavior;
Analyzing the user interaction behavior to obtain a user basic motion element;
Configuring interaction rules for the user basic motion element;
Acquiring the interaction time and the system action time of the user basic motion element;
estimating the total time for completing the user interaction task according to the interaction time, the interaction rule and the system action time, and evaluating the user interaction task according to the total time;
Any one of the user interaction tasks comprises at least one group of gesture commands, any one group of gesture commands at least comprises a perception action and a hand action, and two groups of adjacent gesture commands in the same user interaction task are divided by homing actions;
The user basic motion element comprises a perception action and a hand action; the perception action comprises a task thinking action and a perceived reaction action, and the hand action comprises a static gesture maintenance action, a movement action, a conversion action, a gesture execution position preparation action and a homing action; the interaction time corresponding to the user basic motion element comprises the task thinking action time, the perceived reaction action time, the static gesture maintenance action time, the movement action time, the conversion action time, the gesture execution position preparation action time and the homing action time; the system action time comprises the system operation time and the system feedback time;
The interaction rule includes:
A first rule that inserts the task thinking action and the perceived reaction action in sequence before the static gesture maintenance action, the movement action, the conversion action and the gesture execution position preparation action, and inserts the perceived reaction action after each group of the gesture commands;
A second rule that deletes the task thinking action between adjacent hand actions and the following perceived reaction action if the user's previous hand action can fully anticipate the next hand action, and likewise deletes them if the adjacent hand actions are the same;
A third rule that inserts the static gesture maintenance action after the conversion action;
And a fourth rule that inserts the system actions after each group of the gesture commands, the system actions including system operation and system feedback.
2. The model evaluation method based on static and dynamic gesture interaction tasks according to claim 1, wherein the interaction rules further comprise a fifth rule, the fifth rule comprising: if the system feedback time is smaller than the sum of the perceived reaction action time and the homing action time, the system feedback time is ignored; otherwise, only the system feedback time is counted.
3. The model evaluation method based on static and dynamic gesture interaction tasks according to claim 2, wherein the task thinking action time is 1.2s, the perceived reaction action time is 0.24s, the static gesture maintenance action time is 0.15s, the movement action time is (aD+bN) s, the conversion action time is 0.15s, the gesture execution position preparation action time is 0.5s, the homing action time is 0.5s, the system operation time is 0.1s and the system feedback time is 0.1s, where a and b are constants measured by a behavioural experiment, a=0.5, b=0.11, D is the total distance the movement action needs to cover in space, and N is the number of direction changes required by the movement action.
4. A model evaluation system based on static and dynamic gesture interaction tasks, comprising:
The analysis module is used for analyzing the user interaction task to obtain user interaction behavior;
the decomposition module is used for analyzing the user interaction behavior to obtain a user basic motion element;
the configuration module is used for configuring interaction rules for the user basic motion element;
the acquisition module acquires the interaction time and the system action time of the user basic motion element;
The evaluation module predicts the total time for completing the user interaction task according to the interaction time, the interaction rule and the system action time, and evaluates the user interaction task according to the total time;
Any one of the user interaction tasks comprises at least one group of gesture commands, any one group of gesture commands at least comprises a perception action and a hand action, and two groups of adjacent gesture commands in the same user interaction task are divided by homing actions;
The user basic motion element comprises a perception action and a hand action; the perception action comprises a task thinking action and a perceived reaction action, and the hand action comprises a static gesture maintenance action, a movement action, a conversion action, a gesture execution position preparation action and a homing action; the interaction time corresponding to the user basic motion element comprises the task thinking action time, the perceived reaction action time, the static gesture maintenance action time, the movement action time, the conversion action time, the gesture execution position preparation action time and the homing action time; the system action time comprises the system operation time and the system feedback time;
The configuration module comprises:
A first configuration unit configured to configure a first rule, the first rule including: inserting the task thinking action and the perceived reaction action in sequence before the static gesture maintenance action, the movement action, the conversion action and the gesture execution position preparation action, and inserting the perceived reaction action after each group of gesture commands;
A second configuration unit configured to configure a second rule, the second rule including: if the user's previous hand action can fully anticipate the next hand action, deleting the task thinking action between the adjacent hand actions and the following perceived reaction action; if the adjacent hand actions are the same, likewise deleting the task thinking action between the adjacent hand actions and the following perceived reaction action;
A third configuration unit configured to configure a third rule, the third rule including: inserting the static gesture maintenance action after the conversion action;
A fourth configuration unit configured to configure a fourth rule, the fourth rule including: inserting the system actions after each group of the gesture commands, the system actions including system operation and system feedback.
5. The model evaluation system based on static and dynamic gesture interaction tasks according to claim 4, wherein the configuration module further comprises a fifth configuration unit that configures a fifth rule, the fifth rule comprising: if the system feedback time is smaller than the sum of the perceived reaction action time and the homing action time, the system feedback time is ignored; otherwise, only the system feedback time is counted.
6. The model evaluation system based on static and dynamic gesture interaction tasks according to claim 5, wherein the task thinking action time is 1.2s, the perceived reaction action time is 0.24s, the static gesture maintenance action time is 0.15s, the movement action time is (aD+bN) s, the conversion action time is 0.15s, the gesture execution position preparation action time is 0.5s, the homing action time is 0.5s, the system operation time is 0.1s and the system feedback time is 0.1s, where a and b are constants measured by a behavioural experiment, a=0.5, b=0.11, D is the total distance the movement action needs to cover in space, and N is the number of direction changes required by the movement action.
CN202010857334.6A 2020-08-24 2020-08-24 Model evaluation method and system based on static and dynamic gesture interaction tasks Active CN112181133B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010857334.6A CN112181133B (en) 2020-08-24 2020-08-24 Model evaluation method and system based on static and dynamic gesture interaction tasks

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010857334.6A CN112181133B (en) 2020-08-24 2020-08-24 Model evaluation method and system based on static and dynamic gesture interaction tasks

Publications (2)

Publication Number Publication Date
CN112181133A CN112181133A (en) 2021-01-05
CN112181133B true CN112181133B (en) 2024-05-07

Family

ID=73924492

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010857334.6A Active CN112181133B (en) 2020-08-24 2020-08-24 Model evaluation method and system based on static and dynamic gesture interaction tasks

Country Status (1)

Country Link
CN (1) CN112181133B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113961080B (en) * 2021-11-09 2023-08-18 南京邮电大学 Three-dimensional modeling software framework based on gesture interaction and design method
CN115907444B (en) * 2022-11-23 2023-12-05 中国航空综合技术研究所 Cockpit task flow evaluation method based on multichannel man-machine interaction technology

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103955267A (en) * 2013-11-13 2014-07-30 上海大学 Double-hand man-machine interaction method in x-ray fluoroscopy augmented reality system
CN111258430A (en) * 2020-01-21 2020-06-09 哈尔滨拓博科技有限公司 Desktop interaction system based on monocular gesture control

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9395764B2 (en) * 2013-04-25 2016-07-19 Filippo Costanzo Gestural motion and speech interface control method for 3d audio-video-data navigation on handheld devices
US9785243B2 (en) * 2014-01-30 2017-10-10 Honeywell International Inc. System and method for providing an ergonomic three-dimensional, gesture based, multimodal interface for use in flight deck applications

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103955267A (en) * 2013-11-13 2014-07-30 上海大学 Double-hand man-machine interaction method in x-ray fluoroscopy augmented reality system
CN111258430A (en) * 2020-01-21 2020-06-09 哈尔滨拓博科技有限公司 Desktop interaction system based on monocular gesture control

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Bai, Y.C. et al. A Skeleton Object Detection-Based Dynamic Gesture Recognition Method. IEEE. 2019, 212-217. *
Zhao Hongli. Research on an interactive virtual maintenance simulation system based on UNITY. Mechanical Engineering & Automation. 2016-04-30, 93-95. *
Zhou Xiaozhou. Research on visual presentation methods for big data visualization based on user cognition. China Doctoral Dissertations Full-text Database. 2016-12-15, full text. *

Also Published As

Publication number Publication date
CN112181133A (en) 2021-01-05

Similar Documents

Publication Publication Date Title
Vuletic et al. Systematic literature review of hand gestures used in human computer interaction interfaces
US20220343689A1 (en) Detection of hand gestures using gesture language discrete values
US20110115702A1 (en) Process for Providing and Editing Instructions, Data, Data Structures, and Algorithms in a Computer System
Khan et al. Gesture and speech elicitation for 3D CAD modeling in conceptual design
CN112181133B (en) Model evaluation method and system based on static and dynamic gesture interaction tasks
Grijincu et al. User-defined interface gestures: Dataset and analysis
Xia et al. Iteratively designing gesture vocabularies: A survey and analysis of best practices in the HCI literature
Uva et al. A user-centered framework for designing midair gesture interfaces
CN110268375A (en) Configure the digital pen used across different application
Kryvonos et al. New tools of alternative communication for persons with verbal communication disorders
Duke Reasoning about gestural interaction
Rodriguez-Conde et al. Towards customer-centric additive manufacturing: making human-centered 3D design tools through a handheld-based multi-touch user interface
Baig et al. Qualitative analysis of a multimodal interface system using speech/gesture
Erdolu Lines, triangles, and nets: A framework for designing input technologies and interaction techniques for computer-aided design
Vatavu Gesture-based interaction
CN112181132B (en) Model evaluation method and system based on ray interaction task in virtual environment
Khan et al. 3D CAD modeling using gestures and speech: Investigating CAD legacy and non-legacy procedures
Zhou et al. H-GOMS: a model for evaluating a virtual-hand interaction system in virtual environments
Park et al. An analytical approach to creating multitouch gesture vocabularies in mobile devices: A case study for mobile web browsing gestures
Maher et al. Studying designers using a tabletop system for 3D design with a focus on the impact on spatial cognition
US9870063B2 (en) Multimodal interaction using a state machine and hand gestures discrete values
Kurosu Human-Computer Interaction. Multimodal and Natural Interaction: Thematic Area, HCI 2020, Held as Part of the 22nd International Conference, HCII 2020, Copenhagen, Denmark, July 19–24, 2020, Proceedings, Part II
Osorio-Gómez et al. An augmented reality tool to validate the assembly sequence of a discrete product
CN112181134B (en) Model evaluation method and system based on finger click interaction task in virtual environment
Yi et al. sEditor: A prototype for a sign language interfacing system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant