CN115202530B - Gesture interaction method and system of user interface - Google Patents

Gesture interaction method and system of user interface

Info

Publication number
CN115202530B
CN115202530B CN202210587715.6A
Authority
CN
China
Prior art keywords
gesture
interaction
recognition area
gesture recognition
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210587715.6A
Other languages
Chinese (zh)
Other versions
CN115202530A (en)
Inventor
金凌琳
余锋
陈思屹
谭李慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dangqu Network Technology Hangzhou Co Ltd
Original Assignee
Dangqu Network Technology Hangzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dangqu Network Technology Hangzhou Co Ltd filed Critical Dangqu Network Technology Hangzhou Co Ltd
Priority to CN202210587715.6A priority Critical patent/CN115202530B/en
Publication of CN115202530A publication Critical patent/CN115202530A/en
Application granted granted Critical
Publication of CN115202530B publication Critical patent/CN115202530B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text

Abstract

The application relates to a gesture interaction method and system for a user interface. The method comprises: dividing the user interface into regions to obtain a gesture recognition area and a non-gesture recognition area; performing gesture interaction through the gesture recognition area according to an acquired user gesture, the user gesture being acquired through human skeleton recognition; and displaying the flow nodes of planned content in the non-gesture recognition area while determining the display state of each flow node according to the gesture interaction result of the gesture recognition area. The method and system remove the dependence of ordinary human-computer interaction on physical media, reduce the cost of otherwise unnecessary physical devices, and realize user interface interaction based on gesture recognition, making the interaction more convenient and intelligent.

Description

Gesture interaction method and system of user interface
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a gesture interaction method and system for a user interface.
Background
With the development of computer vision and its wide application in everyday practice, behavior detection and motion recognition systems based on various algorithms are increasingly deployed and widely studied in related fields. Behavior monitoring ranges from monitoring bee colonies through image, temperature, humidity, and sound data to, far more commonly, monitoring human behavior. Human skeleton recognition, as an important reference basis for behavior monitoring, is widely used in fields such as video capture and computer graphics.
In a typical human-computer interaction scenario, a user interacts with an intelligent electronic device through a specific physical medium: a television remote control for switching channels and adjusting volume, a keyboard and mouse for entering and selecting information, or a game controller for adjusting game parameters and controlling game characters. Human skeleton recognition offers a fresh approach to freeing human-computer interaction from such physical media.
To date, no effective solution has been proposed in the related art for the dependence of ordinary human-computer interaction on physical media.
Disclosure of Invention
The embodiments of the present application provide a gesture interaction method and system for a user interface that at least address the dependence of ordinary human-computer interaction on physical media in the related art.
In a first aspect, an embodiment of the present application provides a gesture interaction method of a user interface, where the method includes:
dividing the user interface into areas to obtain a gesture recognition area and a non-gesture recognition area;
according to the acquired user gestures, gesture interaction is carried out through the gesture recognition area, wherein the user gestures are acquired through human skeleton recognition;
and displaying the flow node of the planning content in the non-gesture recognition area, and determining the display state of the flow node according to the gesture interaction result of the gesture recognition area.
In some of these embodiments, the method comprises:
the first gesture recognition area and the second gesture recognition area have recognition priorities;
under the condition that the recognition priorities are the same, simultaneously executing gesture interaction of the first gesture recognition area and gesture interaction of the second gesture recognition area;
and under the condition that the recognition priorities are different, preferentially executing the gesture interaction of the second gesture recognition area, and then executing the gesture interaction of the first gesture recognition area.
In some embodiments, performing gesture interaction through the gesture recognition area according to the acquired user gesture includes:
dividing the gesture recognition area into a first gesture recognition area and a second gesture recognition area;
according to the acquired user gesture, gesture interaction is carried out on the branch options corresponding to the flow node through the first gesture recognition area;
and according to the acquired user gesture, performing gesture interaction based on the functional interaction gesture through the second gesture recognition area.
In some embodiments, according to the acquired gesture of the user, performing gesture interaction on the branch option corresponding to the flow node through the first gesture recognition area includes:
when detecting that a user makes a touch gesture toward a branch option of a flow node, enlarging the branch option to a preset size and displaying a corresponding time progress, and completing the gesture interaction of the flow node after the time progress finishes.
In some embodiments, according to the acquired gesture of the user, performing gesture interaction based on the functional interaction gesture through the second gesture recognition area includes:
the functional interaction gesture comprises a return gesture, and, at any flow node other than the first flow node, returning from the current flow node to the previous flow node when the return gesture made by the user is detected.
In some of these embodiments, returning from the current flow node to the previous flow node upon detecting that the user makes a return gesture comprises:
when it is detected that the user makes the return gesture, displaying the time progress of the return gesture, and returning from the current flow node to the previous flow node after the time progress finishes.
In some embodiments, according to the acquired gesture of the user, performing gesture interaction based on the functional interaction gesture through the second gesture recognition area includes:
the functional interaction gesture further comprises an execution gesture, and the planned course is executed when the execution gesture made by the user is detected after the last flow node is completed.
In some embodiments, the determining the display state of the flow node according to the gesture interaction result of the gesture recognition area includes:
displaying the current flow node of the planned content in the non-gesture recognition area in a first interaction state, and displaying the current flow node in a second interaction state once its gesture interaction is completed.
In some of these embodiments, the first interaction state is a gray icon state and the second interaction state is a highlighted icon state bearing a confirmation mark.
In a second aspect, an embodiment of the present application provides a gesture interaction system of a user interface, where the system includes a region dividing module, a gesture interaction module, and a display matching module;
the region dividing module is used for dividing the region of the user interface to obtain a gesture recognition region and a non-gesture recognition region;
the gesture interaction module is used for carrying out gesture interaction through the gesture recognition area according to the acquired user gesture, wherein the user gesture is acquired through human skeleton recognition;
the display matching module is used for displaying the flow node of the planning content in the non-gesture recognition area, and determining the display state of the flow node according to the gesture interaction result of the gesture recognition area.
Compared with the related art, the gesture interaction method and system for a user interface provided by the embodiments of the present application divide the user interface into a gesture recognition area and a non-gesture recognition area; perform gesture interaction through the gesture recognition area according to an acquired user gesture, the user gesture being acquired through human skeleton recognition; and display the flow nodes of the planned content in the non-gesture recognition area while determining their display state according to the gesture interaction result of the gesture recognition area. This removes the dependence of ordinary human-computer interaction on physical media, reduces the cost of otherwise unnecessary physical devices, and realizes user interface interaction based on gesture recognition, making the interaction more convenient and intelligent.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
FIG. 1 is a flow chart of steps of a gesture interaction method of a user interface according to an embodiment of the present application;
FIG. 2 is a schematic diagram of user interface partitioning according to an embodiment of the present application;
FIG. 3 is a schematic illustration of gesture recognition area division according to an embodiment of the present application;
FIG. 4 is a first schematic diagram of user interface gesture interaction according to an embodiment of the present application;
FIG. 5 is a second schematic diagram of user interface gesture interaction according to an embodiment of the present application;
FIG. 6 is a third schematic diagram of user interface gesture interaction according to an embodiment of the present application;
FIG. 7 is a schematic diagram of displaying a matching planned course according to an embodiment of the present application;
FIG. 8 is a block diagram of a gesture interaction system of a user interface according to an embodiment of the present application;
fig. 9 is a schematic diagram of an internal structure of an electronic device according to an embodiment of the present application.
Description of reference numerals: 81. region dividing module; 82. gesture interaction module; 83. display matching module.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present application clearer, the present application is described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application. All other embodiments obtained by one of ordinary skill in the art, based on the embodiments provided herein and without inventive effort, fall within the scope of protection of the present application.
It is apparent that the drawings in the following description are only some examples or embodiments of the present application, and one of ordinary skill in the art could apply the present application to other similar situations according to these drawings without inventive effort. Moreover, while such a development effort might be complex and time-consuming, it would nevertheless be a routine undertaking for those of ordinary skill having the benefit of this disclosure, and should not be construed as going beyond this disclosure.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is to be expressly and implicitly understood by those of ordinary skill in the art that the embodiments described herein can be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms used herein have the ordinary meaning understood by one of ordinary skill in the art to which this application belongs. References herein to "a," "an," "the," and similar terms do not denote a limitation of quantity and may be singular or plural. The terms "comprising," "including," "having," and any variations thereof are intended to cover a non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to those steps or elements but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. The terms "connected," "coupled," and the like in this application are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. The term "plurality" as used herein refers to two or more. "And/or" describes an association relationship between associated objects and covers three cases; for example, "A and/or B" may mean: A alone, A and B together, or B alone. The character "/" generally indicates an "or" relationship between the objects before and after it. The terms "first," "second," "third," and the like, as used herein, merely distinguish similar objects and do not represent a particular ordering of objects.
An embodiment of the present application provides a gesture interaction method of a user interface, and fig. 1 is a flowchart of steps of the gesture interaction method of the user interface according to an embodiment of the present application, as shown in fig. 1, where the method includes the following steps:
step S102, dividing the user interface into areas to obtain a gesture recognition area and a non-gesture recognition area;
specifically, fig. 2 is a schematic diagram of user interface division according to an embodiment of the present application, and as shown in fig. 2, the user interface is divided into a gesture recognition area and a non-gesture recognition area. The gesture recognition area and the non-gesture recognition area are displayed on a display interface of a large screen, wherein the large screen can be a television screen or a projector screen, the non-gesture recognition area is used for displaying flow nodes or planning courses and the like, and the gesture recognition area is used for displaying an interaction process of a user;
step S104, performing gesture interaction through a gesture recognition area according to the acquired user gesture, wherein the user gesture is acquired through human skeleton recognition; the interaction process comprises the steps of acquiring skeleton movement information or hand gesture information of a user body, and the like, wherein the interaction process comprises the steps of acquiring user image or depth image information through a camera which is arranged on an external camera or a large screen terminal, identifying the skeleton movement information or the hand gesture information of the user through the user image or the depth image information, and acquiring the interaction information of the user according to the skeleton movement information or the hand gesture information and displaying the interaction information in a gesture identification area;
specifically, fig. 3 is a schematic diagram of gesture recognition area division according to an embodiment of the present application, and as shown in fig. 3, the gesture recognition area is further divided into a first gesture recognition area and a second gesture recognition area; according to the acquired user gestures, gesture interaction is carried out on branch options corresponding to the flow nodes through the first gesture recognition area; according to the acquired gesture of the user, gesture interaction based on the functional interaction gesture is performed through the second gesture recognition area, and further, the first gesture recognition area functions as follows: under the condition of interaction inside the flow node, displaying an interaction target and progress; the second gesture recognition area functions as: and under the condition of interaction among the flow nodes, displaying an interaction target and progress.
It should be noted that the first gesture recognition area and the second gesture recognition area may have the same recognition priority and recognition scope, or different ones. For example, when their recognition scopes differ, the second gesture recognition area covers the whole recognition image: once a user gesture is recognized there, the system checks whether it is a functional interaction gesture and decides whether to proceed to the next step, whereas the first gesture recognition area checks whether the user's skeleton touches a touch area and proceeds to the next step once such a touch is recognized. When the two areas have the same recognition priority, their gesture interactions are executed simultaneously, which improves the efficiency of the recognition flow and reduces the time spent on gesture interaction. For example, if the first gesture recognition area recognizes the user touching the touch area of a branch option of the current flow node while the second gesture recognition area recognizes a functional interaction gesture, then, if the time progress of the branch option finishes before the time progress of the functional gesture, the functional gesture is executed based on the interaction result of the previous flow node; otherwise it is executed based on the current flow node. When the priorities differ, the second gesture recognition area has the higher priority: the background first recognizes the user gesture in the recognition image and only then checks whether the user's skeleton touches a touch area, which prevents conflicts in the execution of the recognition flow.
Fig. 4 is a first schematic diagram of user interface gesture interaction according to an embodiment of the present application. As shown in fig. 4, two touch areas, a "man" touch area and a "woman" touch area, are set in the first gesture recognition area. When the background detects in the recognition image that the user's skeleton touches the "woman" touch area, the corresponding branch option of the flow node displayed in the non-gesture recognition area is enlarged to a preset size, the "woman" touch area corresponding to the branch option is shown together with its time progress, and the gesture interaction of the flow node is completed once the time progress finishes. For instance, if the "woman" branch option of the "gender" flow node is touched, the branch option is enlarged and its time progress displayed, and the gesture interaction for "gender" completes only after the time progress finishes, which effectively prevents the user from touching a branch option by mistake.
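A hedged sketch of the coordination just described: a dwell timer stands in for the on-screen "time progress", and the two recognition areas are modeled as objects with an `update` method. All class names, the 2-second duration, and the `update` interface are invented for illustration.

```python
import time
from typing import Optional

class DwellProgress:
    """Stand-in for the time progress shown after a touch or a functional gesture."""
    def __init__(self, duration: float = 2.0):
        self.duration = duration
        self.started_at: Optional[float] = None

    def start(self) -> None:
        self.started_at = time.monotonic()

    def fraction(self) -> float:
        if self.started_at is None:
            return 0.0
        return min(1.0, (time.monotonic() - self.started_at) / self.duration)

    def finished(self) -> bool:
        return self.fraction() >= 1.0

def dispatch(observation, first_area, second_area, same_priority: bool) -> None:
    """Feed one gesture observation to the two recognition areas.

    Each area object is assumed to expose update(observation) -> bool,
    returning True when it consumed the gesture."""
    if same_priority:
        # Equal priority: both areas process the observation in the same pass.
        first_area.update(observation)
        second_area.update(observation)
    else:
        # Different priority: the functional-gesture (second) area goes first;
        # the touch-area (first) recognizer only runs if nothing was consumed.
        if not second_area.update(observation):
            first_area.update(observation)
```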
Preferably, fig. 5 is a second schematic diagram of user interface gesture interaction according to an embodiment of the present application. As shown in fig. 5, for gesture interaction in the second gesture recognition area, the functional interaction gestures include a return gesture: at any flow node other than the first one, if the user's gesture in the recognition image is recognized as the return gesture, the time progress of the return gesture is displayed in the second gesture recognition area, and after the time progress finishes the flow returns from the current flow node to the previous flow node. For example, if the user makes a return gesture, its time progress is displayed and, once it finishes, the flow returns from the current flow node ("exercise part") to the previous flow node ("gender").
Preferably, fig. 6 is a third schematic diagram of user interface gesture interaction according to an embodiment of the present application. As shown in fig. 6, for gesture interaction in the second gesture recognition area, the functional interaction gestures further include an execution gesture: after the last flow node is completed, when it is detected that the user makes the execution gesture, the time progress of the execution gesture is displayed, and the planned course is executed once the time progress finishes. For example, after the last flow node "immediate training" is completed, detecting the execution gesture displays its time progress, and the planned course is executed when the progress finishes.
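The inter-node navigation driven by the return and execution gestures can be pictured as a small state machine; the node names and the `generate_plan` callback below are assumptions made for the sketch, not part of the disclosure.

```python
from typing import Callable, List

class FlowController:
    """Tracks which flow node is current and reacts to completed gestures."""

    def __init__(self, nodes: List[str]):
        self.nodes = nodes                 # e.g. ["gender", "exercise part", "immediate training"]
        self.index = 0                     # index of the current flow node
        self.completed = [False] * len(nodes)

    def on_branch_confirmed(self) -> None:
        """A branch option's time progress finished in the first area."""
        self.completed[self.index] = True
        if self.index < len(self.nodes) - 1:
            self.index += 1

    def on_return_gesture(self) -> None:
        """Return gesture confirmed; only honoured after the first flow node."""
        if self.index > 0:
            self.index -= 1

    def on_execute_gesture(self, generate_plan: Callable[[], None]) -> None:
        """Execution gesture confirmed; only acts once every node is done."""
        if all(self.completed):
            generate_plan()
```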
Step S106, displaying the flow node of the planning content in the non-gesture recognition area, and determining the display state of the flow node according to the gesture interaction result of the gesture recognition area.
It should be noted that the non-gesture recognition area may display some or all of the flow nodes of the planned content. The display state of the current flow node is determined by the gesture interaction result of the gesture recognition area, and a change of display state indicates whether the user's gesture interaction has been completed. After the gesture interaction of all flow nodes of the planned content is completed, the background generates the corresponding planned course from the confirmed node content; for example, fitness plan content yields a workout course, and study plan content yields a study schedule.
Specifically, as shown in fig. 4, the current flow node of the gesture interaction is displayed in the non-gesture recognition area in the first interaction state and, once the current flow node has completed its gesture interaction, in the second interaction state, where the first interaction state is a gray icon state and the second interaction state is a highlighted icon state bearing a confirmation mark.
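The two display states can be modeled as a small enum; the rendering itself (gray icon versus highlighted icon with a confirmation mark) is left to the UI layer, and the names below are assumptions for illustration.

```python
from enum import Enum
from typing import Dict

class NodeState(Enum):
    FIRST = "gray icon"                        # first interaction state
    SECOND = "highlighted icon + check mark"   # second interaction state

def node_display_states(nodes: Dict[str, bool]) -> Dict[str, NodeState]:
    """Map each flow node (name -> interaction completed?) to the display state
    shown in the non-gesture recognition area."""
    return {name: NodeState.SECOND if done else NodeState.FIRST
            for name, done in nodes.items()}

# Example: "gender" confirmed, "exercise part" still pending
states = node_display_states({"gender": True, "exercise part": False})
```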
Further, fig. 7 is a schematic diagram of displaying a matching planned course according to an embodiment of the present application: when all flow nodes displayed in the non-gesture recognition area have completed gesture interaction, a hover interface is generated and the matching planned course is displayed to the user in that hover interface.
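As a toy sketch (not taken from the patent) of producing the matching planned course once the required flow nodes are confirmed: the course format, the `require_all` switch, and the hover-display stand-in below are all invented for illustration.

```python
from typing import Dict, Optional

def generate_matching_course(choices: Dict[str, Optional[str]],
                             require_all: bool = True) -> Optional[dict]:
    """Build a planned course from confirmed flow-node choices.

    `choices` maps node name -> chosen branch, with None for unconfirmed nodes.
    With require_all=True the course is produced only when every node is done."""
    confirmed = {node: branch for node, branch in choices.items() if branch is not None}
    if require_all and len(confirmed) < len(choices):
        return None
    # Placeholder payload: a real system would map the choices to concrete content.
    return {"type": "workout plan", "parameters": confirmed}

course = generate_matching_course({"gender": "woman", "exercise part": "abdomen"})
if course is not None:
    print("show in hover interface:", course)   # stand-in for the hover UI
```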
It should be noted that, with the gesture-interacted flow nodes displayed in the non-gesture recognition area, the user may have the workout course generated when only some of the flow nodes are completed (the completed nodes shown in the second interaction state and the rest in the first interaction state), or only after all flow nodes are completed (all nodes shown in the second interaction state).
Through steps S102 to S106 of the embodiment of the present application, the dependence of ordinary human-computer interaction on physical media is removed, the cost of otherwise unnecessary physical devices is reduced, and user interface interaction based on gesture recognition is realized, making the interaction more convenient and intelligent.
It should be noted that the steps illustrated in the above flowcharts may be performed in a computer system, for example as a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps may be performed in an order different from the one shown here.
An embodiment of the present application provides a gesture interaction system of a user interface, and fig. 8 is a structural block diagram of the gesture interaction system of the user interface according to the embodiment of the present application, as shown in fig. 8, the system includes a region dividing module 81, a gesture interaction module 82, and a display matching module 83;
the area dividing module 81 is configured to divide an area of the user interface to obtain a gesture recognition area and a non-gesture recognition area;
the gesture interaction module 82 is configured to perform gesture interaction through a gesture recognition area according to the obtained gesture of the user, where the gesture of the user is obtained through human skeleton recognition;
the display matching module 83 is configured to display a flow node of the planned content in a non-gesture recognition area, and determine a display state of the flow node according to a gesture interaction result of the gesture recognition area.
Through the regional division module 81, the gesture interaction module 82 and the display matching module 83 in the embodiment of the application, the problem that the common man-machine interaction needs to rely on physical media is solved, cost consumption caused by unnecessary physical equipment is reduced, user interface interaction based on gesture recognition is realized, and an interaction form becomes more convenient and intelligent.
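The three modules (81-83) can be pictured as the following standalone classes; their interfaces are assumptions made for this sketch, since the patent only defines the modules' responsibilities, not their APIs.

```python
class RegionDividingModule:
    """Module 81: splits the user interface into the two areas."""
    def divide(self, screen_w: int, screen_h: int, ratio: float = 0.7):
        split = int(screen_w * ratio)
        return (0, 0, split, screen_h), (split, 0, screen_w - split, screen_h)

class GestureInteractionModule:
    """Module 82: turns skeleton-derived observations into interaction events."""
    def __init__(self, recognizer):
        self.recognizer = recognizer        # hypothetical skeleton-based recognizer

    def handle(self, frame):
        return self.recognizer(frame)       # e.g. "touch:woman", "return", "execute"

class DisplayMatchingModule:
    """Module 83: tracks flow-node display states in the non-gesture area."""
    def __init__(self, nodes):
        self.states = {node: "gray icon" for node in nodes}

    def mark_completed(self, node):
        self.states[node] = "highlighted icon + check mark"
```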
The above-described respective modules may be functional modules or program modules, and may be implemented by software or hardware. For modules implemented in hardware, the various modules described above may be located in the same processor; or the above modules may be located in different processors in any combination.
The present embodiment also provides an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, where the transmission device is connected to the processor, and the input/output device is connected to the processor.
It should be noted that specific examples in this embodiment may refer to the examples described in the foregoing embodiments and alternative implementations, which are not repeated here.
In addition, in combination with the gesture interaction method of the user interface in the above embodiment, the embodiment of the application may provide a storage medium for implementation. The storage medium has a computer program stored thereon; the computer program, when executed by a processor, implements a gesture interaction method for any of the user interfaces of the above embodiments.
In one embodiment, a computer device is provided, which may be a terminal. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a gesture interaction method for a user interface. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
In one embodiment, fig. 9 is a schematic diagram of the internal structure of an electronic device according to an embodiment of the present application. As shown in fig. 9, an electronic device, which may be a server, is provided; its internal structure may be as shown in fig. 9. The electronic device includes a processor, a network interface, an internal memory, and a non-volatile memory connected by an internal bus, where the non-volatile memory stores an operating system, computer programs, and a database. The processor provides computing and control capabilities, the network interface communicates with an external terminal through a network connection, the internal memory provides an environment for the operating system and the computer program to run, the computer program is executed by the processor to implement a gesture interaction method of a user interface, and the database stores data.
It will be appreciated by those skilled in the art that the structure shown in fig. 9 is merely a block diagram of a portion of the structure associated with the present application and is not limiting of the electronic device to which the present application is applied, and that a particular electronic device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the various embodiments provided herein may include non-volatile and/or volatile memory. The nonvolatile memory can include Read Only Memory (ROM), programmable ROM (PROM), electrically Programmable ROM (EPROM), electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double Data Rate SDRAM (DDRSDRAM), enhanced SDRAM (ESDRAM), synchronous Link DRAM (SLDRAM), memory bus direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM), among others.
It should be understood by those skilled in the art that the technical features of the above embodiment may be combined arbitrarily, and for simplicity of description, all possible combinations of the technical features of the above embodiment are not described, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The foregoing examples represent only a few embodiments of the present application, which are described in detail but are not to be construed as limiting the scope of the invention. It should be noted that various modifications and improvements apparent to those skilled in the art can be made without departing from the spirit of the present application, and all of them fall within the scope of protection of the present application. Accordingly, the scope of protection of the present application is to be determined by the appended claims.

Claims (9)

1. A method of gesture interaction for a user interface, the method comprising:
dividing the user interface into areas to obtain a gesture recognition area and a non-gesture recognition area;
dividing the gesture recognition area into a first gesture recognition area and a second gesture recognition area; according to the acquired user gestures, gesture interaction is carried out on branch options corresponding to the flow nodes through the first gesture recognition area; according to the acquired user gestures, gesture interaction based on the functional interaction gestures is carried out through the second gesture recognition area, wherein the user gestures are acquired through human skeleton recognition;
and displaying the flow node of the planning content in the non-gesture recognition area, and determining the display state of the flow node according to the gesture interaction result of the gesture recognition area.
2. The method according to claim 1, characterized in that the method comprises:
the first gesture recognition area and the second gesture recognition area have recognition priorities;
under the condition that the recognition priorities are the same, simultaneously executing gesture interaction of the first gesture recognition area and gesture interaction of the second gesture recognition area;
and under the condition that the recognition priorities are different, preferentially executing the gesture interaction of the second gesture recognition area, and then executing the gesture interaction of the first gesture recognition area.
3. The method of claim 1, wherein performing gesture interaction on the branch options corresponding to the flow node through the first gesture recognition area according to the acquired gesture of the user comprises:
when detecting that a user makes a touch gesture toward a branch option of a flow node, enlarging the branch option to a preset size and displaying a corresponding time progress, and completing the gesture interaction of the flow node after the time progress finishes.
4. The method of claim 1, wherein performing gesture interactions based on functional interaction gestures through the second gesture recognition region based on the acquired user gestures comprises:
the functional interaction gesture comprises a return gesture, and, at any flow node other than the first flow node, returning from the current flow node to the previous flow node when the return gesture made by the user is detected.
5. The method of claim 4, wherein returning from the current flow node to the previous flow node upon detecting a user making a return gesture comprises:
when it is detected that the user makes the return gesture, displaying the time progress of the return gesture, and returning from the current flow node to the previous flow node after the time progress finishes.
6. The method of claim 1, wherein performing gesture interactions based on functional interaction gestures through the second gesture recognition region based on the acquired user gestures comprises:
the functional interaction gesture further comprises an execution gesture, and, after the last flow node is completed, the planned course is executed when the execution gesture made by the user is detected.
7. The method of claim 1, wherein the determining the display state of the flow node according to the gesture interaction result of the gesture recognition region comprises:
and displaying the current flow node of the planning content in the non-gesture recognition area in a first interaction state, and displaying the current flow node in a second interaction state if the gesture interaction is completed by the current flow node.
8. The method of claim 7, wherein the first interaction state is a gray icon state and the second interaction state is a highlighted icon state bearing a confirmation mark.
9. A gesture interaction system of a user interface, which is characterized by comprising a region dividing module, a gesture interaction module and a display matching module;
the region dividing module is used for dividing the region of the user interface to obtain a gesture recognition region and a non-gesture recognition region;
the gesture interaction module is used for further dividing the gesture recognition area into a first gesture recognition area and a second gesture recognition area; according to the acquired user gestures, gesture interaction is carried out on branch options corresponding to the flow nodes through the first gesture recognition area; according to the acquired user gestures, gesture interaction based on the functional interaction gestures is carried out through the second gesture recognition area, wherein the user gestures are acquired through human skeleton recognition;
the display matching module is used for displaying the flow node of the planning content in the non-gesture recognition area, and determining the display state of the flow node according to the gesture interaction result of the gesture recognition area.
CN202210587715.6A 2022-05-26 2022-05-26 Gesture interaction method and system of user interface Active CN115202530B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210587715.6A CN115202530B (en) 2022-05-26 2022-05-26 Gesture interaction method and system of user interface

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210587715.6A CN115202530B (en) 2022-05-26 2022-05-26 Gesture interaction method and system of user interface

Publications (2)

Publication Number Publication Date
CN115202530A CN115202530A (en) 2022-10-18
CN115202530B (en) 2024-04-09

Family

ID=83575423

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210587715.6A Active CN115202530B (en) 2022-05-26 2022-05-26 Gesture interaction method and system of user interface

Country Status (1)

Country Link
CN (1) CN115202530B (en)

Citations (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102184021A (en) * 2011-05-27 2011-09-14 华南理工大学 Television man-machine interaction method based on handwriting input and fingertip mouse
EP2610722A2 (en) * 2011-12-29 2013-07-03 Apple Inc. Device, method and graphical user interface for configuring restricted interaction with a user interface
CN104735544A (en) * 2015-03-31 2015-06-24 上海摩软通讯技术有限公司 Video guidance method for mobile terminal
CN104869469A (en) * 2015-05-19 2015-08-26 乐视致新电子科技(天津)有限公司 Method and apparatus for displaying program contents
AU2016100653A4 (en) * 2015-06-07 2016-06-16 Apple Inc. Devices and methods for navigating between user interfaces
CN105915977A (en) * 2015-06-30 2016-08-31 乐视致新电子科技(天津)有限公司 Method for controlling electronic equipment and device thereof
CN106327934A (en) * 2015-07-01 2017-01-11 陆雨竹 Network terminal-based learning guidance device
CN108235091A (en) * 2018-01-25 2018-06-29 青岛海信电器股份有限公司 Smart television and the method that upper content is applied based on access homepage in display equipment
CN108429927A (en) * 2018-02-08 2018-08-21 聚好看科技股份有限公司 The method of virtual goods information in smart television and search user interface
CN108853946A (en) * 2018-07-10 2018-11-23 燕山大学 A kind of exercise guide training system and method based on Kinect
CN108885525A (en) * 2016-11-04 2018-11-23 华为技术有限公司 Menu display method and terminal
CN110780743A (en) * 2019-11-05 2020-02-11 聚好看科技股份有限公司 VR (virtual reality) interaction method and VR equipment
CN111178348A (en) * 2019-12-09 2020-05-19 广东小天才科技有限公司 Method for tracking target object and sound box equipment
CN112351325A (en) * 2020-11-06 2021-02-09 惠州视维新技术有限公司 Gesture-based display terminal control method, terminal and readable storage medium
CN112348942A (en) * 2020-09-18 2021-02-09 当趣网络科技(杭州)有限公司 Body-building interaction method and system
CN112383805A (en) * 2020-11-16 2021-02-19 四川长虹电器股份有限公司 Method for realizing man-machine interaction at television end based on human hand key points
CN112487844A (en) * 2019-09-11 2021-03-12 华为技术有限公司 Gesture recognition method, electronic device, computer-readable storage medium, and chip
CN112612393A (en) * 2021-01-05 2021-04-06 杭州慧钥医疗器械科技有限公司 Interaction method and device of interface function
CN113076836A (en) * 2021-03-25 2021-07-06 东风汽车集团股份有限公司 Automobile gesture interaction method
CN113596590A (en) * 2020-04-30 2021-11-02 聚好看科技股份有限公司 Display device and play control method
CN113760131A (en) * 2021-08-05 2021-12-07 当趣网络科技(杭州)有限公司 Projection touch processing method and device and computer readable storage medium
CN113794917A (en) * 2021-09-15 2021-12-14 海信视像科技股份有限公司 Display device and display control method
CN114489331A (en) * 2021-12-31 2022-05-13 上海米学人工智能信息科技有限公司 Method, apparatus, device and medium for interaction of separated gestures distinguished from button clicks

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8514251B2 (en) * 2008-06-23 2013-08-20 Qualcomm Incorporated Enhanced character input using recognized gestures
KR20100101389A (en) * 2009-03-09 2010-09-17 삼성전자주식회사 Display apparatus for providing a user menu, and method for providing ui applied thereto
KR102035134B1 (en) * 2012-09-24 2019-10-22 엘지전자 주식회사 Image display apparatus and method for operating the same
US9575562B2 (en) * 2012-11-05 2017-02-21 Synaptics Incorporated User interface systems and methods for managing multiple regions
US9298266B2 (en) * 2013-04-02 2016-03-29 Aquifi, Inc. Systems and methods for implementing three-dimensional (3D) gesture based graphical user interfaces (GUI) that incorporate gesture reactive interface objects
KR102209354B1 (en) * 2014-06-20 2021-01-29 엘지전자 주식회사 Video display device and operating method thereof
CN107493495B (en) * 2017-08-14 2019-12-13 深圳市国华识别科技开发有限公司 Interactive position determining method, system, storage medium and intelligent terminal
US10852915B1 (en) * 2019-05-06 2020-12-01 Apple Inc. User interfaces for sharing content with other electronic devices

Patent Citations (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102184021A (en) * 2011-05-27 2011-09-14 华南理工大学 Television man-machine interaction method based on handwriting input and fingertip mouse
EP2610722A2 (en) * 2011-12-29 2013-07-03 Apple Inc. Device, method and graphical user interface for configuring restricted interaction with a user interface
CN104735544A (en) * 2015-03-31 2015-06-24 上海摩软通讯技术有限公司 Video guidance method for mobile terminal
CN104869469A (en) * 2015-05-19 2015-08-26 乐视致新电子科技(天津)有限公司 Method and apparatus for displaying program contents
AU2016100653A4 (en) * 2015-06-07 2016-06-16 Apple Inc. Devices and methods for navigating between user interfaces
CN105915977A (en) * 2015-06-30 2016-08-31 乐视致新电子科技(天津)有限公司 Method for controlling electronic equipment and device thereof
CN106327934A (en) * 2015-07-01 2017-01-11 陆雨竹 Network terminal-based learning guidance device
CN108885525A (en) * 2016-11-04 2018-11-23 华为技术有限公司 Menu display method and terminal
CN108235091A (en) * 2018-01-25 2018-06-29 青岛海信电器股份有限公司 Smart television and the method that upper content is applied based on access homepage in display equipment
CN108429927A (en) * 2018-02-08 2018-08-21 聚好看科技股份有限公司 The method of virtual goods information in smart television and search user interface
CN108853946A (en) * 2018-07-10 2018-11-23 燕山大学 A kind of exercise guide training system and method based on Kinect
CN112487844A (en) * 2019-09-11 2021-03-12 华为技术有限公司 Gesture recognition method, electronic device, computer-readable storage medium, and chip
CN110780743A (en) * 2019-11-05 2020-02-11 聚好看科技股份有限公司 VR (virtual reality) interaction method and VR equipment
CN111178348A (en) * 2019-12-09 2020-05-19 广东小天才科技有限公司 Method for tracking target object and sound box equipment
CN113596590A (en) * 2020-04-30 2021-11-02 聚好看科技股份有限公司 Display device and play control method
CN112348942A (en) * 2020-09-18 2021-02-09 当趣网络科技(杭州)有限公司 Body-building interaction method and system
CN112351325A (en) * 2020-11-06 2021-02-09 惠州视维新技术有限公司 Gesture-based display terminal control method, terminal and readable storage medium
CN112383805A (en) * 2020-11-16 2021-02-19 四川长虹电器股份有限公司 Method for realizing man-machine interaction at television end based on human hand key points
CN112612393A (en) * 2021-01-05 2021-04-06 杭州慧钥医疗器械科技有限公司 Interaction method and device of interface function
CN113076836A (en) * 2021-03-25 2021-07-06 东风汽车集团股份有限公司 Automobile gesture interaction method
CN113760131A (en) * 2021-08-05 2021-12-07 当趣网络科技(杭州)有限公司 Projection touch processing method and device and computer readable storage medium
CN113794917A (en) * 2021-09-15 2021-12-14 海信视像科技股份有限公司 Display device and display control method
CN114489331A (en) * 2021-12-31 2022-05-13 上海米学人工智能信息科技有限公司 Method, apparatus, device and medium for interaction of separated gestures distinguished from button clicks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Human-Machine Interface Design of an NUI-Based Intelligent In-Vehicle Assistant System; Dai Yikang; China Master's Theses Full-text Database, Engineering Science and Technology II, No. 5; full text *

Also Published As

Publication number Publication date
CN115202530A (en) 2022-10-18

Similar Documents

Publication Publication Date Title
JP5784141B2 (en) Handwriting input method by superimposed writing
US8866781B2 (en) Contactless gesture-based control method and apparatus
US8965049B2 (en) Function extension device, function extension method, computer-readable recording medium, and integrated circuit
EP2733629A1 (en) System for associating tag information with images supporting image feature search
CN106201166A (en) A kind of multi-screen display method and terminal
CN112068698A (en) Interaction method and device, electronic equipment and computer storage medium
TW201604719A (en) Method and apparatus of controlling a smart device
CN108932057A (en) Method of controlling operation thereof, device, storage medium and electronic equipment
CN108492349B (en) Processing method, device and equipment for writing strokes and storage medium
CN115202530B (en) Gesture interaction method and system of user interface
CN109085983A (en) Method of controlling operation thereof, device, storage medium and electronic equipment
WO2023231860A1 (en) Input method and apparatus, and device and storage medium
CN108876713B (en) Mapping method and device of two-dimensional template image, terminal equipment and storage medium
CN112148171B (en) Interface switching method and device and electronic equipment
CN114518859A (en) Display control method, display control device, electronic equipment and storage medium
CN114360047A (en) Hand-lifting gesture recognition method and device, electronic equipment and storage medium
CN104951211A (en) Information processing method and electronic equipment
CN114581535A (en) Method, device, storage medium and equipment for marking key points of user bones in image
CN114981769A (en) Information display method and device, medical equipment and storage medium
CN110069126B (en) Virtual object control method and device
CN108307044B (en) A kind of terminal operation method and equipment
CN109725712A (en) Visual Report Forms methods of exhibiting, device, equipment and readable storage medium storing program for executing
CN111523387B (en) Method and device for detecting key points of hands and computer device
CN112925270B (en) Parameter setting method, mobile terminal and computer readable storage medium
CN112837426A (en) Human body information display method, device, equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant