CN113625878B - Gesture information processing method, device, equipment, storage medium and program product - Google Patents


Info

Publication number
CN113625878B
CN113625878B (application CN202110934968.1A)
Authority
CN
China
Prior art keywords
gesture
function button
source
operation instruction
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110934968.1A
Other languages
Chinese (zh)
Other versions
CN113625878A (en)
Inventor
李健龙
张茜
石磊
蒋祥涛
贾振超
曹洪伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202110934968.1A priority Critical patent/CN113625878B/en
Publication of CN113625878A publication Critical patent/CN113625878A/en
Application granted granted Critical
Publication of CN113625878B publication Critical patent/CN113625878B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The disclosure provides a gesture information processing method, a gesture information processing apparatus, an electronic device, a computer-readable storage medium and a computer program product, relating to technical fields such as gesture interaction, smart device control and image recognition. One embodiment of the method comprises the following steps: determining the gesture action and the source limb from collected gesture information; determining the gesture effective region on the smart mirror corresponding to the source limb; and executing the operation instruction corresponding to the gesture action on the function button in the gesture effective region. This embodiment provides a gesture-based control scheme for the smart mirror: the position of the source limb among the user's limbs is mapped to the position of the corresponding gesture effective region among all the mirror regions, and this gesture control scheme improves the convenience of controlling the smart mirror.

Description

Gesture information processing method, device, equipment, storage medium and program product
Technical Field
The disclosure relates to the technical field of data processing, in particular to the technical field of artificial intelligence such as gesture interaction, intelligent device control and image recognition, and especially relates to a gesture information processing method, a gesture information processing device, electronic equipment, a computer readable storage medium and a computer program product.
Background
With the growing degree of electronic informatization and intelligence, users' demand for smart devices is no longer limited to ordinary mobile phones and tablets; more and more devices are expected to be intelligent, such as smart switches, smart mirrors and smart televisions.
Common small-screen devices such as smartphones and smart tablets are usually held in the user's hand and controlled directly by touch, whereas large-screen devices such as smart mirrors and smart televisions are difficult to control by touch.
How to provide a reasonable control scheme for large-screen smart devices is therefore a problem to be solved by those skilled in the art.
Disclosure of Invention
The embodiment of the disclosure provides a gesture information processing method, a gesture information processing device, electronic equipment, a computer readable storage medium and a computer program product.
In a first aspect, an embodiment of the present disclosure provides a gesture information processing method, including: determining gesture actions and source limbs according to the collected gesture information; determining a gesture effective region corresponding to the source limb on the smart mirror; and executing an operation instruction corresponding to the gesture action on the function buttons in the gesture effective area.
In a second aspect, an embodiment of the present disclosure provides a gesture information processing apparatus, including: the gesture information processing unit is configured to determine gesture actions and source limbs according to the acquired gesture information; a gesture validation region determination unit configured to determine a gesture validation region corresponding to the source limb on the smart mirror; and the function button selecting and executing unit is configured to execute operation instructions corresponding to the gesture actions on the function buttons in the gesture effective area.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to implement a gesture information processing method as described in any one of the implementations of the first aspect when executed.
In a fourth aspect, embodiments of the present disclosure provide a non-transitory computer-readable storage medium storing computer instructions for enabling a computer to implement a gesture information processing method as described in any one of the implementations of the first aspect when executed.
In a fifth aspect, embodiments of the present disclosure provide a computer program product comprising a computer program which, when executed by a processor, is capable of implementing a gesture information processing method as described in any one of the implementations of the first aspect.
According to the gesture information processing method provided by the embodiment of the disclosure, firstly, gesture actions and source limbs are determined according to the collected gesture information; then, determining a gesture effective region corresponding to the source limb on the smart mirror; and finally, executing an operation instruction corresponding to the gesture action on the function buttons in the gesture effective area.
The method and apparatus provide a gesture-based control scheme for the smart mirror: they make full use of the source limb with which the user of the smart mirror performs a gesture action and, through the correspondence between the source limb and a gesture effective region on the smart mirror, execute the operation instruction corresponding to the gesture action on the function button in that region. The scheme is particularly suitable for scenarios in which several selectable function buttons are presented at once and distributed over different gesture effective regions: the position of the source limb among the user's limbs is mapped to the position of the corresponding gesture effective region among all the mirror regions, and this gesture control scheme improves the convenience of controlling the smart mirror.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
Other features, objects and advantages of the present disclosure will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the following drawings:
FIG. 1 is an exemplary system architecture in which the present disclosure may be applied;
FIG. 2 is a flowchart of a gesture information processing method according to an embodiment of the present disclosure;
FIG. 3 is a flowchart of another gesture information processing method according to an embodiment of the present disclosure;
FIG. 4 is a flow chart of a method for handling the presence of multiple selectable function buttons in the same gesture effective region provided by an embodiment of the present disclosure;
FIG. 5 is a flow chart of another method provided by an embodiment of the present disclosure for handling the presence of multiple selectable function buttons in the same gesture effective region;
FIG. 6 is a block diagram of a gesture information processing apparatus according to an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of an electronic device adapted to perform a gesture information processing method according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings. Various details of the embodiments are included to facilitate understanding and should be considered merely exemplary; those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness. It should be noted that, in the absence of conflict, the embodiments of the present disclosure and the features of the embodiments may be combined with each other.
In the technical solution of the present disclosure, the collection, storage and application of the user's personal information comply with the provisions of relevant laws and regulations, necessary security measures are taken, and public order and good morals are not violated.
FIG. 1 illustrates an exemplary system architecture 100 in which embodiments of gesture information processing methods, apparatus, electronic devices, and computer readable storage media of the present disclosure may be applied.
As shown in fig. 1, a system architecture 100 may include a terminal device 101 and a user who makes a gesture 102 in front of the terminal device 101 (gesture 102 is used to refer to the corresponding user).
The terminal device 101 may interact with other terminal devices and servers through a network or the like, so that those other devices and servers can provide additional functions to the terminal device 101 locally; the user may also make different gestures to interact with the terminal device 101.
The terminal device 101 may be hardware or software. When the terminal device 101 is hardware, it may be a smart mirror or other various electronic devices having a display screen and functioning as a mirror equivalent to the smart mirror; when the terminal device 101 is software, it may be installed in the above-described electronic device, and it may be implemented as a plurality of software or software modules, or may be implemented as a single software or software module, which is not particularly limited herein.
The terminal device 101 may provide various services through various built-in applications, for example a gesture control application that responds to gestures made by the user. When running the gesture control application, the terminal device 101 can achieve the following effects: first, the terminal device 101 collects the gesture information of the user in front of the mirror using its camera component; then, it determines the gesture action and the source limb from the collected gesture information; next, it determines the gesture effective region on the smart mirror corresponding to the source limb; finally, it executes the operation instruction corresponding to the gesture action on the function button in the gesture effective region.
Since the user needs to obtain real-time feedback on gesture operations in actual use, the gesture information processing method provided in the subsequent embodiments of the present disclosure is generally performed by the terminal device 101 that the user directly uses and that presents the gesture operation feedback, and the gesture information processing apparatus is accordingly also generally disposed in the terminal device 101. However, when the performance of the terminal device 101 is limited, part or all of the computation may be offloaded to a back-end server, with the terminal device 101 only receiving the computation results returned by the server.
It should be understood that the number and size of the terminal devices in FIG. 1 are merely illustrative and may be adapted as needed for the implementation.
Referring to fig. 2, fig. 2 is a flowchart of a gesture information processing method according to an embodiment of the disclosure, where the flowchart 200 includes the following steps:
step 201: determining gesture actions and source limbs according to the collected gesture information;
this step is intended to determine the gesture motion and source limb from the acquired gesture information by the execution subject of the gesture information processing method (e.g., the terminal device 101 shown in fig. 1, or a specific smart mirror).
The gesture action is the specific "content" of the gesture made by the user, for example a one-hand action such as opening the palm, making a fist, pointing forward or holding the palm upright, or a two-hand action such as hooking the two hands together or crossing the arms. The source limb indicates which hand made the gesture: for a one-hand gesture it can be specified as the left hand or the right hand, while for a two-hand gesture both hands are directly determined as the source limb.
The gesture information can be acquired by a camera component directly integrated into the execution subject, or by an independent camera component controlled by the execution subject. The camera component may include a monocular camera, a binocular camera, an infrared/laser scanner, or other components capable of acquiring various types of gesture parameters. Binocular cameras and infrared/laser scanners can additionally extract more accurate spatial information, from which parameters such as the gesture's height, roll and tilt can be determined.
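As a hedged illustration of step 201 (the patent does not specify any concrete recognition pipeline, so the data shapes and the handedness labels below are assumptions), the output of this step can be modeled as a small structure pairing the recognized action with its source limb:

```python
from dataclasses import dataclass

@dataclass
class GestureEvent:
    action: str        # e.g. "fist", "open_palm" -- illustrative labels
    source_limb: str   # "left", "right", or "both"

def to_gesture_event(action: str, handedness: list[str]) -> GestureEvent:
    """Fold per-hand detections into one event; two detected hands
    collapse to the source limb "both", matching the patent's rule that
    a two-hand gesture directly determines both hands as the source limb."""
    source = "both" if len(handedness) == 2 else handedness[0]
    return GestureEvent(action=action, source_limb=source)
```

Downstream steps then only need to consume the `(action, source_limb)` pair, regardless of which camera component produced the raw frames.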
Step 202: determining a gesture effective region corresponding to the source limb on the smart mirror;
Building on step 201, this step is intended for the execution subject to determine the gesture effective region on the smart mirror corresponding to the source limb. All the regions on the smart mirror used to display function buttons or information content may be called the complete information display area; a gesture effective region is a part of that area in which selectable function buttons exist. In other words, function buttons located in different directions are usually dispersed into different gesture effective regions, so that the source limb of the gesture action can be fully exploited to determine the corresponding gesture effective region.
For example, a common scenario is a confirmation popup that presents two function buttons, "confirm" and "cancel", on the lower left and lower right respectively. Under an either-or choice, the two buttons are dispersed into different gesture effective regions according to their relative positions: when the gesture action originates from the left hand, the left region where "confirm" is located is determined as the corresponding gesture effective region based on a left-to-left correspondence; of course, the right region where "cancel" is located could instead be determined as the corresponding region under a preset left-to-right correspondence.
Similarly, in some scenarios that convey new content the user must acknowledge, only a unique "known" function button may be presented at the bottom center of the popup. Since only one function button is then present on the topmost interface, any gesture characterizing the centered position, whether one-handed or two-handed, can map the gesture effective region to the area where the "known" function button is located.
Step 203: and executing an operation instruction corresponding to the gesture action on the function buttons in the gesture effective area.
Building on step 202, this step is intended for the execution subject to execute, on the function button in the gesture effective region, the operation instruction corresponding to the gesture action. Different gesture actions may correspond to different operation instructions, such as a short press, a long press, or several presses in quick succession.
The gesture information processing method provided by this embodiment of the present disclosure offers a gesture-based control scheme for the smart mirror: it makes full use of the source limb with which the user performs a gesture action and, through the correspondence between the source limb and a gesture effective region on the smart mirror, executes the operation instruction corresponding to the gesture action on the function button in that region. It is particularly suitable for scenarios in which several selectable function buttons are presented at once and distributed over different gesture effective regions: the position of the source limb among the user's limbs is mapped to the position of the corresponding gesture effective region among all the mirror regions, and this gesture control scheme improves the convenience of controlling the smart mirror.
Referring to fig. 3, fig. 3 is a flowchart of another gesture information processing method according to an embodiment of the disclosure, where the flowchart 300 includes the following steps:
step 301: determining gesture actions and source limbs according to the collected gesture information;
step 302: judging whether the source limb is a single hand, if so, executing step 303, otherwise, executing step 304;
the method aims at judging whether the source limb is a single hand or two hands so as to better determine the gesture effective area corresponding to the source limb in the current scene through the number of the source limbs.
Step 303: determining a side area corresponding to the source limb on the intelligent mirror as a gesture effective area of the single-hand gesture;
Based on the determination in step 302 that the source limb is a single hand, in a scenario in which different function buttons exist in different side regions of the smart mirror's top-level display interface, the side region of the smart mirror corresponding to the source limb is determined as the gesture effective region of the one-hand gesture; see the "confirm" and "cancel" example under step 202 in the previous embodiment.
In particular, if only a unique function button exists on the top-level display interface of the smart mirror, the selection step of mapping source limbs on different sides to different side regions can be skipped: the region where the unique function button is located is determined as the gesture effective region regardless of which hand the gesture comes from.
Step 304: determining a central area on the intelligent mirror as a gesture effective area of the double-hand gesture;
This step is based on the determination in step 302 that the source limb is both hands. Relative to a one-hand gesture, a two-hand gesture loses the side-selection information, so it is generally used when only one function button (for example, the "known" button in the example given in the previous embodiment) is present on the smart mirror's top-level display interface, and that unique button is generally centered. This step therefore determines the centered region on the smart mirror as the gesture effective region of the two-hand gesture.
In particular, in some scenarios the unique function button may be placed at the lower right or lower left; as long as no other function button is present at the same horizontal level, the two-hand gesture can still be mapped to the region of that unique button.
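The branching of steps 302-304, including the unique-button shortcut, can be sketched as follows. The region names and the `buttons` data shape are illustrative assumptions, not from the patent:

```python
def gesture_effective_region(source_limb: str, buttons: dict[str, str]) -> str:
    """Map the source limb to a region of the mirror's top-level interface.

    `buttons` maps a region name ("left"/"right"/"center") to the label of
    the function button shown there. Follows steps 302-304: a unique button
    is selectable by any gesture; two hands map to the centered region;
    a single hand maps to its same-side region.
    """
    if len(buttons) == 1:          # unique button: any source limb selects it
        return next(iter(buttons))
    if source_limb == "both":      # two-hand gesture -> centered region
        return "center"
    return source_limb             # one hand -> region on the same side
```

Under this sketch, a left-hand gesture against a "confirm"/"cancel" popup resolves to the left region, matching the left-to-left correspondence described above.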
Step 305: and executing an operation instruction corresponding to the gesture action on the function buttons in the gesture effective area.
Compared with the embodiment shown in flow 200, this embodiment adds steps 302-304, which distinguish whether the source limb is one hand or two hands and provide different correspondences and processing accordingly. By combining the actual application scenario with the number of hands involved, the resulting gesture control scheme is more reasonable and better matches the user's habits.
In the usage scenarios of a smart mirror, most interfaces present no more than two function buttons for an either-or choice, but in some scenarios multiple selectable function buttons may exist on the same side. For example, when the user requests a permission, four function buttons may be shown at once: "always allowed", "only allowed once", "not allowed" and "allowed only during application running", with "always allowed" stacked above "only allowed once" on the left and the other two stacked top-down on the right, forming a 2×2 arrangement of function buttons. Since the left or right hand can only provide left-or-right selection information, how to correctly select the unique target function button among the two buttons in the left or right region is addressed by the two different implementations shown in FIG. 4 and FIG. 5:
the flow 400 shown in fig. 4 includes the steps of:
step 401: determining height information of a gesture action according to gesture information;
step 402: determining a function button corresponding to the height information in the gesture effective area as a target function button;
step 403: and executing an operation instruction corresponding to the gesture action on the target function button.
That is, the embodiment illustrated in flow 400 obtains height information from the gesture information and uses the different heights to select the correct function button within the same gesture effective region.
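The height-based disambiguation of flow 400 can be sketched as a nearest-match rule. Both the normalized coordinate convention and the nearest-match criterion are assumptions; the patent only states that height information selects among stacked buttons:

```python
def pick_by_height(candidates: dict[str, float], gesture_height: float) -> str:
    """Choose the button in a gesture effective region whose vertical
    position is closest to the measured height of the gesture action.

    `candidates` maps button label -> assumed normalized vertical position
    (0.0 = bottom of the region, 1.0 = top).
    """
    return min(candidates, key=lambda label: abs(candidates[label] - gesture_height))
```

For the 2×2 permission popup above, a left-hand gesture held near the upper button would thus resolve to "always allowed" rather than "only allowed once".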
The flow 500 shown in FIG. 5 includes the following steps:
step 501: generating an input prompt of button selection information;
step 502: receiving button selection information input according to an input prompt;
step 503: determining a function button corresponding to the button selection information in the gesture effective area as a target function button;
step 504: and executing an operation instruction corresponding to the gesture action on the target function button.
That is, when the smart mirror finds that multiple selectable function buttons exist in the same region, it generates an input prompt for button selection information and presents it to the user, receives the button selection information the user additionally provides according to the prompt, and then determines the target function button from that additional information.
Comparing this with the implementation shown in FIG. 4: when the execution subject can determine height information from the collected gesture information and the function buttons within the same gesture effective region differ in height, the height information of the gesture action suffices to make the selection; when height information cannot be determined, or the buttons in the same gesture effective region do not differ in height, the user can be asked to input additional button selection information to determine the target function button more accurately.
On the basis of any of the above embodiments, the execution subject may often receive operation instructions through multiple channels, for example at least one of gesture operation, voice input and physical key input. If two operation instructions acquired within a preset duration are the same instruction, the duplicate can be eliminated by not processing the later one. For example, if the user issues the instruction to select the left "confirm" function button both through a left-hand gesture and by voice, the later instruction is discarded because the earlier one has already been executed, thereby avoiding misoperation.
If the two operation instructions acquired within the preset duration are different and conflicting, for example the user selects the left "confirm" button by gesture and the right "cancel" button by voice, an instruction query prompt can be generated to obtain a reconfirmed operation instruction and avoid misoperation.
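The deduplication and conflict rules above can be sketched as a small reconciliation function. The one-second default window and the string return conventions are illustrative assumptions — the patent only specifies a "preset duration":

```python
def reconcile(first: str, second: str, dt: float, window: float = 1.0) -> str:
    """Resolve two operation instructions that arrive `dt` seconds apart.

    Outside the preset window the instructions are independent; inside it,
    an identical later instruction is dropped (already executed), and a
    conflicting one triggers an instruction query prompt for reconfirmation.
    """
    if dt > window:
        return "execute_both"   # far enough apart: treat independently
    if first == second:
        return "drop_second"    # duplicate within the window: deduplicate
    return "query_user"         # conflicting within the window: reconfirm
```

For example, "confirm" by left-hand gesture followed 0.3 s later by "confirm" by voice is deduplicated, while "confirm" followed by "cancel" raises the query prompt.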
Further, since the execution subject may be placed in a gym or another setting where other users are active nearby, each execution subject can be set to match a unique user while it is in the state of receiving operation instructions, or to assess the confidence that a collected operation instruction comes from the correct user (for example via voiceprint, sound intensity, or whether it originates from a preset range), so as to prevent misoperation caused by surrounding people or other unintentional actions.
To deepen understanding, a concrete implementation is given below, taking a fitness smart mirror installed in a user's home as the specific execution subject and combining it with the user's workout scenario:
1) The smart mirror responds to a voice power-on instruction issued by the user, switches to the working state, and presents a multi-function selection interface for the user to choose from;
2) The smart mirror responds to the user's voice selection of the gymnastics function and triggers the first-screen interface of that function;
3) The first-screen interface triggered by the gymnastics function presents the user with exercise prompts and precautions in the form of an information popup; the popup displays prompt content that can be browsed by scrolling down, and a unique "read" function button is presented at the bottom center of the popup;
4) After reading the prompts and the content to be noted, the user makes a two-hand gesture of clenching both fists facing each other;
5) The smart mirror collects, through its camera component, the user's gesture action of clenching both fists facing each other;
6) The smart mirror determines, according to a preset gesture action recognition rule, that the fist-clenching corresponds to a confirmation operation; determines the region at the lower center as the current gesture effective region according to the preset rule that a two-hand gesture corresponds to the region of the unique centered button; and performs a click operation on the "read" function button in that gesture effective region;
7) After responding to the "read" function button being clicked, the smart mirror presents the user with exercise actions to imitate;
8) After exercising for 20 minutes following the displayed actions, the user wants to finish and therefore issues a voice operation instruction to end the workout;
9) According to the received voice end instruction, the smart mirror pops up a confirmation popup for ending; at the bottom of the popup, the "confirm" function button is on the left and the "cancel" function button is on the right;
10) The user then makes a one-hand thumbs-up gesture with the left hand;
11) The smart mirror determines, according to the preset gesture action recognition rule, that the raised thumb corresponds to a confirmation operation; determines the left region at the bottom as the current gesture effective region according to the preset rule that a one-hand gesture corresponds to the same-side region; and performs a click operation on the "confirm" function button in that gesture effective region, thereby exiting the gymnastics function interface.
It should be noted that although the above examples refer to two-hand and left/right-hand groupings, the scheme does not require the gesture to come from a real hand: a prosthetic hand or a specially customized object capable of making similar gestures also applies. Moreover, when setting the region correspondences, groups of users having only one hand, or additional limbs capable of gesturing, can be fully considered and the scheme adjusted adaptively.
With further reference to fig. 6, as an implementation of the method shown in the foregoing figures, the present disclosure provides an embodiment of a gesture information processing apparatus, where the apparatus embodiment corresponds to the method embodiment shown in fig. 2, and the apparatus may be specifically applied to various electronic devices.
As shown in fig. 6, the gesture information processing apparatus 600 of the present embodiment may include: a gesture information processing unit 601, a gesture effective region determining unit 602, and a function button selecting and executing unit 603. The gesture information processing unit 601 is configured to determine a gesture action and a source limb according to the collected gesture information; the gesture effective region determining unit 602 is configured to determine a gesture effective region corresponding to the source limb on the smart mirror; and the function button selecting and executing unit 603 is configured to execute an operation instruction corresponding to the gesture action on the function buttons in the gesture effective region.
In the present embodiment, in the gesture information processing apparatus 600: the specific processing and technical effects of the gesture information processing unit 601, the gesture effective region determining unit 602, the function button selecting and executing unit 603 may refer to the relevant descriptions of steps 201 to 203 in the corresponding embodiment of fig. 2, and are not repeated herein.
In some optional implementations of the present embodiment, the gesture effective region determination unit 602 may be further configured to:
in response to the source limb being a single hand, determine a side region corresponding to the source limb on the smart mirror as the gesture effective region of the single-hand gesture.
In some optional implementations of the present embodiment, the gesture effective region determination unit is further configured to:
in response to the source limb being two hands, determine a centered region on the smart mirror as the gesture effective region of the two-handed gesture.
In some optional implementations of the present embodiment, the gesture information processing apparatus 600 may further include:
a height information determining unit configured to determine height information of the gesture motion from the gesture information;
correspondingly, the function button selection and execution unit 603 may be further configured to:
determine the function button corresponding to the height information in the gesture effective region as a target function button; and
execute the operation instruction corresponding to the gesture action on the target function button.
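The height-based selection described above can be sketched as follows. This is a hypothetical illustration in Python; the coordinate convention and button layout are assumptions, not part of the disclosure.

```python
# Hypothetical sketch: when the effective region holds several buttons at different
# heights, pick the one whose height span contains the gesture's measured height.

def pick_by_height(buttons, gesture_height):
    """buttons: list of (label, y_top, y_bottom) in screen coordinates, y grows downward.
    Returns the label whose vertical span contains gesture_height; otherwise, as an
    illustrative fallback, the button whose center is closest to the gesture height."""
    for label, y_top, y_bottom in buttons:
        if y_top <= gesture_height <= y_bottom:
            return label
    # Fallback: nearest button center to the gesture height.
    return min(buttons, key=lambda b: abs((b[1] + b[2]) / 2 - gesture_height))[0]

# Illustrative layout of three buttons stacked in one effective region.
buttons = [("Volume", 100, 180), ("Brightness", 200, 280), ("Exit", 300, 380)]
```

For example, a gesture measured at height 240 selects "Brightness" as the target function button.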
In some optional implementations of the present embodiment, the gesture information processing apparatus 600 may further include:
an input prompt generation unit configured to generate an input prompt of button selection information in response to at least two selectable function buttons existing in the gesture effective region;
a button selection information receiving unit configured to receive button selection information input according to an input prompt;
correspondingly, the function button selection and execution unit 603 may be further configured to:
determine the function button corresponding to the button selection information in the gesture effective region as a target function button; and
execute the operation instruction corresponding to the gesture action on the target function button.
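The prompt-and-select flow of these two units can be sketched as follows. The function names and prompt wording are hypothetical; the disclosure does not prescribe a particular input channel for the selection information.

```python
# Hypothetical sketch of the disambiguation flow: when more than one selectable
# button exists in the effective region, prompt for selection information and
# resolve it to a target function button.

def resolve_target(buttons, ask):
    """buttons: labels of selectable buttons in the effective region.
    ask: callable that delivers the prompt and returns the user's selection
    (e.g. spoken or typed). With a single candidate no prompt is needed."""
    if len(buttons) == 1:
        return buttons[0]
    prompt = "Multiple buttons available: " + ", ".join(buttons) + ". Which one?"
    choice = ask(prompt)
    for label in buttons:
        if label.lower() == choice.strip().lower():
            return label
    return None  # unrecognized selection information: no button is clicked
```

Once a target is resolved, the operation instruction corresponding to the gesture action is applied to it, as in the preceding units.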
In some optional implementations of the present embodiment, the gesture information processing apparatus 600 may further include:
the same operation instruction processing unit is configured to respond to the fact that two operation instructions acquired within a preset time period belong to the same operation instruction, and does not process the operation instructions acquired later; the source of the operation instruction comprises at least one of gesture action, voice input and physical key input;
and the conflict operation instruction processing unit is configured to respond to the operation instructions which are acquired in the preset time period and belong to different and conflicting operation instructions, and generate an instruction inquiry prompt.
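The multi-source arbitration performed by these two units can be sketched as follows. The window length and tuple layout are illustrative assumptions only.

```python
# Hypothetical sketch of multi-source instruction arbitration: within a preset
# window, a duplicate instruction is dropped and conflicting ones trigger a query.

PRESET_WINDOW = 1.0  # seconds; illustrative value, not specified by the disclosure

def arbitrate(first, second, window=PRESET_WINDOW):
    """Each instruction is (timestamp, source, command); sources may be
    'gesture', 'voice', or 'key'. Returns how to treat the later instruction."""
    t1, _, cmd1 = first
    t2, _, cmd2 = second
    if t2 - t1 > window:
        return "execute"      # outside the window: handle normally
    if cmd1 == cmd2:
        return "ignore"       # same instruction twice: drop the later one
    return "query_user"       # different, conflicting instructions: ask the user
```

For example, a voice "stop" followed 0.5 s later by a gesture "stop" is treated as one instruction, while a gesture "continue" in the same window would raise an inquiry prompt.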
The gesture information processing apparatus provided by this embodiment offers a gesture-based control mode for the smart mirror: it makes full use of the source limb of the gesture action made by the smart mirror user and, using the correspondence between the source limb and gesture effective regions on the smart mirror, executes the operation instruction corresponding to the gesture action on the function buttons in the gesture effective region. It is particularly suitable for scenes in which several selectable function buttons are provided simultaneously and distributed across different gesture effective regions; that is, the position of the source limb among the user's limbs is mapped to the position of the corresponding gesture effective region within the whole mirror surface.
According to an embodiment of the present disclosure, the present disclosure further provides an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; where the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, cause the at least one processor to implement the gesture information processing method described in any of the above embodiments.
According to an embodiment of the present disclosure, there is also provided a readable storage medium storing computer instructions for enabling a computer to implement the gesture information processing method described in any of the above embodiments when executed.
The disclosed embodiments also provide a computer program product including a computer program that, when executed by a processor, implements the gesture information processing method described in any of the above embodiments.
Fig. 7 illustrates a schematic block diagram of an example electronic device 700 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the apparatus 700 includes a computing unit 701 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the device 700 may also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Various components in device 700 are connected to I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, etc.; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, an optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 701 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 701 performs the respective methods and processes described above, for example, a gesture information processing method. For example, in some embodiments, the gesture information processing method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 700 via ROM 702 and/or communication unit 709. When a computer program is loaded into the RAM 703 and executed by the computing unit 701, one or more steps of the gesture information processing method described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured to perform the gesture information processing method by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuit systems, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out the methods of the present disclosure may be written in any combination of one or more programming languages. Such program code may be provided to a processor or controller of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the drawbacks of difficult management and weak service scalability in traditional physical hosts and virtual private server (VPS) services.
According to the technical scheme of the embodiments of the present disclosure, a gesture-based control mode is provided for the smart mirror: the source limb of the gesture action made by the smart mirror user is fully exploited, and the correspondence between the source limb and gesture effective regions on the smart mirror is used to execute the operation instruction corresponding to the gesture action on the function buttons in the gesture effective region. This is particularly suitable for scenes in which several selectable function buttons are provided simultaneously and distributed across different gesture effective regions; that is, the position of the source limb among the user's limbs is mapped to the position of the corresponding gesture effective region within the whole mirror surface, improving the convenience of controlling the smart mirror by gesture.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel or sequentially or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (12)

1. A gesture information processing method, applied to a smart mirror, comprising:
determining a gesture action and a source limb according to collected gesture information;
determining a gesture effective region corresponding to the source limb on the smart mirror;
executing an operation instruction corresponding to the gesture action on a function button in the gesture effective region;
in response to at least two selectable function buttons with different heights existing in the gesture effective region, determining height information of the gesture action according to the gesture information; correspondingly, the executing the operation instruction corresponding to the gesture action on the function button in the gesture effective region includes: determining the function button corresponding to the height information in the gesture effective region as a target function button; and executing the operation instruction corresponding to the gesture action on the target function button; and
in response to a scene in which the fitness smart mirror is set within a preset range and other users are exercising synchronously, setting, for each fitness smart mirror, a unique matched user from whom operation instructions can be received, or identifying a confidence that each collected operation instruction comes from the correct user; the confidence is calculated from at least one of a voiceprint, a sound intensity, and whether the instruction originates within the preset range.
2. The method of claim 1, wherein the determining, on the smart mirror, the gesture effective region corresponding to the source limb comprises:
in response to the source limb being a single hand, determining a side region corresponding to the source limb on the smart mirror as the gesture effective region of the single-hand gesture.
3. The method of claim 1, wherein the determining, on the smart mirror, the gesture effective region corresponding to the source limb comprises:
in response to the source limb being two hands, determining a centered region on the smart mirror as the gesture effective region of the two-handed gesture.
4. The method of claim 1, wherein, in response to at least two selectable function buttons existing in the gesture effective region, the method further comprises:
generating an input prompt of button selection information;
receiving button selection information input according to the input prompt;
correspondingly, the executing the operation instruction corresponding to the gesture action on the function button in the gesture effective region includes:
determining a function button corresponding to the button selection information in the gesture effective region as a target function button;
and executing the operation instruction corresponding to the gesture action on the target function button.
5. The method of any of claims 1-4, further comprising:
in response to two operation instructions collected within a preset time period belonging to the same operation instruction, not processing the later-collected operation instruction, wherein the source of an operation instruction includes at least one of a gesture action, a voice input, and a physical key input; and
in response to two operation instructions collected within the preset time period being different and conflicting, generating an instruction inquiry prompt.
6. A gesture information processing apparatus, applied to a smart mirror, comprising:
the gesture information processing unit is configured to determine gesture actions and source limbs according to the acquired gesture information;
a gesture validation region determination unit configured to determine a gesture validation region corresponding to the source limb on the smart mirror;
a function button selecting and executing unit configured to execute an operation instruction corresponding to the gesture motion on the function button in the gesture effective region;
a height information determining unit configured to determine height information of the gesture action according to the gesture information in response to at least two selectable function buttons with different heights existing in the gesture effective region; correspondingly, the function button selecting and executing unit is further configured to: determine the function button corresponding to the height information in the gesture effective region as a target function button; and execute the operation instruction corresponding to the gesture action on the target function button;
an anti-misoperation capturing unit configured to, in response to a scene in which the fitness smart mirror is set within a preset range and other users are exercising synchronously, set, for each fitness smart mirror, a unique matched user from whom operation instructions can be received, or identify a confidence that each collected operation instruction comes from the correct user; the confidence is calculated from at least one of a voiceprint, a sound intensity, and whether the instruction originates within the preset range.
7. The apparatus of claim 6, wherein the gesture validation area determination unit is further configured to:
in response to the source limb being a single hand, determining a side region corresponding to the source limb on the smart mirror as the gesture effective region of the single-hand gesture.
8. The apparatus of claim 6, wherein the gesture validation area determination unit is further configured to:
in response to the source limb being two hands, determining a centered region on the smart mirror as the gesture effective region of the two-handed gesture.
9. The apparatus of claim 6, further comprising:
an input prompt generation unit configured to generate an input prompt of button selection information in response to at least two selectable function buttons existing in the gesture effective region;
a button selection information receiving unit configured to receive button selection information input according to the input prompt;
correspondingly, the function button selection and execution unit is further configured to:
determining a function button corresponding to the button selection information in the gesture effective region as a target function button;
and executing an operation instruction corresponding to the gesture on the target function button.
10. The apparatus of any of claims 6-9, further comprising:
the same operation instruction processing unit is configured to respond to the fact that two operation instructions acquired within a preset time period belong to the same operation instruction, and does not process the operation instructions acquired later; the source of the operation instruction comprises at least one of gesture action, voice input and physical key input;
and the conflict operation instruction processing unit is configured to respond to the operation instructions which are acquired in the preset time period and belong to different and conflicting operation instructions, and generate an instruction inquiry prompt.
11. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the gesture information processing method of any of claims 1-5.
12. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the gesture information processing method of any one of claims 1-5.
CN202110934968.1A 2021-08-16 2021-08-16 Gesture information processing method, device, equipment, storage medium and program product Active CN113625878B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110934968.1A CN113625878B (en) 2021-08-16 2021-08-16 Gesture information processing method, device, equipment, storage medium and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110934968.1A CN113625878B (en) 2021-08-16 2021-08-16 Gesture information processing method, device, equipment, storage medium and program product

Publications (2)

Publication Number Publication Date
CN113625878A CN113625878A (en) 2021-11-09
CN113625878B true CN113625878B (en) 2024-03-26

Family

ID=78385527

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110934968.1A Active CN113625878B (en) 2021-08-16 2021-08-16 Gesture information processing method, device, equipment, storage medium and program product

Country Status (1)

Country Link
CN (1) CN113625878B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104461006A (en) * 2014-12-17 2015-03-25 卢晨华 Internet intelligent mirror based on natural user interface
CN105681859A (en) * 2016-01-12 2016-06-15 东华大学 Man-machine interaction method for controlling smart TV based on human skeletal tracking
KR20160130085A (en) * 2015-04-30 2016-11-10 모다정보통신 주식회사 Exercising Method and System Using a Smart Mirror
CN106569596A (en) * 2016-10-20 2017-04-19 努比亚技术有限公司 Gesture control method and equipment
CN112639714A (en) * 2020-03-20 2021-04-09 华为技术有限公司 Method, device and system for executing gesture instruction and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI475496B (en) * 2012-10-16 2015-03-01 Wistron Corp Gesture control device and method for setting and cancelling gesture operating region in gesture control device
WO2019104519A1 (en) * 2017-11-29 2019-06-06 Entit Software Llc Gesture buttons

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104461006A (en) * 2014-12-17 2015-03-25 卢晨华 Internet intelligent mirror based on natural user interface
KR20160130085A (en) * 2015-04-30 2016-11-10 모다정보통신 주식회사 Exercising Method and System Using a Smart Mirror
CN105681859A (en) * 2016-01-12 2016-06-15 东华大学 Man-machine interaction method for controlling smart TV based on human skeletal tracking
CN106569596A (en) * 2016-10-20 2017-04-19 努比亚技术有限公司 Gesture control method and equipment
CN112639714A (en) * 2020-03-20 2021-04-09 华为技术有限公司 Method, device and system for executing gesture instruction and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Internet+" Home Living Promotes the Vigorous Development of the Home Internet; Ao Li; Information and Communications Technology (03); full text *

Also Published As

Publication number Publication date
CN113625878A (en) 2021-11-09

Similar Documents

Publication Publication Date Title
KR102649254B1 (en) Display control method, storage medium and electronic device
CN106845335B (en) Gesture recognition method and device for virtual reality equipment and virtual reality equipment
EP3086275A1 (en) Numerical value transfer method, terminal, cloud server, computer program and recording medium
WO2015188614A1 (en) Method and device for operating computer and mobile phone in virtual world, and glasses using same
US9965039B2 (en) Device and method for displaying user interface of virtual input device based on motion recognition
EP3293620A1 (en) Multi-screen control method and system for display screen based on eyeball tracing technology
CN109240576A (en) Image processing method and device, electronic equipment, storage medium in game
CN108616712B (en) Camera-based interface operation method, device, equipment and storage medium
CN113792278A (en) Method and device for displaying application and picture and electronic equipment
CN104238726A (en) Intelligent glasses control method, intelligent glasses control device and intelligent glasses
US20210072818A1 (en) Interaction method, device, system, electronic device and storage medium
CN107479710B (en) Intelligent mirror and control method, device, equipment and storage medium thereof
WO2022222510A1 (en) Interaction control method, terminal device, and storage medium
JP2012238293A (en) Input device
CN108665510B (en) Rendering method and device of continuous shooting image, storage medium and terminal
CN113655929A (en) Interface display adaptation processing method and device and electronic equipment
CN113628239B (en) Display optimization method, related device and computer program product
CN104866194B (en) Image searching method and device
CN105824534B (en) A kind of information processing method and electronic equipment
CN109753148A (en) A kind of control method, device and the controlling terminal of VR equipment
WO2021115097A1 (en) Pupil detection method and related product
WO2016131181A1 (en) Fingerprint event processing method, apparatus, and terminal
CN113559501A (en) Method and device for selecting virtual units in game, storage medium and electronic equipment
CN113625878B (en) Gesture information processing method, device, equipment, storage medium and program product
CN114756162B (en) Touch system and method, electronic device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant