CN111813321A - Gesture control method and related device


Info

Publication number: CN111813321A
Authority: CN (China)
Prior art keywords: gesture, action, indication domain, target, judging whether
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202010809762.1A
Other languages: Chinese (zh)
Inventor: 崔永明
Current Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202010809762.1A
Publication of CN111813321A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/28Recognition of hand or arm movements, e.g. recognition of deaf sign language

Abstract

The embodiments of the application disclose a gesture control method and a related device. The method includes the following steps: if a first indication domain is in an initialization state, setting the first indication domain according to a collected control action; if the first indication domain is not in the initialization state, determining a gesture detection result for the gesture image acquired at the current time; if the gesture detection result is a valid gesture, judging whether the gesture detection result is a target gesture; if the gesture detection result is not the target gesture, setting a second indication domain according to a collected gesture action B1; if the gesture action B1 and the control action satisfy a preset association relationship, determining a target operation instruction according to the first indication domain and the second indication domain, executing the target operation instruction, and resetting only the second indication domain; and if the gesture detection result is the target gesture, resetting both the first indication domain and the second indication domain. The embodiments of the application help improve the convenience and flexibility of gesture control on the terminal.

Description

Gesture control method and related device
Technical Field
The present application relates to the field of gesture control technologies, and in particular, to a gesture control method and a related device.
Background
At present, in the contactless (air) gesture control technologies supported by mobile phones and similar devices, the phone is generally prompted to execute a target operation by detecting an air gesture action of the user. If the target operation needs to be executed continuously, the user must repeat the gesture action, and the phone must repeatedly detect it and execute the target operation. When the air gesture action is complex, this continuous control process takes a long time and exhibits a noticeable stutter, which degrades the user experience and makes it difficult to meet usage requirements.
Disclosure of Invention
The embodiment of the application provides a gesture control method and a related device, so as to improve the fluency and flexibility of gesture recognition control of a terminal.
In a first aspect, an embodiment of the present application provides a gesture control method, including:
judging whether the first indication domain is in an initialization state or not;
if the first indication domain is in the initialization state, setting the first indication domain according to the collected control action;
if the first indication domain is not in the initialization state, determining a gesture detection result of the gesture image acquired at the current time, and judging whether the gesture detection result is no gesture, an invalid gesture, or a valid gesture, where the gesture image acquired at the current time is the most recently acquired gesture image;
if the gesture detection result is a valid gesture, judging whether the gesture detection result is a target gesture;
if the gesture detection result is not the target gesture, setting a second indication domain according to the collected gesture action B1;
if it is detected that the gesture action B1 and the control action satisfy a preset association relationship, determining a target operation instruction according to the first indication domain and the second indication domain, executing the target operation instruction, resetting only the second indication domain, and returning to the step of judging whether the first indication domain is in the initialization state;
if the gesture detection result is the target gesture, resetting the first indication domain and the second indication domain, and returning to the step of judging whether the first indication domain is in the initialization state.
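To make the claimed control flow easier to follow, a minimal Kotlin sketch of the first-aspect loop is given below. It is an illustrative reading of the claim language, not the patented implementation; all type and function names (GestureResult, GestureController, satisfiesAssociation, resolveInstruction) are our assumptions.

```kotlin
// Illustrative sketch of the first-aspect control loop; all names are assumptions.
enum class GestureResult { NO_GESTURE, INVALID, VALID_OTHER, TARGET }

class GestureController {
    // First/second indication domains; null represents the initialization state.
    private var firstField: String? = null
    private var secondField: String? = null

    // Called once per gesture image (the most recently acquired frame).
    fun onDetection(result: GestureResult, collectedAction: String) {
        if (firstField == null) {
            // Initialization state: set the first indication domain from the control action.
            firstField = collectedAction
            return
        }
        when (result) {
            GestureResult.TARGET -> {          // target gesture: reset BOTH domains
                firstField = null
                secondField = null
            }
            GestureResult.VALID_OTHER -> {     // valid gesture other than the target gesture
                secondField = collectedAction  // set the second domain from gesture action B1
                if (satisfiesAssociation(firstField!!, secondField!!)) {
                    execute(resolveInstruction(firstField!!, secondField!!))
                    secondField = null         // reset ONLY the second indication domain
                }
            }
            else -> Unit                       // no gesture / invalid gesture: keep sampling
        }
    }

    // Placeholder pairing check and instruction lookup (cf. Table 1 later in this text).
    private fun satisfiesAssociation(a: String, b: String): Boolean =
        a == "palm" && b == "palm-to-back"

    private fun resolveInstruction(a: String, b: String): String = "SLIDE_DOWN"

    private fun execute(instruction: String) = println("execute $instruction")
}
```

In this sketch the first indication domain stays set across executions, which is what lets a single control action drive repeated operations; only the target gesture clears both domains.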
In a second aspect, an embodiment of the present application provides a gesture control method, including:
displaying first page content on a current interface of a screen of the local device;
judging whether a gesture action A1 of the user is detected;
if the gesture action A1 has not been detected, setting a first indication field according to the collected gesture action A1;
if the gesture action A1 is detected, judging whether a target gesture is detected during the process of acquiring a gesture action B1;
if the target gesture is not detected, setting a second indication field according to the collected gesture action B1;
if it is detected that the gesture action B1 and the gesture action A1 satisfy a preset association relationship, executing a preset operation on the first page content according to the gesture action A1 and the gesture action B1, resetting only the gesture action B1, and returning to the step of judging whether the gesture action A1 of the user is detected;
if the target gesture is detected, resetting the gesture action A1 and the gesture action B1, displaying prompt information indicating that gesture action recognition is to be performed again, and returning to the step of judging whether the gesture action A1 of the user is detected.
In a third aspect, an embodiment of the present application provides a gesture control apparatus, including:
The judging unit is used for judging whether the first indication domain is in an initialization state or not;
the first setting unit is used for setting the first indication domain according to the acquired control action if the first indication domain is in the initialization state;
a second setting unit for performing the following operations:
if the first indication domain is not in the initialization state, determining a gesture detection result of the gesture image acquired at the current time, and judging whether the gesture detection result is no gesture, an invalid gesture, or a valid gesture, where the gesture image acquired at the current time is the most recently acquired gesture image;
if the gesture detection result is a valid gesture, judging whether the gesture detection result is a target gesture;
if the gesture detection result is not the target gesture, setting a second indication domain according to the collected gesture action B1;
if it is detected that the gesture action B1 and the control action satisfy a preset association relationship, determining a target operation instruction according to the first indication domain and the second indication domain, executing the target operation instruction, resetting only the second indication domain, and returning to the step of judging whether the first indication domain is in the initialization state;
if the gesture detection result is the target gesture, resetting the first indication domain and the second indication domain, and returning to the step of judging whether the first indication domain is in the initialization state.
In a fourth aspect, an embodiment of the present application provides a gesture control apparatus, including:
a display unit, configured to display first page content on a current interface of a screen of the local device;
a judging unit, configured to judge whether a gesture action A1 of the user is detected;
a first setting unit, configured to set a first indication field according to the collected gesture action A1 if the gesture action A1 has not been detected;
a second setting unit, configured to perform the following operations:
if the gesture action A1 is detected, judging whether a target gesture is detected during the process of acquiring a gesture action B1;
if the target gesture is not detected, setting a second indication field according to the collected gesture action B1;
if it is detected that the gesture action B1 and the gesture action A1 satisfy a preset association relationship, executing a preset operation on the first page content according to the gesture action A1 and the gesture action B1, resetting only the gesture action B1, and returning to the step of judging whether the gesture action A1 of the user is detected;
if the target gesture is detected, resetting the gesture action A1 and the gesture action B1, displaying prompt information indicating that gesture action recognition is to be performed again, and returning to the step of judging whether the gesture action A1 of the user is detected.
In a fifth aspect, embodiments of the present application provide a terminal, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the program includes instructions for performing the steps of any of the methods of the first aspect of the embodiments of the present application.
In a sixth aspect, an embodiment of the present application provides a chip, including: a processor, configured to call and run a computer program from a memory, so that a device equipped with the chip performs some or all of the steps described in any method of the first aspect of the embodiments of the present application.
In a seventh aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program for electronic data exchange, where the computer program causes a computer to perform some or all of the steps described in any method of the first aspect of the embodiments of the present application.
In an eighth aspect, the present application provides a computer program, wherein the computer program is operable to cause a computer to perform some or all of the steps as described in any of the methods of the first aspect of the embodiments of the present application. The computer program may be a software installation package.
It can be seen that, in the embodiments of the present application, the terminal first judges whether the first indication domain is in an initialization state; if so, it sets the first indication domain according to the collected control action; if not, it determines a gesture detection result for the gesture image acquired at the current time and judges whether the result is no gesture, an invalid gesture, or a valid gesture, where the gesture image acquired at the current time is the most recently acquired gesture image. If the result is a valid gesture, the terminal judges whether it is the target gesture; if it is not the target gesture, the terminal sets the second indication domain according to the collected gesture action B1. If it detects that the gesture action B1 and the control action satisfy the preset association relationship, the terminal determines a target operation instruction according to the first indication domain and the second indication domain, executes the instruction, resets only the second indication domain, and returns to the step of judging whether the first indication domain is in the initialization state; if the result is the target gesture, it resets both indication domains and returns to the same step. Thus, in the continuous control process in which the terminal executes the target operation instruction through the control action and the gesture action B1, the gesture control process can be quickly reset by the target gesture alone; the steps are simple, the time consumed is short, and the convenience and flexibility of gesture control on the terminal are improved.
Drawings
The drawings needed for describing the embodiments or the prior art are briefly introduced below.
Fig. 1A is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 1B is a schematic architecture diagram of a software and hardware system provided with an Android system according to an embodiment of the present application;
fig. 1C is a schematic structural diagram of another terminal provided in the embodiment of the present application;
fig. 2A is a schematic flowchart of a gesture control method according to an embodiment of the present disclosure;
FIG. 2B is an illustration of example air gestures provided by an embodiment of the present application;
fig. 2C is a schematic view of an application scenario of a slide-up operation according to an embodiment of the present application;
fig. 2D is a schematic view of an application scenario of an open wallet function provided in an embodiment of the present application;
FIG. 3A is a schematic flowchart of another gesture control method provided in the embodiments of the present application;
fig. 3B is a schematic diagram illustrating an instruction entry of an upward-sliding operation performed by turning a palm to a back of a hand according to an embodiment of the present application;
FIG. 3C is a schematic diagram illustrating a control process of a reset gesture performed by a fist-making gesture according to an embodiment of the present disclosure;
FIG. 3D is a logic diagram of an implementation of a state machine for implementing gesture recognition according to an embodiment of the present application;
fig. 4 is a block diagram illustrating functional units of a gesture control apparatus according to an embodiment of the present disclosure;
FIG. 5 is a block diagram illustrating functional units of another gesture control apparatus according to an embodiment of the present disclosure;
fig. 6 is a block diagram illustrating functional units of a gesture control apparatus according to an embodiment of the present disclosure;
fig. 7 is a block diagram illustrating functional units of another gesture control apparatus according to an embodiment of the present disclosure.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
In order to better understand the scheme of the embodiments of the present application, the following first introduces the related terms and concepts that may be involved in the embodiments of the present application.
Gesture recognition
Gesture recognition is the problem of recognizing human gestures by means of mathematical algorithms. Gestures may come from the movement of any part of a person's body, but the term generally refers to movements of the face and hands. Users can control or interact with a device using simple gestures, letting the computer understand human behavior. The core technologies involved are gesture segmentation, gesture analysis, and gesture recognition.
State machine
A state machine, short for finite state automaton, is a mathematical model abstracted from the operating rules of real-world things. It is composed of a state register and combinational logic circuits; it can transition between predetermined states according to control signals, coordinate the actions of related signals, and complete specific operations.
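As a concrete illustration of this definition, a minimal Kotlin sketch of a finite state machine is shown below: an enum plays the role of the state register, and a transition function plays the role of the combinational logic. The states and signals are invented purely for illustration.

```kotlin
// Minimal finite state machine: a state register plus combinational transition logic.
// States and signals here are invented for illustration.
enum class State { IDLE, ARMED, TRIGGERED }
enum class Signal { PREPARE, FIRE, RESET }

class StateMachine {
    var state: State = State.IDLE      // the "state register"
        private set

    // Maps (current state, control signal) to the next state.
    fun onSignal(signal: Signal) {
        state = when {
            signal == Signal.RESET -> State.IDLE
            state == State.IDLE && signal == Signal.PREPARE -> State.ARMED
            state == State.ARMED && signal == Signal.FIRE -> State.TRIGGERED
            else -> state              // undefined transitions keep the current state
        }
    }
}
```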
Image front end IFE
The IFE is the unit in the ISP that processes MIPI RAW image data.
Lightweight image front end IFE_lite
IFE_lite is a lightweight IFE interface in the ISP.
Referring to fig. 1A, a block diagram of a terminal 10 according to an exemplary embodiment of the present application is shown. The terminal 10 may be a communication-capable electronic device that may include various wireless communication-capable handheld devices, vehicle-mounted devices, wearable devices, computing devices, or other processing devices connected to a wireless modem, as well as various forms of User Equipment (UE), Mobile Station (MS), terminal Equipment (terminal device), and so forth. The terminal 10 in the present application may include one or more of the following components: a processor 110, a memory 120, and an input-output device 130.
Processor 110 may include one or more processing cores. The processor 110 interfaces with various components throughout the terminal 10 using various interfaces and lines, and performs the various functions of the terminal 10 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and invoking data stored in the memory 120. The processor 110 may include one or more processing units; for example, the processor 110 may include a Central Processing Unit (CPU), an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The controller may be the neural center and command center of the terminal 10; it can generate operation control signals according to instruction operation codes and timing signals to control instruction fetching and execution. The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU is used for rendering and drawing display content; the modem is used for handling wireless communications. The digital signal processor is used for processing digital signals, including digital image signals and other digital signals. The NPU is a neural-network (NN) computing processor that processes input information quickly by borrowing the structure of biological neural networks, for example the transfer mode between neurons of the human brain, and can also learn continuously by itself. Applications such as intelligent recognition of the terminal 10 can be implemented through the NPU, for example image recognition, face recognition, speech recognition, and text understanding. A memory may be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache, which may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to use the instruction or data again, it can be called directly from this memory, which avoids repeated accesses, reduces the waiting time of the processor 110, and improves system efficiency.
It is to be understood that the processor 110 may be mapped to a System on Chip (SoC) in an actual product, that the processing units and/or interfaces may not be integrated into the processor 110, and that the corresponding functions may be implemented by a communication chip or an electronic component alone. The above-described interfacing relationship between the modules is merely illustrative and does not limit the structure of the terminal 10.
The memory 120 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 120 includes a non-transitory computer-readable medium. The memory 120 may be used to store instructions, programs, code sets, or instruction sets. The memory 120 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the various method embodiments described below, and the like; the operating system may be an Android system (including systems developed in depth on the basis of Android), an iOS system developed by Apple Inc. (including systems developed in depth on the basis of iOS), or another system. The data storage area may also store data created by the terminal 10 during use, such as a phonebook, audiovisual data, and chat log data.
The software system of the terminal 10 may employ a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. In the embodiment of the present application, a software architecture of the terminal 10 is exemplarily described by taking an Android system and an IOS system of a hierarchical architecture as examples.
As shown in fig. 1B, the memory 120 may store a Linux kernel layer 220, a system runtime library layer 240, an application framework layer 260, and an application layer 280, wherein the layers communicate with each other through a software interface, and the Linux kernel layer 220, the system runtime library layer 240, and the application framework layer 260 belong to an operating system space.
The application layer 280 belongs to a user space, and at least one application program runs in the application layer 280, and the application programs may be native application programs carried by an operating system, or third-party application programs developed by third-party developers, and specifically may include application programs such as passwords, eye tracking, cameras, gallery, calendar, call, map, navigation, WLAN, bluetooth, music, video, short messages, and the like.
The application framework layer 260 provides various APIs that may be used by applications that build the application layer, and developers may also build their own applications by using these APIs, such as a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, a message manager, an activity manager, a package manager, and a location manager. The window manager is used for managing window programs.
The system runtime library layer 240 provides the main feature support for the Android system through a number of C/C++ libraries. For example, the SQLite library provides support for databases, the OpenGL/ES library provides support for 3D drawing, the Webkit library provides support for the browser kernel, and so on. Also provided in the system runtime library layer 240 is the Android Runtime, which mainly provides some core libraries allowing developers to write Android applications in the Java language.
The Linux kernel layer 220 provides the underlying drivers for the various hardware of the terminal 10, such as a display driver, an audio driver, a camera driver, a Bluetooth driver, a Wi-Fi driver, power management, and the like.
It should be understood that the method described in the embodiments of the present application may be applied to the Android system and may also be applied to other operating systems, such as iOS; the Android system is used here only as an example, and no limitation is intended.
A currently-used terminal configuration will be described in detail with reference to fig. 1C, and it should be understood that the configuration illustrated in the embodiment of the present application is not intended to specifically limit the terminal 10. In other embodiments of the present application, the terminal 10 may include more or fewer components than illustrated, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
As shown in fig. 1C, the terminal 10 includes a first image sensor 100, a camera serial interface decoder 200, an image signal processor 300, and a digital signal processor 400, wherein the image signal processor 300 includes a lightweight image front end 310 and an image front end 320, the first image sensor 100 is connected to the camera serial interface decoder 200, the camera serial interface decoder 200 is connected to the lightweight image front end 310 of the image signal processor 300, and the lightweight image front end 310 is connected to the digital signal processor 400;
the digital signal processor 400 is configured to receive first raw image data acquired by the first image sensor 100 through the camera serial interface decoder 200 and the lightweight image front end 310, and call a first image processing algorithm to perform a first preset process on the first raw image data to obtain first reference image data, where the image front end 320 is configured to transmit second raw image data acquired by a second image sensor 500 of the terminal 10 or the first raw image data acquired by the first image sensor 100.
Wherein the first original image data and the second original image data may be MIPI RAW image data, and the first reference image data may be YUV image data. The first image sensor 100 may be a low power consumption camera.
The first image processing algorithm is used for realizing a data processing effect equivalent to that of the image signal processor in a software algorithm mode, namely, an operation corresponding to first preset processing, and the first preset processing comprises at least one of the following steps: automatic exposure control, lens attenuation compensation, brightness improvement, black level correction, lens shading correction, dead pixel correction, color interpolation, automatic white balance and color correction. It should be noted that although the first image sensor 100 transmits the first raw image data through the lightweight image front end 310 of the image signal processor 300, the image signal processor 300 does not further process the first raw image data, and the image signal processor 300 only performs the same or different processing as the first preset processing on the second raw image data or the first raw data transmitted through the image front end 320 through the local hardware module. Also, since the lightweight image front end 310 is only responsible for interfacing inputs and does not do anything else, its power consumption is relatively low relative to prior solutions that enable the image front end 320 to transfer image data (which would require enabling other modules of the image signal processor 300 for processing of the image data).
The first image sensor 100 may be a low-power image sensor, the second image sensor may be an image sensor in a front camera, and the application functions based on context awareness that can be implemented by the terminal through the first image sensor 100 include at least one of:
1. Privacy protection: for example, a social application receives a new message from a girlfriend, or the bank sends a short message notifying that a salary has been credited, and the user does not want the private information in these messages to be seen by others. Through the first image sensor 100, the terminal can detect when a stranger's eyes are watching the owner's phone screen and darken the screen.
2. Contactless (air) operation: the user is cooking and has placed the phone nearby to consult a recipe when an important call comes in; the user's hands are covered in oil, making it inconvenient to operate the phone directly. Through the first image sensor 100, the terminal can detect the user's air gesture and execute the operation corresponding to that gesture.
3. The terminal can detect, through the first image sensor 100, that the user is still watching the screen, in which case the automatic screen-off function is not triggered.
4. No rotation when lying down: when the user lies down and the screen orientation of the electronic device would otherwise change, for example from portrait to landscape, the electronic device can detect through the first image sensor 100 that the direction of the user's gaze has not changed accordingly, and the screen does not rotate.
At present, in the gesture control technologies supported by mobile phones and similar devices, the phone is generally prompted to execute a target operation by detecting an air gesture action of the user; if the target operation needs to be executed continuously, the user must repeat the gesture action, and the phone must repeatedly detect it and execute the target operation. When the air gesture action is complex, for example in a control process that implements a slide-up operation through a combination of a first gesture action and a second gesture action, the terminal must repeatedly detect the first and second gesture actions in order to perform the slide-up operation multiple times, while also avoiding false responses to illegal gestures. As a result, the continuous control process takes a long time and exhibits a noticeable stutter, which degrades the user experience and makes it difficult to meet usage requirements.
In view of the above problem, an embodiment of the present application provides a gesture control method, which is described in detail below with reference to the accompanying drawings.
Referring to fig. 2A, fig. 2A is a schematic flowchart illustrating a gesture control method according to an embodiment of the present disclosure, where as shown in the figure, the method includes:
step 201, judging whether the first indication domain is in an initialization state;
step 202, if the first indication domain is in the initialization state, setting the first indication domain according to the collected control action, and returning to execute the step of judging whether the first indication domain is in the initialization state;
in this possible example, the control action comprises a gesture action, the first indication field being associated with a gesture action a1 of the user.
The gesture motion may be various, and is not limited herein.
For example, the gesture actions may be characterized as air gestures, including static gestures and dynamic gestures. A static gesture is the posture of the hand at a certain moment, such as the fingers being bent or folded. As shown in FIG. 2B, a static gesture may be, for example, the palm gesture shown at (0) (palm facing the terminal, back of the hand facing the user's eyes), the back-of-hand gesture shown at (1) (back of the hand facing the terminal, palm facing the user's eyes), or the fist gesture shown at (2); these are not exhaustively enumerated here. A dynamic gesture is a gesture type involving movement, such as waving up and down, waving left and right, pressing, or drawing a Z shape.
Different users may perform the same air gesture somewhat differently, but the characteristics of the gesture are substantially consistent.
In one possible example, the control action may also include a voice action.
For example, a first preparatory action for a slide-down operation may be entered by voice control; combined with the gesture action B1 as the second preparatory action, it forms the control for the target operation instruction.
In a specific implementation, legal control action combinations can be prestored through a predefined control action combination set.
For example, the first indication field may be set to a value 1-0 by voice, where the value 1-0 is used to determine the slide-down operation; or the first indication field may be set to a value 1-1 by voice, where the value 1-1 is used to determine the slide-up operation.
Therefore, in this example, the control action covers either a voice action or a gesture action, which broadens the application range and improves the comprehensiveness of the application.
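A prestored control action combination set of the kind described above might be modeled as a simple lookup table, as in the Kotlin sketch below. The key and value strings, and the operations they map to, are illustrative assumptions on our part (cf. Table 1 later in this section).

```kotlin
// A prestored set of legal control action combinations, modeled as a lookup table.
// Keys pair a control action (gesture or voice) with gesture action B1; the mapped
// operations are illustrative assumptions.
val presetAssociations: Map<Pair<String, String>, String> = mapOf(
    ("palm gesture" to "palm turns to back of hand") to "slide down",
    ("back-of-hand gesture" to "back of hand turns to palm") to "slide up",
    ("voice: slide down" to "palm turns to back of hand") to "slide down",
    ("OK gesture" to "back of hand turns to fist") to "open wallet",
)

// A combination satisfies the preset association relationship iff it is prestored.
fun satisfiesPresetAssociation(control: String, b1: String): Boolean =
    (control to b1) in presetAssociations
```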
Step 203, if the first indication domain is not in the initialization state, determining a gesture detection result of the gesture image acquired at the current time, and judging whether the gesture detection result is no gesture, an invalid gesture, or a valid gesture, where the gesture image acquired at the current time is the most recently acquired gesture image;
the gesture action B1 is an air gesture action.
Step 204, if the gesture detection result is a valid gesture, judging whether the gesture detection result is a target gesture;
where the valid gestures include the target gesture and other gestures besides the target gesture;
step 205, if the gesture detection result is not the target gesture, setting a second indication domain according to the collected gesture action B1;
step 206, if it is detected that the gesture action B1 and the control action satisfy a preset association relationship, determining a target operation instruction according to the first indication field and the second indication field, executing the target operation instruction, resetting only the second indication field, and returning to the step of determining whether the first indication field is in an initialization state;
the target operation instruction may be various, and is not limited herein.
For example, the target operation instruction may be a system-level functional instruction or an application-level functional instruction. The system-level function instruction refers to an operation instruction of a basic function supported by the system, and as shown in fig. 2C, the target operation instruction may be a slide-up instruction. The function instruction at the application level refers to an operation instruction of a dedicated function inside the application, and as shown in fig. 2D, the target operation instruction may be an operation instruction of opening a wallet function in the application.
In this possible example, the specific implementation manner of detecting that the gesture motion B1 and the control motion satisfy the preset association relationship in step 206 may be: and detecting that the second indication domain and the first indication domain meet a preset association relationship.
The preset association relationship may be user-defined, or defined by a developer before the terminal leaves a factory, or set through the association relationship pushed by the cloud, and is not uniquely defined here.
As shown in Table 1, the association relationship between the control action and the gesture action B1 is not limited to the specific action types listed.

TABLE 1

| Control action | Gesture action B1 | Relationship |
| --- | --- | --- |
| Palm gesture action | Palm turns over to back of hand | Satisfies the preset association relationship |
| Back-of-hand gesture action | Back of hand turns over to palm | Satisfies the preset association relationship |
| Voice "slide down" action | Palm turns over to back of hand | Satisfies the preset association relationship |
| "OK" gesture action | Back of hand turns over to fist | Satisfies the preset association relationship |
Step 207, if the gesture detection result is the target gesture, resetting the first indication domain and the second indication domain, and returning to execute the step of judging whether the first indication domain is in the initialization state.
In this possible example, if the gesture detection result is the target gesture, the method further includes: resetting the gesture action queue B2.
Since the gesture action B1 is characterized by a plurality of elements in the gesture action queue B2, and since elements are added to the queue B2 and its count checked only when the currently determined gesture detection result is a valid gesture other than the target gesture (the second indication field being set from the elements currently in the queue once the count constraint is satisfied), the following cases may arise.
First, the gesture detection result of the currently acquired gesture image is the target gesture, the gesture detection result of the previously acquired image was a valid gesture other than the target gesture and was recorded into the gesture action queue B2, and the queue holds only a single element; that is, the element-count constraint was not satisfied, and the queue was not reset.
Second, the gesture detection result of the currently acquired gesture image is the target gesture, the gesture detection result of the previously acquired image was a valid gesture other than the target gesture and was recorded into the gesture action queue B2, and the queue held a plurality of elements; the element-count constraint was satisfied and the queue was reset, so in this case the gesture action queue B2 does not need to be reset again to be fully initialized.
Third, the gesture detection result of the currently acquired gesture image is the target gesture and the gesture detection result of the previously acquired image was an illegal or invalid gesture; in this case the state of the gesture action queue B2 must be traced back, and it still corresponds to either the first or the second case.
In summary, because the gesture action queue B2 may not have been reset, resetting it when the gesture detection result of the currently acquired gesture image is the target gesture ensures thorough initialization and prevents historical records from affecting detection accuracy.
In this possible example, if the gesture detection result is the target gesture, the method further includes: displaying prompt information indicating that action recognition is to be performed again.
The prompt information may take at least one of various forms, such as a graphic, text, animation, or sound, which is not limited here.
Therefore, in the example, the terminal can intuitively prompt the current equipment control state of the user through displaying the prompt information, and the interface interaction is more friendly.
In this possible example, after the setting the first indication field according to the collected control action, the method further includes: and displaying control information and/or instruction information associated with the control action, wherein the control information refers to visualized information of the control action, and the instruction information refers to visualized information of a reference operation instruction associated with the control action.
The visualized information includes various modes such as a picture, a text, an animation and the like, and is not limited uniquely. The control information includes voice control information and/or gesture control information.
In a specific implementation, the displayed control information and/or instruction information may last for a preset time duration, and the preset time duration may be, for example, 100 milliseconds, 200 milliseconds, and the like, which is not limited herein.
Therefore, in this example, displaying the control information and/or instruction information associated with the control action in real time after the first indication domain is set lets the user intuitively know whether the control action is accurate, avoids continuing with an erroneous action, and improves control efficiency.
In this possible example, when it is determined that the gesture detection result is the target gesture, the method further includes: hiding the displayed control information and/or instruction information associated with the control action; alternatively, hiding the displayed gesture information and/or instruction information associated with the gesture action A1, and displaying prompt information indicating that action recognition is to be performed again.
In a specific implementation, the hiding may be no longer displaying, for example, setting the transparency of the control information and/or the instruction information to 100%, and the specific implementation mechanism is not limited uniquely.
Therefore, in the example, the target gesture has the function of initializing the gesture control process of this time, and quickly entering the next gesture control process, and the page can achieve the effect of synchronous initialization by hiding historical display information, so that misunderstanding caused by misunderstanding of a user is avoided.
In this possible example, the setting the first indication field according to the collected control action includes:
judging whether the gesture image acquired at the current time shows no gesture, an invalid gesture, or a valid gesture, where the gesture image acquired at the current time is the most recently acquired gesture image;
if the gesture image shows a valid gesture, adding the gesture detection result of the gesture image to a gesture action queue A2, and judging whether M consecutive, identical gesture detection results exist in the gesture action queue A2, where M is a positive integer;
if M consecutive, identical gesture detection results exist in the gesture action queue A2, setting the first indication field according to those M gesture detection results, and resetting only the gesture action queue A2.
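A minimal Kotlin sketch of this M-consecutive-identical check follows. The queue type, the value M = 3, and the choice to test the most recent M entries are our assumptions; the text only requires that M consecutive, identical results exist in the queue.

```kotlin
// Gesture action queue A2: the first indication field is set only after M consecutive,
// identical valid gesture detection results (M = 3 is assumed here).
const val M = 3
val queueA2 = ArrayDeque<String>()

// Returns the value for the first indication field once the constraint is met.
fun onValidGesture(result: String): String? {
    queueA2.addLast(result)
    // Check whether the most recent M entries exist and are identical.
    if (queueA2.size >= M && queueA2.takeLast(M).distinct().size == 1) {
        queueA2.clear()              // reset ONLY gesture action queue A2
        return result                // set the first indication field from the repeated result
    }
    return null                      // constraint not met yet: keep collecting
}
```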
Therefore, in this example, by requiring a plurality of identical consecutive detections, the terminal can accurately identify the user's active control intention, preventing noise from illegal gestures from affecting detection accuracy and improving the detection success rate.
In this possible example, the method further comprises:
if M consecutive, identical gesture detection results do not exist in the gesture action queue A2, returning to the step of judging whether the first indication domain is in the initialization state;
if the gesture image shows no gesture or an invalid gesture, acquiring a detection duration A3, and judging whether the detection duration A3 is greater than a preset duration A4, where the detection duration A3 is the duration for which no gesture or an invalid gesture has been continuously detected while the first indication domain is in the initialization state;
if the detection duration A3 is greater than the preset duration A4, resetting the detection duration A3 and judging whether the current sampling mode is a second frame rate mode;
if the current sampling mode is the second frame rate mode, setting the sampling mode to a first frame rate mode;
if the current sampling mode is not the second frame rate mode, returning to the step of judging whether the first indication domain is in the initialization state;
if the detection duration A3 is less than or equal to the preset duration A4, updating the detection duration A3 and returning to the step of judging whether the first indication domain is in the initialization state.
The preset duration A4 may be, for example, 15 seconds, and is not limited here.
As can be seen, in this example, the constraint of the preset duration A4 avoids sustained power consumption in the second frame rate mode, so battery endurance is not affected.
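A sketch of this timeout-driven fallback from the second frame rate mode to the first, assuming a 15-second A4 and a millisecond clock (both assumptions):

```kotlin
enum class FrameRateMode { FIRST, SECOND }          // e.g. low- vs. high-frequency sampling

var mode = FrameRateMode.SECOND
var detectionStartMs = System.currentTimeMillis()   // start of continuous no/invalid gesture
const val PRESET_DURATION_A4_MS = 15_000L           // preset duration A4 (15 s assumed)

// Called when a frame yields no gesture or an invalid gesture while the
// first indication field is still in the initialization state.
fun onNoOrInvalidGesture(nowMs: Long) {
    val detectionDurationA3 = nowMs - detectionStartMs
    if (detectionDurationA3 > PRESET_DURATION_A4_MS) {
        detectionStartMs = nowMs                    // reset detection duration A3
        if (mode == FrameRateMode.SECOND) {
            mode = FrameRateMode.FIRST              // fall back to the first frame rate mode
        }
    }
    // Otherwise A3 simply keeps accumulating until the next frame.
}
```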
In this possible example, the gesture action B1 is characterized by at least two gesture detection results, which are gestures of the user determined from the gesture image; the setting of a second indication field according to the collected gesture action B1 includes:
adding the gesture detection results of the gesture images into a gesture action queue B2, and judging whether the number of the gesture detection results in the gesture action queue B2 is N, wherein N is a positive integer;
if the number of the gesture detection results in the gesture action queue B2 is N, the second indication field is set according to the N gesture detection results in the gesture action queue B2, and only the gesture action queue B2 is reset.
It can be seen that, in this example, the gesture action queue B2 constrains the gesture action B1 to be characterized by at least two gesture detection results. Because a plurality of gesture detection results correspond more accurately and comprehensively to the user's actual gesture action, the noise that an interfering gesture, such as an illegal gesture, would introduce into a single detection result is avoided, effectively improving the accuracy of the detection result.
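In contrast to queue A2's consecutive-identical test, queue B2 uses a plain element-count constraint. A Kotlin sketch follows, with N assumed to be 2:

```kotlin
// Gesture action queue B2: the second indication field is set once the queue holds
// exactly N gesture detection results (N = 2 assumed here; the text requires N > 1).
const val N = 2
val queueB2 = ArrayDeque<String>()

// Returns the N results characterizing gesture action B1 once the count is reached.
fun onGestureForB1(result: String): List<String>? {
    queueB2.addLast(result)
    if (queueB2.size == N) {
        val secondField = queueB2.toList()   // set the second indication field
        queueB2.clear()                      // reset ONLY gesture action queue B2
        return secondField
    }
    return null                              // fewer than N results: keep collecting
}
```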
In this possible example, the method further comprises:
if the gesture action B1 and the gesture action A1 are detected not to meet the preset association relationship, only resetting the second indication domain, and returning to the step of judging whether the first indication domain is in the initialization state.
It can be seen that, in this example, when the gesture action B1 and the gesture action A1 do not satisfy the preset association relationship, the terminal resets only the second indication field and continues to detect the gesture action B1, so that the control process remains continuous.
In this possible example, if it is detected that the gesture action B1 and the gesture action a1 do not satisfy the preset association relationship, only resetting the second indication field and returning to the step of determining whether the first indication field is in the initialization state includes:
if the gesture action B1 and the gesture action A1 are detected not to meet the preset association relationship and different gestures exist in the N gesture detection results, only resetting the second indication domain and returning to the step of judging whether the first indication domain is in the initialization state;
if the gesture action B1 and the gesture action A1 are detected not to meet the preset association relationship and the N gesture detection results are the same as the gesture of the gesture action A1, only resetting the second indication domain and returning to the step of judging whether the first indication domain is in the initialization state or not;
if it is detected that the gesture action B1 and the gesture action A1 do not satisfy the preset association relationship, the gestures of the N gesture detection results are identical, and they differ from the gesture of the gesture action A1, judging whether a detection duration B3 is greater than a preset duration B4, where the detection duration B3 is the duration for which the gesture indicated by the second indication domain has persisted;
if the detection duration B3 is greater than the preset duration B4, resetting the first indication domain, the second indication domain and the detection duration B3, and returning to the step of judging whether the first indication domain is in the initialization state;
if the detection duration B3 is less than or equal to the preset duration B4, only the second indication field and the detection duration B3 are reset, and the step of judging whether the first indication field is in the initialization state is returned to.
In the first branch, the gesture action B1 may be a return gesture of the reference gesture action corresponding to the gesture action A1. For example, if the gesture action A1 corresponding to the slide-down operation instruction is a palm gesture and the corresponding reference gesture action is the palm turning over to the back of the hand (the turning direction being relative to the top-to-bottom direction of the screen), then the gesture action B1 in this branch may be the return gesture of the back of the hand turning over to the palm. The terminal resets the second indication field and repeats execution; because the first indication field has not been reset, it jumps back to the step of setting the second indication field according to the collected gesture action B1, that is, it repeatedly detects the gesture action B1. In this way no effective response is made to an illegal gesture, improving control accuracy and intelligence.
In the second branch, the gesture action B1 may indicate that the gesture action A1 has not changed. In this case the terminal resets the second indication field and repeats execution, so the validity of the gesture action A1 can be maintained until the user cancels it or a valid operation is executed, improving the real-time performance of the control interaction.
In the third branch, the gesture action B1 may belong to a scenario of switching the operation instruction, for example changing a slide-down operation instruction into a slide-up operation instruction. Suppose the slide-down instruction corresponds to the palm gesture with the palm turning over to the back of the hand, and the slide-up instruction corresponds to the back-of-hand gesture with the back of the hand turning over to the palm. After the user first controls a slide-down and then holds the back-of-hand gesture unchanged for longer than the preset duration, the terminal is triggered to reset the first and second indication fields and the detection duration B3 simultaneously and to detect anew; subsequently, the first indication field can be set according to the detected back-of-hand gesture, the second indication field can be set according to the detected back-of-hand-to-palm gesture action, and the slide-up operation is then determined and executed according to the preset association relationship. This achieves flexible and coherent switching of operation instructions.
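The three branches can be summarized in code. The following Kotlin sketch is our reading of the branch logic described above; the B4 value and the shape of the reset callback are assumptions.

```kotlin
// Our reading of the three branches taken when gesture action B1 and gesture action A1
// do NOT satisfy the preset association relationship; B4 and the callback are assumed.
const val PRESET_DURATION_B4_MS = 2_000L   // preset duration B4 (value assumed)

fun onAssociationNotSatisfied(
    resultsB1: List<String>,               // the N detection results characterizing B1
    gestureA1: String,
    detectionDurationB3Ms: Long,           // how long the second field's gesture has persisted
    reset: (first: Boolean, second: Boolean, durationB3: Boolean) -> Unit
) {
    val allSame = resultsB1.distinct().size == 1
    when {
        !allSame ->                                       // branch 1: mixed results (e.g. a return gesture)
            reset(false, true, false)                     //   reset only the second indication field
        resultsB1.first() == gestureA1 ->                 // branch 2: user still holds gesture A1
            reset(false, true, false)                     //   keep A1 valid; reset only the second field
        detectionDurationB3Ms > PRESET_DURATION_B4_MS ->  // branch 3: instruction switch
            reset(true, true, true)                       //   reset both fields and duration B3
        else ->
            reset(false, true, true)                      //   reset second field and duration B3
    }
}
```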
In this possible example, the method further comprises:
if the number of the gesture detection results in the gesture action queue B2 is not N, returning to the step of determining whether the first indication field is in the initialization state.
Here, "no gesture" means that no image information of a human hand is present; an "invalid gesture" means that the detected gesture does not match any valid gesture type under the preset association relationship; and a "valid gesture" means that the detected gesture matches a valid gesture type under the preset association relationship, for example a gesture in the predefined reference gesture action corresponding to the gesture action A1, where the gesture information combination formed by the reference gesture action and the gesture action A1 corresponds to a predefined valid operation instruction. For example, taking the palm as the gesture action A1, the corresponding reference gesture action may be the palm turning over to the back of the hand; that is, the valid gestures may include the palm and the back of the hand.
N may be any value greater than 1, such as 2 or 3, which is not limited here.
It can be seen that, in this example, the gesture action queue B2 constrains the gesture action B1 to be characterized by at least two gesture detection results. Because a plurality of gesture detection results correspond more accurately and comprehensively to the user's actual gesture action, the noise that an interfering gesture, such as an illegal gesture, would introduce into a single detection result is avoided, effectively improving the accuracy of the detection result.
In this possible example, the method further comprises:
if the gesture image shows no gesture or an invalid gesture, judging whether a detection duration C1 is greater than a preset duration C2, where the detection duration C1 is the duration for which no gesture or an invalid gesture has been continuously detected while the first indication domain is not in the initialization state;
if the detection duration C1 is greater than the preset duration C2, only resetting the second indication field and the detection duration C1, adjusting the sampling frequency to the first frame rate mode, and returning to the step of determining whether the first indication field is in the initialization state;
if the detection duration C1 is less than or equal to the preset duration C2, the detection duration C1 is updated, and the step of determining whether the first indication field is in the initialization state is returned to.
The preset time duration C2 may be, for example, 15 seconds, which is not limited herein.
As can be seen, in this example, a no-gesture or invalid-gesture recognition scenario is terminated through the duration constraint mechanism and gesture control is restarted, which prevents the terminal from remaining in the second frame rate mode for a prolonged time.
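A possible shape of this duration-constraint mechanism is sketched below in Python; the preset duration follows the 15-second example above, while the frame-rate values and the class name TimeoutGuard are assumptions made purely for illustration.

PRESET_C2 = 15.0                    # preset duration C2, e.g. 15 seconds
FIRST_RATE, SECOND_RATE = 10, 30    # assumed frame rates for the two modes

class TimeoutGuard:
    def __init__(self):
        self.started = None  # start time of the no-gesture/invalid streak

    def on_no_or_invalid_gesture(self, now):
        # Returns the frame rate to sample at next. While detection
        # duration C1 <= C2 the terminal keeps waiting in the second
        # frame rate mode; once C1 > C2 the streak (and, per the text,
        # the second indication domain) is reset and sampling falls
        # back to the first frame rate mode.
        if self.started is None:
            self.started = now          # begin timing C1
        c1 = now - self.started         # detection duration C1
        if c1 > PRESET_C2:
            self.started = None         # reset detection duration C1
            return FIRST_RATE
        return SECOND_RATE

guard = TimeoutGuard()
assert guard.on_no_or_invalid_gesture(now=100.0) == SECOND_RATE  # C1 starts
assert guard.on_no_or_invalid_gesture(now=116.0) == FIRST_RATE   # C1 > C2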
In this possible example, the first indication field is characterized by any one of the following: representing through a gesture identifier and representing through an instruction identifier;
the second indication domain is characterized by any one of the following ways: the representation is identified through gestures and the representation is identified through instructions.
In this possible example, if the first indication domain is characterized by a first gesture identifier and the second indication domain is characterized by a second gesture identifier, the determining a target operation instruction according to the first indication domain and the second indication domain includes: inquiring a preset operation instruction set according to a target gesture information combination formed by the first gesture identification and the second gesture identification, and acquiring the target operation instruction corresponding to the target gesture information combination, wherein the operation instruction set comprises a corresponding relation between a gesture information combination and an operation instruction;
the gesture identification refers to sign information of a gesture type, for example, a palm is represented by sign information 0, the instruction identification refers to sign information of an operation instruction, for example, a sliding operation instruction is identified by sign information X, and the like.
If the first indication domain is characterized by a first instruction identifier and the second indication domain is characterized by a second instruction identifier, the determining a target operation instruction according to the first indication domain and the second indication domain includes: and determining the target operation instruction according to the first instruction identification and the second instruction identification.
The first instruction identifier is determined as follows: determining a first gesture information combination according to gesture action A1, querying a preset operation instruction set according to the first gesture information combination to determine a corresponding first reference operation instruction, and determining the first instruction identifier according to the first reference operation instruction. The second instruction identifier is determined as follows: determining a second gesture information combination according to gesture action B1, querying the preset operation instruction set according to the second gesture information combination to determine a corresponding second reference operation instruction, and determining the second instruction identifier according to the second reference operation instruction.
The first reference operation instruction and the second reference operation instruction are operation instructions in an operation instruction set.
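As a sketch of such a lookup, with palm/back-of-hand identifiers and instruction names assumed purely for illustration, the operation instruction set can be held as a mapping from gesture information combinations to operation instructions:

PALM, BACK = 0, 1  # assumed gesture identifiers

OPERATION_INSTRUCTION_SET = {
    # (first indication domain, second indication domain) -> instruction
    (PALM, (PALM, BACK)): "slide_down",  # palm, then palm to back of hand
    (BACK, (BACK, PALM)): "slide_up",    # back of hand, then back to palm
}

def target_instruction(first_field, second_field):
    # Query the preset operation instruction set with the target
    # gesture information combination; None means the preset
    # association relationship is not satisfied and nothing runs.
    return OPERATION_INSTRUCTION_SET.get((first_field, tuple(second_field)))

assert target_instruction(PALM, [PALM, BACK]) == "slide_down"
assert target_instruction(PALM, [BACK, PALM]) is None  # no valid combination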
In this possible example, if the first indication field is characterized by a first gesture identifier (for example, the characterization manner of the first indication field column in table 2), after the setting the first indication field according to the collected control action, the method further includes:
and displaying prompt information of the first gesture identification.
In this possible example, if the first indication domain is characterized by a first instruction identifier, after the setting the first indication domain according to the collected control action, the method further includes:
displaying prompt information of the first instruction identification; and/or,
and displaying the information of the gesture detection result corresponding to the acquired gesture image.
The prompt message of the first gesture identifier may be a prompt message of the first gesture, such as a picture, a text, an animation, and the like, and the prompt message of the first instruction identifier may be a prompt message of the first reference operation instruction, such as a picture, a text, an animation, and the like.
Therefore, in the example, the interactivity is enhanced through the information display mode, the control process is more visual and intuitive, and the user experience is better.
It can be seen that, in the embodiment of the present application, the terminal first judges whether the first indication domain is in an initialization state; if the first indication domain is in the initialization state, it sets the first indication domain according to the collected control action; if the first indication domain is not in the initialization state, it determines a gesture detection result of the gesture image acquired this time and judges whether the gesture detection result is no gesture, an invalid gesture, or a valid gesture, the gesture image acquired this time being the most recently acquired gesture image; if the gesture detection result is a valid gesture, it judges whether the gesture detection result is the target gesture; if the gesture detection result is not the target gesture, it sets the second indication domain according to the collected gesture action B1; if it is detected that gesture action B1 and the control action satisfy the preset association relationship, it determines the target operation instruction according to the first indication domain and the second indication domain, executes the target operation instruction, resets only the second indication domain, and returns to the step of judging whether the first indication domain is in the initialization state; if the gesture detection result is the target gesture, it resets the first indication domain and the second indication domain and returns to the step of judging whether the first indication domain is in the initialization state. Therefore, in the continuous control process in which the terminal carries out target operation instructions through the control action and gesture action B1, the gesture control process can be quickly reset by the target gesture alone; the steps are simple and take little time, improving the convenience and flexibility of gesture control by the terminal.
In addition, it should be noted that gesture action A1 described herein is not limited to gesture actions determined from identical gesture detection results; it may also be a gesture action determined from different gesture detection results according to predefined hand action rules, i.e. a gesture action involving a gesture change, for example a palm changing to a fist.
In a specific implementation mechanism, the terminal needs to pre-store the predefined valid gesture actions and check the elements in gesture action queue A2 against them; where a predefined gesture action involves different gestures, the terminal needs to detect whether the elements in gesture action queue A2 include the corresponding different gestures.
Similarly, the gesture motion B1 is not limited to a hand motion including different gestures, and may include the same gesture.
For example: keeping the hand gesture unchanged, but changing the direction and/or position of the hand relative to the terminal screen.
In a specific implementation mechanism, the terminal records the detected gesture detection results through gesture action queue B2. After detecting that gesture action queue B2 contains 2 gesture detection results, it sets the second indication domain according to these 2 valid gesture detection results. If the first and second indication domains record gesture types, the terminal can directly search the pre-stored operation instruction set according to the gesture information combination formed by the second indication domain and the first indication domain and find the corresponding target operation instruction; that is, the operation instruction set contains the correspondence between gesture information combinations and operation instructions.
Further, if the gesture detection result also includes a hand position and/or a hand direction, the movement direction and distance of the same gesture (relative to the terminal screen) can additionally be determined from the gesture detection results. For example, in an application scenario where a palm gesture is held in order to dial, the detection of gesture action A1 may still be based on the gesture itself, while the detection mechanism of gesture action B1 may add the position and/or direction so as to locate the target operation instruction.
Table 2 is used as an example to describe the corresponding relationship between the gesture action a1 (represented by an element in the gesture action queue a 2), the first indication field, the gesture action B1 (represented by an element in the gesture action queue B2), the second indication field, and the target operation command.
TABLE 2
[Table 2, showing this correspondence, is provided only as images in the original publication; its text is not recoverable here.]
Wherein the downward movement direction in the table can be calculated from position 1 and position 2.
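Assuming screen coordinates with the origin at the top-left corner and a hypothetical minimum-distance threshold, the movement direction could be derived from position 1 and position 2 roughly as follows:

def movement_direction(pos1, pos2, min_distance=40):
    # Compare position 1 and position 2 of the same gesture; return
    # "down" or "up" relative to the screen, or None if the hand moved
    # less than min_distance pixels (treated as stationary).
    dy = pos2[1] - pos1[1]  # y grows downward in screen coordinates
    if abs(dy) < min_distance:
        return None
    return "down" if dy > 0 else "up"

assert movement_direction((120, 200), (118, 320)) == "down"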
Referring to fig. 3A, fig. 3A is a schematic flowchart of a gesture control method according to an embodiment of the present disclosure, where as shown in the figure, the method includes:
step 301, displaying a first page content on a current interface of a screen of a home terminal device;
the first page content may be a web page content, a chat interface content, and the like, which is not limited herein.
Step 302, judging whether a gesture action A1 of a user is detected;
the gesture motion a1 may be a static gesture or a dynamic gesture, and is not limited herein. In a specific implementation, the gesture motion detection of the user can be performed through the mechanism of the gesture motion queue a2 and the first indication field.
Step 303, if the gesture motion a1 is not detected, acquiring the gesture motion a1, and setting a first indication domain according to the acquired gesture motion a 1;
step 304, if the gesture motion A1 is detected, judging whether a target gesture is detected or not in the process of acquiring a gesture motion B1;
the gesture motion B1 is characterized by at least two gesture detection results, the gesture motion A1 and the gesture motion B1 are both blank gestures, the blank gestures refer to non-contact gesture control operations, and the gesture detection results refer to gestures of the user determined according to detected gesture images.
Step 305, if the target gesture is not detected, setting a second indication domain according to the collected gesture action B1;
step 306, if it is detected that the gesture action B1 and the gesture action A1 satisfy a preset association relationship, executing a preset operation on the first page content according to the gesture action A1 and the gesture action B1, only resetting the gesture action B1, and returning to execute the step of judging whether the gesture action A1 of the user is detected;
wherein the preset operation comprises any one of the following operations:
sliding up and down, capturing a picture, returning to a desktop, returning to a previous menu, going to a next menu, pausing and playing.
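As a rough illustration only, dispatching such a preset operation onto the displayed page content might look like the following; the Page class and its methods are hypothetical stand-ins, not an API from this application:

class Page:
    def scroll(self, dy): print(f"scroll {dy}")
    def screenshot(self): print("screenshot")
    def toggle_playback(self): print("pause/play")

PRESET_OPERATIONS = {
    "slide_up":   lambda page: page.scroll(-300),
    "slide_down": lambda page: page.scroll(+300),
    "screenshot": lambda page: page.screenshot(),
    "pause_play": lambda page: page.toggle_playback(),
}

def execute(instruction, page):
    op = PRESET_OPERATIONS.get(instruction)
    if op is not None:
        op(page)  # perform the preset operation on the first page content

execute("slide_down", Page())  # prints "scroll 300"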
For example, as shown in fig. 3B, the terminal displays a news page in a browsing scene; gesture action A1 is a back-of-hand gesture action, gesture action B1 is a back-of-hand-to-palm gesture action, the target operation instruction is a slide-up operation on the news page content, and the visualized information of the slide-up instruction is an upward-sliding icon.
Step 307, if the target gesture is detected, resetting the gesture motion a1 and the gesture motion B1, displaying prompt information for indicating that gesture motion recognition is performed again, and returning to the step of judging whether the gesture motion a1 of the user is detected.
The prompt information for indicating that the gesture action recognition is performed again may be any one of characters, images and animations.
For example, as shown in fig. 3C, in the scenario of fast initialization of the terminal for the gesture control mechanism, the gesture action a1 is a palm gesture action, the target gesture is a fist-making gesture action, and the prompt information for indicating that gesture action recognition is performed again is the text information "gesture control is performed again".
In this possible example, after the capturing the gesture action a1 and before the setting the first indication field according to the captured gesture action a1, the method further comprises: displaying control information and/or instruction information associated with the gesture action A1, wherein the control information refers to visualized information of the gesture action A1, and the instruction information refers to visualized information of a reference operation instruction associated with the gesture action A1.
The visualized information includes various modes such as a picture, a text, an animation and the like, and is not limited uniquely.
In a specific implementation, the displayed gesture information and/or instruction information may last for a preset duration, and the preset duration may be, for example, 100 milliseconds, 200 milliseconds, and the like, and is not limited herein.
In this possible example, before displaying the prompt information indicating that action recognition is to be performed again, the method further includes: hiding the displayed control information and/or instruction information associated with gesture action A1.
In a specific implementation, the hiding may be no longer displaying, for example, setting the transparency of the control information and/or the instruction information to 100%, and the specific implementation mechanism is not limited uniquely.
Therefore, in this example, the target gesture initializes the current gesture control process so that the next gesture control process can be entered quickly, and by hiding historical display information the page achieves a synchronized initialization effect, avoiding user misunderstanding.
In this possible example, the prompt information for indicating that gesture motion recognition is performed again includes any one of the following: text, image, animation.
It can be seen that, in the embodiment of the application, the terminal first displays the first page content on the current interface of the screen of the home terminal device, and then judges whether the first indication domain is in the initialization state. If the first indication domain is in the initialization state, it sets the first indication domain according to the collected gesture action A1; if the first indication domain is not in the initialization state, it judges whether a target gesture is detected in the process of collecting gesture action B1. If the target gesture is not detected, it sets the second indication domain according to gesture action B1; if it is detected that gesture action B1 and gesture action A1 satisfy the preset association relationship, it determines the target operation instruction according to the first indication domain and the second indication domain, executes the target operation instruction, resets only the second indication domain, and returns to the step of judging whether the first indication domain is in the initialization state. If the target gesture is detected, it resets the first indication domain and the second indication domain, displays prompt information for indicating that gesture action recognition is performed again, and returns to the step of judging whether the first indication domain is in the initialization state. Thus, in the continuous control process in which the terminal carries out target operation instructions on the first page content through gesture action A1 and gesture action B1, the gesture control process can be quickly reset by the target gesture alone; the steps are simple and take little time, and the displayed prompt information makes the control process more intuitive, improving the convenience, flexibility and intuitiveness of gesture control by the terminal.
The gesture control method described in the embodiment of the present application may be specifically implemented by a state machine.
For example, fig. 3D shows a logic diagram of a state machine implementing gesture recognition, where FastFPS = true corresponds to the second frame rate mode and FastFPS = false corresponds to the first frame rate mode; the first indication field is Direction1 and gesture action queue A2 is ActionList1; the second indication field is Direction2 and gesture action queue B2 is ActionList2. A gesture action of 3 consecutive palms corresponds to Direction1 = 0 and a palm-to-back-of-hand gesture action corresponds to Direction2 = [0,1]; a gesture action of 3 consecutive backs of the hand corresponds to Direction1 = 1 and a back-of-hand-to-palm gesture action corresponds to Direction2 = [1,0]. Detection duration A3 is Time1 with a preset duration A4 of 15 seconds; detection duration C1 is Time2 with a preset duration C2 of 15 seconds; detection duration B3 is Time3 with a preset duration of 1.2 seconds. The target gesture is a fist-making gesture, and the association relationship between gesture information combinations and target operation instructions is shown in table 3.
TABLE 3
Direction1 Direction2 Operation instruction
0 [0,1] Slide down
1 [1,0] Slide up
The implementation process is as follows:
the terminal starts the gesture control function and first performs the initialization operation: FastFPS = false and Direction1 = -1;
secondly, the terminal collects a gesture image and judges whether Direction1 is equal to -1;
if the value of Direction1 is equal to -1, judging whether the gesture image acquired this time is no gesture, an invalid gesture, or a valid gesture;
if the gesture image is no gesture or an invalid gesture, judging whether Time1 is greater than 15 seconds;
if Time1 is greater than 15 seconds, resetting Time1 and judging whether FastFPS is equal to true;
if FastFPS is equal to true, resetting FastFPS to false and returning to the step of collecting a gesture image and judging whether Direction1 is equal to -1;
if FastFPS is not equal to true, returning to the step of collecting a gesture image and judging whether Direction1 is equal to -1;
if Time1 is less than or equal to 15 seconds, updating Time1 and returning to the step of collecting a gesture image and judging whether Direction1 is equal to -1;
if the gesture image is a valid gesture, adding the gesture detection result to gesture action queue A2: specifically, if the gesture image is a palm, adding 0 to ActionList1, and if it is a back of the hand, adding 1 to ActionList1; and judging whether 3 consecutive identical elements exist in ActionList1;
if ActionList1 has 3 consecutive identical elements, setting the first indication field according to those elements, specifically Direction1 = 0 for 3 palm elements or Direction1 = 1 for 3 back-of-hand elements, resetting ActionList1, setting FastFPS = true, and returning to the step of collecting a gesture image and judging whether Direction1 is equal to -1;
if ActionList1 has no 3 consecutive identical elements, returning to the step of collecting a gesture image and judging whether Direction1 is equal to -1;
if the value of Direction1 is not equal to -1, judging whether the gesture image acquired this time is no gesture, an invalid gesture, or a valid gesture;
if the gesture image is no gesture or an invalid gesture, judging whether Time2 is greater than 15 seconds;
if Time2 is greater than 15 seconds, setting FastFPS = false, resetting Direction1 to -1, resetting Time2, and returning to the step of collecting a gesture image and judging whether Direction1 is equal to -1;
if Time2 is less than or equal to 15 seconds, updating Time2 and returning to the step of collecting a gesture image and judging whether Direction1 is equal to -1;
if the gesture image is a valid gesture, judging whether the valid gesture is a fist-making gesture;
if the valid gesture is not a fist-making gesture, adding the gesture detection result to gesture action queue B2: specifically, if the gesture is a palm, adding element 0 to ActionList2, and if it is a back of the hand, adding element 1 to ActionList2; and judging whether ActionList2 has 2 elements;
if ActionList2 has 2 elements, setting the second indication field according to those 2 elements, i.e. Direction2 = [1,1], [0,0], [0,1] or [1,0], and judging whether Direction1 and Direction2 satisfy the preset association relationship;
if Direction1 and Direction2 satisfy the preset association relationship, determining the target operation instruction according to Direction1 and Direction2, specifically a slide-down operation for Direction1 = 0 with Direction2 = [0,1] or a slide-up operation for Direction1 = 1 with Direction2 = [1,0], executing the target operation instruction, resetting Direction2, and returning to the step of collecting a gesture image and judging whether Direction1 is equal to -1;
if Direction1 and Direction2 do not satisfy the preset association relationship, detecting whether the gestures indicated by all the elements in Direction2 are the same;
if the gestures indicated by the 2 gesture detection results in Direction2 are not the same, resetting Direction2 and returning to the step of collecting a gesture image and judging whether Direction1 is equal to -1;
if the gestures indicated by the 2 gesture detection results in Direction2 are the same, judging whether the gesture actions associated with Direction2 and Direction1 are the same;
if the gesture actions associated with Direction2 and Direction1 are different, judging whether Time3 exceeds 1.2 s;
if Time3 exceeds 1.2 s, resetting Direction1, Direction2 and Time3, and returning to the step of collecting a gesture image and judging whether Direction1 is equal to -1;
if Time3 does not exceed 1.2 s, updating Time3 and returning to the step of collecting a gesture image and judging whether Direction1 is equal to -1;
if the gesture actions associated with Direction2 and Direction1 are the same, resetting Direction2 and returning to the step of collecting a gesture image and judging whether Direction1 is equal to -1;
if ActionList2 does not have 2 elements, returning to the step of collecting a gesture image and judging whether Direction1 is equal to -1;
if the valid gesture is a fist-making gesture, resetting Direction1 and Direction2, and returning to the step of collecting a gesture image and judging whether Direction1 is equal to -1.
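The walkthrough above can be condensed into a small state-machine sketch. The Python code below is a simplified illustration under stated assumptions: per-frame detection results arrive already classified, the Time1/Time2/Time3 timeout branches are omitted, and all names are illustrative; only the Direction1/Direction2/ActionList handling and the fist reset follow fig. 3D.

PALM, BACK, FIST = 0, 1, 2           # classified gesture detection results
ASSOC = {(0, (0, 1)): "slide_down",  # table 3: palm, palm -> back of hand
         (1, (1, 0)): "slide_up"}    # back of hand, back -> palm

class GestureStateMachine:
    def __init__(self):
        self.fast_fps = False        # False: first frame rate mode
        self.direction1 = -1         # first indication field (Direction1)
        self.direction2 = None       # second indication field (Direction2)
        self.list1, self.list2 = [], []  # ActionList1 / ActionList2

    def step(self, det):
        # Consume one gesture detection result; return an operation
        # instruction when one is triggered, else None.
        if self.direction1 == -1:                 # initialization state
            if det in (PALM, BACK):
                self.list1.append(det)
                if self.list1[-3:] == [det] * 3:  # 3 identical results
                    self.direction1, self.list1 = det, []
                    self.fast_fps = True          # second frame rate mode
            return None
        if det == FIST:                           # target gesture: full reset
            self.direction1, self.direction2 = -1, None
            self.list1, self.list2 = [], []
            return None
        if det in (PALM, BACK):
            self.list2.append(det)
            if len(self.list2) == 2:
                self.direction2, self.list2 = tuple(self.list2), []
                instr = ASSOC.get((self.direction1, self.direction2))
                self.direction2 = None            # only reset Direction2
                return instr
        return None

sm = GestureStateMachine()
out = None
for det in (PALM, PALM, PALM,   # sets Direction1 = 0
            PALM, BACK):        # Direction2 = (0, 1) -> slide down
    out = sm.step(det) or out
print(out)  # slide_down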
The embodiment of the application provides a gesture control device, which can be a terminal. Specifically, the gesture control device is used for executing steps executed by the terminal in the gesture control method. The gesture control device provided by the embodiment of the application can comprise modules corresponding to the corresponding steps.
In the embodiment of the present application, the gesture control apparatus may be divided into the functional modules according to the above method example, for example, each functional module may be divided according to each function, or two or more functions may be integrated into one processing module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The division of the modules in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
Fig. 4 shows a schematic structural diagram of a possible gesture control device according to the above embodiment, in a case where functional modules are divided according to respective functions. As shown in fig. 4, the gesture control apparatus 4 includes a determination unit 40, a first setting unit 41, and a second setting unit 42, wherein,
a judging unit 40, configured to judge whether the first indication domain is in an initialization state;
a first setting unit 41, configured to set the first indication domain according to the collected control action if the first indication domain is in the initialization state;
a second setting unit 42 for performing the following operations:
if the first indication domain is not in the initialization state, determining a gesture detection result of the gesture image acquired this time, and judging whether the gesture detection result is no gesture, an invalid gesture, or a valid gesture, the gesture image acquired this time being the most recently acquired gesture image;
if the gesture detection result is an effective gesture, judging whether the gesture detection result is a target gesture;
if the gesture detection result is not the target gesture, setting a second indication domain according to the collected gesture action B1;
if the gesture action B1 and the control action are detected to meet the preset association relationship, determining a target operation instruction according to the first indication domain and the second indication domain, executing the target operation instruction, only resetting the second indication domain, and returning to execute the step of judging whether the first indication domain is in the initialization state;
if the gesture detection result is the target gesture, resetting the first indication domain and the second indication domain, and returning to execute the step of judging whether the first indication domain is in the initialization state.
In this possible example, the active gesture includes the target gesture and a gesture other than the target gesture;
the gesture action B1 is an air gesture action.
In this possible example, the control action comprises a gesture action, the first indication field being associated with a gesture action a1 of the user.
In this possible example, the gesture control apparatus further includes a reset unit, configured to reset the gesture action queue B2 if the gesture detection result is the target gesture.
In this possible example, the gesture control apparatus further includes a first display unit, configured to display prompt information for indicating to perform motion recognition again if the gesture detection result is the target gesture.
In this possible example, the gesture control apparatus further includes a second display unit, and after the first setting unit 41 sets the first indication field according to the collected control action, the second display unit is further configured to: and displaying control information and/or instruction information associated with the control action, wherein the control information refers to visualized information of the control action, and the instruction information refers to visualized information of a reference operation instruction associated with the control action.
In this possible example, when the second setting unit 42 determines that the gesture detection result is the target gesture, the second display unit is further configured to hide the displayed control information and/or instruction information associated with the control action; or,
hide the displayed gesture information and/or instruction information associated with the gesture action A1, and display prompt information for indicating that action recognition is to be performed again.
In this possible example, in terms of setting the first indication field according to the collected control action, the first setting unit 41 is specifically configured to: judging whether the gesture image acquired this time is no gesture, an invalid gesture, or a valid gesture, the gesture image acquired this time being the most recently acquired gesture image;
if the gesture image is an effective gesture, adding the gesture detection result of the gesture image into a gesture action queue A2, and judging whether M continuous and same gesture detection results exist in the gesture action queue A2, wherein M is a positive integer;
if there are M consecutive and identical gesture detection results in the gesture action queue a2, the first indication field is set according to the M consecutive and identical gesture detection results, and only the gesture action queue a2 is reset.
All relevant contents of each step related to the above method embodiment may be referred to the functional description of the corresponding functional module, and are not described herein again.
In the case of using an integrated unit, a schematic structural diagram of a gesture control apparatus provided in an embodiment of the present application is shown in fig. 5. In fig. 5, the gesture control device 5 includes: a processing module 50 and a communication module 51. The processing module 50 is used for controlling and managing the actions of the gesture control device, such as the steps performed by the determination unit 40, the first setting unit 41, the second setting unit 42, and/or other processes for performing the techniques described herein. The communication module 51 is used to support interaction between the gesture control apparatus and other devices. As shown in fig. 5, the gesture control apparatus may further include a storage module 52, and the storage module 52 is used for storing program codes and data of the gesture control apparatus.
The processing module 50 may be a processor or a controller, for example, a Central Processing Unit (CPU), a general-purpose processor, a Digital Signal Processor (DSP), an ASIC, an FPGA or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with this disclosure. The processor may also be a combination implementing computing functions, e.g., a combination of one or more microprocessors, or a combination of a DSP and a microprocessor. The communication module 51 may be a transceiver, an RF circuit, a communication interface, etc. The storage module may be a memory.
All relevant contents of each scene related to the method embodiment may be referred to the functional description of the corresponding functional module, and are not described herein again. The gesture control apparatus can perform the steps performed by the terminal in the gesture control method shown in fig. 2A.
Fig. 6 shows another possible structural schematic diagram of the gesture control device in the above embodiment, in the case of dividing each function module according to each function. As shown in fig. 6, the gesture control apparatus 6 includes a display unit 60, a determination unit 61, a first setting unit 62, and a second setting unit 63, wherein,
a display unit 60, configured to display first page content on a current interface of a screen of the home device;
a judging unit 61, configured to judge whether a gesture action a1 of the user is detected;
the first setting unit 62 is configured to, if the gesture action a1 is not detected, set the first indication field according to the collected gesture action a 1;
a second setting unit 63 for performing the following operations:
if the gesture motion A1 is detected, judging whether a target gesture is detected or not in the process of acquiring a gesture motion B1;
if the target gesture is not detected, setting a second indication field according to the collected gesture action B1;
if the gesture action B1 and the gesture action A1 are detected to meet the preset association relationship, executing preset operation on the first page content according to the gesture action A1 and the gesture action B1, only resetting the gesture action B1, and returning to the step of judging whether the gesture action A1 of the user is detected;
if the target gesture is detected, resetting the gesture motion A1 and the gesture motion B1, displaying prompt information for indicating that gesture motion recognition is carried out again, and returning to the step of judging whether the gesture motion A1 of the user is detected.
In this possible example, after the first setting unit 62 collects the gesture action A1 and before the first indication field is set according to the collected gesture action A1, the display unit 60 is further configured to display control information and/or instruction information associated with the gesture action A1, where the control information refers to visualized information of the gesture action A1, and the instruction information refers to visualized information of the reference operation instruction associated with the gesture action A1.
In this possible example, the display unit 60 is further configured to hide the displayed control information and/or instruction information associated with the gesture action A1 before the prompt information for indicating that gesture action recognition is performed again is displayed.
In this possible example, the prompt information for indicating that gesture motion recognition is performed again includes any one of the following:
text, image, animation.
All relevant contents of each step related to the above method embodiment may be referred to the functional description of the corresponding functional module, and are not described herein again.
In the case of using an integrated unit, a schematic structural diagram of a gesture control apparatus provided in an embodiment of the present application is shown in fig. 7. In fig. 7, the gesture control device 7 includes: a processing module 70 and a communication module 71. The processing module 70 is used for controlling and managing the actions of the gesture control device, such as the steps performed by the display unit 60, the judgment unit 61, the first setting unit 62 and the second setting unit 63, and/or other processes for performing the techniques described herein. The communication module 71 is used to support interaction between the gesture control apparatus and other devices. As shown in fig. 7, the gesture control apparatus may further include a storage module 72, and the storage module 72 is used for storing program codes and data of the gesture control apparatus.
The processing module 70 may be a processor or a controller, for example, a Central Processing Unit (CPU), a general-purpose processor, a Digital Signal Processor (DSP), an ASIC, an FPGA or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with this disclosure. The processor may also be a combination implementing computing functions, e.g., a combination of one or more microprocessors, or a combination of a DSP and a microprocessor. The communication module 71 may be a transceiver, an RF circuit, a communication interface, etc. The storage module 72 may be a memory.
All relevant contents of each scene related to the method embodiment may be referred to the functional description of the corresponding functional module, and are not described herein again. The gesture control apparatus can perform the steps performed by the terminal in the gesture control method shown in fig. 3A.
The embodiment of the present application further provides a chip, where the chip includes a processor, configured to call and run a computer program from a memory, so that a device in which the chip is installed performs some or all of the steps described in the terminal in the above method embodiment.
The embodiment of the present application further provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program makes a computer perform some or all of the steps described in the terminal in the above method embodiment.
The embodiment of the present application further provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program causes a computer to perform some or all of the steps described in the above method embodiment for a network-side device.
The present application further provides a computer program product, where the computer program product includes a computer program operable to make a computer perform some or all of the steps described in the terminal in the above method embodiments. The computer program product may be a software installation package.
The steps of a method or algorithm described in the embodiments of the present application may be implemented in hardware, or by a processor executing software instructions. The software instructions may consist of corresponding software modules, which may be stored in Random Access Memory (RAM), flash memory, Read Only Memory (ROM), Erasable Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), registers, a hard disk, a removable disk, a compact disc Read-Only Memory (CD-ROM), or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an ASIC. Additionally, the ASIC may reside in an access network device, a target network device, or a core network device. Of course, the processor and the storage medium may also reside as discrete components in an access network device, a target network device, or a core network device.
Those skilled in the art will appreciate that, in one or more of the examples described above, the functionality described in the embodiments of the present application may be implemented, in whole or in part, by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via a wired link (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or a wireless link (e.g., infrared, radio, microwave). The computer-readable storage medium can be any available medium accessible by a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a Digital Video Disc (DVD)), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the embodiments of the present application in further detail, and it should be understood that the above-mentioned embodiments are only specific embodiments of the present application, and are not intended to limit the scope of the embodiments of the present application, and any modifications, equivalent substitutions, improvements and the like made on the basis of the technical solutions of the embodiments of the present application should be included in the scope of the embodiments of the present application.

Claims (16)

1. A gesture control method, comprising:
judging whether the first indication domain is in an initialization state or not;
if the first indication domain is in the initialization state, setting the first indication domain according to the collected control action;
if the first indication domain is not in the initialization state, determining a gesture detection result of the gesture image acquired this time, and judging whether the gesture detection result is no gesture, an invalid gesture, or a valid gesture, the gesture image acquired this time being the most recently acquired gesture image;
if the gesture detection result is an effective gesture, judging whether the gesture detection result is a target gesture;
if the gesture detection result is not the target gesture, setting a second indication domain according to the collected gesture action B1;
if the gesture action B1 and the control action are detected to meet the preset association relationship, determining a target operation instruction according to the first indication domain and the second indication domain, executing the target operation instruction, only resetting the second indication domain, and returning to execute the step of judging whether the first indication domain is in the initialization state;
if the gesture detection result is the target gesture, resetting the first indication domain and the second indication domain, and returning to execute the step of judging whether the first indication domain is in the initialization state.
2. The method of claim 1, wherein the active gesture comprises the target gesture and a gesture other than the target gesture;
the gesture action B1 is an air gesture action.
3. The method of claim 2, wherein the control action comprises a gesture action, and wherein the first indication field is associated with a gesture action A1 of the user.
4. The method of claim 3, wherein if the gesture detection result is the target gesture, the method further comprises:
the gesture action queue B2 is reset.
5. The method according to any one of claims 1-4, wherein if the gesture detection result is the target gesture, the method further comprises:
and displaying prompt information for indicating that action recognition is to be performed again.
6. The method according to any one of claims 1-4, wherein after setting the first indication field according to the collected control action, the method further comprises:
and displaying control information and/or instruction information associated with the control action, wherein the control information refers to visualized information of the control action, and the instruction information refers to visualized information of a reference operation instruction associated with the control action.
7. The method according to claim 6, wherein when the gesture detection result is determined to be the target gesture, the method further comprises:
hiding the displayed control information and/or instruction information associated with the control action; or,
hiding the displayed gesture information and/or instruction information associated with the gesture action A1, and displaying prompt information for indicating that action recognition is to be performed again.
8. The method according to any one of claims 1-7, wherein the setting the first indication field according to the collected control action comprises:
judging whether the gesture image acquired this time is no gesture, an invalid gesture, or a valid gesture, the gesture image acquired this time being the most recently acquired gesture image;
if the gesture image is an effective gesture, adding the gesture detection result of the gesture image into a gesture action queue A2, and judging whether M continuous and same gesture detection results exist in the gesture action queue A2, wherein M is a positive integer;
if there are M consecutive and identical gesture detection results in the gesture action queue a2, the first indication field is set according to the M consecutive and identical gesture detection results, and only the gesture action queue a2 is reset.
9. A gesture control method, comprising:
displaying first page content on a current interface of a screen of local equipment;
judging whether the gesture A1 of the user is detected;
if the gesture action A1 is not detected, acquiring the gesture action A1, and setting a first indication field according to the acquired gesture action A1;
if the gesture motion A1 is detected, judging whether a target gesture is detected or not in the process of acquiring a gesture motion B1;
if the target gesture is not detected, setting a second indication field according to the collected gesture action B1;
if the gesture action B1 and the gesture action A1 are detected to meet the preset association relationship, executing preset operation on the first page content according to the gesture action A1 and the gesture action B1, only resetting the gesture action B1, and returning to the step of judging whether the gesture action A1 of the user is detected;
if the target gesture is detected, resetting the gesture motion A1 and the gesture motion B1, displaying prompt information for indicating that gesture motion recognition is carried out again, and returning to the step of judging whether the gesture motion A1 of the user is detected.
10. The method of claim 9, wherein after the capturing the gesture action A1 and before the setting the first indication field according to the captured gesture action A1, the method further comprises:
displaying control information and/or instruction information associated with the gesture action A1, wherein the control information refers to visualized information of the gesture action A1, and the instruction information refers to visualized information of a reference operation instruction associated with the gesture action A1.
11. The method of claim 10, wherein prior to displaying the prompt indicating to resume motion recognition, the method further comprises:
and hiding the displayed control information and/or instruction information associated with the gesture action A1.
12. The method according to any one of claims 9 to 11, wherein the prompt message for indicating that gesture motion recognition is performed again comprises any one of the following:
text, image, animation.
13. A gesture control device is characterized by comprising
The judging unit is used for judging whether the first indication domain is in an initialization state or not;
the first setting unit is used for setting the first indication domain according to the acquired control action if the first indication domain is in the initialization state;
a second setting unit for performing the following operations:
if the first indication domain is not in the initialization state, determining a gesture detection result of the gesture image acquired this time, and judging whether the gesture detection result is no gesture, an invalid gesture, or a valid gesture, the gesture image acquired this time being the most recently acquired gesture image;
if the gesture detection result is an effective gesture, judging whether the gesture detection result is a target gesture;
if the gesture detection result is not the target gesture, setting a second indication domain according to the collected gesture action B1;
if the gesture action B1 and the control action are detected to meet the preset association relationship, determining a target operation instruction according to the first indication domain and the second indication domain, executing the target operation instruction, only resetting the second indication domain, and returning to execute the step of judging whether the first indication domain is in the initialization state;
if the gesture detection result is the target gesture, resetting the first indication domain and the second indication domain, and returning to execute the step of judging whether the first indication domain is in the initialization state.
14. A gesture control device is characterized by comprising
The display unit is used for displaying first page content on a current interface of a screen of the home terminal equipment;
a judging unit, configured to judge whether a gesture action a1 of a user is detected;
the first setting unit is used for setting the first indication field according to the collected gesture A1 if the gesture A1 is not detected;
a second setting unit for performing the following operations:
if the gesture motion A1 is detected, judging whether a target gesture is detected or not in the process of acquiring a gesture motion B1;
if the target gesture is not detected, setting a second indication field according to the collected gesture action B1;
if the gesture action B1 and the gesture action A1 are detected to meet the preset association relationship, executing preset operation on the first page content according to the gesture action A1 and the gesture action B1, only resetting the gesture action B1, and returning to the step of judging whether the gesture action A1 of the user is detected;
if the target gesture is detected, resetting the gesture motion A1 and the gesture motion B1, displaying prompt information for indicating that gesture motion recognition is carried out again, and returning to the step of judging whether the gesture motion A1 of the user is detected.
15. A terminal comprising a processor, memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of any of claims 1-12.
16. A computer-readable storage medium, characterized in that a computer program for electronic data exchange is stored, wherein the computer program causes a computer to perform the method according to any one of claims 1-12.
CN202010809762.1A 2020-08-12 2020-08-12 Gesture control method and related device Pending CN111813321A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010809762.1A CN111813321A (en) 2020-08-12 2020-08-12 Gesture control method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010809762.1A CN111813321A (en) 2020-08-12 2020-08-12 Gesture control method and related device

Publications (1)

Publication Number Publication Date
CN111813321A true CN111813321A (en) 2020-10-23

Family

ID=72860410

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010809762.1A Pending CN111813321A (en) 2020-08-12 2020-08-12 Gesture control method and related device

Country Status (1)

Country Link
CN (1) CN111813321A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113696850A (en) * 2021-08-27 2021-11-26 上海仙塔智能科技有限公司 Vehicle control method and device based on gestures and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102221891A (en) * 2011-07-13 2011-10-19 广州视源电子科技有限公司 Method and system for realizing optical image gesture recognition
EP3130998A1 (en) * 2015-08-11 2017-02-15 Advanced Digital Broadcast S.A. A method and a system for controlling a touch screen user interface
CN106485132A (en) * 2016-09-30 2017-03-08 上海林果实业股份有限公司 A kind of Password Input detection method and terminal
US20190114044A1 (en) * 2015-11-17 2019-04-18 Samsung Electronics Co., Ltd. Touch input method through edge screen, and electronic device
CN110825296A (en) * 2019-11-07 2020-02-21 深圳传音控股股份有限公司 Application control method, device and computer readable storage medium
CN111078099A (en) * 2019-05-29 2020-04-28 广东小天才科技有限公司 Learning function switching method based on gesture recognition and learning equipment
CN111158467A (en) * 2019-12-12 2020-05-15 青岛小鸟看看科技有限公司 Gesture interaction method and terminal


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination