CN112965602A - Gesture-based human-computer interaction method and device - Google Patents

Gesture-based human-computer interaction method and device

Info

Publication number
CN112965602A
Authority
CN
China
Prior art keywords
human body
gesture
body image
frame
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110303009.XA
Other languages
Chinese (zh)
Inventor
李海斌
胡勇兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Huixian Intelligent Technology Co ltd
Original Assignee
Suzhou Huixian Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Huixian Intelligent Technology Co ltd
Priority to CN202110303009.XA
Publication of CN112965602A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 - Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/107 - Static hand or arm
    • G06V 40/113 - Recognition of static hand signs
    • G06V 40/20 - Movements or behaviour, e.g. gesture recognition
    • G06V 40/28 - Recognition of hand or arm movements, e.g. recognition of deaf sign language

Abstract

The method comprises: acquiring at least two frames of human body images captured by a camera device within a time period; performing gesture recognition on each frame of human body image to obtain the gesture corresponding to each frame and its position in the human body image; determining a gesture moving direction according to the gesture corresponding to each frame and its position in the human body image; determining a corresponding control instruction according to the gesture moving direction and the gesture corresponding to each frame; and controlling an electronic screen to perform the corresponding screen operation based on the control instruction. Simple and effective screen operations can thus be completed through simple gestures alone, realizing remote control of the screen and improving user experience and operability.

Description

Gesture-based human-computer interaction method and device
Technical Field
The application relates to the technical field of computer vision processing, in particular to a human-computer interaction method and device based on gestures.
Background
In the prior art, electronic whiteboards are widely used in classrooms and meeting rooms, but their human-computer interaction means are limited to electronic pens and touch screens. A touch screen supports only contact operation, so the operator must stand within arm's reach of the screen; an electronic pen can be operated at a distance but must be held in the hand and passed from operator to operator. In either case, someone must move around the classroom or meeting room to complete an operation, which is inconvenient.
Disclosure of Invention
An object of the present application is to provide a gesture-based human-computer interaction method and device with which simple and effective screen operations can be completed through simple gestures alone, realizing remote control of the screen and improving user experience and operability.
According to one aspect of the application, a gesture-based human-computer interaction method is provided, wherein the method comprises the following steps:
acquiring at least two frames of human body images captured by a camera device within a time period;
respectively carrying out gesture recognition on each frame of human body image to obtain a gesture corresponding to each frame of human body image and a position of the gesture in the human body image;
determining a gesture moving direction according to the gesture corresponding to each frame of the human body image and the position of the gesture in the human body image;
determining a corresponding control instruction according to the gesture moving direction and the gesture corresponding to each frame of the human body image;
and controlling the electronic screen to perform corresponding screen operation based on the control instruction.
Further, in the above method, the performing gesture recognition on each frame of the human body image to obtain the gesture corresponding to each frame of the human body image and the position of the gesture in the human body image includes:
respectively carrying out hand detection on each frame of human body image to obtain a hand region image corresponding to each frame of human body image;
and respectively carrying out gesture recognition on the hand region image corresponding to each frame of human body image to obtain the gesture corresponding to each frame of human body image and the position of the gesture in the human body image.
Further, in the above method, the performing hand detection on each frame of the human body image to obtain a hand region image corresponding to each frame of the human body image includes:
and respectively performing hand detection on each frame of human body image based on an algorithm combining Haar features and contour features to obtain a hand region image corresponding to each frame of human body image.
Further, in the above method, the method further includes:
training and determining a classifier for gesture recognition;
the gesture recognition is respectively performed on the hand region images corresponding to each frame of the human body image to obtain the gestures corresponding to each frame of the human body image and the positions of the gestures in the human body image, and the gesture recognition method comprises the following steps:
and respectively inputting the hand area image corresponding to each frame of human body image into the classifier for gesture recognition to obtain the gesture corresponding to each frame of human body image and the position of the gesture in the human body image.
Further, in the above method, the method further includes:
presetting at least one gesture and a mapping relation between the moving direction of the gesture and a preset control instruction;
wherein, the determining the corresponding control instruction according to the gesture moving direction and the gesture corresponding to each frame of the human body image comprises:
and according to the gesture moving direction and the gesture corresponding to each frame of the human body image, performing instruction matching in the mapping relation to obtain a corresponding control instruction.
According to another aspect of the present application, there is also provided a non-volatile storage medium having computer-readable instructions stored thereon, which, when executed by a processor, cause the processor to implement the gesture-based human-machine interaction method as described above.
According to another aspect of the present application, there is also provided a gesture-based human-computer interaction device, wherein the device includes:
one or more processors;
a computer-readable medium storing one or more computer-readable instructions which, when executed by the one or more processors, cause the one or more processors to implement the gesture-based human-computer interaction method as described above.
Compared with the prior art, the present application acquires at least two frames of human body images captured by the camera device within a time period; performs gesture recognition on each frame of human body image to obtain the gesture corresponding to each frame and its position in the human body image; determines a gesture moving direction according to the gesture corresponding to each frame and its position; determines a corresponding control instruction according to the gesture moving direction and the gesture corresponding to each frame; and controls the electronic screen to perform the corresponding screen operation based on the control instruction. Simple and effective screen operations can thus be completed through simple gestures alone, realizing remote control of the screen and improving user experience and operability.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 illustrates a flow diagram of a gesture-based human-machine interaction method in accordance with an aspect of the subject application.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
The present application is described in further detail below with reference to the attached figures.
In a typical configuration of the present application, the terminal, the device serving the network, and the trusted party each include one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media does not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.
Fig. 1 is a schematic flowchart illustrating a gesture-based human-computer interaction method according to an aspect of the present application, where the method includes steps S11, S12, S13, S14, and S15, and specifically includes the following steps:
step S11, acquiring at least two frames of human body images in a time period shot by the camera equipment; here, the image pickup apparatus includes, but is not limited to, a camera, a video camera, and the like that can take a picture or pick up an image; the at least two frames of human body images are continuous human body images acquired in the time period, so that the continuous human body images can be conveniently analyzed subsequently to obtain data such as gesture movement reversal of the human body.
Step S12, performing gesture recognition on each frame of human body image respectively to obtain the gesture corresponding to each frame and its position in the human body image. Because the human body may move, each frame of human body image corresponds to one gesture after analysis; the gestures across frames may be the same or different, or the same gesture may appear in different moving forms. Therefore, to analyze the gestures better, the position of each recognized gesture in its current human body image must be determined at the same time as the gesture itself is detected.
Step S13, determining the gesture moving direction according to the gesture corresponding to each frame of human body image and its position in the human body image. Because the user's gesture movement is continuous, the gesture movement track is determined with Kalman filtering from the detected gesture in each frame and its recorded position in the corresponding image, and from this track the gesture movement trend and the gesture moving direction are judged and determined.
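As a non-authoritative illustration of step S13, the sketch below smooths the per-frame gesture positions with a constant-velocity Kalman filter and classifies the dominant moving direction; the state model and the four direction labels are assumptions, since the application does not fix them.

    import numpy as np
    import cv2

    def make_tracker():
        kf = cv2.KalmanFilter(4, 2)  # state (x, y, vx, vy); measurement (x, y)
        kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                        [0, 1, 0, 1],
                                        [0, 0, 1, 0],
                                        [0, 0, 0, 1]], np.float32)
        kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                         [0, 1, 0, 0]], np.float32)
        kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3
        return kf

    def movement_direction(positions):
        """Filter per-frame gesture positions and return the dominant direction."""
        kf = make_tracker()
        track = []
        for x, y in positions:
            kf.predict()
            est = kf.correct(np.array([[x], [y]], np.float32))
            track.append((float(est[0, 0]), float(est[1, 0])))
        dx = track[-1][0] - track[0][0]
        dy = track[-1][1] - track[0][1]
        if abs(dx) >= abs(dy):
            return "right" if dx > 0 else "left"
        return "down" if dy > 0 else "up"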
Step S14, determining a corresponding control instruction according to the gesture moving direction and the gesture corresponding to each frame of the human body image.
Step S15, controlling the electronic screen to perform the corresponding screen operation based on the control instruction, wherein the screen operations include, but are not limited to, PPT control, page turning (up, down, left and right), confirming, returning, music control, and screen operations of exhibition and display software.
Through steps S11 to S15, simple and effective screen operations can be completed through simple gestures alone, realizing remote control of the screen and improving user experience and operability.
Following the above embodiment of the present application, step S12 of performing gesture recognition on each frame of human body image to obtain the gesture corresponding to each frame and its position in the human body image specifically includes:
respectively carrying out hand detection on each frame of human body image to obtain a hand region image corresponding to each frame of human body image;
and respectively carrying out gesture recognition on the hand region image corresponding to each frame of human body image to obtain the gesture corresponding to each frame of human body image and the position of the gesture in the human body image.
For example, when analyzing the human body images, hand detection is performed on each frame of human body image acquired consecutively within the time period. Because the user's hand moves while operating the screen, the motion regions in the human body image can be separated by object motion detection, from which the hand region image corresponding to each frame can be derived, thereby realizing hand detection and region determination within the human body image. Then, because the user moves, the gesture in each frame's hand region image may occupy a different position; after the hand region images are separated, gesture recognition is performed on the hand region image of each frame to obtain the gesture corresponding to each frame and its position in the current human body image. This realizes the detection of the gesture in each frame and the determination of its position, in preparation for the subsequent analysis of the gesture movement trend.
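A minimal sketch of the object motion detection mentioned above, assuming OpenCV and simple frame differencing; the threshold and minimum area are illustrative choices, not values from the application.

    import cv2

    def motion_regions(prev_frame, frame, min_area=500):
        """Separate moving regions (e.g. a moving hand) between two frames."""
        g1 = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
        g2 = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(g1, g2)                      # pixel-wise change
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        mask = cv2.dilate(mask, None, iterations=2)     # close small gaps
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        return [cv2.boundingRect(c) for c in contours
                if cv2.contourArea(c) >= min_area]      # (x, y, w, h) boxes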
Following the above embodiment of the present application, performing hand detection on each frame of human body image to obtain the hand region image corresponding to each frame specifically includes:
and respectively performing hand detection on each frame of human body image based on an algorithm combining Haar features and contour features to obtain a hand region image corresponding to each frame of human body image. For example, after at least two frames of human body images in the time period are acquired from the camera equipment, the motion areas in the human body images are separated by using object movement detection, and the hand detection is respectively carried out on the motion areas in each frame of human body images based on an algorithm combining Haar features and contour features, so that the hand area images corresponding to each frame of human body images are detected, the influence of ambient light can be effectively reduced, the identification accuracy and the identification effect are ensured, and the robustness is good.
Following the foregoing embodiments of the present application, the method for gesture-based human-computer interaction according to an aspect of the present application further includes:
training and determining a classifier for gesture recognition;
in step S13, gesture recognition is performed on each frame of hand region image corresponding to the human body image, so as to obtain a gesture corresponding to each frame of the human body image and a position of the gesture in the human body image, which specifically includes:
and respectively inputting the hand area image corresponding to each frame of human body image into the classifier for gesture recognition to obtain the gesture corresponding to each frame of human body image and the position of the gesture in the human body image.
For example, before gesture recognition is performed, in order to facilitate recognition and classification of gestures, this embodiment of the application acquires consecutive multi-frame human body sample images within several preset time periods as an image training set, and performs model training on that set to determine the classifier for gesture recognition. The classifier is a cascade classifier: the first several stages of the cascade are obtained by training with the Adaboost algorithm; then contour matching with the Hausdorff distance is used to generate weak classifiers; finally, a strong classifier obtained with the Adaboost algorithm serves as the last stage of the cascade, completing its training, so that the gesture type can subsequently be detected by the cascade classifier. In an actual application scenario, when gesture recognition and classification need to be performed in step S12 on the hand region images corresponding to each frame acquired within the time period, the hand region image of each frame is input into the classifier; the classifier outputs the gesture corresponding to each frame and its position in the human body image, thereby realizing gesture recognition and determination for every acquired frame.
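The contour matching by Hausdorff distance used for the weak classifiers can be sketched as follows; the template dictionary is a hypothetical stand-in for the trained gesture templates, and contours are assumed to be (N, 2) float arrays of points.

    import numpy as np

    def directed_hausdorff(a, b):
        """Largest distance from a point of a to its nearest point of b."""
        # a, b: (N, 2) and (M, 2) float arrays of contour points
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # (N, M)
        return d.min(axis=1).max()

    def hausdorff_distance(a, b):
        return max(directed_hausdorff(a, b), directed_hausdorff(b, a))

    def match_gesture(contour, templates):
        """Pick the template gesture nearest to the hand contour."""
        best, best_d = None, float("inf")
        for name, tmpl in templates.items():
            d = hausdorff_distance(contour, tmpl)
            if d < best_d:
                best, best_d = name, d
        return best, best_d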
Following the foregoing embodiment of the present application, a gesture-based human-computer interaction method according to an aspect of the present application further includes:
presetting at least one gesture and a mapping relation between the moving direction of the gesture and a preset control instruction;
wherein, the step S14 determines a corresponding control instruction according to the gesture moving direction and the gesture corresponding to each frame of the human body image, and specifically includes:
and according to the gesture moving direction and the gesture corresponding to each frame of the human body image, performing instruction matching in the mapping relation to obtain a corresponding control instruction.
For example, to ensure that a gesture action performed by the user can trigger a control instruction for operating the screen, a mapping relation between one or more gestures together with their moving directions and the corresponding preset control instructions must be established in advance. In a subsequent actual application scenario, once the real-time gesture and its moving direction have been analyzed, the corresponding preset control instruction can be matched from the mapping relation. In step S14, instruction matching is performed in the mapping relation according to the analyzed gesture moving direction and the gesture corresponding to each frame of human body image, yielding the control instruction that corresponds to them among all preset instructions in the mapping relation. Based on that control instruction, touch actions of the system are simulated to control the electronic screen, so that the screen performs the corresponding screen operation. This not only completes contactless control and operation of the screen but also achieves the purpose of human-computer interaction and improves user experience.
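A minimal sketch of the mapping relation and instruction matching of step S14; the gesture names, directions, and instruction strings are illustrative assumptions, as the application leaves the concrete mapping to the implementer.

    # Preset mapping: (gesture, moving direction) -> control instruction.
    COMMAND_MAP = {
        ("open_palm", "left"):  "page_previous",
        ("open_palm", "right"): "page_next",
        ("fist", "up"):         "volume_up",
        ("fist", "down"):       "volume_down",
    }

    def match_instruction(gesture, direction):
        """Instruction matching in the mapping relation (None if no entry)."""
        return COMMAND_MAP.get((gesture, direction))

The matched instruction can then drive the electronic screen, for example by simulating a key press with a library such as pyautogui (pyautogui.press("right")).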
In all embodiments of the application, gesture recognition rests on three main features: the gesture (hand shape), the gesture moving direction, and the motion track. The overall gesture recognition process is as follows. First, human body images containing gestures are collected, and the collected images are preprocessed. Next, features are extracted and selected from the preprocessed images, and the selected features are used to design and train the classifier. Gestures exhibit rich deformation, movement, and texture characteristics, so selecting reasonable features is crucial to gesture recognition; typical gesture features include contours, edges, image moments, image feature vectors, and region histogram features. Finally, gesture recognition is performed with the classifier on human body images from the actual application scene to obtain the corresponding recognition results. The recognition process can be decomposed into: hand tracking and positioning, gesture segmentation, hand feature extraction and preprocessing, gesture feature vector parameters, and parameter training to generate the recognition model. Once the trained model is obtained, images from the actual application scene are input into the model for gesture recognition and a recognition result is obtained, completing gesture recognition for the application scene.
According to another aspect of the present application, there is also provided a non-volatile storage medium having computer-readable instructions stored thereon, which, when executed by a processor, cause the processor to implement the gesture-based human-machine interaction method as described above.
According to another aspect of the application, a gesture-based human-computer interaction device is also provided, wherein the device comprises:
one or more processors;
a computer-readable medium storing one or more computer-readable instructions which, when executed by the one or more processors, cause the one or more processors to implement the gesture-based human-computer interaction method as described above.
Here, for details of each embodiment of the gesture-based human-computer interaction device, reference may be made to corresponding parts of the embodiment of the gesture-based human-computer interaction method, and details are not described here again.
In summary, the present application acquires at least two frames of human body images captured by the camera device within a time period; performs gesture recognition on each frame of human body image to obtain the gesture corresponding to each frame and its position in the human body image; determines a gesture moving direction according to the gesture corresponding to each frame and its position; determines a corresponding control instruction according to the gesture moving direction and the gesture corresponding to each frame; and controls the electronic screen to perform the corresponding screen operation based on the control instruction. Simple and effective screen operations are thus completed through simple gestures alone, realizing remote control of the screen and improving user experience and operability.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example, implemented using Application Specific Integrated Circuits (ASICs), general purpose computers or any other similar hardware devices. In one embodiment, the software programs of the present application may be executed by a processor to implement the steps or functions described above. Likewise, the software programs (including associated data structures) of the present application may be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. Additionally, some of the steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
In addition, some of the present application may be implemented as a computer program product, such as computer program instructions, which when executed by a computer, may invoke or provide methods and/or techniques in accordance with the present application through the operation of the computer. Program instructions which invoke the methods of the present application may be stored on a fixed or removable recording medium and/or transmitted via a data stream on a broadcast or other signal-bearing medium and/or stored within a working memory of a computer device operating in accordance with the program instructions. An embodiment according to the present application comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform a method and/or a solution according to the aforementioned embodiments of the present application.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.

Claims (7)

1. A gesture-based human-computer interaction method, wherein the method comprises the following steps:
acquiring at least two frames of human body images captured by a camera device within a time period;
respectively carrying out gesture recognition on each frame of human body image to obtain a gesture corresponding to each frame of human body image and a position of the gesture in the human body image;
determining a gesture moving direction according to the gesture corresponding to each frame of the human body image and the position of the gesture in the human body image;
determining a corresponding control instruction according to the gesture moving direction and the gesture corresponding to each frame of the human body image;
and controlling the electronic screen to perform corresponding screen operation based on the control instruction.
2. The method according to claim 1, wherein the performing gesture recognition on each frame of the human body image respectively to obtain the gesture corresponding to each frame of the human body image and the position of the gesture in the human body image comprises:
respectively carrying out hand detection on each frame of human body image to obtain a hand region image corresponding to each frame of human body image;
and respectively carrying out gesture recognition on the hand region image corresponding to each frame of human body image to obtain the gesture corresponding to each frame of human body image and the position of the gesture in the human body image.
3. The method of claim 2, wherein the performing hand detection on each frame of the human body image to obtain a hand region image corresponding to each frame of the human body image comprises:
and respectively performing hand detection on each frame of human body image based on an algorithm combining Haar features and contour features to obtain a hand region image corresponding to each frame of human body image.
4. The method of claim 2, wherein the method further comprises:
training and determining a classifier for gesture recognition;
the gesture recognition is respectively performed on the hand region images corresponding to each frame of the human body image to obtain the gestures corresponding to each frame of the human body image and the positions of the gestures in the human body image, and the gesture recognition method comprises the following steps:
and respectively inputting the hand area image corresponding to each frame of human body image into the classifier for gesture recognition to obtain the gesture corresponding to each frame of human body image and the position of the gesture in the human body image.
5. The method of any of claims 1-4, wherein the method further comprises:
presetting at least one gesture and a mapping relation between the moving direction of the gesture and a preset control instruction;
wherein, the determining the corresponding control instruction according to the gesture moving direction and the gesture corresponding to each frame of the human body image comprises:
and according to the gesture moving direction and the gesture corresponding to each frame of the human body image, performing instruction matching in the mapping relation to obtain a corresponding control instruction.
6. A non-transitory storage medium having stored thereon computer readable instructions which, when executed by a processor, cause the processor to implement the method of any one of claims 1 to 5.
7. A gesture-based human-computer interaction device, wherein the device comprises:
one or more processors;
a computer-readable medium storing one or more computer-readable instructions which, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-5.
CN202110303009.XA 2021-03-22 2021-03-22 Gesture-based human-computer interaction method and device Pending CN112965602A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110303009.XA CN112965602A (en) 2021-03-22 2021-03-22 Gesture-based human-computer interaction method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110303009.XA CN112965602A (en) 2021-03-22 2021-03-22 Gesture-based human-computer interaction method and device

Publications (1)

Publication Number Publication Date
CN112965602A (en) 2021-06-15

Family

ID=76279467

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110303009.XA Pending CN112965602A (en) 2021-03-22 2021-03-22 Gesture-based human-computer interaction method and device

Country Status (1)

Country Link
CN (1) CN112965602A (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111062312A (en) * 2019-12-13 2020-04-24 RealMe重庆移动通信有限公司 Gesture recognition method, gesture control method, device, medium and terminal device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023082727A1 (en) * 2021-11-10 2023-05-19 Huawei Technologies Co., Ltd. Methods and systems of display edge interactions in gesture-controlled device
US11693483B2 (en) 2021-11-10 2023-07-04 Huawei Technologies Co., Ltd. Methods and systems of display edge interactions in a gesture-controlled device

Similar Documents

Publication Publication Date Title
US11354825B2 (en) Method, apparatus for generating special effect based on face, and electronic device
WO2020073860A1 (en) Video cropping method and device
Chen et al. Repetitive assembly action recognition based on object detection and pose estimation
US9418280B2 (en) Image segmentation method and image segmentation device
CN104350509B (en) Quick attitude detector
CN111488791A (en) On-device classification of fingertip movement patterns as gestures in real time
KR20200111617A (en) Gesture recognition method, device, electronic device, and storage medium
KR101929077B1 (en) Image identificaiton method and image identification device
CN108171133B (en) Dynamic gesture recognition method based on characteristic covariance matrix
Badi Recent methods in vision-based hand gesture recognition
Schneider et al. Gesture recognition in RGB videos using human body keypoints and dynamic time warping
Baig et al. Text writing in the air
CN110858277A (en) Method and device for obtaining attitude classification model
Rehman et al. Face detection and tracking using hybrid margin-based ROI techniques
CN114549557A (en) Portrait segmentation network training method, device, equipment and medium
CN113128368B (en) Method, device and system for detecting character interaction relationship
Jetley et al. 3D activity recognition using motion history and binary shape templates
CN112965602A (en) Gesture-based human-computer interaction method and device
US20220207917A1 (en) Facial expression image processing method and apparatus, and electronic device
CN111651038A (en) Gesture recognition control method based on ToF and control system thereof
CN110825218A (en) System and device for performing gesture detection
Ji et al. Design of human machine interactive system based on hand gesture recognition
CN109725722B (en) Gesture control method and device for screen equipment
CN113192127A (en) Image processing method and device, electronic equipment and storage medium
Singh et al. Volume Control using Gestures

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210615