CN111651038A - Gesture recognition control method based on ToF and control system thereof - Google Patents

Gesture recognition control method based on ToF and control system thereof

Info

Publication number
CN111651038A
Authority
CN
China
Prior art keywords
gesture
depth image
tof
hand
information
Prior art date
Legal status
Pending
Application number
CN202010406590.3A
Other languages
Chinese (zh)
Inventor
谢永明
Current Assignee
Hong Kong Shinning Cloud Technology Co ltd
Original Assignee
Hong Kong Shinning Cloud Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hong Kong Shinning Cloud Technology Co ltd filed Critical Hong Kong Shinning Cloud Technology Co ltd
Priority to CN202010406590.3A
Publication of CN111651038A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/107: Static hand or arm
    • G06V 40/113: Recognition of static hand signs
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20: Movements or behaviour, e.g. gesture recognition
    • G06V 40/28: Recognition of hand or arm movements, e.g. recognition of deaf sign language

Abstract

The invention provides a ToF-based gesture recognition control method, which comprises the following steps: S1, acquiring in real time a scene depth image captured by a ToF module, and processing the acquired scene depth images in real time into a depth image frame sequence; S2, determining whether the scene depth image contains a hand model; if so, executing the next step, otherwise returning to step S1; S3, extracting a gesture feature subsequence from the depth image frame sequence collected in step S1 using a sliding-window method; and S4, comparing the extracted gesture feature subsequence with the gesture information in a gesture library to identify its control information and carry out the corresponding control. By exploiting the ToF camera's acquisition rate of hundreds to thousands of frames per second, the method captures fine hand interactions, so that fast hand motions are detected and segmented and the accuracy of gesture recognition is improved. The invention also provides a ToF-based gesture recognition control system.

Description

Gesture recognition control method based on ToF and control system thereof
Technical Field
The invention relates to the field of human-computer interaction, in particular to a gesture recognition control method and a control system based on ToF.
Background
Virtual reality and augmented reality devices need to operate in a real 3D environment. For these applications and devices, gesture-based human-computer interaction is undoubtedly the most direct and effective interactive tool. Simple gestures such as swipe, click and confirm have already been deployed in some hardware systems, but recognizing "fine" gesture actions is more difficult than recognizing these simple "coarse" gestures, even though such "fine" gestures are often the more natural mode of human-computer interaction.
Depth sensors such as Microsoft's Kinect are now increasingly used in the field of computer vision. As depth sensor technology has matured, many sensors have been applied to the recognition and control of large joints (whole-body movement) with good results, but gesture recognition of small, fine joints has not yet achieved comparably good results.
Therefore, existing human-computer interaction methods for gesture recognition need to be improved to meet the application requirements of human-computer interaction.
Disclosure of Invention
The invention aims to provide a ToF-based gesture recognition control method and control system that improve the accuracy and speed of gesture recognition and thereby meet the application requirements of human-computer interaction.
In order to achieve the above object, the present invention provides a ToF-based gesture recognition control method comprising the steps of: S1, acquiring in real time a scene depth image captured by a ToF module, and processing the acquired scene depth images in real time into a depth image frame sequence; S2, determining whether the scene depth image contains a hand model; if so, executing the next step, otherwise returning to step S1; S3, extracting a gesture feature subsequence from the depth image frame sequence collected in step S1 using a sliding-window method; and S4, comparing the extracted gesture feature subsequence with the gesture information in a gesture library to identify its control information and carry out the corresponding control.
Compared with the prior art, the ToF-based gesture recognition control method provided by the invention collects in real time the scene depth images captured by the ToF module and processes them in real time into a depth image frame sequence; when the sequence is judged to contain a hand model, a gesture feature subsequence is extracted from it by a sliding-window method and compared with the gesture information in a gesture library to identify its control information. Based on the ToF camera's acquisition rate of hundreds to thousands of frames per second, the method captures fine hand interactions, so that fast hand motions are detected and segmented; static (single-frame) and dynamic (multi-frame) gestures are then recognized with a machine learning model. This effectively overcomes problems of 2D gesture recognition such as occlusion and front/back-of-hand confusion, and improves both the accuracy and the speed of gesture recognition.
Preferably, the step S2 of "determining whether the scene depth image contains a hand model" comprises: according to the acquired scene depth image captured by the ToF module, segmenting objects based on the contour information in the scene depth image, and comparing the shape of each segmented object with a hand skeleton model to determine whether a hand model exists in the scene.
Specifically, the hand skeleton model is a pre-stored hand skeleton model with 21 joints.
Preferably, step S3 specifically comprises: defining the size of a sliding window that slides forward one frame at a time over the depth image frame sequence to extract gesture feature subsequences.
Specifically, the size of the sliding window is defined as follows: a fast Fourier transform algorithm is applied to estimate the duration of each gesture execution. Since each sequence contains multiple repetitions of the same gesture, the feature sequence can be approximated as a periodic signal; the fast Fourier transform is applied, the position of the fundamental harmonic component is located, and the period is estimated as the reciprocal of the peak frequency. The estimated period is then used to define the size of the sliding window.
Preferably, step S4 specifically comprises: comparing several consecutive gesture feature subsequences each with the gesture information in the gesture library, performing majority ("maximum scheme") voting on the resulting recognition results, and taking the label with the most votes as the final recognition result.
Preferably, the gesture information stored in the gesture library comprises pre-stored gesture information and a training model generated by learning from a gesture set with depth information using a multilayer convolutional neural network.
In order to achieve the above object, the present invention also provides a ToF-based gesture recognition control system comprising an acquisition and depth calculation module, a hand recognition module, a hand extraction module and a gesture recognition module. The acquisition and depth calculation module acquires in real time the scene depth images captured by a ToF module and processes them in real time into a depth image frame sequence; the hand recognition module determines whether the scene depth image contains a hand model; the hand extraction module extracts a gesture feature subsequence from the depth image frame sequence using a sliding-window method; and the gesture recognition module compares the extracted gesture feature subsequence with the gesture information in a gesture library to identify its control information and carry out the corresponding control.
Compared with the prior art, the ToF-based gesture recognition control system comprises an acquisition and depth calculation module, a hand recognition module, a hand extraction module and a gesture recognition module. Working together, these modules acquire and recognize the scene depth images collected by the ToF module at a rate of hundreds to thousands of frames per second, capturing fine hand interactions so that fast hand motions are detected and segmented; static (single-frame) and dynamic (multi-frame) gestures are recognized with a machine learning model. This effectively overcomes problems of 2D gesture recognition such as occlusion and front/back-of-hand confusion, and improves both the accuracy and the speed of gesture recognition.
Drawings
Fig. 1 is a schematic view of an application scenario of the ToF-based gesture recognition control method and the ToF-based gesture recognition control system according to the present invention.
Fig. 2 is a flowchart illustrating a ToF-based gesture recognition control method and a control system thereof according to the present invention.
Detailed Description
In order to explain technical contents, structural features, and objects and effects of the present invention in detail, the following detailed description is given with reference to the accompanying drawings in conjunction with the embodiments.
As shown in figs. 1 and 2, the present invention provides a ToF-based gesture recognition control method comprising the steps of: S1, acquiring in real time a scene depth image captured by a ToF module, and processing the acquired scene depth images in real time into a depth image frame sequence; S2, determining whether the scene depth image contains a hand model; if so, executing the next step, otherwise returning to step S1; S3, extracting a gesture feature subsequence from the depth image frame sequence collected in step S1 using a sliding-window method; and S4, comparing the extracted gesture feature subsequence with the gesture information in a gesture library to identify its control information and carry out the corresponding control.
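Purely as an illustration, the S1-S4 loop can be sketched in Python as follows. The `hand_detector` and `classifier` callables are hypothetical stand-ins for the patent's hand recognition and gesture-library matching, not the actual models:

```python
from collections import deque


class GestureRecognizer:
    """Minimal sketch of the S1-S4 loop; detector and classifier are stubs."""

    def __init__(self, window_size, hand_detector, classifier):
        self.window_size = window_size
        self.hand_detector = hand_detector   # stand-in for step S2
        self.classifier = classifier         # stand-in for step S4
        self.frames = deque(maxlen=window_size)

    def on_depth_frame(self, frame):
        # S1: append the newly captured depth frame to the running sequence
        self.frames.append(frame)
        # S2: if no hand model is found, keep collecting (return to S1)
        if not self.hand_detector(frame):
            return None
        # S3: wait until a full sliding window of frames is buffered
        if len(self.frames) < self.window_size:
            return None
        # S4: classify the windowed subsequence against the gesture library
        return self.classifier(list(self.frames))
```

A caller would feed each incoming depth frame to `on_depth_frame` and act on any non-`None` gesture label it returns.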
Compared with the prior art, the ToF-based gesture recognition control method provided by the invention collects in real time the scene depth images captured by the ToF module and processes them in real time into a depth image frame sequence; when the sequence is judged to contain a hand model, a gesture feature subsequence is extracted from it by a sliding-window method and compared with the gesture information in a gesture library to identify its control information. Based on the ToF camera's acquisition rate of hundreds to thousands of frames per second, the method captures fine hand interactions, so that fast hand motions are detected and segmented; static (single-frame) and dynamic (multi-frame) gestures are then recognized with a machine learning model. This effectively overcomes problems of 2D gesture recognition such as occlusion and front/back-of-hand confusion, and improves both the accuracy and the speed of gesture recognition.
As shown in figs. 1 and 2, the present invention also provides a ToF-based gesture recognition control system comprising an acquisition and depth calculation module, a hand recognition module, a hand extraction module and a gesture recognition module. The acquisition and depth calculation module acquires in real time the scene depth images captured by a ToF module and processes them in real time into a depth image frame sequence; the hand recognition module determines whether the scene depth image contains a hand model; the hand extraction module extracts a gesture feature subsequence from the depth image frame sequence using a sliding-window method; and the gesture recognition module compares the extracted gesture feature subsequence with the gesture information in a gesture library to identify its control information and carry out the corresponding control. It can be understood that this system resides in the computer device shown in fig. 1: the scene depth images collected in real time by the acquisition and depth calculation module from the ToF module are further processed by the hand recognition module, the hand extraction module and the gesture recognition module, and the recognized gesture command triggers the corresponding operation.
Compared with the prior art, the ToF-based gesture recognition control system comprises an acquisition and depth calculation module, a hand recognition module, a hand extraction module and a gesture recognition module. Working together, these modules acquire and recognize the scene depth images collected by the ToF module at a rate of hundreds to thousands of frames per second, capturing fine hand interactions so that fast hand motions are detected and segmented; static (single-frame) and dynamic (multi-frame) gestures are recognized with a machine learning model. This effectively overcomes problems of 2D gesture recognition such as occlusion and front/back-of-hand confusion, and improves both the accuracy and the speed of gesture recognition.
The ToF-based gesture recognition control method and control system provided by the invention are described in detail with reference to fig. 1 and 2. The gesture recognition control method based on the ToF comprises the following steps:
s1, acquiring a scene depth image shot by a ToF module in real time, and processing the acquired scene depth image into a depth image frame sequence in real time;
Understandably, compared with an RGB camera, the ToF module collects scene depth images at a rate of hundreds to thousands of frames per second. The acquisition and depth calculation module acquires the scene depth images captured by the ToF module in real time and processes them in real time into a depth image frame sequence, i.e. the hundreds to thousands of scene depth images captured per second are ordered by capture time to form the depth image frame sequence. Because of the ToF module's high-speed acquisition, adjacent frames in the resulting sequence are only milliseconds apart, so a hand motion is recorded completely in the sequence without any loss of the motion record. This complete record of hand motion greatly improves the accuracy of subsequent gesture recognition; moreover, combining the captured 3D depth gesture images with a corresponding training model effectively overcomes problems of 2D gesture recognition such as occlusion and front/back-of-hand confusion.
S2, identifying and judging whether the scene depth image contains a hand model, if so, executing the next step, otherwise, returning to execute the step S1;
Specifically, referring to fig. 2, step S2 performs hand recognition: when the depth image is judged to contain a hand model, the subsequent steps are executed to recognize the meaning of the gesture in the image; if the depth image contains no hand model, execution returns to step S1 to continue collecting the scene depth images captured by the ToF module.
In step S2, "determining whether the scene depth image contains a hand model" specifically comprises: according to the scene depth image captured by the ToF module and acquired in real time, segmenting objects based on the contour information in the scene depth image, and comparing the shape of each segmented object with that of a hand skeleton model to determine whether a hand model exists in the scene.
Further, the hand skeleton model is a hand skeleton model with 21 joints pre-stored in a storage module of the ToF-based gesture recognition control system; each segmented object is compared with this pre-stored 21-joint hand skeleton model to assess similarity and thereby determine whether a hand model exists in the scene.
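As a toy illustration only, the hand-detection idea above can be sketched with a depth-band segmentation and a crude shape gate. The depth band (`near`/`far`), area and aspect thresholds are illustrative assumptions; the patent segments via contour information and compares against the full 21-joint skeleton model instead:

```python
import numpy as np


def segment_by_depth(depth_mm, near=200, far=800):
    """Toy foreground segmentation: keep pixels whose depth falls in a
    plausible hand range (near/far are illustrative values in mm)."""
    return (depth_mm > near) & (depth_mm < far)


def looks_like_hand(mask, min_area=20, max_aspect=3.0):
    """Crude shape gate standing in for the similarity comparison
    against a pre-stored 21-joint hand skeleton model."""
    ys, xs = np.nonzero(mask)
    if ys.size < min_area:          # too few foreground pixels: no hand
        return False
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    # a hand-sized blob should not be extremely elongated
    return max(height, width) / min(height, width) <= max_aspect
```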
S3, extracting a gesture feature subsequence from the depth image frame sequence collected in the step S1 based on a sliding window method;
the step S3 may be understood as extracting a sub-sequence from the sequence of depth image frames at a certain image frame sequence length (i.e. the size of the sliding window), wherein each time the sub-sequence is extracted, the image is slid one frame forward on the sequence of depth image frames compared to the previous time. The length of each extracted gesture feature subsequence is the size of a sliding window, and the images only positioned at the head end and the tail end of the sequence in the two adjacent gesture feature subsequences are different, and other images are the same.
The size of the sliding window needs to be defined, and it should be as reasonable as possible, covering as complete an operation gesture as possible. Specifically, the size of the sliding window is defined as follows: a fast Fourier transform algorithm is applied to estimate the duration of each gesture execution. Since each sequence contains multiple repetitions of the same gesture, the feature sequence can be approximated as a periodic signal; the fast Fourier transform is applied, the position of the fundamental harmonic component is located, and the period is estimated as the reciprocal of the peak frequency. The estimated period is then used to define the size of the sliding window.
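One plausible reading of this FFT step is sketched below: take the spectrum of a roughly periodic feature signal, locate the strongest non-DC (fundamental) peak, and convert its reciprocal frequency into a window size in frames. The exact feature signal and frame rate are assumptions for illustration:

```python
import numpy as np


def estimate_window_size(feature_signal, fps):
    """Estimate the gesture period from a near-periodic feature signal
    and return it as a sliding-window size in frames."""
    signal = np.asarray(feature_signal, dtype=float)
    # magnitude spectrum of the mean-removed signal
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fps)
    # fundamental harmonic: the strongest bin, skipping DC
    peak = int(spectrum[1:].argmax()) + 1
    period_seconds = 1.0 / freqs[peak]   # period = reciprocal of peak frequency
    return int(round(period_seconds * fps))
```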
And S4, comparing the extracted gesture feature subsequence with gesture information in a gesture library to identify control information of the gesture feature subsequence so as to realize corresponding control.
Specifically, several consecutive gesture feature subsequences are each compared with the gesture information in the gesture library, majority ("maximum scheme") voting is performed on the resulting recognition results, and the label with the most votes is taken as the final recognition result.
In the computation, each gesture feature subsequence is represented as a feature vector and provided as input to a classifier for gesture recognition, and the classifier outputs a recognition result. Majority ("maximum scheme") voting is then performed over the classifier's answers for several consecutive gesture feature subsequences, and the label with the most votes is taken as the final recognition result.
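The voting step reduces to picking the most frequent classifier answer; a minimal sketch:

```python
from collections import Counter


def majority_vote(answers):
    """'Maximum scheme' voting: the label returned most often by the
    classifier over consecutive subsequences wins."""
    return Counter(answers).most_common(1)[0][0]
```

Feeding the per-window labels for several consecutive subsequences through `majority_vote` yields the final recognition result.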
The gesture information stored in the gesture library comprises pre-stored gesture information and a training model generated by learning from a gesture set with depth information using a multilayer convolutional neural network. Specifically, the gesture library is pre-stored with gesture information such as grabbing, clicking, spreading, pinching, selecting, sliding and moving in all directions, and a defined hand shape may be a static (single-frame) or dynamic (multi-frame) hand feature. For the different hand shapes, a multilayer convolutional neural network learns from an existing hand-shape set with depth information to generate a training model; recognition accuracy is then further improved by learning, training, expanding and upgrading the gesture information in the gesture library.
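For illustration, the core operation the multilayer convolutional network stacks is a 2D convolution over the depth image; the sketch below shows only that single operation (a real model would add many layers, nonlinearities, pooling and a classifier head, none of which are specified here):

```python
import numpy as np


def conv2d(image, kernel):
    """Single valid-mode 2D convolution (no padding, stride 1), the
    building block of a multilayer convolutional neural network."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            # dot product of the kernel with the image patch at (y, x)
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out
```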
In the ToF-based gesture recognition control method and control system provided by the invention, the ToF camera collects depth information, gestures are extracted on the basis of that depth information, and a trained multilayer convolutional neural network model recognizes the 3D gesture, thereby realizing 3D gesture control in a 3D operation space. Compared with gesture recognition based on a conventional camera or an in-vehicle gesture recognition device, the method and system can rapidly acquire scene image information and directly acquire spatial information; the sliding-window method and majority voting, which build on the complete and rich operation-gesture information collected, effectively improve the accuracy of hand judgment. The recognition speed and accuracy of the ToF-based gesture recognition control method and control system are thus significantly improved, meeting the application requirements of human-computer interaction.
The above disclosure is only a preferred embodiment of the present invention and is not intended to limit the scope of the claims; equivalent changes made according to the claims of the present invention still fall within the scope of the present invention.

Claims (8)

1. A gesture recognition control method based on ToF is characterized by comprising the following steps:
s1, acquiring a scene depth image shot by a ToF module in real time, and processing the acquired scene depth image into a depth image frame sequence in real time;
s2, identifying and judging whether the scene depth image contains a hand model, if so, executing the next step, otherwise, returning to the step S1;
s3, extracting a gesture feature subsequence from the depth image frame sequence collected in the step S1 based on a sliding window method;
and S4, comparing the extracted gesture feature subsequence with gesture information in a gesture library to identify control information of the gesture feature subsequence so as to realize corresponding control.
2. The ToF-based gesture recognition control method according to claim 1, wherein the step S2 of "determining whether the scene depth image contains a hand model" comprises: according to the acquired scene depth image captured by the ToF module, segmenting objects based on the contour information in the scene depth image, and comparing the shape of each segmented object with a hand skeleton model to determine whether a hand model exists in the scene.
3. The ToF-based gesture recognition control method according to claim 2, wherein the hand skeleton model is a pre-stored hand skeleton model with 21 joints.
4. The ToF-based gesture recognition control method according to claim 1, wherein the step S3 specifically comprises: defining the size of a sliding window that slides forward one frame at a time over the depth image frame sequence to extract gesture feature subsequences.
5. The ToF-based gesture recognition control method according to claim 4, wherein the size of the sliding window is defined as follows: a fast Fourier transform algorithm is applied to estimate the duration of each gesture execution; since each sequence contains multiple repetitions of the same gesture, the feature sequence can be approximated as a periodic signal; the fast Fourier transform is applied, the position of the fundamental harmonic component is located, and the period is estimated as the reciprocal of the peak frequency; the estimated period is used to define the size of the sliding window.
6. The ToF-based gesture recognition control method according to claim 1, wherein the step S4 specifically comprises: comparing several consecutive gesture feature subsequences each with the gesture information in the gesture library, performing majority ("maximum scheme") voting on the resulting recognition results, and taking the label with the most votes as the final recognition result.
7. The ToF-based gesture recognition control method according to claim 1, wherein the gesture information stored in the gesture library comprises pre-stored gesture information and a training model generated by learning from a gesture set with depth information using a multilayer convolutional neural network.
8. A ToF-based gesture recognition control system, comprising:
the acquisition and depth calculation module is used for acquiring a scene depth image shot by the ToF module in real time and processing the acquired scene depth image into a depth image frame sequence in real time;
the hand recognition module is used for recognizing and judging whether the scene depth image contains a hand model or not;
the hand extraction module extracts a gesture feature subsequence from the depth image frame sequence acquired by the acquisition and depth calculation module based on a sliding window method;
and the gesture recognition module is used for comparing the extracted gesture feature subsequence with gesture information in a gesture library so as to recognize control information of the gesture feature subsequence and further realize corresponding control.
CN202010406590.3A 2020-05-14 2020-05-14 Gesture recognition control method based on ToF and control system thereof Pending CN111651038A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010406590.3A CN111651038A (en) 2020-05-14 2020-05-14 Gesture recognition control method based on ToF and control system thereof


Publications (1)

Publication Number Publication Date
CN111651038A 2020-09-11

Family

ID=72343910

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010406590.3A Pending CN111651038A (en) 2020-05-14 2020-05-14 Gesture recognition control method based on ToF and control system thereof

Country Status (1)

Country Link
CN (1) CN111651038A (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103839040A (en) * 2012-11-27 2014-06-04 株式会社理光 Gesture identification method and device based on depth images
US20170300124A1 (en) * 2017-03-06 2017-10-19 Microsoft Technology Licensing, Llc Ultrasonic based gesture recognition
CN108597542A (en) * 2018-03-19 2018-09-28 华南理工大学 A kind of dysarthrosis severity method of estimation based on depth audio frequency characteristics
CN109409277A (en) * 2018-10-18 2019-03-01 北京旷视科技有限公司 Gesture identification method, device, intelligent terminal and computer storage medium
CN109614922A (en) * 2018-12-07 2019-04-12 南京富士通南大软件技术有限公司 A kind of dynamic static gesture identification method and system
CN110209273A (en) * 2019-05-23 2019-09-06 Oppo广东移动通信有限公司 Gesture identification method, interaction control method, device, medium and electronic equipment
CN110265135A (en) * 2019-07-30 2019-09-20 北京航空航天大学杭州创新研究院 A kind of stamping quality testing assessment system and method based on inertial sensor
CN110784653A (en) * 2019-11-20 2020-02-11 香港光云科技有限公司 Dynamic focusing method based on flight time and camera device thereof


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112507924A (en) * 2020-12-16 2021-03-16 深圳荆虹科技有限公司 3D gesture recognition method, device and system
CN112507924B (en) * 2020-12-16 2024-04-09 深圳荆虹科技有限公司 3D gesture recognition method, device and system
CN114415830A (en) * 2021-12-31 2022-04-29 科大讯飞股份有限公司 Air input method and device, computer readable storage medium

Similar Documents

Publication Publication Date Title
Chen et al. Repetitive assembly action recognition based on object detection and pose estimation
Gurav et al. Real time finger tracking and contour detection for gesture recognition using OpenCV
Raheja et al. Real-time robotic hand control using hand gestures
Malima et al. A fast algorithm for vision-based hand gesture recognition for robot control
Keskin et al. Real time hand tracking and 3d gesture recognition for interactive interfaces using hmm
JP5845002B2 (en) Image processing apparatus and method, and program
CN107885327B (en) Fingertip detection method based on Kinect depth information
CN106648078B (en) Multi-mode interaction method and system applied to intelligent robot
CN109727275B (en) Object detection method, device, system and computer readable storage medium
CN111680594A (en) Augmented reality interaction method based on gesture recognition
CN113378770B (en) Gesture recognition method, device, equipment and storage medium
WO2008139399A2 (en) Method of determining motion-related features and method of performing motion classification
CN111652017B (en) Dynamic gesture recognition method and system
CN112527113A (en) Method and apparatus for training gesture recognition and gesture recognition network, medium, and device
Tofighi et al. Rapid hand posture recognition using adaptive histogram template of skin and hand edge contour
CN109086725B (en) Hand tracking method and machine-readable storage medium
CN111651038A (en) Gesture recognition control method based on ToF and control system thereof
CN112668492A (en) Behavior identification method for self-supervised learning and skeletal information
Je et al. Hand gesture recognition to understand musical conducting action
Huu et al. Proposing recognition algorithms for hand gestures based on machine learning model
CN112686122A (en) Human body and shadow detection method, device, electronic device and storage medium
Hoque et al. Computer vision based gesture recognition for desktop object manipulation
Gupta et al. Progression modelling for online and early gesture detection
CN113420839B (en) Semi-automatic labeling method and segmentation positioning system for stacking planar target objects
Ji et al. A view-invariant action recognition based on multi-view space hidden markov models

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination