CN112906563A - Dynamic gesture recognition method, device and system and readable storage medium - Google Patents

Dynamic gesture recognition method, device and system and readable storage medium

Info

Publication number
CN112906563A
Authority
CN
China
Prior art keywords
gesture
frame
point
video image
control
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202110189502.3A
Other languages
Chinese (zh)
Inventor
马贝贝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Yingxin Computer Technology Co Ltd
Original Assignee
Shandong Yingxin Computer Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Yingxin Computer Technology Co Ltd filed Critical Shandong Yingxin Computer Technology Co Ltd
Priority to CN202110189502.3A
Publication of CN112906563A
Legal status: Withdrawn

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/28 Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a dynamic gesture recognition method, apparatus and system, and a computer-readable storage medium. The method comprises: recognizing acquired gesture video information with a pre-established gesture recognition network model to obtain the gesture box of the controlling hand in each frame of video image; performing centroid extraction on the controlling hand's gesture box in each frame of video image to obtain the centroid points corresponding to the pixel points in each frame; determining the control points of the gesture according to the centroid points corresponding to the pixel points in each frame of video image; performing curve fitting on the control points to obtain the corresponding gesture trajectory; and recognizing the gesture trajectory with a pre-established classifier to obtain the gesture information corresponding to the gesture trajectory. The invention can recognize dynamic gestures in use, which helps to improve both the efficiency and the accuracy of gesture recognition.

Description

Dynamic gesture recognition method, device and system and readable storage medium
Technical Field
The embodiments of the invention relate to the technical field of image recognition, and in particular to a dynamic gesture recognition method, apparatus and system, and a computer-readable storage medium.
Background
With the development of computer technology, human-computer interaction has become increasingly important. Analysis of its history and current state shows that the trend is towards more natural interaction: input has evolved from the keyboard, through the mouse and the touch screen, to gestures, moving towards interaction that is more humanized, more natural and convenient, and centered on the user.
A gesture is a human body posture that carries rich information and is widely used in human-computer interaction. Because gestures are diverse and complex, vary in both time and space, and are subject to visual uncertainty, they are highly challenging to recognize. Complex background information strongly interferes with gesture recognition. Most existing gesture recognition techniques work on static single images; static gesture recognition lacks spatio-temporal continuity information, so the meaning of a gesture is difficult to understand accurately during human-computer interaction, leading to low recognition efficiency and poor accuracy. In particular, when several people are present in the control scene, the gestures of the other people strongly interfere with those of the controlling person, making the controlling person's gesture information even harder to recognize accurately and further degrading recognition accuracy.
In view of the above, how to provide a dynamic gesture recognition method, apparatus, system and computer readable storage medium is a problem to be solved by those skilled in the art.
Disclosure of Invention
Embodiments of the present invention provide a dynamic gesture recognition method, apparatus and system, and a computer-readable storage medium, which can recognize dynamic gestures in use and help to improve the efficiency and accuracy of gesture recognition.
In order to solve the above technical problem, an embodiment of the present invention provides a dynamic gesture recognition method, including:
recognizing acquired gesture video information with a pre-established gesture recognition network model to obtain the gesture box of the controlling hand in each frame of video image;
performing centroid extraction on the controlling hand's gesture box in each frame of the video image to obtain the centroid points corresponding to the pixel points in each frame of the video image;
determining the control points of the gesture according to the centroid points corresponding to the pixel points in each frame of the video image;
performing curve fitting on the control points to obtain a corresponding gesture trajectory;
and recognizing the gesture trajectory with a pre-established classifier to obtain gesture information corresponding to the gesture trajectory.
Optionally, the control points of the gesture are determined from the centroid points corresponding to the pixel points in each frame of the video image as follows:
acquiring the previous control point from a pre-established control point array;
calculating the Euclidean distance between each centroid point corresponding to the pixel points in the current frame of video image and the previous control point, and taking the centroid point with the minimum Euclidean distance as the target centroid point;
judging whether the target centroid point meets preset conditions; if so, adding the target centroid point to the control point array as a control point; if not, adding (0,0) to the control point array;
judging whether the number of elements in the control point array has reached a preset value; if so, removing all (0,0) points from the control point array and taking the remaining control points in the array as the control points of the gesture; if not, taking the next frame of video image as the current frame of video image and returning to the step of acquiring the previous control point from the pre-established control point array, so as to proceed to the recognition of the next frame.
Optionally, the preset conditions are:
the vertical distance between the target centroid point and the previous control point is smaller than a first preset distance value;
the horizontal distance between the target centroid point and the previous control point is smaller than a second preset distance value;
and the frame number difference between the current frame video image and the video image corresponding to the previous control point is smaller than a preset difference value.
Optionally, the centroid extraction is performed on the controlling hand's gesture box in each frame of the video image to obtain the centroid points corresponding to each frame of the video image as follows:
performing centroid extraction on the controlling hand's gesture box in each frame of the video image by bilinear interpolation to obtain the centroid points corresponding to each frame of the video image.
Optionally, the gesture recognition network model is established as follows:
adding two convolutional layers to the original convolutional layers of a convolutional neural network to form an improved convolutional neural network containing convolutional layers at 5 different scales;
and training the improved convolutional neural network with a gesture training sample set and a gesture test sample set to obtain the gesture recognition network model.
Optionally, the convolutional neural network is a YOLOv3 convolutional neural network;
the resolution of each convolution layer is 64 × 64, 32 × 32, 16 × 16, 8 × 8, 4 × 4, respectively.
An embodiment of the invention correspondingly provides a dynamic gesture recognition apparatus, comprising:
a first recognition module, configured to recognize acquired gesture video information with a pre-established gesture recognition network model to obtain the gesture box of the controlling hand in each frame of video image;
an extraction module, configured to perform centroid extraction on the controlling hand's gesture box in each frame of video image to obtain the centroid points corresponding to the pixel points in each frame of video image;
a determining module, configured to determine the control points of the gesture according to the centroid points corresponding to the pixel points in each frame of video image;
a fitting module, configured to perform curve fitting on the control points to obtain a corresponding gesture trajectory;
and a second recognition module, configured to recognize the gesture trajectory with a pre-established classifier to obtain gesture information corresponding to the gesture trajectory.
The embodiment of the present invention further provides a dynamic gesture recognition system, including:
a memory for storing a computer program;
a processor for implementing the steps of the dynamic gesture recognition method as described above when executing the computer program.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, it implements the steps of the dynamic gesture recognition method described above.
The embodiments of the invention provide a dynamic gesture recognition method, apparatus and system, and a computer-readable storage medium. The method recognizes acquired gesture video information with a pre-established gesture recognition network model to obtain the gesture box of the controlling hand in each frame of video image; centroid extraction is then performed on the controlling hand's gesture box in each frame of video image to obtain the centroid points corresponding to the pixel points in each frame; the control points of the gesture are determined according to these centroid points; curve fitting is performed on the control points to obtain the corresponding gesture trajectory; and the gesture trajectory is recognized with a pre-established classifier to obtain the gesture information corresponding to the trajectory. The invention can recognize dynamic gestures in use, which helps to improve both the efficiency and the accuracy of gesture recognition.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed for describing the prior art and the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a dynamic gesture recognition method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a dynamic gesture recognition apparatus according to an embodiment of the present invention.
Detailed Description
The embodiments of the present invention provide a dynamic gesture recognition method, apparatus and system, and a computer-readable storage medium, which can recognize dynamic gestures in use and help to improve the efficiency and accuracy of gesture recognition.
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the drawings. The described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a schematic flow chart of a dynamic gesture recognition method according to an embodiment of the present invention. The method comprises the following steps:
s110: recognizing the acquired gesture video information by adopting a pre-established gesture recognition network model to obtain control hand gesture frames corresponding to each frame of video image;
it should be noted that a gesture recognition network model may be established in advance, specifically, two convolutional layers may be added in advance in the original convolutional layer of the convolutional neural network to form an improved convolutional neural network including 5 convolutional layers with different scales; and then training the modified convolutional neural network by adopting a gesture training sample set and a gesture testing sample set so as to obtain a gesture recognition neural network model.
Specifically, the convolutional neural network is a YOLOv3 convolutional neural network; that is, two convolutional layers may be added on top of the 3 original convolutional scales of the YOLOv3 network to build a feature pyramid containing convolutional layers at 5 different scales, with resolutions of 64 × 64, 32 × 32, 16 × 16, 8 × 8 and 4 × 4, respectively. The pyramid can be upsampled by a factor of 2 and fused through a deep residual network. When a video image is recognized, the feature extraction network of the gesture recognition network model divides the input video image into M × M cells according to the size of the feature map, and the cell into which the gesture center falls is responsible for detecting that target. Richer and more distinctive features, obtained by fusing the convolutional features with the corresponding upsampled features, are passed to the detection network, which performs feature regression at the 5 scales, computes the intersection over union (IoU) between the currently highest-scoring prediction box and the other prediction boxes, and filters out the non-gesture prediction boxes according to a threshold and the score of each prediction box, leaving the prediction box of the gesture, i.e. the controlling hand's gesture box in the video image.
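The IoU-based filtering described above corresponds to a standard non-maximum suppression step. A minimal sketch is given below, assuming prediction boxes are (x1, y1, x2, y2) coordinates with separate confidence scores; the threshold values are illustrative and not taken from the patent.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def filter_gesture_boxes(boxes, scores, iou_thresh=0.5, score_thresh=0.3):
    """Keep the highest-scoring boxes and drop low-score or overlapping ones."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        best = order[0]
        if scores[best] < score_thresh:
            break
        keep.append(int(best))
        rest = order[1:]
        overlaps = np.array([iou(boxes[best], boxes[i]) for i in rest])
        order = rest[overlaps < iou_thresh]  # discard boxes overlapping the kept one
    return keep
```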
S120: performing centroid extraction on the controlling hand's gesture box in each frame of video image to obtain the centroid points corresponding to the pixel points in each frame of video image;
Specifically, after the controlling hand's gesture box has been obtained for each frame of video image, the following operations are performed for each frame of video image:
the control hand gesture frame can be firstly converted into HSV space from RGB space, then the gesture skin color area is divided based on the HSV space to obtain a gesture area picture, then the divided gesture area picture is corroded and expanded, noise is removed through a Gaussian filter algorithm, a polygonal frame of the gesture area picture of the hand is extracted through an 8-connected region filling algorithm, then the mass center of each pixel point in the polygonal frame is extracted, the mass center point corresponding to each pixel point is obtained, specifically the mass center of each pixel point in the control hand gesture frame is obtained, and specifically the mass center of each pixel point can be extracted through a bilinear interpolation method.
S130: determining the control points of the gesture according to the centroid points corresponding to the pixel points in each frame of video image;
It should be noted that, for each frame of video image, the control point of the gesture can be obtained by analysing the centroid points corresponding to the pixel points of that frame. The specific process may be:
acquiring the previous control point from a pre-established control point array;
calculating the Euclidean distance between each centroid point corresponding to the pixel points in the current frame of video image and the previous control point, and taking the centroid point with the minimum Euclidean distance as the target centroid point;
judging whether the target centroid point meets preset conditions; if so, adding the target centroid point to the control point array as a control point; if not, adding (0,0) to the control point array;
judging whether the number of elements in the control point array has reached a preset value; if so, removing all (0,0) points from the control point array and taking the remaining control points in the array as the control points of the gesture; if not, taking the next frame of video image as the current frame of video image and returning to the step of acquiring the previous control point from the pre-established control point array, so as to proceed to the recognition of the next frame.
Specifically, a control point array is established in advance. When the control points are determined, for the current frame of video image the previous control point, i.e. the most recently selected control point, is taken from the control point array. The Euclidean distance between each centroid point corresponding to the pixel points in the current frame of video image (i.e. each centroid point of the controlling hand's gesture box in that frame) and the previous control point is then calculated, the centroid point corresponding to the minimum of these Euclidean distances is taken as the target centroid point, and whether the target centroid point meets the preset conditions is judged, the preset conditions being:
the vertical distance between the target centroid point and the previous control point is smaller than a first preset distance value;
the horizontal distance between the target centroid point and the previous control point is smaller than a second preset distance value;
the difference between the frame number of the current frame video image and the frame number of the video image corresponding to the previous control point is smaller than a preset difference (for example, 15 frames).
That is, when the target centroid point satisfies the above three conditions, it is stored in the control point array as a control point (specifically, the control points are stored sequentially), and when the next frame of video image is processed this control point serves as the previous control point. For the first frame of video image, the previous control point is the (0,0) point, i.e. the initial control point coordinate is (0,0). When the target centroid point does not satisfy the three conditions, the (0,0) point is added to the control point array instead. It is then judged whether the current number of elements in the control point array has reached a preset value (for example, 32). When the preset value is reached, the current gesture recognition is complete; at this point all (0,0) points are deleted from the control point array, and the remaining control points are the control points of the gesture.
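The per-frame control-point update can be sketched as follows; the array length and frame-gap limit use the example values mentioned above (32 elements, 15 frames), while the two distance thresholds are assumed, since the text only calls them first and second preset distance values.

```python
import math

MAX_POINTS = 32          # preset number of elements (example value from the text)
MAX_FRAME_GAP = 15       # frame-number difference limit (example value from the text)
MAX_DX, MAX_DY = 50, 50  # horizontal / vertical distance thresholds (assumed)

control_points = [(0, 0)]   # the initial control point is (0, 0)
last_point_frame = 0        # frame index of the most recent real control point

def update_control_points(centroids, frame_idx):
    """Append the nearest admissible centroid, or a (0, 0) placeholder.
    Returns the gesture's control points once the array is full, else None."""
    global last_point_frame
    last = control_points[-1]
    target = min(centroids, key=lambda c: math.dist(c, last))  # nearest centroid
    admissible = (abs(target[1] - last[1]) < MAX_DY and
                  abs(target[0] - last[0]) < MAX_DX and
                  frame_idx - last_point_frame < MAX_FRAME_GAP)
    if admissible:
        control_points.append(target)
        last_point_frame = frame_idx
    else:
        control_points.append((0, 0))
    if len(control_points) >= MAX_POINTS:
        # Gesture complete: drop the placeholders, keep the real control points.
        return [p for p in control_points if p != (0, 0)]
    return None
```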
S140: performing curve fitting on the control points to obtain the corresponding gesture trajectory;
Specifically, after the control points of the gesture have been obtained, curve fitting is performed on them, i.e. on the control points remaining in the control point array, to form the gesture trajectory, and this trajectory represents the user's gesture motion.
S150: recognizing the gesture trajectory with a pre-established classifier to obtain the gesture information corresponding to the gesture trajectory.
After the gesture trajectory has been obtained, it is classified and recognized by the classifier to obtain the corresponding gesture information. Of course, after the gesture information is obtained, a control instruction can be generated from it and sent to a terminal device, so that the terminal device performs the corresponding operation according to the control instruction.
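The patent does not specify the classifier. One possible arrangement, sketched below, is to normalize the trajectory into a fixed-length feature vector and pass it to a previously trained classifier; the gesture labels and model file named here are hypothetical.

```python
import numpy as np
from joblib import load

GESTURES = ["swipe_left", "swipe_right", "circle", "wave"]   # hypothetical labels
classifier = load("gesture_classifier.joblib")               # assumed pre-trained model

def recognize_gesture(trajectory):
    """Turn an (N, 2) trajectory into a fixed-length, translation- and
    scale-invariant feature vector and classify it."""
    traj = np.asarray(trajectory, dtype=float)
    traj -= traj.mean(axis=0)                         # translation invariance
    scale = np.abs(traj).max() or 1.0
    traj /= scale                                     # scale invariance
    idx = np.linspace(0, len(traj) - 1, 32).astype(int)
    features = traj[idx].reshape(-1)                  # fixed-length vector
    label_index = int(classifier.predict([features])[0])
    return GESTURES[label_index]
```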
According to the method, the acquired gesture video information is recognized with a pre-established gesture recognition network model to obtain the gesture box of the controlling hand in each frame of video image; centroid extraction is then performed on the controlling hand's gesture box in each frame to obtain the centroid points corresponding to the pixel points in each frame of video image; the control points of the gesture are determined according to these centroid points; curve fitting is performed on the control points to obtain the corresponding gesture trajectory; and the gesture trajectory is recognized with a pre-established classifier to obtain the gesture information corresponding to the trajectory. The invention can recognize dynamic gestures in use, which helps to improve both the efficiency and the accuracy of gesture recognition.
On the basis of the above embodiments, an embodiment of the present invention further provides a dynamic gesture recognition apparatus, as shown in fig. 2. The apparatus comprises:
a first recognition module 21, configured to recognize the acquired gesture video information with a pre-established gesture recognition network model to obtain the gesture box of the controlling hand in each frame of video image;
an extraction module 22, configured to perform centroid extraction on the controlling hand's gesture box in each frame of video image to obtain the centroid points corresponding to the pixel points in each frame of video image;
a determining module 23, configured to determine the control points of the gesture according to the centroid points corresponding to the pixel points in each frame of video image;
a fitting module 24, configured to perform curve fitting on the control points to obtain a corresponding gesture trajectory;
and a second recognition module 25, configured to recognize the gesture trajectory with a pre-established classifier to obtain gesture information corresponding to the gesture trajectory.
It should be noted that the dynamic gesture recognition apparatus provided in this embodiment of the present invention has the same beneficial effects as the dynamic gesture recognition method provided in the above embodiment; for the specific description of the dynamic gesture recognition method, please refer to the above embodiment, which is not repeated here.
On the basis of the above embodiment, an embodiment of the present invention further provides a dynamic gesture recognition system, including:
a memory for storing a computer program;
and a processor for implementing the steps of the dynamic gesture recognition method described above when executing the computer program.
For example, the processor in this embodiment of the present invention may be specifically configured to: recognize the acquired gesture video information with a pre-established gesture recognition network model to obtain the gesture box of the controlling hand in each frame of video image; perform centroid extraction on the controlling hand's gesture box in each frame of video image to obtain the centroid points corresponding to the pixel points in each frame of video image; determine the control points of the gesture according to these centroid points; perform curve fitting on the control points to obtain the corresponding gesture trajectory; and recognize the gesture trajectory with a pre-established classifier to obtain the gesture information corresponding to the gesture trajectory.
On the basis of the foregoing embodiments, the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the dynamic gesture recognition method as described above.
The computer-readable storage medium may include various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
It is further noted that, in the present specification, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. A dynamic gesture recognition method, comprising:
recognizing acquired gesture video information with a pre-established gesture recognition network model to obtain the gesture box of the controlling hand in each frame of video image;
performing centroid extraction on the controlling hand's gesture box in each frame of the video image to obtain the centroid points corresponding to the pixel points in each frame of the video image;
determining the control points of the gesture according to the centroid points corresponding to the pixel points in each frame of the video image;
performing curve fitting on the control points to obtain a corresponding gesture trajectory;
and recognizing the gesture trajectory with a pre-established classifier to obtain gesture information corresponding to the gesture trajectory.
2. The dynamic gesture recognition method according to claim 1, wherein the control points of the gesture are determined from the centroid points corresponding to the pixel points in each frame of the video image as follows:
acquiring the previous control point from a pre-established control point array;
calculating the Euclidean distance between each centroid point corresponding to the pixel points in the current frame of video image and the previous control point, and taking the centroid point with the minimum Euclidean distance as the target centroid point;
judging whether the target centroid point meets preset conditions; if so, adding the target centroid point to the control point array as a control point; if not, adding (0,0) to the control point array;
judging whether the number of elements in the control point array has reached a preset value; if so, removing all (0,0) points from the control point array and taking the remaining control points in the array as the control points of the gesture; if not, taking the next frame of video image as the current frame of video image and returning to the step of acquiring the previous control point from the pre-established control point array, so as to proceed to the recognition of the next frame.
3. The dynamic gesture recognition method according to claim 2, wherein the preset conditions are:
the vertical distance between the target centroid point and the previous control point is smaller than a first preset distance value;
the horizontal distance between the target centroid point and the previous control point is smaller than a second preset distance value;
and the frame number difference between the current frame video image and the video image corresponding to the previous control point is smaller than a preset difference value.
4. The dynamic gesture recognition method according to claim 2, wherein the centroid extraction is performed on the controlling hand's gesture box in each frame of the video image to obtain the centroid points corresponding to each frame of the video image as follows:
performing centroid extraction on the controlling hand's gesture box in each frame of the video image by bilinear interpolation to obtain the centroid points corresponding to each frame of the video image.
5. The dynamic gesture recognition method according to claim 1, wherein the gesture recognition network model is established as follows:
adding two convolutional layers to the original convolutional layers of a convolutional neural network to form an improved convolutional neural network containing convolutional layers at 5 different scales;
and training the improved convolutional neural network with a gesture training sample set and a gesture test sample set to obtain the gesture recognition network model.
6. The dynamic gesture recognition method of claim 5, wherein the convolutional neural network is a YOLOv3 convolutional neural network;
the resolution of each convolution layer is 64 × 64, 32 × 32, 16 × 16, 8 × 8, 4 × 4, respectively.
7. A dynamic gesture recognition apparatus, comprising:
a first recognition module, configured to recognize acquired gesture video information with a pre-established gesture recognition network model to obtain the gesture box of the controlling hand in each frame of video image;
an extraction module, configured to perform centroid extraction on the controlling hand's gesture box in each frame of video image to obtain the centroid points corresponding to the pixel points in each frame of video image;
a determining module, configured to determine the control points of the gesture according to the centroid points corresponding to the pixel points in each frame of video image;
a fitting module, configured to perform curve fitting on the control points to obtain a corresponding gesture trajectory;
and a second recognition module, configured to recognize the gesture trajectory with a pre-established classifier to obtain gesture information corresponding to the gesture trajectory.
8. A dynamic gesture recognition system, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the dynamic gesture recognition method according to any one of claims 1 to 6 when executing the computer program.
9. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, carries out the steps of the dynamic gesture recognition method according to any one of claims 1 to 6.
CN202110189502.3A 2021-02-19 2021-02-19 Dynamic gesture recognition method, device and system and readable storage medium Withdrawn CN112906563A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110189502.3A CN112906563A (en) 2021-02-19 2021-02-19 Dynamic gesture recognition method, device and system and readable storage medium

Publications (1)

Publication Number Publication Date
CN112906563A true CN112906563A (en) 2021-06-04

Family

ID=76123878

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110189502.3A Withdrawn CN112906563A (en) 2021-02-19 2021-02-19 Dynamic gesture recognition method, device and system and readable storage medium

Country Status (1)

Country Link
CN (1) CN112906563A (en)


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102096471A (en) * 2011-02-18 2011-06-15 广东威创视讯科技股份有限公司 Human-computer interaction method based on machine vision
CN103389799A (en) * 2013-07-24 2013-11-13 清华大学深圳研究生院 Method for tracking motion trail of fingertip
CN104392210A (en) * 2014-11-13 2015-03-04 海信集团有限公司 Gesture recognition method
CN105335711A (en) * 2015-10-22 2016-02-17 华南理工大学 Fingertip detection method in complex environment
CN110287894A (en) * 2019-06-27 2019-09-27 深圳市优象计算技术有限公司 A kind of gesture identification method and system for ultra-wide angle video
CN111176443A (en) * 2019-12-12 2020-05-19 青岛小鸟看看科技有限公司 Vehicle-mounted intelligent system and control method thereof
CN111797709A (en) * 2020-06-14 2020-10-20 浙江工业大学 Real-time dynamic gesture track recognition method based on regression detection
CN112115853A (en) * 2020-09-17 2020-12-22 西安羚控电子科技有限公司 Gesture recognition method and device, computer storage medium and electronic equipment
CN112506342A (en) * 2020-12-04 2021-03-16 郑州中业科技股份有限公司 Man-machine interaction method and system based on dynamic gesture recognition

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113556606A (en) * 2021-07-07 2021-10-26 深圳创维-Rgb电子有限公司 Television control method, device and equipment based on gestures and storage medium
CN113703581A (en) * 2021-09-03 2021-11-26 广州朗国电子科技股份有限公司 Window adjusting method based on gesture switching, electronic whiteboard and storage medium
CN114785955A (en) * 2022-05-05 2022-07-22 广州新华学院 Motion compensation method, system and storage medium for dynamic camera in complex scene
CN114785955B (en) * 2022-05-05 2023-08-15 广州新华学院 Dynamic camera motion compensation method, system and storage medium under complex scene

Similar Documents

Publication Publication Date Title
CN112506342B (en) Man-machine interaction method and system based on dynamic gesture recognition
CN112906563A (en) Dynamic gesture recognition method, device and system and readable storage medium
CN108520247A (en) To the recognition methods of the Object node in image, device, terminal and readable medium
WO2017152794A1 (en) Method and device for target tracking
EP3514724B1 (en) Depth map-based heuristic finger detection method
US20150278167A1 (en) Automatic measure of visual similarity between fonts
JP7246104B2 (en) License plate identification method based on text line identification
US11449706B2 (en) Information processing method and information processing system
CN113313083B (en) Text detection method and device
CN106155540B (en) Electronic brush pen pen shape treating method and apparatus
KR102677200B1 (en) Gesture stroke recognition in touch-based user interface input
CN110827246A (en) Electronic equipment frame appearance flaw detection method and equipment
CN113487610B (en) Herpes image recognition method and device, computer equipment and storage medium
CN113516113A (en) Image content identification method, device, equipment and storage medium
JP2022540101A (en) POSITIONING METHOD AND APPARATUS, ELECTRONIC DEVICE, COMPUTER-READABLE STORAGE MEDIUM
CN113723264A (en) Method and system for intelligently identifying playing errors for assisting piano teaching
CN108520263B (en) Panoramic image identification method and system and computer storage medium
CN112560584A (en) Face detection method and device, storage medium and terminal
CN114519853A (en) Three-dimensional target detection method and system based on multi-mode fusion
CN110796250A (en) Convolution processing method and system applied to convolutional neural network and related components
CN110007764B (en) Gesture skeleton recognition method, device and system and storage medium
EP4435571A1 (en) Touch handwriting generation method and apparatus, electronic device, and storage medium
JP6405603B2 (en) Information processing apparatus, information processing system, and program
CN110110660B (en) Method, device and equipment for analyzing hand operation behaviors
CN111539390A (en) Small target image identification method, equipment and system based on Yolov3

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20210604)