CN112418089A - Gesture recognition method and device and terminal

Gesture recognition method and device and terminal

Info

Publication number
CN112418089A
CN112418089A (application CN202011320587.6A)
Authority
CN
China
Prior art keywords
gesture
point cloud
cloud data
target
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011320587.6A
Other languages
Chinese (zh)
Inventor
黄小浦
薛晓君
焦子朋
胡玉斌
秦屹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Whst Co Ltd
Original Assignee
Whst Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Whst Co Ltd filed Critical Whst Co Ltd
Priority to CN202011320587.6A
Publication of CN112418089A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/28 Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V10/464 Salient features, e.g. scale invariant feature transforms [SIFT] using a plurality of salient features, e.g. bag-of-words [BoW] representations

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Multimedia (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to the technical field of gesture recognition and provides a gesture recognition method, a device and a terminal. The method comprises the following steps: acquiring point cloud data of a target gesture; determining a plurality of feature extraction parameters, and extracting feature values corresponding to the feature extraction parameters from the point cloud data to obtain gesture features of the target gesture; and recognizing the target gesture based on the gesture features and a preset gesture recognition model. The invention reduces the amount of calculation in the gesture recognition process and improves gesture recognition efficiency.

Description

Gesture recognition method and device and terminal
Technical Field
The invention belongs to the technical field of gesture recognition, and particularly relates to a gesture recognition method, a gesture recognition device and a terminal.
Background
Gesture recognition, as a new type of human-computer interaction, is receiving increasing attention and acceptance. Compared with visual and infrared recognition, gesture recognition based on millimeter-wave radar is not affected by light or temperature, can penetrate obstacles, and offers high reliability.
However, the inventors of the present application found that most prior-art gesture recognition methods based on millimeter-wave radar extract gesture features from a range-Doppler map by training a CNN (convolutional neural network) recognition model. This approach introduces a large number of parameters, so the gesture recognition process occupies substantial computing resources and is time-consuming and inefficient.
Disclosure of Invention
In view of this, embodiments of the present invention provide a gesture recognition method, a gesture recognition device and a terminal, so as to solve the problems of large calculation amount and low efficiency in prior-art gesture recognition methods.
A first aspect of an embodiment of the present invention provides a gesture recognition method, including:
acquiring point cloud data of a target gesture;
determining a plurality of feature extraction parameters, and extracting feature values corresponding to the feature extraction parameters from the point cloud data to obtain gesture features of the target gesture;
and recognizing the target gesture based on the gesture features and a preset gesture recognition model.
A second aspect of an embodiment of the present invention provides a gesture recognition apparatus, including:
the acquisition module is used for acquiring point cloud data of the target gesture;
the feature extraction module is used for determining a plurality of feature extraction parameters and extracting feature values corresponding to the feature extraction parameters from the point cloud data to obtain gesture features of the target gesture;
and the gesture recognition module is used for recognizing the target gesture based on the gesture features and a preset gesture recognition model.
A third aspect of the embodiments of the present invention provides a terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the gesture recognition method when executing the computer program.
A fourth aspect of embodiments of the present invention provides a computer-readable storage medium, in which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the gesture recognition method as described above.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
according to the invention, a plurality of feature extraction parameters are determined, and the feature values corresponding to the feature extraction parameters are extracted from the point cloud data of the target gesture to obtain the gesture features of the target gesture, so that the number of the feature values of the gesture is greatly reduced, and the target gesture is effectively recognized based on the gesture features and a preset gesture recognition model. The invention can reduce the calculation amount in the gesture recognition process and improve the gesture recognition efficiency.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic flow chart of a gesture recognition method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a gesture recognition apparatus according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
A first aspect of an embodiment of the present invention provides a gesture recognition method, as shown in fig. 1, the method specifically includes the following steps:
and S101, point cloud data of the target gesture are obtained.
Optionally, as a specific implementation manner of the gesture recognition method provided by the embodiment of the present invention, after the point cloud data of the target gesture is obtained, the method further includes:
preprocessing the point cloud data of the target gesture; the preprocessing includes removing erroneous point cloud data and performing data enhancement on the point cloud data of the target gesture.
In the embodiment of the invention, because different people understand the standard gestures differently and because of errors introduced by human factors, the point cloud data of the target gesture may contain erroneous data, which can be removed by preprocessing. In addition, if the amount of point cloud data is small, data enhancement can be performed on it, for example by adding random perturbation to the original data to form new data.
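To make this step concrete, the following is a minimal Python sketch, not taken from the patent: the function names, the (N, 4) point layout and the 1.5 m range cutoff are all assumptions for illustration. It removes implausible points and synthesizes an augmented copy of a gesture by random perturbation.

```python
import numpy as np

def remove_error_points(frame_points, max_range_m=1.5):
    """Drop points whose measured range is implausible for a hand gesture.

    frame_points: (N, 4) array of [range, speed, azimuth, pitch] per
    detected point; the column layout and the 1.5 m cutoff are assumptions.
    """
    mask = (frame_points[:, 0] > 0.0) & (frame_points[:, 0] < max_range_m)
    return frame_points[mask]

def augment_gesture(frames, noise_std=0.01, rng=None):
    """Form a new training sample by adding small random perturbation
    to every point of every frame (the data-enhancement step)."""
    if rng is None:
        rng = np.random.default_rng()
    return [f + rng.normal(0.0, noise_std, size=f.shape) for f in frames]

# Example: clean each frame of a recorded gesture, then synthesize one
# perturbed copy to enlarge a small data set.
gesture = [np.random.rand(12, 4) for _ in range(20)]  # placeholder frames
gesture = [remove_error_points(f) for f in gesture]
extra_sample = augment_gesture(gesture)
```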
S102: determine a plurality of feature extraction parameters, and extract the feature values corresponding to the feature extraction parameters from the point cloud data to obtain the gesture features of the target gesture.
Optionally, as a specific implementation of the gesture recognition method provided by the embodiment of the present invention, the point cloud data of the target gesture is the point cloud data corresponding to each gesture frame of the target gesture, where each gesture frame is obtained by the target detection device detecting the target gesture; the plurality of feature extraction parameters include distance, speed, azimuth angle and pitch angle.
Extracting the feature values corresponding to the feature extraction parameters from the point cloud data to obtain the gesture features of the target gesture includes:
extracting the distance value, the speed value, the azimuth angle value and the pitch angle value corresponding to each gesture frame from the point cloud data corresponding to each gesture frame, to obtain the gesture features of the target gesture.
In the embodiment of the invention, the target gesture is detected by the target detection device, which may be a millimeter-wave radar, to obtain the gesture frame sequence of the target gesture. Each gesture frame in the sequence is then analyzed to obtain its corresponding point cloud data, thereby yielding the point cloud data corresponding to each gesture frame of the target gesture.
By taking the four parameters of distance, speed, azimuth angle and pitch angle as the feature extraction parameters and extracting their values for each gesture frame from that frame's point cloud data, each frame of the target gesture is no longer a huge range-Doppler image but just four simple numerical values, which greatly reduces the number of parameters involved in gesture recognition.
In the embodiment of the present invention, the distance refers to a distance between a target gesture and a target detection device, the speed refers to a speed of the target gesture, the azimuth refers to an azimuth of the target gesture relative to the target detection device, and the pitch refers to a pitch of the target gesture relative to the target detection device.
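As an illustrative sketch only (the patent does not say how a frame's many points are reduced to one value per parameter; averaging over the frame's points is an assumption here), the per-frame extraction could look like this:

```python
import numpy as np

def extract_frame_features(frame_points):
    """Reduce one frame's point cloud to four scalars:
    [distance, speed, azimuth, pitch].

    frame_points: (N, 4) array, one row per detected point; averaging
    over the frame's points is an assumed reduction.
    """
    return frame_points.mean(axis=0)

def extract_gesture_features(frames):
    """Stack the per-frame feature vectors into a (num_frames, 4) array,
    so a whole gesture becomes num_frames x 4 numbers rather than a
    sequence of range-Doppler images."""
    return np.stack([extract_frame_features(f) for f in frames])
```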
Optionally, as a specific implementation manner of the gesture recognition method provided by the embodiment of the present invention, after obtaining the gesture feature of the target gesture, the method further includes:
determining the magnitude relationship between the number of feature values corresponding to each feature extraction parameter and a preset threshold, where the preset threshold is the preset input size of the gesture recognition model;
and performing feature alignment on the gesture features of the target gesture based on the magnitude relationship.
Optionally, as a specific implementation manner of the gesture recognition method provided in the embodiment of the present invention, performing feature alignment on the gesture features of the target gesture based on the size relationship includes:
calculating the difference between the number of feature values corresponding to each feature extraction parameter and the preset threshold, and taking the absolute value of the difference;
for feature extraction parameters whose number of feature values is smaller than the preset threshold, supplementing feature values based on the absolute value of the difference;
and for feature extraction parameters whose number of feature values is greater than the preset threshold, deleting feature values based on the absolute value of the difference.
In the embodiment of the invention, because each person's gesture habits differ, the number of gesture frames detected by the target detection device is not fixed, and therefore the number of feature values corresponding to each feature extraction parameter is not fixed either, while the gesture recognition model requires the input gesture features to conform to the preset input size. Therefore, for feature extraction parameters whose number of feature values is smaller than the preset threshold, feature values can be supplemented by linear interpolation or mean interpolation; for feature extraction parameters whose number of feature values is greater than the preset threshold, feature values can be deleted by simple deletion or mean merging. In this way, the gesture features of the target gesture ultimately conform to the input conditions of the gesture recognition model.
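A sketch of one possible alignment, assuming linear interpolation (one of the options named above) is used both to supplement short sequences and to shrink long ones:

```python
import numpy as np

def align_features(features, target_len):
    """Resample a (num_frames, 4) feature array to (target_len, 4).

    Linear interpolation both supplements sequences shorter than the
    preset input size and shrinks longer ones; mean interpolation and
    mean merging are alternatives the patent also mentions.
    """
    num_frames, num_params = features.shape
    old_x = np.linspace(0.0, 1.0, num_frames)
    new_x = np.linspace(0.0, 1.0, target_len)
    return np.column_stack(
        [np.interp(new_x, old_x, features[:, k]) for k in range(num_params)]
    )

# e.g. a 17-frame gesture aligned to a preset input size of 20 frames
aligned = align_features(np.random.rand(17, 4), target_len=20)
assert aligned.shape == (20, 4)
```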
S103: recognize the target gesture based on the gesture features and a preset gesture recognition model.
In the embodiment of the invention, the preset gesture recognition model is a lightweight multilayer neural network model. Compared with a convolutional neural network model, the multilayer neural network model has a simple structure, high calculation speed and high recognition accuracy.
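The patent does not disclose the network's layer sizes; the following PyTorch sketch shows a hypothetical lightweight multilayer network of the kind described, with an assumed preset input size of 20 frames x 4 parameters and 7 output classes:

```python
import torch
import torch.nn as nn

class GestureMLP(nn.Module):
    """Hypothetical lightweight multilayer network; layer sizes assumed."""

    def __init__(self, input_frames=20, params_per_frame=4, num_classes=7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                    # (B, 20, 4) -> (B, 80)
            nn.Linear(input_frames * params_per_frame, 64),
            nn.ReLU(),
            nn.Linear(64, 32),
            nn.ReLU(),
            nn.Linear(32, num_classes),      # one score per preset gesture
        )

    def forward(self, x):
        return self.net(x)

model = GestureMLP()
scores = model(torch.randn(1, 20, 4))   # one aligned gesture
predicted = scores.argmax(dim=1)        # index of the recognized gesture
```

A network of this shape has on the order of 7,500 weights, orders of magnitude fewer than a CNN operating on full range-Doppler images, which is the efficiency argument made above.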
Optionally, as a specific implementation manner of the gesture recognition method provided in the embodiment of the present invention, the method for establishing the gesture recognition model includes:
acquiring point cloud data of a plurality of preset gestures;
respectively extracting, from the point cloud data of each preset gesture, the feature values corresponding to the feature extraction parameters to obtain a training set for each preset gesture;
and performing model training based on the training set of each preset gesture to obtain a gesture recognition model.
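A minimal training sketch under the same assumptions as the model sketch above (placeholder data; the cross-entropy loss, the Adam optimizer and the epoch count are choices of this illustration, not the patent's):

```python
import torch
import torch.nn as nn

# Placeholder stand-ins for the aligned gesture features and class labels
# of the collected preset gestures: (N, 20, 4) inputs, labels in 0..6.
train_x = torch.randn(64, 20, 4)
train_y = torch.randint(0, 7, (64,))

# Same shape of lightweight multilayer network as sketched above.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(20 * 4, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 7),
)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(100):        # epoch count is an arbitrary choice here
    optimizer.zero_grad()
    loss = criterion(model(train_x), train_y)
    loss.backward()
    optimizer.step()
```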
Optionally, as a specific implementation of the gesture recognition method provided by the embodiment of the present invention, the point cloud data of a given preset gesture is the point cloud data corresponding to each gesture frame of that preset gesture, where each gesture frame is obtained by the target detection device detecting the preset gesture;
after point cloud data of a plurality of preset gestures are acquired, the method further comprises the following steps:
calculating the total number of gesture frames of the plurality of preset gestures, and taking the average;
and determining the preset input size of the gesture recognition model based on the average value.
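For example (a sketch; rounding to the nearest integer is an assumption, since the patent only says the preset input size is determined from the average):

```python
def preset_input_size(frame_counts):
    """Average the numbers of gesture frames over the collected
    preset-gesture samples and round to get the preset input size."""
    return round(sum(frame_counts) / len(frame_counts))

# e.g. if the recorded samples had these frame counts:
print(preset_input_size([18, 22, 21, 19, 20, 23, 17]))  # -> 20
```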
In the embodiment of the invention, the preset gestures include sliding from left to right, sliding from right to left, sliding from top to bottom, sliding from bottom to top, clockwise rotation, counterclockwise rotation, and calling. These 7 preset gestures can be collected with a millimeter-wave radar and then used for training to obtain the gesture recognition model. It should be noted that, in practical applications, because each person's gesture habits differ and even the same gesture made by the same person is never exactly the same, collecting a given gesture from multiple people, and multiple times from the same person, can significantly improve the recognition accuracy of the gesture recognition model.
In addition, in the embodiment of the present invention, the process of extracting the gesture feature of each preset gesture is similar to the process of extracting the gesture feature of the target gesture, and is not repeated here.
According to the invention, a plurality of feature extraction parameters are determined and the corresponding feature values are extracted from the point cloud data of the target gesture to obtain the gesture features of the target gesture, which greatly reduces the number of feature values describing the gesture; the target gesture is then recognized based on the gesture features and the preset gesture recognition model. Compared with the prior-art approach of extracting gesture features from a range-Doppler image, the invention effectively reduces the amount of calculation in the gesture recognition process and improves gesture recognition efficiency.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
A second aspect of the embodiments of the present invention provides a gesture recognition apparatus, as shown in fig. 2, the gesture recognition apparatus 2 includes:
and the acquisition module 21 is configured to acquire point cloud data of the target gesture.
And the feature extraction module 22 is configured to determine a plurality of feature extraction parameters, and extract feature values corresponding to the feature extraction parameters from the point cloud data to obtain gesture features of the target gesture.
And the gesture recognition module 23 is configured to recognize the target gesture based on the gesture feature and a preset gesture recognition model.
Optionally, as a specific implementation manner of the gesture recognition apparatus according to the second aspect of the embodiment of the present invention, the obtaining module 21 is further configured to:
preprocessing the point cloud data of the target gesture; the preprocessing includes removing erroneous point cloud data and performing data enhancement on the point cloud data of the target gesture.
Optionally, as a specific implementation of the gesture recognition apparatus provided in the second aspect of the embodiment of the present invention, the point cloud data of the target gesture is the point cloud data corresponding to each gesture frame of the target gesture, where each gesture frame is obtained by the target detection device detecting the target gesture; the plurality of feature extraction parameters include distance, speed, azimuth angle and pitch angle. The feature extraction module 22 is specifically configured to:
and extracting the distance value, the speed value, the azimuth angle value and the pitch angle value corresponding to each gesture frame from the point cloud data corresponding to each gesture frame to obtain the gesture features of the target gesture.
Optionally, as a specific implementation manner of the gesture recognition apparatus provided in the second aspect of the embodiment of the present invention, the gesture recognition module 23 is further configured to:
determining the magnitude relationship between the number of feature values corresponding to each feature extraction parameter and a preset threshold, where the preset threshold is the preset input size of the gesture recognition model;
and performing feature alignment on the gesture features of the target gesture based on the magnitude relationship.
Optionally, as a specific implementation of the gesture recognition apparatus provided in the second aspect of the embodiment of the present invention, the feature alignment performed on the gesture features of the target gesture based on the magnitude relationship may be detailed as follows:
calculating the difference between the number of feature values corresponding to each feature extraction parameter and the preset threshold, and taking the absolute value of the difference;
for feature extraction parameters whose number of feature values is smaller than the preset threshold, supplementing feature values based on the absolute value of the difference;
and for feature extraction parameters whose number of feature values is greater than the preset threshold, deleting feature values based on the absolute value of the difference.
Optionally, as a specific implementation manner of the gesture recognition apparatus provided in the second aspect of the embodiment of the present invention, the gesture recognition module 23 is further configured to:
acquiring point cloud data of a plurality of preset gestures;
respectively extracting, from the point cloud data of each preset gesture, the feature values corresponding to the feature extraction parameters to obtain a training set for each preset gesture;
and performing model training based on the training set of each preset gesture to obtain a gesture recognition model.
Optionally, as a specific implementation of the gesture recognition apparatus provided in the second aspect of the embodiment of the present invention, the point cloud data of a given preset gesture is the point cloud data corresponding to each gesture frame of that preset gesture, where each gesture frame is obtained by the target detection device detecting the preset gesture. The gesture recognition module 23 is further configured to:
calculating the total number of gesture frames of the plurality of preset gestures, and taking the average;
and determining the preset input size of the gesture recognition model based on the average value.
Fig. 3 is a schematic diagram of a terminal according to an embodiment of the present invention. As shown in fig. 3, the terminal 3 of this embodiment includes: a processor 30, a memory 31, and a computer program 32 stored in the memory 31 and executable on the processor 30. The processor 30, when executing the computer program 32, implements the steps in the above-described embodiments of the gesture recognition method, such as the steps S101 to S103 shown in fig. 1. Alternatively, the processor 30, when executing the computer program 32, implements the functions of the respective modules in the above-described apparatus embodiments, such as the functions of the modules 21 to 23 shown in fig. 2.
Illustratively, the computer program 32 may be partitioned into one or more modules, which are stored in the memory 31 and executed by the processor 30 to implement the present invention. The one or more modules may be a series of computer program instruction segments capable of performing specific functions, used to describe the execution of the computer program 32 in the terminal 3. For example, the computer program 32 may be divided into an acquisition module 21, a feature extraction module 22 and a gesture recognition module 23 (modules in a virtual apparatus), and each module has the following specific functions:
and the acquisition module 21 is configured to acquire point cloud data of the target gesture.
And the feature extraction module 22 is configured to determine a plurality of feature extraction parameters, and extract feature values corresponding to the feature extraction parameters from the point cloud data to obtain gesture features of the target gesture.
And the gesture recognition module 23 is configured to recognize the target gesture based on the gesture feature and a preset gesture recognition model.
The terminal 3 may be a desktop computer, a notebook, a palmtop computer, a cloud server, or other computing device. The terminal 3 may include, but is not limited to, a processor 30 and a memory 31. It will be appreciated by those skilled in the art that fig. 3 is only an example of the terminal 3 and does not constitute a limitation of the terminal 3, which may include more or fewer components than those shown, combine some components, or have different components; for example, the terminal 3 may further include input/output devices, network access devices, buses, etc.
The processor 30 may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 31 may be an internal storage unit of the terminal 3, such as a hard disk or a memory of the terminal 3. The memory 31 may also be an external storage device of the terminal 3, such as a plug-in hard disk provided on the terminal 3, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like. Further, the memory 31 may also include both an internal storage unit of the terminal 3 and an external storage device. The memory 31 is used for storing computer programs and other programs and data required by the terminal 3. The memory 31 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules, so as to perform all or part of the functions described above. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal and method may be implemented in other ways. For example, the above-described apparatus/terminal embodiments are merely illustrative; for instance, the division into modules or units is only a logical function division, and there may be other ways of dividing in actual implementation, e.g., a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow in the methods of the above embodiments may also be implemented by a computer program instructing related hardware; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, it implements the steps of the method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunications signals.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. A gesture recognition method, comprising:
acquiring point cloud data of a target gesture;
determining a plurality of feature extraction parameters, and extracting feature values corresponding to the feature extraction parameters from the point cloud data to obtain gesture features of the target gesture;
and recognizing the target gesture based on the gesture features and a preset gesture recognition model.
2. The gesture recognition method of claim 1, after obtaining point cloud data of the target gesture, further comprising:
preprocessing the point cloud data of the target gesture; wherein the preprocessing comprises: removing erroneous point cloud data, and performing data enhancement on the point cloud data of the target gesture.
3. The gesture recognition method according to claim 1, wherein the point cloud data of the target gesture is point cloud data corresponding to each gesture frame of the target gesture, wherein each gesture frame of the target gesture is detected by a target detection device; the plurality of feature extraction parameters comprise distance, speed, azimuth angle and pitch angle;
the extracting of the feature values corresponding to the feature extraction parameters from the point cloud data to obtain the gesture features of the target gesture comprises:
and extracting a distance value, a speed value, an azimuth angle value and a pitch angle value corresponding to each gesture frame from the point cloud data corresponding to each gesture frame to obtain the gesture features of the target gesture.
4. The gesture recognition method according to any one of claims 1-3, further comprising, after obtaining the gesture feature of the target gesture:
determining the magnitude relationship between the number of feature values corresponding to each feature extraction parameter and a preset threshold; the preset threshold is a preset input size of the gesture recognition model;
and performing feature alignment on the gesture features of the target gesture based on the magnitude relationship.
5. The gesture recognition method of claim 4, wherein the feature aligning the gesture features of the target gesture based on the magnitude relationship comprises:
calculating the difference between the number of feature values corresponding to each feature extraction parameter and the preset threshold, and taking the absolute value of the difference;
for feature extraction parameters whose number of feature values is smaller than the preset threshold, supplementing feature values of the feature extraction parameters based on the absolute value of the difference;
and for feature extraction parameters whose number of feature values is greater than the preset threshold, deleting feature values of the feature extraction parameters based on the absolute value of the difference.
6. A gesture recognition method according to any one of claims 1-3, wherein the gesture recognition model establishment method comprises:
acquiring point cloud data of a plurality of preset gestures;
respectively extracting, from the point cloud data of each preset gesture, feature values corresponding to the feature extraction parameters to obtain a training set for each preset gesture;
and performing model training based on the training set of each preset gesture to obtain the gesture recognition model.
7. The gesture recognition method of claim 6, wherein the point cloud data of a preset gesture is the point cloud data corresponding to each gesture frame of the preset gesture, wherein each gesture frame of the preset gesture is detected by the target detection device;
after point cloud data of a plurality of preset gestures are acquired, the method further comprises the following steps:
calculating the total number of gesture frames of the plurality of preset gestures, and taking the average;
and determining a preset input size of the gesture recognition model based on the average value.
8. A gesture recognition apparatus, comprising:
the acquisition module is used for acquiring point cloud data of the target gesture;
the feature extraction module is used for determining a plurality of feature extraction parameters and extracting feature values corresponding to the feature extraction parameters from the point cloud data to obtain the gesture features of the target gesture;
and the gesture recognition module is used for recognizing the target gesture based on the gesture characteristics and a preset gesture recognition model.
9. A terminal comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202011320587.6A (priority date 2020-11-23, filing date 2020-11-23): Gesture recognition method and device and terminal. Status: pending. Published as CN112418089A.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011320587.6A CN112418089A (en) 2020-11-23 2020-11-23 Gesture recognition method and device and terminal


Publications (1)

Publication number: CN112418089A; publication date: 2021-02-26

Family

ID=74776882

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011320587.6A Pending CN112418089A (en) 2020-11-23 2020-11-23 Gesture recognition method and device and terminal

Country Status (1)

Country Link
CN (1) CN112418089A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150016712A1 (en) * 2013-04-11 2015-01-15 Digimarc Corporation Methods for object recognition and related arrangements
CN110895683A (en) * 2019-10-15 2020-03-20 西安理工大学 Kinect-based single-viewpoint gesture and posture recognition method
CN110751097A (en) * 2019-10-22 2020-02-04 中山大学 Semi-supervised three-dimensional point cloud gesture key point detection method
CN111695420A (en) * 2020-04-30 2020-09-22 华为技术有限公司 Gesture recognition method and related device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XUE Yanping et al., "Application of the Myo gesture recognition method in an ancient architecture roaming system", Journal of System Simulation. *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113569635A (en) * 2021-06-22 2021-10-29 惠州越登智能科技有限公司 Gesture recognition method and system
CN113569635B (en) * 2021-06-22 2024-07-16 深圳玩智商科技有限公司 Gesture recognition method and system
CN113561911A (en) * 2021-08-12 2021-10-29 森思泰克河北科技有限公司 Vehicle control method, vehicle control device, millimeter wave radar, and storage medium

Similar Documents

Publication Publication Date Title
CN111815754B (en) Three-dimensional information determining method, three-dimensional information determining device and terminal equipment
CN112528831B (en) Multi-target attitude estimation method, multi-target attitude estimation device and terminal equipment
CN108564579B (en) Concrete crack detection method and detection device based on time-space correlation
CN109886127B (en) Fingerprint identification method and terminal equipment
CN110941978B (en) Face clustering method and device for unidentified personnel and storage medium
US10922535B2 (en) Method and device for identifying wrist, method for identifying gesture, electronic equipment and computer-readable storage medium
CN107272899B (en) VR (virtual reality) interaction method and device based on dynamic gestures and electronic equipment
CN112418089A (en) Gesture recognition method and device and terminal
CN113392681A (en) Human body falling detection method and device and terminal equipment
CN111199198A (en) Image target positioning method, image target positioning device and mobile robot
CN112686122A (en) Human body and shadow detection method, device, electronic device and storage medium
CN112416128B (en) Gesture recognition method and terminal equipment
CN116844006A (en) Target identification method and device, electronic equipment and readable storage medium
CN108629219B (en) Method and device for identifying one-dimensional code
CN112950652B (en) Robot and hand image segmentation method and device thereof
CN114330418A (en) Electroencephalogram and eye movement fusion method, medium and equipment for AR target recognition
CN114612971A (en) Face detection method, model training method, electronic device, and program product
CN109816709A (en) Monocular camera-based depth estimation method, device and equipment
CN114549584A (en) Information processing method and device, electronic equipment and storage medium
CN114399432A (en) Target identification method, device, equipment, medium and product
CN112967321A (en) Moving object detection method and device, terminal equipment and storage medium
CN112733670A (en) Fingerprint feature extraction method and device, electronic equipment and storage medium
CN112348112A (en) Training method and device for image recognition model and terminal equipment
CN114333061B (en) Method, device and terminal for identifying operator action violations
CN114037865B (en) Image processing method, apparatus, device, storage medium, and program product

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 20210226)