WO2022165675A1 - Gesture recognition method and apparatus, terminal device, and readable storage medium - Google Patents

Gesture recognition method and apparatus, terminal device, and readable storage medium

Info

Publication number
WO2022165675A1
WO2022165675A1 (application PCT/CN2021/075094, CN2021075094W)
Authority
WO
WIPO (PCT)
Prior art keywords
video data
network model
processing
gesture recognition
gesture
Prior art date
Application number
PCT/CN2021/075094
Other languages
English (en)
French (fr)
Inventor
龙柏君
黄凯明
Original Assignee
深圳市锐明技术股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市锐明技术股份有限公司
Priority to PCT/CN2021/075094 priority Critical patent/WO2022165675A1/zh
Priority to CN202180000451.3A priority patent/CN112997192A/zh
Publication of WO2022165675A1 publication Critical patent/WO2022165675A1/zh


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/107 - Static hand or arm
    • G06V 40/113 - Recognition of static hand signs
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/25 - Fusion techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 - Movements or behaviour, e.g. gesture recognition
    • G06V 40/28 - Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/107 - Static hand or arm
    • G06V 40/117 - Biometrics derived from hands

Definitions

  • the present application relates to the technical field of image data processing, and in particular, to a gesture recognition method, apparatus, terminal device and readable storage medium.
  • In the traditional gesture-based management approach, safety managers review round-the-clock surveillance video and judge from the driver's gestures whether the driver performs the required operations.
  • the above method requires a lot of manpower and material resources and is inefficient.
  • the related gesture recognition methods mainly realize real-time, offline, and fully automatic gesture recognition through different artificial intelligence algorithms, and determine the corresponding recognition results.
  • However, the recognition results of the above methods have low accuracy and poor robustness.
  • One of the purposes of the embodiments of the present application is to provide a gesture recognition method, apparatus, terminal device, and readable storage medium, aiming to solve the problem that the recognition results of related gesture recognition methods have low accuracy and poor robustness.
  • In a first aspect, a gesture recognition method is provided, including: acquiring real video data; preprocessing the real video data to obtain video data to be processed; inputting the video data to be processed into a pre-trained gesture recognition network model for processing to obtain a recognition result; and sending the recognition result to a preset management terminal.
  • In a second aspect, a gesture recognition apparatus is provided, including:
  • the first acquisition module is used to acquire real video data
  • a first preprocessing module configured to preprocess the real video data to obtain to-be-processed video data
  • an image processing module configured to input the video data to be processed into a pre-trained gesture recognition network model for processing to obtain a recognition result
  • a sending module configured to send the recognition result to the preset management terminal.
  • In a third aspect, a terminal device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the gesture recognition method described in the first aspect when executing the computer program.
  • In a fourth aspect, a computer-readable storage medium is provided, which stores a computer program that, when executed by a processor, implements the gesture recognition method according to the first aspect.
  • In a fifth aspect, a computer program product is provided that, when run on a terminal device, enables the terminal device to execute the gesture recognition method described in the first aspect.
  • The beneficial effect of the gesture recognition method provided by the embodiments of the present application is that the preprocessed video data to be processed is processed by the pre-trained gesture recognition network model to obtain a recognition result, which is sent to the preset management terminal; by recognizing scenes that change at different rates in the video data, the method effectively integrates image features at different temporal rates, realizes intelligent recognition of gesture types, reduces the amount of computation, improves the accuracy of the recognition results, and is highly robust.
  • FIG. 1 is a schematic flowchart of a gesture recognition method provided by an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of step S102 of the gesture recognition method provided by the embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of a deep convolutional neural network model of a space-time three-dimensional kernel provided by an embodiment of the present application;
  • FIG. 4 is a schematic flowchart of step S103 of the gesture recognition method provided by the embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of a gesture recognition network model provided by an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a gesture recognition device provided by an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a terminal device provided by an embodiment of the present application.
  • Some embodiments of the present application provide a gesture recognition method that can be applied to terminal devices such as mobile phones, tablet computers, wearable devices, vehicle-mounted devices, and notebook computers.
  • the embodiments of the present application do not impose any restrictions on specific types of terminal devices.
  • FIG. 1 shows a schematic flowchart of the gesture recognition method provided by the present application.
  • the method can be applied to the above-mentioned in-vehicle device.
  • In specific applications, real video data containing the hand movements of a target user is obtained by shooting with a preset camera, where the target user refers to a user whose gesture type needs to be recognized.
  • In this embodiment, the target user includes but is not limited to a train driver or a subway driver; correspondingly, the camera needs to be installed inside the train cab or subway cab so as to facilitate capturing real video data containing the hand movements of the train driver or subway driver.
  • The real video data is preprocessed to obtain video data to be processed, where the preprocessing methods include but are not limited to frame division, frame skipping, and recombination. Preprocessing the real video data in this way reduces the amount of data computation and improves computational efficiency.
  • The video data to be processed is input into the pre-trained gesture recognition network model for processing, the probability value that the gesture of the target user in the video data to be processed belongs to each preset gesture type is obtained, and the gesture type of the target user in the video data to be processed is determined according to the probability values as the recognition result.
  • the gesture actions of target users with different identities include multiple different types.
  • the preset gesture type can be specifically set according to the identity of the target user.
  • For example, when the target user is a train driver, the corresponding preset gesture types include but are not limited to "normal driving", "make a fist", "extend two fingers", "extend thumb", and "make a fist and shake"; these gesture types are used to indicate the driving state of the train driver.
  • The recognition result is sent to the preset management terminal of the administrator, so that the administrator can determine the driving state of the target user based on the recognition result.
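  • To make the overall flow of steps S101 to S104 concrete, a minimal Python sketch of the inference pipeline is given below. It is illustrative only and not part of the original disclosure: the function names, the gesture label list, the camera settings, and the HTTP endpoint of the management terminal are assumptions, and the preprocessing and model objects stand in for the components described later.

```python
# Illustrative end-to-end flow for steps S101 to S104 (names and endpoint are assumptions).
import cv2            # assumed: OpenCV is used for camera capture
import requests       # assumed: HTTP transport to the preset management terminal
import torch

GESTURE_TYPES = ["normal driving", "make a fist", "extend two fingers",
                 "extend thumb", "make a fist and shake"]

def acquire_real_video(camera_index=0, seconds=4, fps=24):
    """S101: capture 4 seconds of video (96 frames at 24 fps) from the cab camera."""
    cap = cv2.VideoCapture(camera_index)
    frames = []
    for _ in range(seconds * fps):
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(cv2.resize(frame, (224, 224)))
    cap.release()
    return frames

def recognize_and_report(model, preprocess, terminal_url):
    frames = acquire_real_video()                           # S101: real video data
    fast_clip, slow_clip = preprocess(frames)               # S102: frame skipping / recombination
    with torch.no_grad():
        scores = model(fast_clip, slow_clip)                # S103: gesture recognition network model
    probs = torch.softmax(scores, dim=1)
    result = GESTURE_TYPES[int(probs.argmax())]
    requests.post(terminal_url, json={"gesture": result})   # S104: send to the management terminal
    return result
```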
  • the step S102 includes:
  • The real video data is divided into frames and converted into a plurality of framed video clips, and the size of each framed video clip is one time frame.
  • the framed video clips are recombined according to a preset method to obtain image data to be processed.
  • Step S1022 includes:
  • a plurality of framed video segments are selected and recombined to obtain the to-be-processed video data.
  • a plurality of framed video clips are selected according to a frame skipping process, and the selected framed video clips are recombined in the order of time frames to obtain video data to be processed.
  • the to-be-processed video data obtained based on the recombination is specifically continuous image data including multiple time frames.
  • the real video data contains 24 frames of image data per second, and it is set that 4 seconds of real video data are acquired each time for processing, with a total of 96 frames.
  • In order to recognize gesture action types of the target user that change at different rates in the video data, the gesture recognition network model is set to include two networks, a slow-channel SNet network model and a fast-channel FNet network model, to integrate image features at different temporal rates; correspondingly, the input data of the fast-channel FNet network and the input data of the slow-channel SNet network model are obtained through different frame-skipping methods.
  • For example, by keeping 1 frame out of every 2 frames, 48 frames of continuous image data are selected as the input data of the fast-channel FNet network; by keeping 1 frame every 12 frames, 6 frames of continuous image data are selected as the input data of the slow-channel SNet network model.
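  • As a concrete illustration of the frame division, frame skipping, and recombination described above, the following sketch samples a 96-frame clip into a 48-frame fast-channel input and a 6-frame slow-channel input. It is a minimal example and not part of the original disclosure; the tensor layout assumes PyTorch, and the slow-channel frames are taken at evenly spaced indices as an approximation of the "keep 1 frame every 12 frames" rule.

```python
# Sketch of step S102: frame division, frame skipping, and recombination (assumes 96 frames).
import numpy as np
import torch

def preprocess(frames, fast_stride=2, slow_count=6):
    """Turn a list of 224x224x3 frames into fast-channel and slow-channel input tensors."""
    clip = np.stack(frames).astype(np.float32) / 255.0      # (96, 224, 224, 3)
    fast = clip[::fast_stride]                               # keep every 2nd frame -> 48 frames
    slow_idx = np.linspace(0, len(clip) - 1, slow_count).astype(int)
    slow = clip[slow_idx]                                    # 6 evenly spaced frames
    def to_tensor(x):
        # (T, H, W, C) -> (1, C, T, H, W), the layout expected by PyTorch 3D convolutions
        return torch.from_numpy(x).permute(3, 0, 1, 2).unsqueeze(0)
    return to_tensor(fast), to_tensor(slow)                  # fast for FNet, slow for SNet
```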
  • In one embodiment, the method further includes: acquiring a plurality of training video data; preprocessing the training video data to obtain preprocessed training video data; adding labels to the corresponding preprocessed training video data according to the gesture type in each of the training video data to obtain a training data set; and pre-training the gesture recognition network model according to the training data set to obtain the pre-trained gesture recognition network model.
  • In specific applications, a large amount of training video data of a preset size containing each preset gesture type is obtained; the training video data is preprocessed, and a label is added to each corresponding preprocessed training video data according to the gesture type of the target user in that training video data, to obtain a training data set.
  • The training data set is divided into training sample data and test sample data, and the gesture recognition network model is pre-trained based on the Stochastic Gradient Descent (SGD) algorithm according to the training sample data and test sample data, to obtain the pre-trained gesture recognition network model, so that the pre-trained gesture recognition network model can process input data and determine the probability value that the gesture type of the user in the input data belongs to each preset gesture type.
  • the preset size can be specifically set according to actual needs.
  • the input data size of the gesture recognition network model is set to 96 time frames, and the corresponding preset size is set to 96 time frames.
  • the specific implementation manner of preprocessing the training video data is the same as the implementation manner of step S102, and details are not repeated here.
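  • The pre-training procedure described above (splitting the labelled data set and optimising with SGD) could be organised along the following lines. This is a minimal sketch assuming PyTorch; the two-pathway model, the dataset yielding (fast_clip, slow_clip, label) tuples, and all hyperparameters are illustrative assumptions rather than values fixed by the text.

```python
# Sketch of SGD pre-training on the labelled training data set (all settings are assumptions).
import torch
from torch import nn
from torch.utils.data import DataLoader, random_split

def pretrain(model, dataset, epochs=30, batch_size=8, lr=0.01):
    """Split into training/test sample data and pre-train with stochastic gradient descent."""
    n_train = int(0.9 * len(dataset))
    train_set, test_set = random_split(dataset, [n_train, len(dataset) - n_train])
    train_loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)
    test_loader = DataLoader(test_set, batch_size=batch_size)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for epoch in range(epochs):
        model.train()
        for fast_clip, slow_clip, label in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(fast_clip, slow_clip), label)   # model returns class scores
            loss.backward()
            optimizer.step()
        model.eval()                                               # check the test sample data
        correct = total = 0
        with torch.no_grad():
            for fast_clip, slow_clip, label in test_loader:
                pred = model(fast_clip, slow_clip).argmax(dim=1)
                correct += (pred == label).sum().item()
                total += label.numel()
        print(f"epoch {epoch}: test accuracy {correct / max(total, 1):.3f}")
    return model
```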
  • the gesture recognition network model includes a fast channel network model, a slow channel network model, a hybrid network model and a predictive recognition network model.
  • The fast channel network model includes a deep convolutional neural network model with a first spatiotemporal three-dimensional kernel, the slow channel network model includes a deep convolutional neural network model with a second spatiotemporal three-dimensional kernel, the hybrid network model includes a fusion layer, and the prediction and recognition network model includes a global pooling layer, a deep fusion layer, and a fully connected layer.
  • In specific applications, the gesture recognition network model includes a fast channel network model, a slow channel network model, a hybrid network model, and a prediction and recognition network model.
  • The fast channel network model includes a deep convolutional neural network model with a first spatiotemporal 3D kernel (Fast ResNet3D CNN), the slow channel network model includes a deep convolutional neural network model with a second spatiotemporal 3D kernel (Slow ResNet3D CNN), the hybrid network model includes two fusion layers (Fuse_layer), and the prediction and recognition network model includes a global pooling layer (global pooling), a deep fusion layer (concat), and a fully connected layer (fc).
  • the deep convolutional neural network model of the spatiotemporal three-dimensional kernel refers to a network structure composed of a 3D convolutional neural network layer, a 3D pooling layer, and four 3D deep residual network blocks (ResNet3d_block) connected in series.
  • In specific applications, the difference between the deep convolutional neural network model with the first spatiotemporal three-dimensional kernel and the deep convolutional neural network model with the second spatiotemporal three-dimensional kernel is that the number of convolution kernel channels of the former is smaller than that of the latter, while the input data of the former is larger than that of the latter.
  • the deep convolutional neural network model of the spatiotemporal three-dimensional kernel has a total of 101 learning layers, including 49 layers of the slow channel network, 49 layers of the fast channel network, 2 layers of the hybrid network, and 1 layer of the prediction network.
  • As shown in FIG. 3, a schematic structural diagram of a deep convolutional neural network model with a spatiotemporal three-dimensional kernel is provided.
  • the backbone network ResNet3D CNN of the deep convolutional neural network model of the spatiotemporal three-dimensional kernel is mainly used to extract the features of video sequences.
  • Its basic unit is the 3D deep residual network block ResNet3D_block.
  • The backbone network ResNet3D CNN mainly includes one 3D convolutional neural network layer and four 3D deep residual network blocks ResNet3D_block.
  • the basic convolution kernel of the 3D deep residual network block is the 3D convolution kernel, and the main parameters are the number of channels C and the number of stacks N.
  • the 3D convolutional neural network layer is used to downsample the input data to reduce the size of the input data.
  • Each 3D deep residual network block is set with a different number of channels, and the number of channels is specifically set by the fast channel network model SNet and the slow channel network model FNet.
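  • A minimal PyTorch sketch of the basic unit just described, the 3D deep residual network block, together with a stage builder parameterised by channel count C and stack count N, is given below. The kernel sizes, strides, and residual-block internals are assumptions, since the text fixes only the block counts and the C and N parameters; spatial downsampling between stages is omitted for brevity.

```python
# Sketch of the 3D deep residual network block (ResNet3D_block) and a stage builder.
from torch import nn

class ResNet3DBlock(nn.Module):
    """Basic unit: two 3x3x3 convolutions with batch norm and a skip connection."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv1 = nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm3d(out_ch)
        self.conv2 = nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm3d(out_ch)
        self.skip = nn.Identity() if in_ch == out_ch else nn.Conv3d(in_ch, out_ch, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.skip(x))

def make_stage(in_ch, out_ch, num_stacks):
    """One ResNet3D_block stage: num_stacks stacked blocks with channel count out_ch."""
    blocks = [ResNet3DBlock(in_ch, out_ch)]
    blocks += [ResNet3DBlock(out_ch, out_ch) for _ in range(num_stacks - 1)]
    return nn.Sequential(*blocks)

# Example: the four fast-channel stages with channels [8, 16, 32, 128] and stacks [3, 4, 6, 3].
fast_stages = nn.Sequential(*[make_stage(i, o, n) for (i, o, n) in
                              [(8, 8, 3), (8, 16, 4), (16, 32, 6), (32, 128, 3)]])
```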
  • the video data to be processed includes first video data and second video data.
  • In specific applications, since the input data sizes of the fast channel network model and the slow channel network model are different, the image data to be processed is set to include the first video data and the second video data, where the number of time frames of the first video data is greater than that of the second video data, the first video data is specifically the input data of the fast channel network model, and the second video data is specifically the input data of the slow channel network model.
  • the step S103 includes:
  • In specific applications, the first video data is input into the fast channel network model for processing to obtain the first processing result, and the second video data is input into the slow channel network model for processing; at the same time, according to the hybrid network model, the feature information in the fast channel network model is superimposed onto the slow channel network model by means of feature fusion, so as to realize information mixing on different time scales and obtain the second processing result.
  • Through the global pooling layer in the prediction and recognition network model, global pooling is applied to the first processing result of the fast channel network and the second processing result of the slow channel network respectively, and two pooled results are obtained correspondingly.
  • the two processing results are combined through the deep feature fusion layer to obtain the combined result.
  • the combined result is processed through the fully connected layer to obtain the probability value that the gesture in the video data to be processed belongs to each preset gesture type, and the gesture type with the largest probability value is selected as the recognition result.
  • the probability value corresponding to "normal driving" output by the pre-trained gesture recognition network model is 70%, the probability value corresponding to "making a fist” is 10%, and the probability value corresponding to "extending two fingers” is 10%;
  • the probability value corresponding to "extending the thumb” is 5%;
  • the probability value corresponding to "making a fist and shaking” is 5%, corresponding to determining that the user's gesture type in the real video data is "normal driving”.
  • the feature information in the fast channel network model is superimposed on the slow channel network model by means of feature fusion according to the hybrid network model to obtain a second processing result, including:
  • The output of the slow channel network model is set as T*F*F*C and the output of the fast channel network model as aT*F*F*bC, where a = 8 and b = 1/8, and a stride T_stride of 8 and a channel number of C are used.
  • After the second video data is input into the slow channel network model, respectively after the first fast-channel 3D deep residual network block f_res3D_Block1 and after the third fast-channel 3D deep residual network block f_res3D_Block3, the 3D convolution layer with a convolution kernel size of 1x1 in the fusion layer (fuse) of the hybrid network model performs convolution processing on the fast channel network model to obtain an output of size T*F*F*C; the convolved output of the hybrid network is superimposed with the corresponding output of the slow channel through an element-wise (eltwise) layer, so that the image features learned in the fast channel network model are iterated into the slow channel network model, and the second processing result is obtained.
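  • The lateral fusion just described (a 1x1x1 3D convolution applied to the fast-channel features with a temporal stride, followed by element-wise addition onto the slow-channel features) can be sketched as follows. This is a minimal example assuming PyTorch; the temporal stride of 8 follows from a = 8 in the text, and the example channel counts correspond to the fusion point after the first residual block.

```python
# Sketch of the hybrid-network fusion layer (Fuse_layer); illustrative, assuming PyTorch.
import torch
from torch import nn

class FuseLayer(nn.Module):
    """Project fast-channel features (aT x F x F x bC) to T x F x F x C and add element-wise."""
    def __init__(self, fast_channels, slow_channels, t_stride=8):
        super().__init__()
        # 1x1x1 3D convolution; the temporal stride of 8 aligns 48 fast frames with 6 slow frames.
        self.project = nn.Conv3d(fast_channels, slow_channels,
                                 kernel_size=1, stride=(t_stride, 1, 1))

    def forward(self, fast_feat, slow_feat):
        # fast_feat: (N, bC, aT, F, F); slow_feat: (N, C, T, F, F)
        return slow_feat + self.project(fast_feat)            # eltwise addition

# Example: fusion after the first residual blocks (8 fast channels into 64 slow channels).
fuse = FuseLayer(fast_channels=8, slow_channels=64)
fused = fuse(torch.randn(1, 8, 48, 56, 56), torch.randn(1, 64, 6, 56, 56))
print(fused.shape)    # torch.Size([1, 64, 6, 56, 56])
```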
  • As shown in FIG. 5, a schematic structural diagram of the gesture recognition network model is provided.
  • The size of the first video data input to the fast channel network model is set to (48x224x224x3); its convolutional neural network layer uses a 5x7x7 convolution kernel with 8 channels and a 1x2x2 pooling operation; the numbers of channels of its four 3D deep residual network blocks are [8, 16, 32, 128] and the numbers of stacks are [3, 4, 6, 3]; the corresponding output size of the fast channel network model is 48x7x7x128.
  • The size of the second video data input to the slow channel network model is set to (6x224x224x3); its convolutional neural network layer uses a 1x7x7 convolution kernel with 64 channels and a 1x2x2 pooling operation; the numbers of channels of its four 3D deep residual network blocks are [64, 128, 256, 512] and the numbers of stacks are [3, 4, 6, 3]; the corresponding output size of the slow channel network model is 6x7x7x512.
  • Through the prediction network model, the outputs of the fast channel network model and the slow channel network model are globally pooled respectively to obtain a 1x1x1x512-dimensional vector and a 1x1x1x128-dimensional vector; the two vectors are then merged through deep feature fusion and finally passed through a fully connected layer to obtain the probability value that the gesture in the video data to be processed belongs to each preset gesture type.
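  • The prediction and recognition head described above (global pooling of both pathway outputs, depth-wise concatenation, and a fully connected layer over the preset gesture types) can be sketched as follows. This is a minimal PyTorch example; the feature sizes are the example values from the text, and the head returns raw class scores to which a softmax can be applied to obtain the probability values.

```python
# Sketch of the prediction and recognition head: global pooling, concat, fully connected layer.
import torch
from torch import nn

class PredictionHead(nn.Module):
    def __init__(self, fast_channels=128, slow_channels=512, num_gestures=5):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)                   # global pooling layer
        self.fc = nn.Linear(fast_channels + slow_channels, num_gestures)

    def forward(self, fast_out, slow_out):
        # fast_out: (N, 128, 48, 7, 7); slow_out: (N, 512, 6, 7, 7)
        fast_vec = self.pool(fast_out).flatten(1)             # (N, 128)
        slow_vec = self.pool(slow_out).flatten(1)             # (N, 512)
        merged = torch.cat([slow_vec, fast_vec], dim=1)       # deep fusion layer (concat)
        return self.fc(merged)                                # (N, num_gestures) class scores

head = PredictionHead()
scores = head(torch.randn(2, 128, 48, 7, 7), torch.randn(2, 512, 6, 7, 7))
probs = torch.softmax(scores, dim=1)     # probability of each preset gesture type
print(probs.argmax(dim=1))               # index of the gesture type with the largest probability
```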
  • In this embodiment, the pre-trained gesture recognition network model processes the preprocessed video data to be processed to obtain a recognition result, which is sent to the preset management terminal; scenes that change at different rates in the video data can thus be recognized, image features at different temporal rates are effectively integrated, intelligent recognition of gesture types is realized, the amount of computation is reduced, the accuracy of the recognition results is improved, and the robustness is high.
  • FIG. 6 shows a structural block diagram of the gesture recognition apparatus provided by the embodiment of the present application. For convenience of description, only the part related to the embodiment of the present application is shown.
  • The gesture recognition device includes a processor, where the processor is configured to execute the following program modules stored in a memory: a first acquisition module for acquiring real video data; a first preprocessing module for preprocessing the real video data to obtain video data to be processed; an image processing module for inputting the video data to be processed into a pre-trained gesture recognition network model for processing to obtain a recognition result; and a sending module for sending the recognition result to the preset management terminal.
  • the gesture recognition device 100 includes:
  • the first acquisition module 101 is used to acquire real video data
  • a first preprocessing module 102 configured to preprocess the real video data to obtain to-be-processed video data
  • an image processing module 103 configured to input the video data to be processed into a pre-trained gesture recognition network model for processing, and obtain a recognition result;
  • the sending module 104 is configured to send the identification result to the preset management terminal.
  • the first preprocessing module includes:
  • a frame-by-frame processing unit configured to perform frame-by-frame processing on the real video data to obtain frame-by-frame video clips
  • a recombination unit configured to recombine the framed video clips in a preset manner to obtain the to-be-processed video data.
  • The recombination unit includes:
  • the recombination subunit is used for selecting and recombining a plurality of frame-divided video clips according to the frame skipping processing mode to obtain the to-be-processed video data.
  • the gesture recognition device further includes:
  • the second acquisition module is used to acquire multiple training video data
  • a second preprocessing module configured to preprocess the training video data to obtain preprocessed training video data
  • the labeling module is used to respectively add labels to the corresponding preprocessed training video data according to the gesture type in each of the training video data to obtain a training data set;
  • the pre-training module is used to pre-train the gesture recognition network model according to the training data set, and obtain the pre-trained gesture recognition network model.
  • the gesture recognition network model includes a fast channel network model, a slow channel network model, a hybrid network model and a predictive recognition network model.
  • the video data to be processed includes first video data and second video data
  • the image processing module includes:
  • a first processing unit configured to input the first video data into the fast channel network model for processing, and obtain a first processing result
  • a second processing unit configured to input the second video data into the slow channel network model, and perform processing through the slow channel network model and the hybrid network model to obtain a second processing result
  • a fusion unit configured to perform fusion processing on the first processing result and the second processing result through the prediction and recognition network model to obtain a probability value that the gesture in the to-be-processed video data belongs to each preset gesture type
  • the recognition unit is used to select the gesture type with the largest probability value as the recognition result.
  • The fast channel network model includes a deep convolutional neural network model with a first spatiotemporal three-dimensional kernel, the slow channel network model includes a deep convolutional neural network model with a second spatiotemporal three-dimensional kernel, the hybrid network model includes a fusion layer, and the prediction and recognition network model includes a global pooling layer, a deep fusion layer, and a fully connected layer.
  • In this embodiment, the pre-trained gesture recognition network model processes the preprocessed video data to be processed to obtain a recognition result, which is sent to the preset management terminal; scenes that change at different rates in the video data can thus be recognized, image features at different temporal rates are effectively integrated, intelligent recognition of gesture types is realized, the amount of computation is reduced, the accuracy of the recognition results is improved, and the robustness is high.
  • FIG. 7 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
  • The terminal device 7 of this embodiment includes: at least one processor 70 (only one is shown in FIG. 7), a memory 71, and a computer program 72 stored in the memory 71 and executable on the at least one processor 70; when the processor 70 executes the computer program 72, the steps in any of the foregoing gesture recognition method embodiments are implemented.
  • the terminal device 7 may be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
  • the terminal device may include, but is not limited to, a processor 70 and a memory 71 .
  • FIG. 7 is only an example of the terminal device 7 and does not constitute a limitation on the terminal device 7, which may include more or fewer components than shown, or combine some components, or use different components; for example, it may also include input and output devices, network access devices, and the like.
  • The processor 70 may be a central processing unit (CPU), and may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the memory 71 may be an internal storage unit of the terminal device 7 in some embodiments, such as a hard disk or a memory of the terminal device 7 .
  • The memory 71 may also be an external storage device of the terminal device 7 in other embodiments, for example, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card.
  • the memory 71 may also include both an internal storage unit of the terminal device 7 and an external storage device.
  • the memory 71 is used to store an operating system, an application program, a boot loader (Boot Loader), data, and other programs, such as program codes of the computer program, and the like.
  • the memory 71 may also be used to temporarily store data that has been output or will be output.
  • Embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the steps in the foregoing method embodiments can be implemented.
  • The embodiments of the present application provide a computer program product; when the computer program product runs on a mobile terminal, the mobile terminal can implement the steps in the foregoing method embodiments when executing it.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as an independent product, may be stored in a computer-readable storage medium.
  • the present application realizes all or part of the processes in the methods of the above embodiments, which can be completed by instructing the relevant hardware through a computer program, and the computer program can be stored in a computer-readable storage medium.
  • the computer program includes computer program code
  • the computer program code may be in the form of source code, object code, executable file or some intermediate form, and the like.
  • the computer-readable medium may include at least: any entity or device capable of carrying the computer program code to the photographing device/terminal device, recording medium, computer memory, read-only memory (ROM, Read-Only Memory), random access memory (RAM, Random Access Memory), electrical carrier signals, telecommunication signals, and software distribution media.
  • Examples of the software distribution media include a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk.
  • In some jurisdictions, according to legislation and patent practice, computer-readable media may not be electrical carrier signals or telecommunication signals.
  • the disclosed apparatus/network device and method may be implemented in other manners.
  • the apparatus/network device embodiments described above are only illustrative.
  • The division of the modules or units is only a division by logical function; in actual implementation, there may be other division methods, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Multimedia (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Biology (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Image Analysis (AREA)

Abstract

The present application discloses a gesture recognition method and apparatus, a terminal device, and a readable storage medium. The method includes: acquiring real video data; preprocessing the real video data to obtain video data to be processed; inputting the video data to be processed into a pre-trained gesture recognition network model for processing to obtain a recognition result; and sending the recognition result to a preset management terminal. By recognizing scenes that change at different rates in the video data, the present application effectively integrates image features at different temporal rates, realizes intelligent recognition of gesture types, reduces the amount of computation, improves the accuracy of the recognition results, and is highly robust.

Description

Gesture recognition method and apparatus, terminal device, and readable storage medium
Technical Field
The present application relates to the technical field of image data processing, and in particular to a gesture recognition method and apparatus, a terminal device, and a readable storage medium.
Background
In rail transit systems, when a vehicle passes a fixed checkpoint, the driver needs to make corresponding gestures as required in order to communicate with ground staff. In the traditional gesture-based management approach, safety managers review round-the-clock surveillance video and judge from the driver's gestures whether the driver performs the required operations; this approach consumes a great deal of manpower and material resources and is inefficient.
Related gesture recognition methods mainly realize real-time, offline, fully automatic gesture recognition through different artificial intelligence algorithms and determine the corresponding recognition results; however, the recognition results of these methods have low accuracy and poor robustness.
Technical Problem
One of the purposes of the embodiments of the present application is to provide a gesture recognition method and apparatus, a terminal device, and a readable storage medium, aiming to solve the problem that the recognition results of related gesture recognition methods have low accuracy and poor robustness.
Technical Solution
To solve the above technical problem, the technical solutions adopted in the embodiments of the present application are as follows:
In a first aspect, a gesture recognition method is provided, including:
acquiring real video data;
preprocessing the real video data to obtain video data to be processed;
inputting the video data to be processed into a pre-trained gesture recognition network model for processing to obtain a recognition result;
sending the recognition result to a preset management terminal.
In a second aspect, a gesture recognition apparatus is provided, including:
a first acquisition module configured to acquire real video data;
a first preprocessing module configured to preprocess the real video data to obtain video data to be processed;
an image processing module configured to input the video data to be processed into a pre-trained gesture recognition network model for processing to obtain a recognition result;
a sending module configured to send the recognition result to a preset management terminal.
In a third aspect, a terminal device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the gesture recognition method according to the first aspect when executing the computer program.
In a fourth aspect, a computer-readable storage medium is provided, which stores a computer program that, when executed by a processor, implements the gesture recognition method according to the first aspect.
In a fifth aspect, a computer program product is provided that, when run on a terminal device, enables the terminal device to execute the gesture recognition method according to the first aspect.
Beneficial Effects
The beneficial effect of the gesture recognition method provided by the embodiments of the present application is that the preprocessed video data to be processed is processed by a pre-trained gesture recognition network model to obtain a recognition result, which is sent to a preset management terminal; by recognizing scenes that change at different rates in the video data, the method effectively integrates image features at different temporal rates, realizes intelligent recognition of gesture types, reduces the amount of computation, improves the accuracy of the recognition results, and is highly robust.
Brief Description of the Drawings
In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments or the exemplary technologies are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic flowchart of a gesture recognition method provided by an embodiment of the present application;
FIG. 2 is a schematic flowchart of step S102 of the gesture recognition method provided by an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a deep convolutional neural network model with a spatiotemporal three-dimensional kernel provided by an embodiment of the present application;
FIG. 4 is a schematic flowchart of step S103 of the gesture recognition method provided by an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a gesture recognition network model provided by an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a gesture recognition apparatus provided by an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a terminal device provided by an embodiment of the present application.
Embodiments of the Present Invention
In order to make the purposes, technical solutions, and advantages of the present application clearer, the present application is described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention and are not intended to limit the present application.
It should be noted that when a component is referred to as being "fixed to" or "disposed on" another component, it may be directly on the other component or indirectly on the other component. When a component is referred to as being "connected to" another component, it may be directly or indirectly connected to the other component. The orientations or positional relationships indicated by terms such as "upper", "lower", "left", and "right" are based on the orientations or positional relationships shown in the drawings, are only for convenience of description, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation; they therefore cannot be understood as limiting the present application, and those of ordinary skill in the art can understand the specific meanings of the above terms according to the specific circumstances. The terms "first" and "second" are used only for convenience of description and cannot be understood as indicating or implying relative importance or implicitly indicating the number of technical features. "A plurality of" means two or more, unless otherwise specifically and clearly defined.
In order to explain the technical solutions provided by the present application, a detailed description is given below with reference to the specific drawings and embodiments.
The gesture recognition method provided by some embodiments of the present application can be applied to terminal devices such as mobile phones, tablet computers, wearable devices, vehicle-mounted devices, and notebook computers; the embodiments of the present application do not impose any restriction on the specific type of terminal device.
FIG. 1 shows a schematic flowchart of the gesture recognition method provided by the present application; as an example and not a limitation, the method can be applied to the above-mentioned vehicle-mounted device.
S101: Acquire real video data.
In specific applications, real video data containing the hand movements of a target user is obtained by shooting with a preset camera, where the target user refers to a user whose gesture type needs to be recognized.
In this embodiment, the target user includes but is not limited to a train driver or a subway driver; correspondingly, the camera needs to be installed inside the train cab or subway cab so as to facilitate capturing real video data containing the hand movements of the train driver or subway driver.
S102: Preprocess the real video data to obtain video data to be processed.
In specific applications, the real video data is preprocessed to obtain the video data to be processed, where the preprocessing methods include but are not limited to frame division, frame skipping, and recombination. By preprocessing the real video data to obtain the video data to be processed, the amount of data computation can be reduced and computational efficiency can be improved.
S103: Input the video data to be processed into a pre-trained gesture recognition network model for processing to obtain a recognition result.
In specific applications, the video data to be processed is input into the pre-trained gesture recognition network model for processing, the probability value that the gesture of the target user in the video data to be processed belongs to each preset gesture type is obtained, and the gesture type of the target user in the video data to be processed is determined according to the probability values as the recognition result.
In actual scenarios, the gesture actions of target users with different identities include multiple different types; correspondingly, the preset gesture types can be specifically set according to the identity of the target user. For example, when the target user is a train driver, the corresponding preset gesture types include but are not limited to "normal driving", "make a fist", "extend two fingers", "extend thumb", and "make a fist and shake"; these gesture types are used to indicate the driving state of the train driver.
S104: Send the recognition result to a preset management terminal.
In specific applications, the recognition result is sent to the preset management terminal of the administrator, so that the administrator can determine the driving state of the target user based on the recognition result.
As shown in FIG. 2, in one embodiment, step S102 includes:
S1021: Divide the real video data into frames to obtain framed video clips;
S1022: Recombine the framed video clips in a preset manner to obtain the video data to be processed.
In specific applications, the real video data is divided into frames and converted into a plurality of framed video clips, the size of each framed video clip being one time frame. The framed video clips are recombined in a preset manner to obtain the image data to be processed.
In one embodiment, step S1022 includes:
selecting a plurality of framed video clips according to a frame-skipping method and recombining them to obtain the video data to be processed.
In specific applications, a plurality of framed video clips are selected according to a frame-skipping method, and the selected framed video clips are recombined in the order of their time frames to obtain the video data to be processed. It can be understood that the video data to be processed obtained by recombination is specifically continuous image data containing multiple time frames.
In specific applications, the real video data contains 24 frames of image data per second, and it is set that 4 seconds of real video data, 96 frames in total, are acquired each time for processing. In order to recognize gesture action types of the target user that change at different rates in the video data, the gesture recognition network model is set to include two networks, a slow-channel SNet network model and a fast-channel FNet network model, to integrate image features at different temporal rates; correspondingly, the input data of the fast-channel FNet network and the input data of the slow-channel SNet network model are obtained through different frame-skipping methods. For example, by keeping 1 frame out of every 2 frames, 48 frames of continuous image data are selected as the input data of the fast-channel FNet network; by keeping 1 frame every 12 frames, 6 frames of continuous image data are selected as the input data of the slow-channel SNet network model.
In one embodiment, the method further includes:
acquiring a plurality of training video data;
preprocessing the training video data to obtain preprocessed training video data;
adding labels to the corresponding preprocessed training video data according to the gesture type in each of the training video data, to obtain a training data set;
pre-training the gesture recognition network model according to the training data set to obtain the pre-trained gesture recognition network model.
In specific applications, a large amount of training video data of a preset size containing each preset gesture type is obtained. The training video data is preprocessed to obtain preprocessed training video data, and a label is added to each corresponding preprocessed training video data according to the gesture type of the target user in that training video data, to obtain a training data set. The training data set is divided into training sample data and test sample data, and the gesture recognition network model is pre-trained based on the Stochastic Gradient Descent (SGD) algorithm according to the training sample data and test sample data, to obtain the pre-trained gesture recognition network model, so that the pre-trained gesture recognition network model can process input data and determine the probability value that the gesture type of the user in the input data belongs to each preset gesture type.
The preset size can be specifically set according to actual needs. In this embodiment, the input data size of the gesture recognition network model is set to 96 time frames, so the preset size is correspondingly set to 96 time frames.
In specific applications, the specific implementation of preprocessing the training video data is the same as that of step S102 and is not repeated here.
For example, 200,000 pieces of 84-frame training video data containing the "normal driving" gesture type are acquired and, after preprocessing, a "normal driving" label is added to each of these 200,000 preprocessed training video data; 200,000 pieces of 84-frame training video data containing the "make a fist" gesture type are acquired and, after preprocessing, a "make a fist" label is added to each of them; 200,000 pieces of 84-frame training video data containing the "extend two fingers" gesture type are acquired and, after preprocessing, an "extend two fingers" label is added to each of them; 200,000 pieces of 84-frame training video data containing the "extend thumb" gesture type are acquired and, after preprocessing, an "extend thumb" label is added to each of them; and 200,000 pieces of 84-frame training video data containing the "make a fist and shake" gesture type are acquired and, after preprocessing, a "make a fist and shake" label is added to each of them.
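A labelled training set of the kind described in this example could be assembled along the following lines. This is a minimal sketch assuming PyTorch's Dataset interface and a preprocessing function like the frame-skipping sketch given earlier; the directory layout, file format, and label names are illustrative assumptions and are not part of the original disclosure.

```python
# Sketch of assembling the labelled training data set (directory layout is an assumption).
from pathlib import Path
import numpy as np
import torch
from torch.utils.data import Dataset

GESTURE_LABELS = {"normal_driving": 0, "make_a_fist": 1, "extend_two_fingers": 2,
                  "extend_thumb": 3, "fist_and_shake": 4}

class GestureClipDataset(Dataset):
    """Each sample is one preprocessed training clip together with its gesture-type label."""
    def __init__(self, root, preprocess):
        self.preprocess = preprocess
        self.items = []
        for name, label in GESTURE_LABELS.items():
            # e.g. 200,000 clips of 84 frames stored per gesture type under <root>/<name>/
            for clip_path in sorted(Path(root, name).glob("*.npy")):
                self.items.append((clip_path, label))

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        clip_path, label = self.items[idx]
        frames = list(np.load(clip_path))                 # stored frames of one training clip
        fast_clip, slow_clip = self.preprocess(frames)    # same preprocessing as step S102
        return fast_clip.squeeze(0), slow_clip.squeeze(0), torch.tensor(label)
```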
In one embodiment, the gesture recognition network model includes a fast channel network model, a slow channel network model, a hybrid network model, and a prediction and recognition network model.
In one embodiment, the fast channel network model includes a deep convolutional neural network model with a first spatiotemporal three-dimensional kernel, the slow channel network model includes a deep convolutional neural network model with a second spatiotemporal three-dimensional kernel, the hybrid network model includes a fusion layer, and the prediction and recognition network model includes a global pooling layer, a deep fusion layer, and a fully connected layer.
In specific applications, the gesture recognition network model includes a fast channel network model, a slow channel network model, a hybrid network model, and a prediction and recognition network model. The fast channel network model includes a deep convolutional neural network model with a first spatiotemporal 3D kernel (Fast ResNet3D CNN), the slow channel network model includes a deep convolutional neural network model with a second spatiotemporal 3D kernel (Slow ResNet3D CNN), the hybrid network model includes two fusion layers (Fuse_layer), and the prediction and recognition network model includes a global pooling layer (global pooling), a deep fusion layer (concat), and a fully connected layer (fc). A deep convolutional neural network model with a spatiotemporal three-dimensional kernel refers to a network structure composed of one 3D convolutional neural network layer, one 3D pooling layer, and four 3D deep residual network blocks (deep residual network, ResNet3d_block) connected in series.
In specific applications, the difference between the deep convolutional neural network model with the first spatiotemporal three-dimensional kernel and the deep convolutional neural network model with the second spatiotemporal three-dimensional kernel is that the number of convolution kernel channels of the former is smaller than that of the latter, while the input data of the former is larger than that of the latter.
In specific applications, the deep convolutional neural network model with a spatiotemporal three-dimensional kernel has 101 learning layers in total, including 49 layers in the slow channel network, 49 layers in the fast channel network, 2 layers in the hybrid network, and 1 layer in the prediction network.
As shown in FIG. 3, a schematic structural diagram of a deep convolutional neural network model with a spatiotemporal three-dimensional kernel is provided.
In FIG. 3, the backbone network ResNet3D CNN of the deep convolutional neural network model with a spatiotemporal three-dimensional kernel is mainly used to extract features of video sequences; its basic unit is the 3D deep residual network block ResNet3D_block, and the backbone network ResNet3D CNN mainly includes one 3D convolutional neural network layer and four 3D deep residual network blocks ResNet3D_block. The basic convolution kernel of a 3D deep residual network block is a 3D convolution kernel, and its main parameters are the number of channels C and the number of stacks N. The 3D convolutional neural network layer is used to downsample the input data to reduce its size. Each 3D deep residual network block is set with a different number of channels, and the numbers of channels are specifically set by the fast channel network model SNet and the slow channel network model FNet correspondingly.
In one embodiment, the video data to be processed includes first video data and second video data.
In specific applications, since the input data sizes of the fast channel network model and the slow channel network model are different, the image data to be processed is set to include the first video data and the second video data, where the number of time frames of the first video data is greater than that of the second video data, the first video data is specifically the input data of the fast channel network model, and the second video data is specifically the input data of the slow channel network model.
As shown in FIG. 4, in one embodiment, step S103 includes:
S1031: Input the first video data into the fast channel network model for processing to obtain a first processing result;
S1032: Input the second video data into the slow channel network model, and perform processing through the slow channel network model and the hybrid network model to obtain a second processing result;
S1033: Perform fusion processing on the first processing result and the second processing result through the prediction and recognition network model to obtain the probability value that the gesture in the video data to be processed belongs to each preset gesture type;
S1034: Select the gesture type with the largest probability value as the recognition result.
In specific applications, the first video data is input into the fast channel network model for processing to obtain the first processing result, and the second video data is input into the slow channel network model for processing; at the same time, according to the hybrid network model, the feature information in the fast channel network model is superimposed onto the slow channel network model by means of feature fusion, so as to realize information mixing on different time scales and obtain the second processing result. Through the global pooling layer in the prediction and recognition network model, global pooling is applied to the first processing result of the fast channel network and the second processing result of the slow channel network respectively, and two pooled results are obtained correspondingly. The two results are merged through the deep feature fusion layer to obtain a merged result, which is then processed through the fully connected layer to obtain the probability value that the gesture in the video data to be processed belongs to each preset gesture type, and the gesture type with the largest probability value is selected as the recognition result.
For example, if the probability value output by the pre-trained gesture recognition network model corresponding to "normal driving" is 70%, the probability value corresponding to "make a fist" is 10%, the probability value corresponding to "extend two fingers" is 10%, the probability value corresponding to "extend thumb" is 5%, and the probability value corresponding to "make a fist and shake" is 5%, it is correspondingly determined that the gesture type of the user in the real video data is "normal driving".
In specific applications, superimposing the feature information in the fast channel network model onto the slow channel network model by means of feature fusion according to the hybrid network model to obtain the second processing result includes:
setting the output of the slow channel network model as T*F*F*C and the output of the fast channel network model as aT*F*F*bC, where a = 8 and b = 1/8, and using a stride T_stride of 8 and a channel number of C;
after the second video data is input into the slow channel network model, respectively after the first fast-channel 3D deep residual network block f_res3D_Block1 and after the third fast-channel 3D deep residual network block f_res3D_Block3, performing convolution processing on the fast channel network model through the 3D convolution layer with a convolution kernel size of 1x1 in the fusion layer (fuse) of the hybrid network model to obtain an output of size T*F*F*C; the convolved output of the hybrid network is superimposed with the corresponding output of the slow channel through an element-wise (eltwise) layer, so that the image features learned in the fast channel network model are iterated into the slow channel network model, and the second processing result is obtained.
As shown in FIG. 5, a schematic structural diagram of a gesture recognition network model is provided.
In FIG. 5, the size of the first video data input to the fast channel network model is set to (48x224x224x3); its convolutional neural network layer uses a 5x7x7 convolution kernel with 8 channels and a 1x2x2 pooling operation; the numbers of channels of its four 3D deep residual network blocks are [8, 16, 32, 128] and the numbers of stacks are [3, 4, 6, 3]; the corresponding output size of the fast channel network model is 48x7x7x128.
The size of the second video data input to the slow channel network model is set to (6x224x224x3); its convolutional neural network layer uses a 1x7x7 convolution kernel with 64 channels and a 1x2x2 pooling operation; the numbers of channels of its four 3D deep residual network blocks are [64, 128, 256, 512] and the numbers of stacks are [3, 4, 6, 3]; the corresponding output size of the slow channel network model is 6x7x7x512.
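The two pathway configurations stated above can be written down as follows. This is an illustrative sketch assuming PyTorch; the stem stride, padding, and 3D pooling parameters are assumptions, since the text specifies only the kernel sizes, channel counts, pooling operation, stack numbers, and output sizes.

```python
# The two pathway configurations from the text, plus an illustrative stem builder.
from torch import nn

FAST_PATHWAY = {                   # input first video data: 48 x 224 x 224 x 3
    "stem_kernel": (5, 7, 7),      # 5x7x7 convolution kernel, 8 channels
    "stem_channels": 8,
    "pool": (1, 2, 2),
    "block_channels": [8, 16, 32, 128],
    "block_stacks": [3, 4, 6, 3],
    "output_size": (48, 7, 7, 128),
}
SLOW_PATHWAY = {                   # input second video data: 6 x 224 x 224 x 3
    "stem_kernel": (1, 7, 7),      # 1x7x7 convolution kernel, 64 channels
    "stem_channels": 64,
    "pool": (1, 2, 2),
    "block_channels": [64, 128, 256, 512],
    "block_stacks": [3, 4, 6, 3],
    "output_size": (6, 7, 7, 512),
}

def build_stem(cfg, in_channels=3):
    """The 3D convolutional layer and 3D pooling layer at the front of each pathway."""
    k = cfg["stem_kernel"]
    return nn.Sequential(
        nn.Conv3d(in_channels, cfg["stem_channels"], kernel_size=k,
                  stride=(1, 2, 2), padding=tuple(x // 2 for x in k)),   # stride is assumed
        nn.BatchNorm3d(cfg["stem_channels"]),
        nn.ReLU(inplace=True),
        nn.MaxPool3d(kernel_size=(1, 3, 3), stride=cfg["pool"], padding=(0, 1, 1)),
    )
```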
Through the prediction network model, the outputs of the fast channel network model and the slow channel network model are globally pooled respectively to obtain a 1x1x1x512-dimensional vector and a 1x1x1x128-dimensional vector; the two vectors are then merged through deep feature fusion and finally passed through a fully connected layer to obtain the probability value that the gesture in the video data to be processed belongs to each preset gesture type.
In this embodiment, the pre-trained gesture recognition network model processes the preprocessed video data to be processed to obtain a recognition result, which is sent to the preset management terminal; scenes that change at different rates in the video data can thus be recognized, image features at different temporal rates are effectively integrated, intelligent recognition of gesture types is realized, the amount of computation is reduced, the accuracy of the recognition results is improved, and the robustness is high.
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Corresponding to the gesture recognition method described in the above embodiments, FIG. 6 shows a structural block diagram of the gesture recognition apparatus provided by an embodiment of the present application; for convenience of description, only the parts related to the embodiments of the present application are shown.
In this embodiment, the gesture recognition apparatus includes a processor, where the processor is configured to execute the following program modules stored in a memory: a first acquisition module configured to acquire real video data; a first preprocessing module configured to preprocess the real video data to obtain video data to be processed; an image processing module configured to input the video data to be processed into a pre-trained gesture recognition network model for processing to obtain a recognition result; and a sending module configured to send the recognition result to a preset management terminal.
Referring to FIG. 6, the gesture recognition apparatus 100 includes:
a first acquisition module 101 configured to acquire real video data;
a first preprocessing module 102 configured to preprocess the real video data to obtain video data to be processed;
an image processing module 103 configured to input the video data to be processed into a pre-trained gesture recognition network model for processing to obtain a recognition result;
a sending module 104 configured to send the recognition result to a preset management terminal.
In one embodiment, the first preprocessing module includes:
a frame division unit configured to divide the real video data into frames to obtain framed video clips;
a recombination unit configured to recombine the framed video clips in a preset manner to obtain the video data to be processed.
In one embodiment, the recombination unit includes:
a recombination subunit configured to select a plurality of framed video clips according to a frame-skipping method and recombine them to obtain the video data to be processed.
In one embodiment, the gesture recognition apparatus further includes:
a second acquisition module configured to acquire a plurality of training video data;
a second preprocessing module configured to preprocess the training video data to obtain preprocessed training video data;
a labeling module configured to add labels to the corresponding preprocessed training video data according to the gesture type in each of the training video data, to obtain a training data set;
a pre-training module configured to pre-train the gesture recognition network model according to the training data set to obtain the pre-trained gesture recognition network model.
In one embodiment, the gesture recognition network model includes a fast channel network model, a slow channel network model, a hybrid network model, and a prediction and recognition network model.
In one embodiment, the video data to be processed includes first video data and second video data;
the image processing module includes:
a first processing unit configured to input the first video data into the fast channel network model for processing to obtain a first processing result;
a second processing unit configured to input the second video data into the slow channel network model and perform processing through the slow channel network model and the hybrid network model to obtain a second processing result;
a fusion unit configured to perform fusion processing on the first processing result and the second processing result through the prediction and recognition network model to obtain the probability value that the gesture in the video data to be processed belongs to each preset gesture type;
a recognition unit configured to select the gesture type with the largest probability value as the recognition result.
In one embodiment, the fast channel network model includes a deep convolutional neural network model with a first spatiotemporal three-dimensional kernel, the slow channel network model includes a deep convolutional neural network model with a second spatiotemporal three-dimensional kernel, the hybrid network model includes a fusion layer, and the prediction and recognition network model includes a global pooling layer, a deep fusion layer, and a fully connected layer.
In this embodiment, the pre-trained gesture recognition network model processes the preprocessed video data to be processed to obtain a recognition result, which is sent to the preset management terminal; scenes that change at different rates in the video data can thus be recognized, image features at different temporal rates are effectively integrated, intelligent recognition of gesture types is realized, the amount of computation is reduced, the accuracy of the recognition results is improved, and the robustness is high.
It should be noted that, since the information exchange between and the execution processes of the above apparatus/units are based on the same concept as the method embodiments of the present application, their specific functions and the technical effects they bring can be found in the method embodiment section and are not repeated here.
FIG. 7 is a schematic structural diagram of a terminal device provided by an embodiment of the present application. As shown in FIG. 7, the terminal device 7 of this embodiment includes: at least one processor 70 (only one is shown in FIG. 7), a memory 71, and a computer program 72 stored in the memory 71 and executable on the at least one processor 70; when the processor 70 executes the computer program 72, the steps in any of the foregoing gesture recognition method embodiments are implemented.
The terminal device 7 may be a computing device such as a desktop computer, a notebook, a palmtop computer, or a cloud server. The terminal device may include, but is not limited to, the processor 70 and the memory 71. Those skilled in the art can understand that FIG. 7 is only an example of the terminal device 7 and does not constitute a limitation on the terminal device 7, which may include more or fewer components than shown, or combine some components, or use different components; for example, it may also include input and output devices, network access devices, and the like.
The processor 70 may be a central processing unit (CPU), and may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 71 may, in some embodiments, be an internal storage unit of the terminal device 7, such as a hard disk or memory of the terminal device 7. In other embodiments, the memory 71 may also be an external storage device of the terminal device 7, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card equipped on the terminal device 7. Further, the memory 71 may include both an internal storage unit of the terminal device 7 and an external storage device. The memory 71 is used to store an operating system, application programs, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program; the memory 71 may also be used to temporarily store data that has been output or is to be output.
Those skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the above functional units and modules is used as an example; in practical applications, the above functions may be allocated to different functional units and modules as needed, that is, the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit; the above integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for convenience of distinguishing them from each other and are not used to limit the protection scope of the present application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
Embodiments of the present application further provide a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps in each of the foregoing method embodiments can be implemented.
Embodiments of the present application provide a computer program product; when the computer program product runs on a mobile terminal, the mobile terminal can implement the steps in each of the foregoing method embodiments when executing it.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the present application realizes all or part of the processes in the methods of the above embodiments by instructing the relevant hardware through a computer program; the computer program may be stored in a computer-readable storage medium and, when executed by a processor, can implement the steps of each of the above method embodiments. The computer program includes computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to the photographing apparatus/terminal device, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, and a software distribution medium, for example, a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In some jurisdictions, according to legislation and patent practice, computer-readable media may not be electrical carrier signals or telecommunication signals.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not detailed or described in a certain embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled professionals may use different methods to implement the described functions for each specific application, but such implementations should not be considered to go beyond the scope of the present application.
In the embodiments provided by the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the apparatus/network device embodiments described above are only illustrative; for example, the division of the modules or units is only a division by logical function, and in actual implementation there may be other division methods, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, or indirect coupling or communication connection of apparatuses or units, and may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
The above are only optional embodiments of the present application and are not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall be included within the scope of the claims of the present application.

Claims (15)

  1. A gesture recognition method, characterized by comprising:
    acquiring real video data;
    preprocessing the real video data to obtain video data to be processed;
    inputting the video data to be processed into a pre-trained gesture recognition network model for processing to obtain a recognition result;
    sending the recognition result to a preset management terminal.
  2. The gesture recognition method according to claim 1, characterized in that the preprocessing the real video data to obtain video data to be processed comprises:
    dividing the real video data into frames to obtain framed video clips;
    recombining the framed video clips in a preset manner to obtain the video data to be processed.
  3. The gesture recognition method according to claim 2, characterized in that the recombining the framed video clips in a preset manner to obtain the video data to be processed comprises:
    selecting a plurality of framed video clips according to a frame-skipping method and recombining them to obtain the video data to be processed.
  4. The gesture recognition method according to claim 1, characterized in that the method further comprises:
    acquiring a plurality of training video data;
    preprocessing the training video data to obtain preprocessed training video data;
    adding labels to the corresponding preprocessed training video data according to the gesture type in each of the training video data to obtain a training data set;
    pre-training the gesture recognition network model according to the training data set to obtain the pre-trained gesture recognition network model.
  5. The gesture recognition method according to any one of claims 1 to 4, characterized in that the gesture recognition network model comprises a fast channel network model, a slow channel network model, a hybrid network model, and a prediction and recognition network model.
  6. The gesture recognition method according to claim 5, characterized in that the video data to be processed comprises first video data and second video data;
    the inputting the video data to be processed into a pre-trained gesture recognition network model for processing to obtain a recognition result comprises:
    inputting the first video data into the fast channel network model for processing to obtain a first processing result;
    inputting the second video data into the slow channel network model, and performing processing through the slow channel network model and the hybrid network model to obtain a second processing result;
    performing fusion processing on the first processing result and the second processing result through the prediction and recognition network model to obtain a probability value that a gesture in the video data to be processed belongs to each preset gesture type;
    selecting the gesture type with the largest probability value as the recognition result.
  7. The gesture recognition method according to claim 5, characterized in that the fast channel network model comprises a deep convolutional neural network model with a first spatiotemporal three-dimensional kernel, the slow channel network model comprises a deep convolutional neural network model with a second spatiotemporal three-dimensional kernel, the hybrid network model comprises a fusion layer, and the prediction and recognition network model comprises a global pooling layer, a deep fusion layer, and a fully connected layer.
  8. A gesture recognition apparatus, characterized by comprising:
    a first acquisition module configured to acquire real video data;
    a first preprocessing module configured to preprocess the real video data to obtain video data to be processed;
    an image processing module configured to input the video data to be processed into a pre-trained gesture recognition network model for processing to obtain a recognition result;
    a sending module configured to send the recognition result to a preset management terminal.
  9. The gesture recognition apparatus according to claim 8, characterized in that the first preprocessing module comprises:
    a frame division unit configured to divide the real video data into frames to obtain framed video clips;
    a recombination unit configured to recombine the framed video clips in a preset manner to obtain the video data to be processed.
  10. The gesture recognition apparatus according to claim 9, characterized in that the recombination unit comprises:
    a recombination subunit configured to select a plurality of framed video clips according to a frame-skipping method and recombine them to obtain the video data to be processed.
  11. The gesture recognition apparatus according to claim 8, characterized by further comprising:
    a second acquisition module configured to acquire a plurality of training video data;
    a second preprocessing module configured to preprocess the training video data to obtain preprocessed training video data;
    a labeling module configured to add labels to the corresponding preprocessed training video data according to the gesture type in each of the training video data to obtain a training data set;
    a pre-training module configured to pre-train the gesture recognition network model according to the training data set to obtain the pre-trained gesture recognition network model.
  12. The gesture recognition apparatus according to claim 8, characterized in that the gesture recognition network model comprises a fast channel network model, a slow channel network model, a hybrid network model, and a prediction and recognition network model.
  13. The gesture recognition apparatus according to claim 12, characterized in that the video data to be processed comprises first video data and second video data;
    the image processing module comprises:
    a first processing unit configured to input the first video data into the fast channel network model for processing to obtain a first processing result;
    a second processing unit configured to input the second video data into the slow channel network model and perform processing through the slow channel network model and the hybrid network model to obtain a second processing result;
    a fusion unit configured to perform fusion processing on the first processing result and the second processing result through the prediction and recognition network model to obtain a probability value that a gesture in the video data to be processed belongs to each preset gesture type;
    a recognition unit configured to select the gesture type with the largest probability value as the recognition result.
  14. A terminal device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any one of claims 1 to 7 when executing the computer program.
  15. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the method according to any one of claims 1 to 7.
PCT/CN2021/075094 2021-02-03 2021-02-03 Gesture recognition method and apparatus, terminal device, and readable storage medium WO2022165675A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2021/075094 WO2022165675A1 (zh) 2021-02-03 2021-02-03 Gesture recognition method and apparatus, terminal device, and readable storage medium
CN202180000451.3A CN112997192A (zh) 2021-02-03 2021-02-03 Gesture recognition method and apparatus, terminal device, and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/075094 WO2022165675A1 (zh) 2021-02-03 2021-02-03 Gesture recognition method and apparatus, terminal device, and readable storage medium

Publications (1)

Publication Number Publication Date
WO2022165675A1 true WO2022165675A1 (zh) 2022-08-11

Family

ID=76337136

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/075094 WO2022165675A1 (zh) 2021-02-03 2021-02-03 Gesture recognition method and apparatus, terminal device, and readable storage medium

Country Status (2)

Country Link
CN (1) CN112997192A (zh)
WO (1) WO2022165675A1 (zh)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107808150A * 2017-11-20 2018-03-16 珠海习悦信息技术有限公司 Human body video action recognition method and apparatus, storage medium, and processor

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110956059A (zh) * 2018-09-27 2020-04-03 深圳云天励飞技术有限公司 Dynamic gesture recognition method and apparatus, and electronic device
CN109886225A (zh) * 2019-02-27 2019-06-14 浙江理工大学 Online detection and recognition method for image gesture actions based on deep learning
CN110348494A (zh) * 2019-06-27 2019-10-18 中南大学 Human action recognition method based on a two-channel residual neural network
CN111105803A (zh) * 2019-12-30 2020-05-05 苏州思必驰信息科技有限公司 Method and apparatus for rapid gender recognition, and method for generating an algorithm model for gender recognition

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115578683A (zh) * 2022-12-08 2023-01-06 中国海洋大学 Method for building a dynamic gesture recognition model and dynamic gesture recognition method
CN117789302A (zh) * 2023-12-29 2024-03-29 点昀技术(深圳)有限公司 Gesture recognition method and gesture recognition model training method

Also Published As

Publication number Publication date
CN112997192A (zh) 2021-06-18


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21923703

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21923703

Country of ref document: EP

Kind code of ref document: A1