WO2020244075A1 - Sign language recognition method and apparatus, computer device, and storage medium - Google Patents

Sign language recognition method and apparatus, computer device, and storage medium (Download PDF)

Info

Publication number
WO2020244075A1
WO2020244075A1 (PCT/CN2019/103387, CN2019103387W)
Authority
WO
WIPO (PCT)
Prior art keywords
sign language
language recognition
recognition model
vectors
sample data
Prior art date
Application number
PCT/CN2019/103387
Other languages
English (en)
French (fr)
Inventor
朱文和
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2020244075A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/28 Recognition of hand or arm movements, e.g. recognition of deaf sign language

Definitions

  • This application relates to the field of computer technology, in particular to a sign language recognition method, sign language recognition device, computer equipment and non-volatile readable storage medium.
  • a sign language recognition method includes:
  • each set of hand depth image information includes multiple depth images corresponding to a series of continuous sign language actions of the palms and finger bones of the left and right hands;
  • the three-dimensional space coordinates of the palms and finger bones of the left and right hands in the multiple depth images of each set of hand depth image information are combined into a group of vectors, each group of vectors is labeled, and multiple groups of the vectors and the labels corresponding to the vectors are used as a training sample data set, where the labels are used to identify the semantics of the sign language words corresponding to each group of vectors;
  • a sign language recognition device includes:
  • the hand image acquisition module is used to acquire multiple sets of hand depth image information captured by the depth camera, where each set of hand depth image information includes multiple depth images corresponding to a series of continuous sign language actions of the left and right palms and finger bones;
  • the three-dimensional space coordinate determination module is used to determine the three-dimensional space coordinates of the left and right palms and finger bones in the three-dimensional space coordinate system of each depth image in each set of hand depth image information;
  • the training sample data set generation module is used to combine the three-dimensional space coordinates of the left and right palms and finger bones in the multiple depth images of each set of hand depth image information into a group of vectors, label each group of vectors, and use multiple groups of the vectors and the labels corresponding to the vectors as a training sample data set, where the labels are used to identify the semantics of the sign language words corresponding to each group of vectors;
  • a model training module configured to construct a sign language recognition model, and input the training sample data set into the sign language recognition model to train the sign language recognition model;
  • the model testing module is configured to obtain a test sample data set, and input the test sample data set into the sign language recognition model to test the sign language recognition model;
  • the sign language recognition module is used to obtain the sign language image input by the user, and use the sign language recognition model to perform sign language recognition on the sign language image input by the user.
  • a computer device includes a processor, and the processor is configured to implement the aforementioned sign language recognition method when executing computer-readable instructions stored in a memory.
  • a non-volatile readable storage medium has computer-readable instructions stored thereon, and when the computer-readable instructions are executed by a processor, the sign language recognition method described above is implemented.
  • This application obtains depth image information of the user's hands as training samples and automatically recognizes the user's sign language through the sign language recognition model, making sign language recognition more accurate, intelligent, and efficient, and facilitating communication between deaf-mute people and hearing people.
  • Fig. 1 is a schematic diagram of the application environment architecture of the sign language recognition method provided by an embodiment of the present application.
  • Fig. 2 is a flowchart of a sign language recognition method provided by an embodiment of the present application.
  • Fig. 3 is a schematic structural diagram of a sign language recognition device provided by an embodiment of the present application.
  • Fig. 4 is a schematic diagram of a computer device provided by an embodiment of the present application.
  • FIG. 1 is a schematic diagram of the application environment architecture of the sign language recognition method provided by an embodiment of this application.
  • the sign language recognition method in this application is applied to a computer device 1.
  • the computer device 1 may be an electronic device installed with sign language recognition software, such as a tablet computer, a smartphone, a desktop computer, or a server, where the server may be a single server, a server cluster, a cloud server, or the like.
  • the computer device 1 communicates interactively with at least one depth camera device 2.
  • the depth camera device 2 is used to capture three-dimensional images.
  • the depth camera device 2 may be a Kinect depth camera or another device with a depth imaging function.
  • the depth camera device 2 may be installed directly in the computer device 1, or it may establish a wired or wireless communication connection with the computer device 1 to realize interactive communication.
  • FIG. 2 is a flowchart of a sign language recognition method provided by an embodiment of the present application. According to different needs, the order of the steps in the flowchart can be changed, and some steps can be omitted.
  • Step S1 Acquire multiple sets of hand depth image information of the user captured by the depth camera, where each set of hand depth image information includes multiple depth images corresponding to a series of continuous sign language actions performed by the palms and finger bones of the left and right hands.
  • Every sign language in the world is composed of multiple sign language words.
  • these sign language words can be “hello”, “me”, “tomorrow”, etc.
  • Each sign language word corresponds to a series of continuous actions.
  • the multiple images in each set of user hand depth image information obtained from the depth camera correspond to one word of a sign language; for example, the sign language word "hello" corresponds to multiple (e.g. 5) images of a series of continuous actions, and the sign language word "tomorrow" likewise corresponds to multiple (e.g. 5) images of a series of continuous actions.
  • the depth image information of the user's hands may be obtained with a Kinect camera, which collects video image data of a series of actions of the user's hands; this video image data includes multiple hand images.
  • the step S1 may further include: performing noise reduction processing on the multiple sets of depth image information of the user's hand acquired from the depth camera.
  • because the depth camera may be affected by factors such as lighting and background in the environment when collecting the depth image information of the user's hands, the quality of the collected images is not high and they often contain glitch noise; to ensure recognition accuracy, the collected depth images need noise reduction processing.
  • performing noise reduction processing on the depth image may specifically be filtering discrete points in the depth image, and the noise reduction steps are as follows: (1) compute the Euclidean distances between the points in the point cloud; (2) take a threshold and group points whose Euclidean distance is below this threshold into the same class; (3) count the number of points in each class and delete the preset classes with the fewest points, for example the 5% of classes with the fewest points.
  • Step S2 Determine the three-dimensional coordinates of the left and right palms and finger bones in the three-dimensional coordinate system of each depth image in each set of hand depth image information.
  • the user's hand operation space has a linear correspondence with the three-dimensional space coordinate system, where the hand operation space is the real space of a series of continuous hand movements; from the image data that the depth camera collects in this operation space, a series of continuous depth images of the hand can be obtained.
  • the above-mentioned three-dimensional space coordinate system refers to the space coordinate system corresponding to the three-dimensional image data used to display the three-dimensional image.
  • after the depth images of the user's hands are obtained from the depth camera, the three-dimensional coordinate points of the left and right palms and finger bones in the three-dimensional coordinate system are obtained by combining the palm and finger-bone information of both hands with the depth information.
  • since each set of user hand depth images contains multiple images corresponding to a series of continuous actions, step S2 extracts the three-dimensional space coordinates of the user's left and right palms and finger bones from each image.
  • for example, if a set of user hand depth images includes 5 depth images, step S2 determines the three-dimensional space coordinate value of the user's left palm in each of the 5 depth images.
  • Step S3. Combine the three-dimensional space coordinates of the left and right palms and finger bones in the multiple depth images of each set of hand depth image information into a group of vectors, label each group of vectors, and use multiple groups of the vectors and their corresponding labels as a training sample data set, where the labels are used to identify the semantics of the sign language words corresponding to each group of vectors.
  • since each set of user hand depth images contains multiple depth images corresponding to continuous sign language actions, each group of vectors is composed of the three-dimensional space coordinates of the user's left palm, the three-dimensional space coordinates of the left-hand finger bones, the space coordinates of the right palm, and the three-dimensional space coordinates of the right-hand finger bones in the multiple images corresponding to the continuous sign language actions in that set of hand depth images.
  • for example, when the first set of user hand depth images includes 5 depth images, and the three-dimensional coordinates of the user's left and right palms and finger bones are (a1, a2, a3, a4) in the first depth image, (b1, b2, b3, b4) in the second depth image, (c1, c2, c3, c4) in the third depth image, (d1, d2, d3, d4) in the fourth depth image, and (e1, e2, e3, e4) in the fifth depth image, the first group of vectors I1 is composed of these five coordinate tuples; by analogy, multiple groups of vectors I1 to In are obtained for the multiple sets of user hand depth images.
  • the label corresponding to each group of vectors represents the semantics of the sign language word corresponding to the vector.
  • the label is added manually.
  • the step S3 may also include unifying the vector sizes, which specifically includes the following steps: 1) set a vector maximum size; 2) determine whether each group of vectors reaches the vector maximum size, and if not, pad the group of vectors with zeros so that the amount of data of the group of vectors equals the set vector maximum size.
  • differences between sign language words lead to different gestures and durations for each sign language word, and correspondingly the number of depth images acquired for each sign language word also differs, which results in a different amount of data in each group of vectors; setting a vector maximum size and unifying the size of each group of vectors makes computation easier.
  • Step S4 Construct a sign language recognition model, and input the training sample data set into the sign language recognition model to train the sign language recognition model.
  • an LSTM (Long Short-Term Memory) neural network may be used to train the sign language recognition model.
  • the basic idea of the LSTM neural network is to control the flow of information through different types of gate structures: Input Gate, Output Gate, and Forget Gate.
  • the LSTM neural network uses the following formulas to control the flow of information:
  • $i_t = \sigma(W_{ix} I_t + W_{im} m_{t-1} + W_{ic} c_{t-1} + b_i)$;
  • $f_t = \sigma(W_{fx} I_t + W_{fm} m_{t-1} + W_{fc} c_{t-1} + b_f)$;
  • $c_t = f_t \odot c_{t-1} + i_t \odot g(W_{cx} I_t + W_{cm} m_{t-1} + b_c)$;
  • $o_t = \sigma(W_{ox} I_t + W_{om} m_{t-1} + W_{oc} c_{t-1} + b_o)$;
  • $m_t = o_t \odot h(c_t)$;
  • where, given the input sequence $I = (I_1, I_2, \dots, I_T)$, $T$ is the length of the input sequence, $I_t$ is the input at time $t$, $W$ are the weight matrices, $b$ are the bias matrices, and $i$, $f$, $o$, $c$ and $m$ respectively denote the outputs of the Input Gate, Forget Gate, Output Gate, state unit, and the LSTM structure;
  • $\sigma$ is the excitation function of the three control gates: $\sigma(x) = \frac{1}{1 + e^{-x}}$;
  • $h$ is the excitation function of the state, here $h(x) = \tanh(x) = \frac{e^x - e^{-x}}{e^x + e^{-x}}$.
  • the structure and the formulas show that the LSTM neural network can cache historical state information and maintains this history through its gate structures, thereby extending the influence of wide-range context information on the current information and improving the accuracy of continuous sign language recognition.
  • Step S5 Obtain a test sample data set, and test the trained sign language recognition model.
  • the method for obtaining the test sample data set is the same as the method for obtaining the training sample data set.
  • the test sample data set may also be a test sample data set obtained from a network database, for example three-dimensional sign language video images obtained from a network database.
  • the testing of the sign language recognition model includes: (1) inputting the user hand depth images corresponding to multiple groups of sign language in the test sample data set into the sign language recognition model, and obtaining the sign language semantics output by the model; (2) determining the rate at which the model outputs the correct sign language semantics, and determining from this rate whether the model needs to be retrained.
  • if the rate at which the sign language recognition model outputs the correct sign language is lower than a preset value, the method returns to step S1 to obtain more sample data, processes the newly added sample data through steps S2-S4, and retrains the sign language recognition model with the processed new sample data combined with the previous sample data; if the rate is higher than the preset value, the training of the sign language model is complete.
  • Step S6 Obtain a sign language image input by the user, input the sign language image into the sign language recognition model, and perform sign language recognition on the input sign language image.
  • Figure 2 introduces the sign language recognition method of the present application in detail.
  • the following describes the functional modules of the software device that implements the sign language recognition method and the hardware device architecture that implements the sign language recognition method in conjunction with Figures 3-4.
  • Fig. 3 is a structural diagram of a preferred embodiment of a sign language recognition device of the present application.
  • the sign language recognition device 10 runs in a computer device.
  • the sign language recognition device 10 may include multiple functional modules composed of program code segments.
  • the program code of each program segment in the sign language recognition device 10 can be stored in the memory of the computer device and executed by the at least one processor to realize the sign language recognition function.
  • the sign language recognition device 10 can be divided into multiple functional modules according to the functions it performs.
  • the functional modules may include: a hand image acquisition module 101, a three-dimensional space coordinate determination module 102, a training sample data set generation module 103, a model training module 104, a model testing module 105, and a sign language recognition module 106.
  • the module referred to in this application refers to a series of computer-readable instruction segments that can be executed by at least one processor and can complete fixed functions, and are stored in a memory. In this embodiment, the function of each module will be described in detail in subsequent embodiments.
  • the hand image acquisition module 101 is used to acquire multiple sets of user hand depth image information from a depth camera, where each set of hand depth image information includes multiple depth images corresponding to a series of continuous sign language actions performed by the user's left and right palms and finger bones.
  • Every sign language in the world is composed of multiple sign language words.
  • these sign language words can be “hello”, “me”, “tomorrow”, etc.
  • Each sign language word corresponds to a series of continuous actions.
  • the multiple images in each set of user hand depth image information obtained from the depth camera correspond to one word of a sign language; for example, the sign language word "hello" corresponds to multiple (e.g. 5) images of a series of continuous actions, and the sign language word "tomorrow" likewise corresponds to multiple (e.g. 5) images of a series of continuous actions.
  • the depth image information of the user's hands may be obtained with a Kinect camera, which collects video image data of a series of actions of the user's hands; this video image data includes multiple hand images.
  • the hand image acquisition module 101 is further configured to perform noise reduction processing on the multiple sets of user hand depth image information acquired from the depth camera.
  • because the depth camera may be affected by factors such as lighting and background in the environment when collecting the depth image information of the user's hands, the quality of the collected images is not high and they often contain glitch noise; to ensure recognition accuracy, the collected depth images need noise reduction processing.
  • performing noise reduction processing on the depth image may specifically be filtering discrete points in the depth image, and the noise reduction steps are as follows: (1) compute the Euclidean distances between the points in the point cloud; (2) take a threshold and group points whose Euclidean distance is below this threshold into the same class; (3) count the number of points in each class and delete the preset classes with the fewest points, for example the 5% of classes with the fewest points.
  • the three-dimensional space coordinate determination module 102 is used to determine the three-dimensional space coordinates of the user's left and right hand palms and finger bones in the three-dimensional space coordinate system in each picture information in each set of hand depth image information.
  • the user's hand operation space has a linear correspondence with the three-dimensional space coordinate system, where the hand operation space is the real space of a series of continuous hand actions; from the image data that the depth camera collects in this operation space, a series of continuous depth images of the hand can be obtained.
  • the above-mentioned three-dimensional space coordinate system refers to the space coordinate system corresponding to the three-dimensional image data used to display the three-dimensional image.
  • after the depth images of the user's hands are obtained from the depth camera, the three-dimensional coordinate points of the left and right palms and finger bones in the three-dimensional coordinate system are obtained by combining the palm and finger-bone information of both hands with the depth information.
  • since each set of user hand depth images contains multiple images corresponding to a series of continuous actions, the three-dimensional space coordinate determination module 102 extracts the three-dimensional space coordinates of the user's left and right palms and finger bones from each image; for example, if a set of user hand depth images includes 5 depth images, the three-dimensional space coordinate determination module 102 determines the three-dimensional space coordinate values of the user's left palm in each of the five depth images.
  • the training sample data set generation module 103 is used to combine the three-dimensional space coordinates of the user's left and right palms and finger bones in the multiple images of each set of hand depth image information into a group of vectors, label each group of vectors, and use multiple groups of vectors and their corresponding labels as a training sample data set, where the labels are used to identify the semantics of the sign language words corresponding to each group of vectors.
  • since each set of user hand depth images contains multiple depth images corresponding to continuous sign language actions, each group of vectors is composed of the three-dimensional space coordinates of the user's left palm, the three-dimensional space coordinates of the left-hand finger bones, the space coordinates of the right palm, and the three-dimensional space coordinates of the right-hand finger bones in the multiple images corresponding to the continuous sign language actions in that set of hand depth images.
  • for example, when the first set of user hand depth images includes 5 depth images, and the three-dimensional coordinates of the user's left and right palms and finger bones are (a1, a2, a3, a4) in the first depth image, (b1, b2, b3, b4) in the second depth image, (c1, c2, c3, c4) in the third depth image, (d1, d2, d3, d4) in the fourth depth image, and (e1, e2, e3, e4) in the fifth depth image, the first group of vectors I1 is composed of these five coordinate tuples; by analogy, multiple groups of vectors I1 to In are obtained for the multiple sets of user hand depth images.
  • the label corresponding to each group of vectors represents the semantics of the sign language word corresponding to the vector.
  • the label is added manually.
  • the training sample data set generation module 103 is also used to unify the vector sizes, which specifically includes the following steps: 1) set a vector maximum size; 2) determine whether each group of vectors reaches the vector maximum size, and if not, pad the group of vectors with zeros so that the amount of data of the group of vectors equals the set vector maximum size.
  • differences between sign language words lead to different gestures and durations for each sign language word, and correspondingly the number of depth images acquired for each sign language word also differs, which results in a different amount of data in each group of vectors; setting a vector maximum size and unifying the size of each group of vectors makes computation easier.
  • the model training module 104 is used to construct a sign language recognition training model, and input the training sample data set into the sign language recognition model to train the sign language recognition model.
  • an LSTM (Long Short-Term Memory) neural network may be used to train the sign language recognition model.
  • the basic idea of the LSTM neural network is to control the flow of information through different types of gate structures: Input Gate, Output Gate, and Forget Gate.
  • the LSTM neural network uses the following formulas to control the flow of information:
  • $i_t = \sigma(W_{ix} I_t + W_{im} m_{t-1} + W_{ic} c_{t-1} + b_i)$;
  • $f_t = \sigma(W_{fx} I_t + W_{fm} m_{t-1} + W_{fc} c_{t-1} + b_f)$;
  • $c_t = f_t \odot c_{t-1} + i_t \odot g(W_{cx} I_t + W_{cm} m_{t-1} + b_c)$;
  • $o_t = \sigma(W_{ox} I_t + W_{om} m_{t-1} + W_{oc} c_{t-1} + b_o)$;
  • $m_t = o_t \odot h(c_t)$;
  • where, given the input sequence $I = (I_1, I_2, \dots, I_T)$, $T$ is the length of the input sequence, $I_t$ is the input at time $t$, $W$ are the weight matrices, $b$ are the bias matrices, and $i$, $f$, $o$, $c$ and $m$ respectively denote the outputs of the Input Gate, Forget Gate, Output Gate, state unit, and the LSTM structure;
  • $\sigma$ is the excitation function of the three control gates: $\sigma(x) = \frac{1}{1 + e^{-x}}$;
  • $h$ is the excitation function of the state, here $h(x) = \tanh(x) = \frac{e^x - e^{-x}}{e^x + e^{-x}}$.
  • the structure and the formulas show that the LSTM neural network can cache historical state information and maintains this history through its gate structures, thereby extending the influence of wide-range context information on the current information and improving the accuracy of continuous sign language recognition.
  • the model testing module 105 is used to obtain a test sample data set, and test the sign language recognition model trained in step S4.
  • the method for obtaining the test sample data set is the same as the method for obtaining the training sample data set.
  • the test sample data set may also be a test sample data set obtained from a network database, for example three-dimensional sign language video images obtained from a network database.
  • the testing of the sign language recognition model includes: (1) inputting the user hand depth images corresponding to multiple groups of sign language in the test sample data set into the sign language recognition model, and obtaining the sign language semantics output by the model; (2) determining the rate at which the model outputs the correct sign language semantics, and determining from this rate whether the model needs to be retrained; if the rate is lower than a preset value, more training sample data is acquired and the newly added sample data is processed and combined with the previous sample data to retrain the model, and if the rate is higher than the preset value, the training of the sign language model is complete.
  • the sign language recognition module 106 obtains the sign language image input by the user, and uses the sign language recognition model trained and tested in steps S4-S5 to perform sign language recognition on the sign language image input by the user.
  • Fig. 4 is a schematic diagram of a preferred embodiment of the computer equipment of this application.
  • the computer device 1 includes a memory 20, a processor 30, and computer readable instructions 40 stored in the memory 20 and running on the processor 30, such as a sign language recognition program.
  • the processor 30 executes the computer-readable instructions 40, the steps in the embodiment of the sign language recognition method described above are implemented, for example, steps S1 to S6 shown in FIG. 2.
  • the processor 30 executes the computer-readable instruction 40, the function of each module/unit in the embodiment of the sign language recognition device is realized, for example, the modules 101-106 in FIG. 3.
  • the computer-readable instructions 40 may be divided into one or more modules/units, and the one or more modules/units are stored in the memory 20 and executed by the processor 30 to complete this application.
  • the one or more modules/units may be a series of computer-readable instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer-readable instructions 40 in the computer device 1.
  • the computer-readable instructions 40 can be divided into the hand image acquisition module 101, the three-dimensional coordinate determination module 102, the training sample data set generation module 103, the model training module 104, the model testing module 105, and the sign language recognition module 106 in FIG. 3.
  • for the specific functions of each module, refer to the third embodiment.
  • the computer device 1 may be a computing device such as a desktop computer, a notebook, a palmtop computer, or a cloud server.
  • the schematic diagram is only an example of the computer device 1 and does not constitute a limitation on the computer device 1; it may include more or fewer components than shown, combine certain components, or have different components; for example, the computer device 1 may also include input and output devices, network access devices, buses, and so on.
  • the so-called processor 30 may be a central processing unit (Central Processing Unit, CPU), other general processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), Field-Programmable Gate Array (FPGA) or other programmable logic devices, discrete gates or transistor logic devices, discrete hardware components, etc.
  • the general-purpose processor may be a microprocessor, or the processor 30 may be any conventional processor, etc.
  • the processor 30 is the control center of the computer device 1 and uses various interfaces and lines to connect all parts of the entire computer device 1.
  • the memory 20 may be used to store the computer-readable instructions 40 and/or modules/units, and the processor 30 realizes the various functions of the computer device 1 by running or executing the computer-readable instructions and/or modules/units stored in the memory 20 and by calling the data stored in the memory 20.
  • the memory 20 may mainly include a program storage area and a data storage area.
  • the program storage area may store an operating system and the application programs required by at least one function (such as a sound playback function, an image playback function, etc.); the data storage area may store data created according to the use of the computer device 1 (such as audio data) and the like.
  • the memory 20 may include non-volatile memory, such as a hard disk, an internal memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash memory card (Flash Card), at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Social Psychology (AREA)
  • Multimedia (AREA)
  • Psychiatry (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)

Abstract

A sign language recognition method, a sign language recognition device, a computer device, and a non-volatile readable storage medium. The method includes: acquiring multiple sets of depth image information of a user's hands; determining the three-dimensional space coordinates of the user's left and right palms and finger bones in each image of each set of hand depth image information; combining the three-dimensional space coordinates of the user's left and right palms and finger bones in the multiple images of each set of hand depth image information into a group of vectors, labeling each group of vectors, and using multiple groups of vectors and their corresponding labels as a training sample data set, the labels being used to identify the semantics of the sign language words corresponding to each group of vectors; constructing a sign language recognition training model and inputting the training sample data set into the sign language recognition model to train the sign language recognition model; testing the sign language recognition model; and the sign language recognition model recognizing an input sign language image. The method makes sign language recognition more accurate, intelligent, and efficient.

Description

Sign language recognition method and apparatus, computer device, and storage medium
This application claims priority to the Chinese patent application filed with the China Patent Office on June 5, 2019, with application number 201910484375.2 and invention title "Sign language recognition method and apparatus, computer apparatus, and storage medium", the entire contents of which are incorporated into this application by reference.
Technical field
This application relates to the field of computer technology, and in particular to a sign language recognition method, a sign language recognition device, a computer device, and a non-volatile readable storage medium.
Background
There are a great many deaf-mute people in the world who communicate mainly through sign language, yet they face serious obstacles when communicating with people who have no grounding in sign language. Sign language recognition methods based on data gloves have appeared: the signer must wear special data gloves, and the sensors on the gloves feed the collected position, velocity, and other information back to a computer for gesture recognition. The advantage of this method is that it can accurately track the position and trajectory of the target in real time; the disadvantages are that the equipment is expensive and the user must wear special gloves, which weakens the naturalness of human-computer interaction, so it is difficult to popularize in real life.
Summary of the invention
In view of the above, it is necessary to provide a sign language recognition method and device, a computer device, and a non-volatile readable storage medium that make sign language recognition more accurate, efficient, and intelligent.
A sign language recognition method, the method including:
acquiring multiple sets of hand depth image information captured by a depth camera, wherein each set of hand depth image information includes multiple depth images corresponding to a series of continuous sign language actions of the left and right palms and finger bones;
determining the three-dimensional space coordinates, in a three-dimensional space coordinate system, of the left and right palms and finger bones in each depth image of each set of hand depth image information;
combining the three-dimensional space coordinates of the left and right palms and finger bones in the multiple depth images of each set of hand depth image information into a group of vectors, labeling each group of vectors, and using multiple groups of the vectors and the labels corresponding to the vectors as a training sample data set, wherein the labels are used to identify the semantics of the sign language words corresponding to each group of vectors;
constructing a sign language recognition model, and inputting the training sample data set into the sign language recognition model to train the sign language recognition model;
acquiring a test sample data set, and inputting the test sample data set into the sign language recognition model to test the sign language recognition model;
acquiring a sign language image input by a user, inputting the sign language image into the sign language recognition model, and performing sign language recognition on the sign language image input by the user.
A sign language recognition device, the device including:
a hand image acquisition module, configured to acquire multiple sets of hand depth image information captured by a depth camera, wherein each set of hand depth image information includes multiple depth images corresponding to a series of continuous sign language actions of the left and right palms and finger bones;
a three-dimensional space coordinate determination module, configured to determine the three-dimensional space coordinates, in a three-dimensional space coordinate system, of the left and right palms and finger bones in each depth image of each set of hand depth image information;
a training sample data set generation module, configured to combine the three-dimensional space coordinates of the left and right palms and finger bones in the multiple depth images of each set of hand depth image information into a group of vectors, label each group of vectors, and use multiple groups of the vectors and the labels corresponding to the vectors as a training sample data set, wherein the labels are used to identify the semantics of the sign language words corresponding to each group of vectors;
a model training module, configured to construct a sign language recognition model, and input the training sample data set into the sign language recognition model to train the sign language recognition model;
a model testing module, configured to acquire a test sample data set, and input the test sample data set into the sign language recognition model to test the sign language recognition model;
a sign language recognition module, configured to acquire a sign language image input by a user, and use the sign language recognition model to perform sign language recognition on the sign language image input by the user.
A computer device, the computer device including a processor, wherein the processor is configured to implement the sign language recognition method described above when executing computer-readable instructions stored in a memory.
A non-volatile readable storage medium having computer-readable instructions stored thereon, wherein the computer-readable instructions, when executed by a processor, implement the sign language recognition method described above.
By acquiring depth image information of the user's hands as training samples and automatically recognizing the user's sign language through a sign language recognition model, this application makes sign language recognition more accurate, intelligent, and efficient, and facilitates communication between deaf-mute people and hearing people.
Brief description of the drawings
FIG. 1 is a schematic diagram of the application environment architecture of the sign language recognition method provided by an embodiment of this application.
FIG. 2 is a flowchart of the sign language recognition method provided by an embodiment of this application.
FIG. 3 is a schematic structural diagram of the sign language recognition device provided by an embodiment of this application.
FIG. 4 is a schematic diagram of the computer device provided by an embodiment of this application.
Detailed description of the embodiments
In order to understand the above objectives, features, and advantages of this application more clearly, this application is described in detail below with reference to the drawings and specific embodiments. It should be noted that, where there is no conflict, the embodiments of this application and the features in the embodiments may be combined with one another.
Many specific details are set forth in the following description to facilitate a full understanding of this application; the described embodiments are only some of the embodiments of this application, not all of them. Based on the embodiments in this application, all other embodiments obtained by a person of ordinary skill in the art without creative work fall within the scope of protection of this application.
Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field of this application. The terms used herein in the specification of this application are only for the purpose of describing specific embodiments and are not intended to limit this application.
Referring to FIG. 1, it is a schematic diagram of the application environment architecture of the sign language recognition method provided by an embodiment of this application.
The sign language recognition method in this application is applied in a computer device 1. The computer device 1 may be an electronic device installed with sign language recognition software, for example a tablet computer, a smartphone, a desktop computer, or a server, where the server may be a single server, a server cluster, a cloud server, or the like.
The computer device 1 communicates interactively with at least one depth camera device 2. The depth camera device 2 is used to capture three-dimensional images. The depth camera device 2 may be a Kinect depth camera or another device with a depth imaging function. The depth camera device 2 may be installed directly in the computer device 1, or it may establish a wired or wireless communication connection with the computer device 1 to realize interactive communication.
Referring to FIG. 2, it is a flowchart of the sign language recognition method provided by an embodiment of this application. According to different needs, the order of the steps in the flowchart may be changed, and some steps may be omitted.
Step S1: acquire multiple sets of depth image information of the user's hands captured by the depth camera, wherein each set of hand depth image information includes multiple depth images corresponding to a series of continuous sign language actions performed by the left and right palms and finger bones.
Every sign language in the world is composed of multiple sign language words; for example, these words may be "hello", "me", "tomorrow", and so on, and each sign language word corresponds to a series of continuous actions. The multiple images in each set of user hand depth image information obtained from the depth camera correspond to one word of a sign language; for example, the sign language word "hello" corresponds to multiple (e.g. 5) images of a series of continuous actions, and the sign language word "tomorrow" likewise corresponds to multiple (e.g. 5) images of a series of continuous actions.
The depth image information of the user's hands may be obtained with a Kinect camera, which collects video image data of a series of actions of the user's hands; this video image data includes multiple hand images.
In an embodiment of this application, step S1 may further include: performing noise reduction processing on the multiple sets of user hand depth image information acquired from the depth camera.
Because the depth camera may be affected by factors such as lighting and background in the environment when collecting the depth image information of the user's hands, the quality of the collected images is not high and they often contain glitch noise; to ensure recognition accuracy, the collected depth images need noise reduction processing.
In one embodiment, performing noise reduction on the depth image may specifically be filtering the discrete points in the depth image; the noise reduction steps are as follows:
(1) compute the Euclidean distances between the points in the point cloud;
(2) take a threshold and group points whose Euclidean distance is below this threshold into the same class;
(3) count the number of points in each class and delete the preset classes with the fewest points, for example the 5% of classes with the fewest points.
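As an illustration only, the three filtering steps above could be sketched in Python as follows (the SciPy dependency, the distance threshold, and the 5% drop fraction are assumptions for the example, not requirements of the application):

    import numpy as np
    from scipy.spatial import cKDTree

    def denoise_point_cloud(points, dist_thresh=0.01, drop_fraction=0.05):
        """Filter discrete glitch points: merge points whose Euclidean distance
        is below dist_thresh into classes, then delete the sparsest classes."""
        pairs = cKDTree(points).query_pairs(r=dist_thresh)  # steps (1)-(2): neighbours under the threshold
        parent = list(range(len(points)))                   # union-find over neighbouring points
        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i
        for i, j in pairs:
            parent[find(i)] = find(j)
        labels = np.array([find(i) for i in range(len(points))])
        classes, counts = np.unique(labels, return_counts=True)
        n_drop = max(1, int(drop_fraction * len(classes)))  # step (3): the sparsest classes
        dropped = set(classes[np.argsort(counts)[:n_drop]])
        keep = np.array([lab not in dropped for lab in labels])
        return points[keep]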
Step S2: determine the three-dimensional space coordinates, in the three-dimensional space coordinate system, of the left and right palms and finger bones in each depth image of each set of hand depth image information.
In this embodiment, the user's hand operation space has a linear correspondence with the three-dimensional space coordinate system, where the hand operation space is the real space of a series of continuous hand actions; from the image data that the depth camera collects in this operation space, a series of continuous depth images of the hand can be obtained. The three-dimensional space coordinate system refers to the space coordinate system corresponding to the stereoscopic image data used to display three-dimensional images. In the method, after the depth images of the user's hands are obtained from the depth camera, the three-dimensional coordinate points of the left and right palms and finger bones in the three-dimensional space coordinate system are obtained by combining the palm and finger-bone information of both hands with the depth information.
In this embodiment, since each set of user hand depth images contains multiple images corresponding to a series of continuous actions, step S2 extracts the three-dimensional space coordinates of the user's left and right palms and finger bones from each image. For example, if a set of user hand depth images includes 5 depth images, step S2 determines the three-dimensional space coordinate value of the user's left palm in each of the 5 depth images.
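For illustration, when the depth camera does not return joint positions directly, a depth pixel can be mapped into the three-dimensional space coordinate system by pinhole back-projection; the sketch below is an assumption for the example (the intrinsic values FX, FY, CX, CY are illustrative and are not specified by the application, and a Kinect SDK would normally supply joint coordinates itself):

    import numpy as np

    # Illustrative intrinsics for a Kinect-class depth camera (assumed values).
    FX, FY, CX, CY = 365.0, 365.0, 256.0, 212.0

    def pixel_to_3d(u, v, depth_m):
        """Back-project the pixel (u, v) with depth in metres into 3-D space."""
        x = (u - CX) * depth_m / FX
        y = (v - CY) * depth_m / FY
        return np.array([x, y, depth_m])

    # e.g. the detected left-palm pixel in one depth image:
    left_palm_xyz = pixel_to_3d(310, 240, 0.82)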
Step S3: combine the three-dimensional space coordinates of the left and right palms and finger bones in the multiple depth images of each set of hand depth image information into a group of vectors, label each group of vectors, and use multiple groups of the vectors and their corresponding labels as a training sample data set, wherein the labels are used to identify the semantics of the sign language words corresponding to each group of vectors.
Since each set of user hand depth images contains multiple depth images corresponding to continuous sign language actions, each group of vectors is composed of the three-dimensional space coordinates of the user's left palm, the three-dimensional space coordinates of the left-hand finger bones, the space coordinates of the right palm, and the three-dimensional space coordinates of the right-hand finger bones in the multiple images corresponding to the continuous sign language actions in that set of hand depth images.
For example, when the first set of user hand depth images includes 5 depth images numbered 1 to 5, and the three-dimensional space coordinates of the user's left and right palms and finger bones are (a1, a2, a3, a4) in the 1st depth image, (b1, b2, b3, b4) in the 2nd depth image, (c1, c2, c3, c4) in the 3rd depth image, (d1, d2, d3, d4) in the 4th depth image, and (e1, e2, e3, e4) in the 5th depth image, then the first group of vectors $I_1$ is:
$I_1 = \begin{bmatrix} a_1 & a_2 & a_3 & a_4 \\ b_1 & b_2 & b_3 & b_4 \\ c_1 & c_2 & c_3 & c_4 \\ d_1 & d_2 & d_3 & d_4 \\ e_1 & e_2 & e_3 & e_4 \end{bmatrix}$
By analogy, multiple groups of vectors $I_1$ to $I_n$ corresponding to the multiple sets of user hand depth images are obtained.
In this embodiment, the label corresponding to each group of vectors represents the semantics of the sign language word corresponding to that group of vectors. The labels may be represented by numeric codes; for example, the label L=01 of the first group of vectors represents the sign language word "hello", the label L=02 of the second group represents the sign language word "me", and the label L=03 of the third group represents the sign language word "tomorrow".
In this embodiment, the labels are added manually.
In one embodiment, step S3 may further include unifying the vector sizes, which specifically includes the following steps:
1) set a vector maximum size;
2) determine whether each group of vectors reaches the vector maximum size, and if not, pad the group of vectors with zeros so that the amount of data of the group of vectors equals the set vector maximum size.
Differences between sign language words lead to different gestures and durations for each word, and correspondingly the number of depth images acquired for each sign language word also differs; this results in a different amount of data in each group of vectors. Setting a vector maximum size and unifying the size of each group of vectors makes computation easier.
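A minimal sketch of steps 1) and 2) above (the array shapes and the maximum size are illustrative assumptions):

    import numpy as np

    def unify_vector_sizes(vector_groups, max_size):
        """Flatten each group of per-frame coordinates into one vector and
        pad it with zeros up to the set vector maximum size."""
        unified = []
        for group in vector_groups:
            v = np.ravel(np.asarray(group, dtype=np.float32))
            if v.size < max_size:
                v = np.pad(v, (0, max_size - v.size))   # step 2): pad with zeros
            unified.append(v)
        return np.stack(unified)

    # e.g. one word captured in 5 frames and another in 3 frames of 12 values each:
    groups = [np.random.rand(5, 12), np.random.rand(3, 12)]
    batch = unify_vector_sizes(groups, max_size=5 * 12)  # step 1): set the maximum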
Step S4: construct a sign language recognition model, and input the training sample data set into the sign language recognition model to train the sign language recognition model.
In one embodiment, an LSTM (Long Short-Term Memory) neural network may be used to train the sign language recognition model.
The basic idea of the LSTM neural network is to control the flow of information through different types of gate structures: the Input Gate, the Output Gate, and the Forget Gate. In this embodiment, the LSTM neural network controls the flow of information with the following formulas:
$i_t = \sigma(W_{ix} I_t + W_{im} m_{t-1} + W_{ic} c_{t-1} + b_i)$;
$f_t = \sigma(W_{fx} I_t + W_{fm} m_{t-1} + W_{fc} c_{t-1} + b_f)$;
$c_t = f_t \odot c_{t-1} + i_t \odot g(W_{cx} I_t + W_{cm} m_{t-1} + b_c)$;
$o_t = \sigma(W_{ox} I_t + W_{om} m_{t-1} + W_{oc} c_{t-1} + b_o)$;
$m_t = o_t \odot h(c_t)$;
where, given the input sequence $I = (I_1, I_2, \dots, I_T)$, $T$ is the length of the input sequence, $I_t$ is the input at time $t$, $W$ are the weight matrices, $b$ are the bias matrices, and $i$, $f$, $o$, $c$ and $m$ respectively denote the outputs of the Input Gate, the Forget Gate, the Output Gate, the state unit, and the LSTM structure;
where $\sigma$ is the excitation function of the three control gates: $\sigma(x) = \frac{1}{1 + e^{-x}}$;
where $h$ is the excitation function of the state, here $h(x) = \tanh(x) = \frac{e^x - e^{-x}}{e^x + e^{-x}}$.
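Read literally, the recurrence above can be written out as follows (a sketch only: the dictionary keys and shapes are assumptions, tanh is assumed for g and h, and the peephole weights W_ic, W_fc, W_oc act element-wise on the cell state):

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def lstm_step(x_t, m_prev, c_prev, W, b):
        """One time step of the gate recurrence given in the text."""
        i = sigmoid(W['ix'] @ x_t + W['im'] @ m_prev + W['ic'] * c_prev + b['i'])
        f = sigmoid(W['fx'] @ x_t + W['fm'] @ m_prev + W['fc'] * c_prev + b['f'])
        c = f * c_prev + i * np.tanh(W['cx'] @ x_t + W['cm'] @ m_prev + b['c'])
        o = sigmoid(W['ox'] @ x_t + W['om'] @ m_prev + W['oc'] * c_prev + b['o'])
        m = o * np.tanh(c)                     # output of the LSTM structure
        return m, c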
The structure and the formulas show that the LSTM neural network can cache historical state information and maintains this history through its gate structures, thereby extending the influence of wide-range context information on the current information and improving the accuracy of continuous sign language recognition.
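In practice, step S4 could be set up with an off-the-shelf LSTM implementation; the following Keras sketch is one possible arrangement (the layer sizes, epoch count, placeholder data, and the masking of the zero padding are assumptions, not specified by the application):

    import numpy as np
    import tensorflow as tf

    # Assumed shapes: N samples, T frames per word, D coordinate values per
    # frame, K sign language words. The values are illustrative only.
    N, T, D, K = 1000, 5, 12, 50

    model = tf.keras.Sequential([
        tf.keras.layers.Masking(mask_value=0.0, input_shape=(T, D)),  # skip zero padding
        tf.keras.layers.LSTM(128),
        tf.keras.layers.Dense(K, activation="softmax"),               # one class per word
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    x_train = np.zeros((N, T, D), dtype="float32")  # padded coordinate vectors
    y_train = np.zeros((N,), dtype="int32")         # word labels (semantics codes)
    model.fit(x_train, y_train, epochs=10, batch_size=32)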
Step S5: acquire a test sample data set, and test the trained sign language recognition model.
In one embodiment, the method for acquiring the test sample data set is the same as the method for acquiring the training sample data set.
In another embodiment, the test sample data set may also be a test sample data set obtained from a network database, for example three-dimensional sign language video images obtained from a network database.
In one embodiment, testing the sign language recognition model includes:
(1) inputting the user hand depth images corresponding to multiple groups of sign language in the test sample data set into the sign language recognition model, and obtaining the sign language semantics output by the sign language recognition model;
(2) determining the rate at which the sign language recognition model outputs the correct sign language semantics, and determining, according to the determined rate, whether the sign language recognition model needs to be retrained.
In one embodiment, if the rate at which the sign language recognition model outputs the correct sign language is lower than a preset value, the method returns to step S1 to acquire more sample data, processes the newly added sample data through steps S2-S4, and retrains the sign language recognition model with the processed new sample data combined with the previous sample data. If the rate is higher than the preset value, the training of the sign language model is complete.
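A sketch of this accuracy check and retraining criterion (model.predict, collect_more, and retrain are hypothetical placeholders, and the 0.9 preset value is an assumption):

    def evaluate_and_maybe_retrain(model, test_vectors, test_labels,
                                   preset_value=0.9, collect_more=None, retrain=None):
        """Compute the rate of correctly recognised sign language words; if it
        is below the preset value, gather more samples and retrain (steps S1-S4)."""
        predictions = [model.predict(v) for v in test_vectors]
        rate = sum(p == y for p, y in zip(predictions, test_labels)) / len(test_labels)
        if rate < preset_value and collect_more is not None and retrain is not None:
            new_samples = collect_more()   # back to step S1: acquire more sample data
            retrain(new_samples)           # steps S2-S4 on the old plus new samples
        return rate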
Step S6: acquire a sign language image input by the user, input the sign language image into the sign language recognition model, and perform sign language recognition on the input sign language image.
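Step S6 could then look like the following sketch (the label-to-word mapping reuses the numeric codes from the example above; preprocess and model.predict are hypothetical placeholders):

    LABELS = {1: "hello", 2: "me", 3: "tomorrow"}   # label codes L=01, 02, 03 from the text

    def recognise_sign_language(model, sign_images, preprocess):
        """Preprocess the user's sign language images exactly like the training
        data (3-D coordinates -> zero-padded vector), then decode the label."""
        vector = preprocess(sign_images)   # coordinates, grouping, zero padding
        label = model.predict(vector)      # trained sign language recognition model
        return LABELS.get(label, "<unknown>")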
FIG. 2 above describes the sign language recognition method of this application in detail; the functional modules of the software device that implements the sign language recognition method and the hardware device architecture that implements it are described below with reference to FIGS. 3 and 4.
It should be understood that the embodiments are for illustration only, and the scope of the patent application is not limited by this structure.
FIG. 3 is a structural diagram of a preferred embodiment of the sign language recognition device of this application.
In some embodiments, the sign language recognition device 10 runs in a computer device. The sign language recognition device 10 may include multiple functional modules composed of program code segments. The program code of each program segment in the sign language recognition device 10 may be stored in the memory of the computer device and executed by the at least one processor to realize the sign language recognition function.
In this embodiment, the sign language recognition device 10 may be divided into multiple functional modules according to the functions it performs. Referring to FIG. 3, the functional modules may include: a hand image acquisition module 101, a three-dimensional space coordinate determination module 102, a training sample data set generation module 103, a model training module 104, a model testing module 105, and a sign language recognition module 106. A module referred to in this application is a series of computer-readable instruction segments that can be executed by at least one processor and can complete a fixed function, and that is stored in a memory. In this embodiment, the function of each module is detailed in subsequent embodiments.
The hand image acquisition module 101 is used to acquire multiple sets of user hand depth image information from the depth camera, wherein each set of hand depth image information includes multiple depth images corresponding to a series of continuous sign language actions performed by the user's left and right palms and finger bones.
Every sign language in the world is composed of multiple sign language words; for example, these words may be "hello", "me", "tomorrow", and so on, and each sign language word corresponds to a series of continuous actions. The multiple images in each set of user hand depth image information obtained from the depth camera correspond to one word of a sign language; for example, the sign language word "hello" corresponds to multiple (e.g. 5) images of a series of continuous actions, and the sign language word "tomorrow" likewise corresponds to multiple (e.g. 5) images of a series of continuous actions.
The depth image information of the user's hands may be obtained with a Kinect camera, which collects video image data of a series of actions of the user's hands; this video image data includes multiple hand images.
In an embodiment of this application, the hand image acquisition module 101 is further used to perform noise reduction processing on the multiple sets of user hand depth image information acquired from the depth camera.
Because the depth camera may be affected by factors such as lighting and background in the environment when collecting the depth image information of the user's hands, the quality of the collected images is not high and they often contain glitch noise; to ensure recognition accuracy, the collected depth images need noise reduction processing.
In one embodiment, performing noise reduction on the depth image may specifically be filtering the discrete points in the depth image; the noise reduction steps are as follows:
(1) compute the Euclidean distances between the points in the point cloud;
(2) take a threshold and group points whose Euclidean distance is below this threshold into the same class;
(3) count the number of points in each class and delete the preset classes with the fewest points, for example the 5% of classes with the fewest points.
The three-dimensional space coordinate determination module 102 is used to determine the three-dimensional space coordinates, in the three-dimensional space coordinate system, of the user's left and right palms and finger bones in each image of each set of hand depth image information.
In this embodiment, the user's hand operation space has a linear correspondence with the three-dimensional space coordinate system, where the hand operation space is the real space of a series of continuous hand actions; from the image data that the depth camera collects in this operation space, a series of continuous depth images of the hand can be obtained. The three-dimensional space coordinate system refers to the space coordinate system corresponding to the stereoscopic image data used to display three-dimensional images. In the method, after the depth images of the user's hands are obtained from the depth camera, the three-dimensional coordinate points of the left and right palms and finger bones in the three-dimensional space coordinate system are obtained by combining the palm and finger-bone information of both hands with the depth information.
In this embodiment, since each set of user hand depth images contains multiple images corresponding to a series of continuous actions, the three-dimensional space coordinate determination module 102 extracts the three-dimensional space coordinates of the user's left and right palms and finger bones from each image. For example, if a set of user hand depth images includes 5 depth images, the three-dimensional space coordinate determination module 102 determines the three-dimensional space coordinate values of the user's left palm in each of the 5 depth images.
The training sample data set generation module 103 is used to combine the three-dimensional space coordinates of the user's left and right palms and finger bones in the multiple images of each set of hand depth image information into a group of vectors, label each group of vectors, and use multiple groups of vectors and their corresponding labels as a training sample data set, wherein the labels are used to identify the semantics of the sign language words corresponding to each group of vectors.
Since each set of user hand depth images contains multiple depth images corresponding to continuous sign language actions, each group of vectors is composed of the three-dimensional space coordinates of the user's left palm, the three-dimensional space coordinates of the left-hand finger bones, the space coordinates of the right palm, and the three-dimensional space coordinates of the right-hand finger bones in the multiple images corresponding to the continuous sign language actions in that set of hand depth images.
For example, when the first set of user hand depth images includes 5 depth images numbered 1 to 5, and the three-dimensional space coordinates of the user's left and right palms and finger bones are (a1, a2, a3, a4) in the 1st depth image, (b1, b2, b3, b4) in the 2nd depth image, (c1, c2, c3, c4) in the 3rd depth image, (d1, d2, d3, d4) in the 4th depth image, and (e1, e2, e3, e4) in the 5th depth image, then the first group of vectors $I_1$ is:
$I_1 = \begin{bmatrix} a_1 & a_2 & a_3 & a_4 \\ b_1 & b_2 & b_3 & b_4 \\ c_1 & c_2 & c_3 & c_4 \\ d_1 & d_2 & d_3 & d_4 \\ e_1 & e_2 & e_3 & e_4 \end{bmatrix}$
By analogy, multiple groups of vectors $I_1$ to $I_n$ corresponding to the multiple sets of user hand depth images are obtained.
In this embodiment, the label corresponding to each group of vectors represents the semantics of the sign language word corresponding to that group of vectors. The labels may be represented by numeric codes; for example, the label L=01 of the first group of vectors represents the sign language word "hello", the label L=02 of the second group represents the sign language word "me", and the label L=03 of the third group represents the sign language word "tomorrow".
In this embodiment, the labels are added manually.
In one embodiment, the training sample data set generation module 103 is further used to unify the vector sizes, which specifically includes the following steps:
1) set a vector maximum size;
2) determine whether each group of vectors reaches the vector maximum size, and if not, pad the group of vectors with zeros so that the amount of data of the group of vectors equals the set vector maximum size.
Differences between sign language words lead to different gestures and durations for each word, and correspondingly the number of depth images acquired for each sign language word also differs; this results in a different amount of data in each group of vectors. Setting a vector maximum size and unifying the size of each group of vectors makes computation easier.
The model training module 104 is used to construct a sign language recognition training model, and input the training sample data set into the sign language recognition model to train the sign language recognition model.
In one embodiment, an LSTM (Long Short-Term Memory) neural network may be used to train the sign language recognition model.
The basic idea of the LSTM neural network is to control the flow of information through different types of gate structures: the Input Gate, the Output Gate, and the Forget Gate. In this embodiment, the LSTM neural network controls the flow of information with the following formulas:
$i_t = \sigma(W_{ix} I_t + W_{im} m_{t-1} + W_{ic} c_{t-1} + b_i)$;
$f_t = \sigma(W_{fx} I_t + W_{fm} m_{t-1} + W_{fc} c_{t-1} + b_f)$;
$c_t = f_t \odot c_{t-1} + i_t \odot g(W_{cx} I_t + W_{cm} m_{t-1} + b_c)$;
$o_t = \sigma(W_{ox} I_t + W_{om} m_{t-1} + W_{oc} c_{t-1} + b_o)$;
$m_t = o_t \odot h(c_t)$;
where, given the input sequence $I = (I_1, I_2, \dots, I_T)$, $T$ is the length of the input sequence, $I_t$ is the input at time $t$, $W$ are the weight matrices, $b$ are the bias matrices, and $i$, $f$, $o$, $c$ and $m$ respectively denote the outputs of the Input Gate, the Forget Gate, the Output Gate, the state unit, and the LSTM structure;
where $\sigma$ is the excitation function of the three control gates: $\sigma(x) = \frac{1}{1 + e^{-x}}$;
where $h$ is the excitation function of the state, here $h(x) = \tanh(x) = \frac{e^x - e^{-x}}{e^x + e^{-x}}$.
The structure and the formulas show that the LSTM neural network can cache historical state information and maintains this history through its gate structures, thereby extending the influence of wide-range context information on the current information and improving the accuracy of continuous sign language recognition.
The model testing module 105 is used to acquire a test sample data set and test the sign language recognition model trained in step S4.
In one embodiment, the method for acquiring the test sample data set is the same as the method for acquiring the training sample data set.
In another embodiment, the test sample data set may also be a test sample data set obtained from a network database, for example three-dimensional sign language video images obtained from a network database.
In one embodiment, testing the sign language recognition model includes:
(1) inputting the user hand depth images corresponding to multiple groups of sign language in the test sample data set into the sign language recognition model, and obtaining the sign language semantics output by the sign language recognition model;
(2) determining the rate at which the sign language recognition model outputs the correct sign language semantics, and determining, according to the determined rate, whether the sign language recognition model needs to be retrained. In one embodiment, if the rate at which the sign language recognition model outputs the correct sign language is lower than a preset value, training sample data sets continue to be acquired to train the model: the newly added sample data is processed and then combined with the previous sample data to retrain the sign language recognition model. If the rate is higher than the preset value, the training of the sign language model is complete.
The sign language recognition module 106 acquires the sign language image input by the user, and uses the sign language recognition model trained in steps S4-S5 and having passed the test to perform sign language recognition on the sign language image input by the user.
FIG. 4 is a schematic diagram of a preferred embodiment of the computer device of this application.
The computer device 1 includes a memory 20, a processor 30, and computer-readable instructions 40, such as a sign language recognition program, stored in the memory 20 and executable on the processor 30. When the processor 30 executes the computer-readable instructions 40, the steps in the above sign language recognition method embodiment are implemented, for example steps S1 to S6 shown in FIG. 2. Alternatively, when the processor 30 executes the computer-readable instructions 40, the functions of the modules/units in the above sign language recognition device embodiment are realized, for example the modules 101-106 in FIG. 3.
Exemplarily, the computer-readable instructions 40 may be divided into one or more modules/units, and the one or more modules/units are stored in the memory 20 and executed by the processor 30 to complete this application. The one or more modules/units may be a series of computer-readable instruction segments capable of completing specific functions, the instruction segments being used to describe the execution process of the computer-readable instructions 40 in the computer device 1. For example, the computer-readable instructions 40 may be divided into the hand image acquisition module 101, the three-dimensional space coordinate determination module 102, the training sample data set generation module 103, the model training module 104, the model testing module 105, and the sign language recognition module 106 in FIG. 3. For the specific functions of each module, see the third embodiment.
The computer device 1 may be a computing device such as a desktop computer, a notebook, a palmtop computer, or a cloud server. A person skilled in the art can understand that the schematic diagram is only an example of the computer device 1 and does not constitute a limitation on the computer device 1; it may include more or fewer components than shown, combine certain components, or have different components; for example, the computer device 1 may also include input and output devices, network access devices, buses, and so on.
The so-called processor 30 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor 30 may be any conventional processor, etc.; the processor 30 is the control center of the computer device 1 and uses various interfaces and lines to connect all parts of the entire computer device 1.
The memory 20 may be used to store the computer-readable instructions 40 and/or the modules/units; the processor 30 realizes the various functions of the computer device 1 by running or executing the computer-readable instructions and/or modules/units stored in the memory 20 and by calling the data stored in the memory 20. The memory 20 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and the application programs required by at least one function (such as a sound playback function, an image playback function, etc.), and the data storage area may store data created according to the use of the computer device 1 (such as audio data) and the like. In addition, the memory 20 may include non-volatile memory, such as a hard disk, an internal memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash memory card (Flash Card), at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
In the several embodiments provided in this application, it should be understood that the disclosed computer device and method may be implemented in other ways. For example, the computer device embodiment described above is only illustrative; for example, the division of the units is only a logical functional division, and there may be other division methods in actual implementation.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of this application and not to limit them. Although this application has been described in detail with reference to the preferred embodiments, a person of ordinary skill in the art should understand that the technical solutions of this application may be modified or equivalently replaced without departing from the spirit and scope of the technical solutions of this application.

Claims (20)

  1. A sign language recognition method, wherein the method includes:
    acquiring multiple sets of hand depth image information captured by a depth camera, wherein each set of hand depth image information includes multiple depth images corresponding to a series of continuous sign language actions of the left and right palms and finger bones;
    determining the three-dimensional space coordinates, in a three-dimensional space coordinate system, of the left and right palms and finger bones in each depth image of each set of hand depth image information;
    combining the three-dimensional space coordinates of the left and right palms and finger bones in the multiple depth images of each set of hand depth image information into a group of vectors, labeling each group of vectors, and using multiple groups of the vectors and the labels corresponding to the vectors as a training sample data set, wherein the labels are used to identify the semantics of the sign language words corresponding to each group of vectors;
    constructing a sign language recognition model, and inputting the training sample data set into the sign language recognition model to train the sign language recognition model;
    acquiring a test sample data set, and inputting the test sample data set into the sign language recognition model to test the sign language recognition model;
    acquiring a sign language image input by a user, inputting the sign language image into the sign language recognition model, and performing sign language recognition on the sign language image input by the user.
  2. The sign language recognition method according to claim 1, wherein the method further includes: performing noise reduction processing on the multiple sets of hand depth image information acquired from the depth camera.
  3. The sign language recognition method according to claim 2, wherein the noise reduction processing includes:
    computing the Euclidean distances between the points in the point cloud;
    taking a threshold and grouping points whose Euclidean distance is below the threshold into the same class;
    counting the number of points in each class, and deleting the preset classes with the fewest points.
  4. The sign language recognition method according to claim 1, wherein the method further includes:
    setting a vector maximum size;
    determining whether each group of vectors reaches the vector maximum size, and if not, padding the group of vectors with zeros so that the amount of data of the group of vectors equals the set vector maximum size.
  5. The sign language recognition method according to claim 1, wherein a long short-term memory network is used to train the sign language recognition model.
  6. The sign language recognition method according to claim 1, wherein testing the sign language recognition model includes:
    inputting the user hand depth images corresponding to multiple groups of sign language in the test sample data set into the sign language recognition model, and obtaining the sign language semantics output by the sign language recognition model;
    determining the rate at which the sign language recognition model outputs the correct sign language semantics, and determining, according to the determined rate, whether the sign language recognition model needs to be retrained.
  7. The sign language recognition method according to claim 1, wherein the multiple sets of user hand depth image information are acquired from a Kinect camera device.
  8. A sign language recognition device, wherein the device includes:
    a hand image acquisition module, configured to acquire multiple sets of hand depth image information captured by a depth camera, wherein each set of hand depth image information includes multiple depth images corresponding to a series of continuous sign language actions of the left and right palms and finger bones;
    a three-dimensional space coordinate determination module, configured to determine the three-dimensional space coordinates, in a three-dimensional space coordinate system, of the left and right palms and finger bones in each depth image of each set of hand depth image information;
    a training sample data set generation module, configured to combine the three-dimensional space coordinates of the left and right palms and finger bones in the multiple depth images of each set of hand depth image information into a group of vectors, label each group of vectors, and use multiple groups of the vectors and the labels corresponding to the vectors as a training sample data set, wherein the labels are used to identify the semantics of the sign language words corresponding to each group of vectors;
    a model training module, configured to construct a sign language recognition model, and input the training sample data set into the sign language recognition model to train the sign language recognition model;
    a model testing module, configured to acquire a test sample data set, and input the test sample data set into the sign language recognition model to test the sign language recognition model;
    a sign language recognition module, configured to acquire a sign language image input by a user, and use the sign language recognition model to perform sign language recognition on the sign language image input by the user.
  9. The sign language recognition device according to claim 8, wherein the hand image acquisition module is further configured to perform noise reduction processing on the multiple sets of hand depth image information acquired from the depth camera, wherein the noise reduction processing includes:
    computing the Euclidean distances between the points in the point cloud;
    taking a threshold and grouping points whose Euclidean distance is below the threshold into the same class;
    counting the number of points in each class, and deleting the preset classes with the fewest points.
  10. The sign language recognition device according to claim 8, wherein the training sample data set generation module is further configured to:
    set a vector maximum size;
    determine whether each group of vectors reaches the vector maximum size, and if not, pad the group of vectors with zeros so that the amount of data of the group of vectors equals the set vector maximum size.
  11. The sign language recognition device according to claim 8, wherein a long short-term memory network is used to train the sign language recognition model.
  12. A computer device, wherein the computer device includes a processor, and the processor implements the following steps when executing computer-readable instructions stored in a memory:
    acquiring multiple sets of hand depth image information captured by a depth camera, wherein each set of hand depth image information includes multiple depth images corresponding to a series of continuous sign language actions of the left and right palms and finger bones;
    determining the three-dimensional space coordinates, in a three-dimensional space coordinate system, of the left and right palms and finger bones in each depth image of each set of hand depth image information;
    combining the three-dimensional space coordinates of the left and right palms and finger bones in the multiple depth images of each set of hand depth image information into a group of vectors, labeling each group of vectors, and using multiple groups of the vectors and the labels corresponding to the vectors as a training sample data set, wherein the labels are used to identify the semantics of the sign language words corresponding to each group of vectors;
    constructing a sign language recognition model, and inputting the training sample data set into the sign language recognition model to train the sign language recognition model;
    acquiring a test sample data set, and inputting the test sample data set into the sign language recognition model to test the sign language recognition model;
    acquiring a sign language image input by a user, inputting the sign language image into the sign language recognition model, and performing sign language recognition on the sign language image input by the user.
  13. The computer device according to claim 12, wherein the processor further implements the following steps when executing the computer-readable instructions:
    performing noise reduction processing on the multiple sets of hand depth image information acquired from the depth camera, wherein the noise reduction processing includes:
    computing the Euclidean distances between the points in the point cloud;
    taking a threshold and grouping points whose Euclidean distance is below the threshold into the same class;
    counting the number of points in each class, and deleting the preset classes with the fewest points.
  14. The computer device according to claim 12, wherein the processor further implements the following steps when executing the computer-readable instructions:
    setting a vector maximum size;
    determining whether each group of vectors reaches the vector maximum size, and if not, padding the group of vectors with zeros so that the amount of data of the group of vectors equals the set vector maximum size.
  15. The computer device according to claim 12, wherein, when the processor executes the computer-readable instructions to test the sign language recognition model, the testing includes:
    inputting the user hand depth images corresponding to multiple groups of sign language in the test sample data set into the sign language recognition model, and obtaining the sign language semantics output by the sign language recognition model;
    determining the rate at which the sign language recognition model outputs the correct sign language semantics, and determining, according to the determined rate, whether the sign language recognition model needs to be retrained.
  16. The computer device according to claim 12, wherein the multiple sets of user hand depth image information are acquired from a Kinect camera device.
  17. A non-volatile readable storage medium having computer-readable instructions stored thereon, wherein the computer-readable instructions implement the following steps when executed by a processor:
    acquiring multiple sets of hand depth image information captured by a depth camera, wherein each set of hand depth image information includes multiple depth images corresponding to a series of continuous sign language actions of the left and right palms and finger bones;
    determining the three-dimensional space coordinates, in a three-dimensional space coordinate system, of the left and right palms and finger bones in each depth image of each set of hand depth image information;
    combining the three-dimensional space coordinates of the left and right palms and finger bones in the multiple depth images of each set of hand depth image information into a group of vectors, labeling each group of vectors, and using multiple groups of the vectors and the labels corresponding to the vectors as a training sample data set, wherein the labels are used to identify the semantics of the sign language words corresponding to each group of vectors;
    constructing a sign language recognition model, and inputting the training sample data set into the sign language recognition model to train the sign language recognition model;
    acquiring a test sample data set, and inputting the test sample data set into the sign language recognition model to test the sign language recognition model;
    acquiring a sign language image input by a user, inputting the sign language image into the sign language recognition model, and performing sign language recognition on the sign language image input by the user.
  18. The storage medium according to claim 17, wherein the computer-readable instructions further implement the following steps when executed by the processor:
    performing noise reduction processing on the multiple sets of hand depth image information acquired from the depth camera, wherein the noise reduction processing includes:
    computing the Euclidean distances between the points in the point cloud;
    taking a threshold and grouping points whose Euclidean distance is below the threshold into the same class;
    counting the number of points in each class, and deleting the preset classes with the fewest points.
  19. The storage medium according to claim 17, wherein the computer-readable instructions further implement the following steps when executed by the processor:
    setting a vector maximum size;
    determining whether each group of vectors reaches the vector maximum size, and if not, padding the group of vectors with zeros so that the amount of data of the group of vectors equals the set vector maximum size.
  20. The storage medium according to claim 16, wherein, when the computer-readable instructions are executed by the processor to test the sign language recognition model, the testing includes:
    inputting the user hand depth images corresponding to multiple groups of sign language in the test sample data set into the sign language recognition model, and obtaining the sign language semantics output by the sign language recognition model;
    determining the rate at which the sign language recognition model outputs the correct sign language semantics, and determining, according to the determined rate, whether the sign language recognition model needs to be retrained.
PCT/CN2019/103387 2019-06-05 2019-08-29 Sign language recognition method and apparatus, computer device, and storage medium WO2020244075A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910484375.2 2019-06-05
CN201910484375.2A CN110363077A (zh) Sign language recognition method and apparatus, computer apparatus, and storage medium

Publications (1)

Publication Number Publication Date
WO2020244075A1 true WO2020244075A1 (zh) 2020-12-10

Family

ID=68215426

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/103387 WO2020244075A1 (zh) Sign language recognition method and apparatus, computer device, and storage medium

Country Status (2)

Country Link
CN (1) CN110363077A (zh)
WO (1) WO2020244075A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113095314A (zh) Formula recognition method and apparatus, storage medium, and device
CN114546117A (zh) Tactical sign language recognition glove system based on deep learning and sensor technology, and implementation method
CN114677757A (zh) Sign language recognition algorithm for rail vehicle operation
CN117523225A (zh) Machine-vision-based method for distinguishing left and right gloves

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110363077A (zh) Sign language recognition method and apparatus, computer apparatus, and storage medium
CN112825125A (zh) Sign language recognition method and apparatus, computer storage medium, and electronic device
CN111428871B (zh) Sign language translation method based on a BP neural network
CN113496168B (zh) Sign language data collection method, device, and storage medium
CN111709268B (zh) Hand pose estimation method and apparatus in depth images guided by human hand structure
CN113561172B (zh) Dexterous-hand control method and apparatus based on binocular vision acquisition

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103246891A (zh) Kinect-based Chinese sign language recognition method
US10037458B1 (en) * 2017-05-02 2018-07-31 King Fahd University Of Petroleum And Minerals Automated sign language recognition
CN109117766A (zh) Dynamic gesture recognition method and system
CN110363077A (zh) Sign language recognition method and apparatus, computer apparatus, and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9672418B2 (en) * 2015-02-06 2017-06-06 King Fahd University Of Petroleum And Minerals Arabic sign language recognition using multi-sensor data fusion
CN105184226A (zh) Digit recognition method and apparatus, and neural network training method and apparatus
CN107633265B (zh) Data processing method and apparatus for optimizing a credit evaluation model
CN109063706A (zh) Character model training method, character recognition method, apparatus, device, and medium
CN109389030B (zh) Facial feature point detection method and apparatus, computer device, and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103246891A (zh) Kinect-based Chinese sign language recognition method
US10037458B1 (en) * 2017-05-02 2018-07-31 King Fahd University Of Petroleum And Minerals Automated sign language recognition
CN109117766A (zh) Dynamic gesture recognition method and system
CN110363077A (zh) Sign language recognition method and apparatus, computer apparatus, and storage medium

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113095314A (zh) Formula recognition method and apparatus, storage medium, and device
CN114546117A (zh) Tactical sign language recognition glove system based on deep learning and sensor technology, and implementation method
CN114546117B (zh) Tactical sign language recognition glove system based on deep learning and sensor technology, and implementation method
CN114677757A (zh) Sign language recognition algorithm for rail vehicle operation
CN117523225A (zh) Machine-vision-based method for distinguishing left and right gloves
CN117523225B (zh) Machine-vision-based method for distinguishing left and right gloves

Also Published As

Publication number Publication date
CN110363077A (zh) 2019-10-22

Similar Documents

Publication Publication Date Title
WO2020244075A1 (zh) Sign language recognition method and apparatus, computer device, and storage medium
Sahu et al. Artificial intelligence (AI) in augmented reality (AR)-assisted manufacturing applications: a review
US10488939B2 (en) Gesture recognition
US20210279503A1 (en) Image processing method, apparatus, and device, and storage medium
CN109584276B (zh) Keypoint detection method, apparatus, device, and readable medium
WO2019242416A1 (zh) Video image processing method and apparatus, computer-readable medium, and electronic device
WO2020207190A1 (zh) Three-dimensional information determination method, three-dimensional information determination apparatus, and terminal device
Liu et al. Real-time robust vision-based hand gesture recognition using stereo images
US8442307B1 (en) Appearance augmented 3-D point clouds for trajectory and camera localization
KR20200111617A (ko) Gesture recognition method, apparatus, electronic device, and storage medium
WO2019018063A1 (en) FINAL GRAIN IMAGE RECOGNITION
WO2021213067A1 (zh) Item display method, apparatus, device, and storage medium
CN106874826A (zh) Face key point tracking method and apparatus
CN110096929A (zh) Neural-network-based target detection
WO2021098802A1 (en) Object detection device, method, and system
WO2023151237A1 (zh) Face pose estimation method and apparatus, electronic device, and storage medium
CN110197149B (zh) Ear key point detection method and apparatus, storage medium, and electronic device
WO2020244151A1 (zh) Image processing method and apparatus, terminal, and storage medium
EP3937076A1 (en) Activity detection device, activity detection system, and activity detection method
WO2022227218A1 (zh) Drug name recognition method and apparatus, computer device, and storage medium
WO2021242445A1 (en) Tracking multiple objects in a video stream using occlusion-aware single-object tracking
WO2023109361A1 (zh) Method, system, device, medium, and product for video processing
WO2021217937A1 (zh) Training method and device for posture recognition model, and posture recognition method and device
WO2022247403A1 (zh) Key point detection method, electronic device, program, and storage medium
CN111722700A (zh) Human-computer interaction method and human-computer interaction device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19931743

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19931743

Country of ref document: EP

Kind code of ref document: A1