CN108089693A - Gesture recognition method and apparatus, smart wearable terminal and server - Google Patents

Gesture recognition method and apparatus, smart wearable terminal and server Download PDF

Info

Publication number
CN108089693A
CN108089693A (application CN201611041278.9A)
Authority
CN
China
Prior art keywords
gesture
identification
data
model
image data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611041278.9A
Other languages
Chinese (zh)
Inventor
Fu Ruilin (付瑞林)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BYD Co Ltd
Original Assignee
BYD Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BYD Co Ltd
Priority to CN201611041278.9A
Publication of CN108089693A
Legal status: Pending


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03: Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033: Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346: Pointing devices displaced or positioned by the user, with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G06V40/28: Recognition of hand or arm movements, e.g. recognition of deaf sign language

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a gesture recognition method and apparatus, a smart wearable terminal, and a server. Sensor data corresponding to a current gesture is obtained from a sensor, where the sensor data characterizes the change of the gesture; the sensor data is converted into image data; the image data is input into a target recognition model; and the target gesture type of the current gesture is determined by the target recognition model. In the embodiments, the sensor data is converted into image data and the gesture type is then recognized by the target recognition model, so that relatively mature image recognition technology can be used to improve recognition accuracy.

Description

Gesture recognition method and apparatus, smart wearable terminal and server
Technical field
The present invention relates to the field of signal processing technology, and in particular to a gesture recognition method and apparatus, a smart wearable terminal, and a server.
Background technology
At present, more and more smart wearable devices are favored by users, and in order to meet users' needs, the functions of smart wearable devices are increasingly diversified. For example, a user can wear a smart bracelet on the arm or wrist and use it to count steps.
In the prior art, the user's motion state is recognized through the smart wearable device, generally by means of feature extraction: the data of the sensors on the wearable device is collected, and features are then extracted from the sensor data by a feature extraction algorithm. For example, the feature extracted for the step-counting function is that the reported data exhibits regular peaks and troughs. After the features of the walking state have been extracted, if the sensor later reports sensor data with similar features while the user wears the smart bracelet, the user is considered to be in a walking state, and the smart bracelet counts steps accordingly.
However, the accuracy of current feature extraction methods is relatively low, and they cannot adapt to individual differences between users.
Summary of the invention
The present invention aims to solve at least some of the technical problems in the related art.
To this end, one object of the present invention is to provide a gesture recognition method that can improve the accuracy of gesture recognition and adapt to individual differences between users.
Another object of the present invention is to provide a gesture recognition apparatus.
Yet another object of the present invention is to provide a smart wearable terminal.
Yet another object of the present invention is to provide a server.
To achieve the above objects, the gesture recognition method provided by the embodiments of the first aspect of the present invention includes:
obtaining sensor data corresponding to a current gesture from a sensor, where the sensor data characterizes the change of the gesture;
converting the sensor data into image data; and
inputting the image data into a target recognition model built on a neural network, and determining the target gesture type of the current gesture by the target recognition model.
In the gesture recognition method provided by the embodiments of the first aspect of the present invention, the sensor data is converted into image data, and the gesture type is then recognized by the recognition model, so that relatively mature image recognition technology can be used to improve recognition accuracy.
To achieve the above objects, the gesture recognition apparatus provided by the embodiments of the second aspect of the present invention includes:
an acquisition module, configured to obtain sensor data corresponding to a current gesture from a sensor, where the sensor data characterizes the change of the gesture;
a conversion module, configured to convert the sensor data into image data; and
a recognition module, configured to input the image data into a target recognition model and determine the target gesture type of the current gesture by the target recognition model.
In the gesture recognition apparatus provided by the embodiments of the second aspect of the present invention, the sensor data is converted into image data, and the gesture type is then recognized by the recognition model, so that relatively mature image recognition technology can be used to improve recognition accuracy.
To achieve the above objects, the smart wearable terminal provided by the embodiments of the third aspect of the present invention includes:
the gesture recognition apparatus described above.
To achieve the above objects, the server provided by the embodiments of the fourth aspect of the present invention includes:
the gesture recognition apparatus described above.
Additional aspects and advantages of the present invention will be set forth in part in the following description, and will in part become apparent from the description or be learned through practice of the present invention.
Description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments with reference to the accompanying drawings, in which:
Fig. 1 is a schematic flowchart of a gesture recognition method provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of another gesture recognition method provided by an embodiment of the present invention;
Fig. 3 is a schematic flowchart of yet another gesture recognition method provided by an embodiment of the present invention;
Fig. 4 is a schematic diagram of the hardware components of a smart wearable terminal provided in this embodiment;
Fig. 5 is a schematic diagram of the software modules of a server provided in this embodiment;
Fig. 6 is a schematic diagram of the software modules of a smart wearable terminal provided in this embodiment;
Fig. 7 is a structural diagram of a gesture recognition apparatus provided by an embodiment of the present invention;
Fig. 8 is a structural diagram of another gesture recognition apparatus provided by an embodiment of the present invention;
Fig. 9 is a structural diagram of a training module 15 provided by an embodiment of the present invention;
Fig. 10 is a structural diagram of a server provided by an embodiment of the present invention;
Fig. 11 is a structural diagram of a smart wearable terminal provided by an embodiment of the present invention;
Fig. 12 is a structural diagram of a gesture recognition system provided by an embodiment of the present invention.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar modules, or modules having the same or similar functions. The embodiments described below with reference to the drawings are exemplary and are only intended to explain the present invention; they are not to be construed as limiting the present invention. On the contrary, the embodiments of the present invention include all changes, modifications, and equivalents falling within the spirit and scope of the appended claims.
In addition, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of such features. In the description of the present invention, "multiple" means two or more, unless otherwise specifically defined.
Any process or method description in a flowchart or otherwise described herein is to be understood as representing a module, segment, or portion of code including one or more executable instructions for implementing the steps of a specific logical function or process, and the scope of the preferred embodiments of the present invention includes additional implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in the reverse order, as would be understood by those skilled in the art to which the embodiments of the present invention belong.
Fig. 1 is a schematic flowchart of a gesture recognition method provided by an embodiment of the present invention. As shown in Fig. 1, the gesture recognition method includes the following steps:
S101: sensor data corresponding to a current gesture is obtained from a sensor.
The sensor data characterizes the change of the gesture.
The smart wearable device is provided with sensors, through which some data of the user wearing the device can be collected. In this embodiment, the smart wearable device may be provided with a gravity sensor and a gyroscope sensor. The gravity sensor is used to obtain the user's three-axis acceleration, i.e., the acceleration along the X, Y, and Z axes of three-dimensional space, and the gyroscope sensor is used to obtain the user's three-axis angular velocity in three-dimensional space.
When the user's gesture changes, the sensors on the smart wearable device record the sensor data corresponding to the current gesture. When the user's gesture needs to be recognized, the sensor data of the current gesture can be obtained from the sensors. The sensor data is obtained from the gravity sensor and the gyroscope sensor and includes six axes of data: the three-axis acceleration and the three-axis angular velocity.
S102: the sensor data is converted into image data.
Since image recognition technology is relatively mature, in order to improve the accuracy of gesture recognition, the current gesture characterized by the sensor data can be represented as an image, and the gesture type can then be recognized using image recognition technology. Specifically, the sensor data is converted into an image to obtain the image data corresponding to the sensor data. Preferably, the sensor data may be converted into an image in the form of a curve graph: within a set period of time, a two-dimensional curve graph is formed with time as the horizontal (X) axis and the sensor data as the vertical (Y) axis. In this embodiment, the sensor data comprises six axes, so the resulting two-dimensional curve graph contains six curves with possibly different trajectories, representing the changes of the sensor data.
Alternatively, the sensor data may be converted into an image in the form of a histogram: within a set period of time, a histogram is formed with time as the horizontal (X) axis and the sensor data as the vertical (Y) axis. In this embodiment, the sensor data comprises six axes, so in each time interval the resulting histogram contains six vertical bars or line segments of possibly different heights, representing the changes of the sensor data.
Once the sensor data has been converted into a two-dimensional curve graph or a histogram, the image data corresponding to the sensor data has been obtained.
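As an illustration only (not the patent's implementation), the curve-graph conversion can be sketched as a rasterization of the six-axis readings into a two-dimensional pixel array; the function name, image height, and min-max normalization below are assumptions:

```python
import numpy as np

def series_to_image(samples: np.ndarray, height: int = 64) -> np.ndarray:
    """Rasterize multi-axis sensor readings into a 2D image array.

    samples: array of shape (T, C) -- T time steps (horizontal axis) and
    C channels (e.g. 3 accelerometer + 3 gyroscope axes).
    Returns a (height, T) uint8 image in which each channel's curve is
    drawn as one bright pixel per time step.
    """
    t_steps, channels = samples.shape
    img = np.zeros((height, t_steps), dtype=np.uint8)
    lo, hi = samples.min(), samples.max()
    span = hi - lo if hi > lo else 1.0
    for c in range(channels):
        # Map each reading to a row index; row 0 is the top of the image.
        rows = ((samples[:, c] - lo) / span * (height - 1)).astype(int)
        img[height - 1 - rows, np.arange(t_steps)] = 255
    return img

# Example: 6-axis data over 100 time steps
data = np.random.default_rng(0).normal(size=(100, 6))
image = series_to_image(data)
print(image.shape)  # (64, 100)
```

A histogram rendering would differ only in drawing filled vertical bars instead of single curve pixels.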
S103: the image data is input into the target recognition model, and the target gesture type of the current gesture is determined by the target recognition model.
After the image data corresponding to the sensor data has been generated, the image data can be input into the target recognition model for recognition. The target recognition model can analyze the image data using image recognition technology and determine the target gesture type corresponding to the current gesture. Preferably, the target recognition model is built on a neural network and is formed after the model has been trained with a large amount of sample data. The basic gesture categories include: shaking up and down, translating left and right, drawing a circle toward the upper right, and drawing a circle toward the upper left.
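As a minimal sketch of the final classification step, assuming the model outputs one score per basic gesture category (the category names and the argmax mapping are illustrative assumptions, not part of the patent):

```python
# The four basic gesture categories named in the embodiment.
GESTURE_TYPES = [
    "shake_up_down",
    "translate_left_right",
    "circle_upper_right",
    "circle_upper_left",
]

def classify(scores):
    """Map a model's per-class scores to a gesture type by argmax."""
    best = max(range(len(scores)), key=scores.__getitem__)
    return GESTURE_TYPES[best]

print(classify([0.1, 0.7, 0.15, 0.05]))  # translate_left_right
```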
In the gesture recognition method provided by this embodiment, sensor data corresponding to a current gesture is obtained from a sensor, where the sensor data characterizes the change of the gesture; the sensor data is converted into image data; the image data is input into the target recognition model; and the target gesture type of the current gesture is determined by the target recognition model. In this embodiment, the sensor data is converted into image data and the gesture type is then recognized by the target recognition model, so that relatively mature image recognition technology can be used to improve recognition accuracy.
Fig. 2 is a schematic flowchart of another gesture recognition method provided by an embodiment of the present invention. As shown in Fig. 2, the gesture recognition method includes the following steps:
S201: sample sensor data for training is collected from the sensor.
In this embodiment, a recognition model is built in advance, preferably on a neural network. To train the recognition model so that it acquires recognition capability, a large number of training samples must be collected. Specifically, sensor data can be collected for individual actions as sample sensor data; for example, each group of actions can be performed 1,000 times consecutively, with a short pause as the interval between actions. After receiving the sample sensor data, the server can split the actions according to these pause intervals, which serve as markers.
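The splitting of actions by pause intervals can be sketched as follows; the activity threshold, the minimum pause length, and the use of a motion-magnitude signal are illustrative assumptions rather than details given in the embodiment:

```python
def split_by_pauses(magnitudes, threshold=0.1, min_pause=5):
    """Split a stream of motion magnitudes into action segments,
    using runs of at least `min_pause` low-activity readings as the
    markers between consecutive actions."""
    segments, current, quiet = [], [], 0
    for m in magnitudes:
        if m < threshold:
            quiet += 1
            if quiet >= min_pause and current:
                segments.append(current)   # pause confirmed: close the action
                current = []
        else:
            quiet = 0
            current.append(m)
    if current:
        segments.append(current)           # flush the trailing action
    return segments

stream = [1, 1, 1, 0, 0, 0, 0, 0, 2, 2, 0, 0, 0, 0, 0, 3, 3, 3]
segments = split_by_pauses(stream, threshold=0.5, min_pause=3)
print(segments)  # [[1, 1, 1], [2, 2], [3, 3, 3]]
```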
S202: the sample sensor data is converted into images to obtain sample image data.
After the sample sensor data has been obtained, it can be converted into images to obtain sample image data. Preferably, the sample sensor data may be converted into images in the form of curve graphs: within a set period of time, a two-dimensional curve graph is formed with time as the horizontal (X) axis and the sample sensor data as the vertical (Y) axis.
Alternatively, the sample sensor data may be converted into images in the form of histograms: within a set period of time, a histogram is formed with time as the horizontal (X) axis and the sample sensor data as the vertical (Y) axis.
Once the sample sensor data has been converted into two-dimensional curve graphs or histograms, the sample image data corresponding to the sample sensor data has been obtained.
S203: the sample image data is input into a preset recognition model for training to obtain the target recognition model.
After the sample image data has been obtained, it is input into the preset recognition model for gesture type recognition, and the misclassification rate of the recognition model is then determined. Specifically, before the sample image data is input into the preset recognition model for training, the user can label the type of each sample image, forming a first type label for that sample image data. The first type label identifies the real gesture type corresponding to the sample image data. After the recognition model has recognized a sample image, it generates a type label for it, forming a second type label for that sample image data; the second type label identifies the gesture type determined by the recognition model. The error rate of the recognition model is then obtained statistically from the first type labels and second type labels of the sample image data: the number of samples whose second type label is inconsistent with the gesture type of the first type label is counted, and the ratio of that number to the total number of samples is the misclassification rate of the recognition model.
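The misclassification-rate statistic described above reduces to a simple ratio; a minimal sketch, with hypothetical label values:

```python
def misclassification_rate(first_labels, second_labels):
    """Ratio of samples whose predicted (second type) label disagrees
    with the ground-truth (first type) label."""
    if len(first_labels) != len(second_labels):
        raise ValueError("label lists must have equal length")
    mismatches = sum(a != b for a, b in zip(first_labels, second_labels))
    return mismatches / len(first_labels)

truth = ["shake", "circle_right", "translate", "shake"]
preds = ["shake", "circle_left", "translate", "shake"]
rate = misclassification_rate(truth, preds)
print(rate)  # 0.25
```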
After the misclassification rate has been obtained, it is compared with a preset threshold. If the misclassification rate is greater than the preset threshold, the recognition effect of the recognition model is poor and many samples are misjudged, so the recognition model needs to be adjusted to obtain a model with good recognition performance. When the misclassification rate is greater than the preset threshold, the parameters of the recognition model are adjusted. Specifically, when the recognition model is built on a neural network, the number of network layers, the learning rate, and the convolution kernels of the neural network in the recognition model can be adjusted so that the training results of the recognition model converge, i.e., so that the misclassification rate of the recognition model drops below the preset threshold. Preferably, a convolutional neural network may be used to build the recognition model; this recognition model mainly includes 3 convolutional layers, 1 pooling layer, and 2 fully connected layers.
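As an illustration of how such a stack transforms the image, the following traces the feature-map size through three convolutions and one pooling layer; the input size, kernel sizes, strides, and padding are assumptions, since the embodiment does not specify them:

```python
def conv_out(size, kernel, stride=1, padding=0):
    """Spatial output size of a convolution (or pooling) layer."""
    return (size + 2 * padding - kernel) // stride + 1

size = 64                                    # assumed square input image
for _ in range(3):                           # three 3x3 convolutions, padding 1
    size = conv_out(size, kernel=3, padding=1)
size = conv_out(size, kernel=2, stride=2)    # one 2x2 max pool, stride 2
# The two fully connected layers then flatten this map and emit
# one score per gesture category.
print(size)  # 32
```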
To obtain the final trained target recognition model after the parameters of the recognition model have been adjusted, the adjusted recognition model must continue to be trained on the sample image data until its misclassification rate is less than the preset threshold. The recognition model whose misclassification rate is less than the threshold is then determined to be the target recognition model.
In this embodiment, since the user is sampled multiple times to obtain the user's image data, the image data can truly reflect the form of the user's actions and thereby embody the differences between users.
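The adjust-and-retrain procedure can be sketched as a loop that repeats training rounds until the evaluated misclassification rate falls below the preset threshold; `train_step` and `evaluate` are hypothetical callbacks standing in for the actual training and evaluation:

```python
def train_until_converged(train_step, evaluate, threshold, max_rounds=100):
    """Repeat training rounds until the misclassification rate drops
    below the preset threshold (or a round limit is hit)."""
    for round_no in range(1, max_rounds + 1):
        train_step()                 # one pass of training / adjustment
        rate = evaluate()            # measure the misclassification rate
        if rate < threshold:
            return round_no, rate
    raise RuntimeError("model did not converge within the round limit")

# Simulated run: the error rate shrinks each round.
rates = iter([0.5, 0.3, 0.15, 0.08])
rounds, final_rate = train_until_converged(
    train_step=lambda: None,         # stand-in for a training pass
    evaluate=lambda: next(rates),    # stand-in for the evaluation
    threshold=0.1,
)
print(rounds, final_rate)  # 4 0.08
```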
S204: sensor data corresponding to a current gesture is obtained from the sensor.
The sensor data characterizes the change of the gesture.
S205: the sensor data is converted into image data.
S206: the image data is input into the target recognition model, and the target gesture type of the current gesture is determined by the target recognition model.
For an introduction to S204-S206, refer to the description of the related content in the above embodiment, which is not repeated here.
In the gesture recognition method provided by this embodiment, sensor data corresponding to a current gesture is obtained from a sensor, where the sensor data characterizes the change of the gesture; the sensor data is converted into image data; the image data is input into the target recognition model; and the target gesture type of the current gesture is determined by the target recognition model. In this embodiment, the sensor data is converted into image data and the gesture type is then recognized by the target recognition model, so that relatively mature image recognition technology can be used to improve recognition accuracy.
Further, the recognition model is trained by means of machine learning to obtain a target recognition model whose misclassification rate is less than the threshold, and the image data corresponding to the sensor data is recognized by this target recognition model, which further improves the accuracy of gesture recognition.
Fig. 3 is a schematic flowchart of yet another gesture recognition method provided by an embodiment of the present invention. As shown in Fig. 3, the executing entities of this gesture recognition method include a smart wearable terminal and a server. The gesture recognition method includes the following steps:
S300: the server trains the preset recognition model to obtain the target recognition model.
Specifically, the server receives the training sample sensor data collected from the sensor and sent over by the terminal, converts the sample sensor data into images to obtain sample image data, and inputs the sample image data into the preset recognition model for training to obtain the target recognition model. For the detailed process, refer to the description of the related content in the above embodiment, which is not repeated here.
Optionally, after the server obtains the target recognition model, it can feed the target recognition model back to the smart wearable terminal so that the target recognition model is deployed on the smart wearable terminal.
S301: the smart wearable terminal obtains the sensor data of the current gesture.
Fig. 4 is a schematic diagram of the hardware components of the smart wearable terminal provided in this embodiment. As shown in Fig. 4, the smart wearable terminal may include: a microcontroller unit (MCU), a gravity sensor (G-sensor), a gyroscope sensor (A-sensor), a battery, Bluetooth, and flash memory. The G-sensor and the A-sensor record the data of the current gesture, forming the sensor data of the current gesture.
S302: the smart wearable terminal converts the sensor data into image data.
The MCU can collect the sensor data from the sensors, i.e., the gravity sensor and the gyroscope sensor, and then convert the sensor data into an image to obtain the image data.
For the detailed process, refer to the description of the related content in the above embodiment, which is not repeated here.
S303: the smart wearable terminal reports the image data to the server.
The smart wearable device can report the image data to the server via Bluetooth. Optionally, the smart wearable terminal is provided with WiFi, and the image data can also be reported to the server via WiFi.
Further, the smart wearable terminal in Fig. 4 can also include a heart rate sensor (HR-sensor), through which data such as the user's heart rate is monitored.
S304: the server inputs the image data into the target recognition model for recognition and determines the target gesture type of the current gesture.
In this embodiment, the trained target recognition model is deployed on the server side. When the current gesture needs to be recognized, the smart wearable terminal reports the image data; specifically, the image data can be reported to the server via the Bluetooth or WiFi on the smart wearable terminal. The server recognizes the image data online and feeds the recognition result back to the smart wearable terminal after recognition is complete. The smart wearable terminal can also include a display screen, on which the result fed back by the server can be displayed. In general, the smart wearable terminal is small, so deploying the target recognition model on the server avoids occupying the terminal's limited space.
It should be understood here that the target recognition model can be deployed either on the smart wearable terminal or on the server. After the server completes the training of the recognition model and obtains the target recognition model, it can feed the target recognition model back to the smart wearable terminal. When the target recognition model is on the smart wearable terminal side, the terminal can recognize the sensor data of the current gesture locally and no longer needs to send it to the server for recognition, which can reduce the load on the server.
Fig. 5 is a schematic diagram of the software modules of the smart wearable terminal provided in this embodiment. As shown in Fig. 5, the smart wearable terminal includes a sensor driver, a Bluetooth driver, a main control program, a WiFi module, a recognition module, an LCD driver, and data transmission; the sensor driver, Bluetooth driver, main control program, WiFi module, recognition module, and LCD driver modules communicate with the MCU through data transmission. The main control program can be used to control the MCU. The sensor driver can drive the G-sensor, A-sensor, and HR-sensor. The Bluetooth driver can drive the Bluetooth, so that the obtained sensor data can be transmitted to the server via Bluetooth; alternatively, the smart wearable terminal can include a WiFi module, through which WiFi transmission is realized and the sensor data is reported to the server. Further, when the target recognition model is deployed in the recognition module, the smart wearable terminal recognizes the gesture type through the target recognition model.
Fig. 6 is a schematic diagram of the software modules of the server provided in this embodiment. As shown in Fig. 6, the server includes: an Ubuntu operating system, a convolutional neural network framework (Convolutional Architecture for Fast Feature Embedding, Caffe), sample training, model output, and data transmission. The convolutional neural network framework is used to implement the construction of the recognition model. The training of the recognition model is completed through sample training to obtain the target recognition model, which can be fed back to the smart wearable terminal through model output. Further, the server can also include the Compute Unified Device Architecture (CUDA), through which the graphics processing unit (GPU) can be used to handle computationally complex tasks.
In the gesture recognition method provided by this embodiment, sensor data corresponding to a current gesture is obtained from a sensor, where the sensor data characterizes the change of the gesture; the sensor data is converted into image data; the image data is input into the target recognition model; and the target gesture type of the current gesture is determined by the target recognition model. In this embodiment, the sensor data is converted into image data and the gesture type is then recognized by the target recognition model, so that relatively mature image recognition technology can be used to improve recognition accuracy.
Further, the recognition model is trained by means of machine learning to obtain a target recognition model whose misclassification rate is less than the threshold, and the image data corresponding to the sensor data is recognized by this target recognition model, which further improves the accuracy of gesture recognition.
Fig. 7 is a structural diagram of a gesture recognition apparatus provided by an embodiment of the present invention. As shown in Fig. 7, the gesture recognition apparatus 1 includes: an acquisition module 11, a conversion module 12, and a recognition module 13.
The acquisition module 11 is configured to obtain sensor data corresponding to a current gesture from a sensor, where the sensor data characterizes the change of the gesture.
The smart wearable device is provided with sensors, through which the acquisition module 11 can collect some data of the user wearing the device. In this embodiment, the smart wearable device may be provided with a gravity sensor and a gyroscope sensor. When the user's gesture changes, the sensors on the smart wearable device record the sensor data corresponding to the current gesture. When the user's gesture needs to be recognized, the acquisition module 11 can obtain the sensor data of the current gesture from the sensors. The sensor data is obtained from the gravity sensor and the gyroscope sensor and includes six axes of data: the three-axis acceleration and the three-axis angular velocity.
The conversion module 12 is configured to convert the sensor data into image data.
The conversion module 12 is specifically configured to convert the sensor data and/or the sample sensor data into an image in the form of a curve diagram.
Further, the conversion module 12 is specifically configured to, within a set period of time, form a two-dimensional curve diagram with time as the horizontal axis and the sensor data and/or the sample sensor data as the vertical axis.
Alternatively, the conversion module 12 is specifically configured to convert the sensor data and/or the sample sensor data into an image in the form of a histogram.
Further, the conversion module 12 is specifically configured to, within a set period of time, form a histogram with time as the horizontal axis and the sensor data and/or the sample sensor data as the vertical axis.
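A minimal sketch of the curve-diagram conversion, with time as the horizontal axis and the sensor value as the vertical axis; the NumPy rasterization below is a simplified stand-in for illustration, not the patent's implementation:

```python
import numpy as np

def series_to_image(series, height=64, width=64):
    """Rasterize a one-dimensional sensor series into a 2-D image:
    each of the `width` columns is one time step, and the marked row
    encodes the (normalized) sensor value at that time."""
    series = np.asarray(series, dtype=float)
    # Resample the series to exactly `width` time steps.
    cols = np.interp(np.linspace(0, len(series) - 1, width),
                     np.arange(len(series)), series)
    # Map values into rows (row 0 = maximum value).
    lo, hi = cols.min(), cols.max()
    span = hi - lo if hi > lo else 1.0
    rows = np.round((hi - cols) / span * (height - 1)).astype(int)
    img = np.zeros((height, width), dtype=np.uint8)
    img[rows, np.arange(width)] = 255
    return img
```

For the six-axis data, one such image can be produced per axis and the images stacked as channels; a histogram image can be formed analogously by binning the values instead of plotting them over time.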
The recognition module 13 is configured to input the image data into the target recognition model and determine the target gesture type of the current gesture through the target recognition model.
After the image data corresponding to the sensor data is generated, the recognition module 13 can input the image data into the target recognition model for recognition. The target recognition model analyzes the image data using image recognition technology and determines the target gesture type corresponding to the current gesture. Preferably, the target recognition model is built on a neural network and is obtained by training the model with a large amount of sample data. The basic gesture categories include: shaking up and down, translating left and right, drawing a circle to the upper right, and drawing a circle to the upper left.
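The interface of the recognition step can be sketched as below. The nearest-template rule is only a hypothetical placeholder so the example stays self-contained; the patent's preferred model is a neural network trained on a large number of sample images, and the English category labels are illustrative, not identifiers from the patent.

```python
import numpy as np

# The four basic gesture categories mentioned above.
GESTURE_TYPES = ["shake_up_down", "translate_left_right",
                 "circle_upper_right", "circle_upper_left"]

def classify(image, templates):
    """Return the gesture type whose reference image is closest to the
    input image (sum of absolute pixel differences); a stand-in for the
    trained target recognition model."""
    diffs = [np.abs(image.astype(int) - t.astype(int)).sum()
             for t in templates]
    return GESTURE_TYPES[int(np.argmin(diffs))]
```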
In the gesture recognition device proposed by this embodiment, sensor data corresponding to the current gesture is obtained from a sensor, where the sensor data characterizes the change of the gesture; the sensor data is converted into image data; the image data is input into a target recognition model, and the target gesture type of the current gesture is determined through the target recognition model. In this embodiment, the sensor data is converted into image data, and the gesture type is then identified by the target recognition model using relatively mature image recognition technology, thereby improving the recognition accuracy.
Fig. 8 is a structural diagram of another gesture recognition device provided by an embodiment of the present invention. As shown in Fig. 8, in addition to the acquisition module 11, conversion module 12 and recognition module 13 of the above embodiment, the gesture recognition device 2 further includes: a collection module 14 and a training module 15.
The collection module 14 is configured to collect sample sensor data for training from the sensor.
The conversion module 12 is further configured to convert the sample sensor data into sample image data.
The training module 15 is configured to input the sample image data into a preset recognition model for gesture-type recognition training, so as to obtain the target recognition model.
Fig. 9 is an optional structural diagram of the training module 15 in this embodiment. The training module 15 includes: a training unit 151, an acquiring unit 152, an adjustment unit 153 and a determination unit 154.
The training unit 151 is configured to input the sample image data into the recognition model for gesture-type recognition training, and to continue the gesture-type recognition training of the adjusted recognition model based on the sample image data until the misclassification rate is below the threshold.
The acquiring unit 152 is configured to obtain the misclassification rate of the recognition model.
The adjustment unit 153 is configured to adjust the parameters of the recognition model if the misclassification rate is greater than a preset threshold.
The determination unit 154 is configured to determine the recognition model whose misclassification rate is below the threshold as the target recognition model.
Further, when the recognition model is built on a neural network, the adjustment unit 153 is specifically configured to:
adjust the number of network layers, the learning rate and the convolution kernels of the neural network in the recognition model.
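The train-measure-adjust loop described above can be sketched as follows; `train_fn`, `eval_fn` and `adjust_fn` are placeholders for the actual training step, the misclassification-rate measurement, and the hyperparameter adjustment (number of layers, learning rate, convolution kernels):

```python
def train_until_threshold(train_fn, eval_fn, adjust_fn, params,
                          threshold=0.05, max_rounds=20):
    """Train the recognition model, obtain its misclassification rate,
    adjust the model parameters while the rate is above the threshold,
    and stop once the rate drops below the threshold."""
    model, error = None, 1.0
    for _ in range(max_rounds):
        model = train_fn(params)
        error = eval_fn(model)
        if error < threshold:
            break  # this model becomes the target recognition model
        params = adjust_fn(params)
    return model, error
```

The loop also caps the number of rounds so that a model that never reaches the threshold does not train forever.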
Further, the acquiring unit 152 is specifically configured to:
obtain the first type label corresponding to the sample image data;
obtain the second type label of the sample image data as identified by the recognition model; and
compute the misclassification rate from the first type label and the second type label.
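Counting the misclassification rate from the two label sets is straightforward; a minimal sketch:

```python
def misclassification_rate(first_labels, second_labels):
    """Fraction of samples whose ground-truth (first type) label differs
    from the label the recognition model produced (second type)."""
    if len(first_labels) != len(second_labels):
        raise ValueError("label lists must have the same length")
    wrong = sum(a != b for a, b in zip(first_labels, second_labels))
    return wrong / len(first_labels)
```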
In the gesture recognition device proposed by this embodiment, sensor data corresponding to the current gesture is obtained from a sensor, where the sensor data characterizes the change of the gesture; the sensor data is converted into image data; the image data is input into a target recognition model, and the target gesture type of the current gesture is determined through the target recognition model. In this embodiment, the sensor data is converted into image data, and the gesture type is then identified by the target recognition model using relatively mature image recognition technology, thereby improving the recognition accuracy.
Further, the recognition model is trained by machine learning to obtain a target recognition model whose misclassification rate is below a threshold, and the image data corresponding to the sensor data is identified by this target recognition model, further improving the accuracy of gesture recognition.
Fig. 10 is a structural diagram of a server provided by an embodiment of the present invention. As shown in Fig. 10, the server includes the gesture recognition device 2 of the above embodiment.
In this embodiment, the acquisition module 11, conversion module 12 and recognition module 13 of the gesture recognition device 2 are arranged on the server 3. The server 3 can obtain the sensor data of the current gesture online through the acquisition module 11, convert the sensor data into image data through the conversion module 12, and then identify the current gesture based on the target recognition model through the recognition module 13 to determine the target gesture type of the current gesture.
Further, the collection module 14 and training module 15 of the gesture recognition device 2 are also arranged on the server 3; the sample sensor data are processed by the collection module 14 and the training module 15 to finally obtain the target recognition model.
Optionally, the server 3 may include only the collection module 14 and training module 15 of the gesture recognition device 2; that is, the recognition model is trained on the server 3 to obtain the target recognition model, the target recognition model is then fed back to the smart wearable terminal, and the type of the current gesture is identified by the smart wearable terminal.
In this embodiment, the trained target recognition model is arranged on the server side. When the current gesture needs to be recognized, the smart wearable terminal reports the image data; specifically, the smart wearable terminal can report the image data to the server over Bluetooth or WiFi. The server recognizes the image data online and feeds the recognition result back to the smart wearable terminal once recognition is completed. The smart wearable terminal may also include a display screen, on which the result fed back by the server can be displayed. In general, the smart wearable terminal is small in volume, so arranging the target recognition module on the server avoids occupying space on the smart wearable terminal.
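The report from the terminal to the server can be serialized in many ways; the JSON/Base64 packaging below is purely illustrative (the field names are assumptions, and the Bluetooth/WiFi transport itself is not shown):

```python
import base64
import json

def encode_report(image_bytes: bytes, device_id: str) -> str:
    """Package the image data on the smart wearable terminal for upload
    to the recognition server."""
    return json.dumps({
        "device": device_id,
        "image": base64.b64encode(image_bytes).decode("ascii"),
    })

def decode_report(payload: str):
    """Unpack a report on the server side, recovering the raw image
    bytes for online recognition."""
    msg = json.loads(payload)
    return msg["device"], base64.b64decode(msg["image"])
```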
Further, since the training process requires a large amount of data and the performance of the server is better than that of the smart wearable terminal, the training efficiency is improved.
Fig. 11 is a structural diagram of a smart wearable terminal provided by an embodiment of the present invention. As shown in Fig. 11, the smart wearable terminal 4 includes the gesture recognition device 1; that is, the acquisition module 11, conversion module 12 and recognition module 13 of the gesture recognition device 1 are arranged in the smart wearable terminal 4. The smart wearable terminal 4 can obtain the sensor data of the current gesture offline through the acquisition module 11, convert the sensor data into image data through the conversion module 12, and then identify the current gesture based on the target recognition model through the recognition module 13 to determine the target gesture type of the current gesture.
The target recognition model in the recognition module 13 is obtained by the server training a preset recognition model based on the sample sensor data, and is then fed back by the server to the smart wearable terminal 4.
In this embodiment, when the target recognition module is on the smart wearable terminal side, the smart wearable terminal can identify the sensor data of the current gesture locally and no longer needs to send it to the server for recognition, which reduces the load on the server.
Fig. 12 is a structural diagram of a gesture recognition system provided by an embodiment of the present invention. As shown in Fig. 12, the gesture recognition system includes: a smart wearable terminal 5 and a server 6.
The smart wearable terminal 5 includes the acquisition module 11, conversion module 12 and recognition module 13. Further, the server 6 includes: the collection module 14 and the training module 15. Together, the acquisition module 11, conversion module 12, recognition module 13, collection module 14 and training module 15 form a gesture recognition device 2.
In this embodiment, when the target recognition module is on the smart wearable terminal side, the smart wearable terminal can identify the sensor data of the current gesture locally and no longer needs to send it to the server for recognition, which reduces the load on the server. Further, since the training process requires a large amount of data and the performance of the server is better than that of the smart wearable terminal, the training efficiency is improved.
It should be noted that, in the description of the present invention, the terms "first", "second", etc. are used for description purposes only and are not to be understood as indicating or implying relative importance. In addition, in the description of the present invention, unless otherwise indicated, "multiple" means two or more.
Any process or method description in a flowchart or otherwise described herein can be understood as representing a module, segment or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present invention includes additional implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in the reverse order, depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention pertain.
It should be appreciated that each part of the present invention can be realized in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods can be realized with software or firmware that is stored in memory and executed by a suitable instruction execution system. For example, if realized in hardware, as in another embodiment, any one of the following technologies well known in the art, or a combination thereof, can be used: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gates, a programmable gate array (PGA), a field programmable gate array (FPGA), and the like.
Those skilled in the art will appreciate that all or part of the steps of the above embodiment methods can be completed by instructing relevant hardware through a program; the program can be stored in a computer-readable storage medium and, when executed, performs one of, or a combination of, the steps of the method embodiments.
In addition, each functional unit in each embodiment of the present invention can be integrated in one processing module, or each unit can exist physically separately, or two or more units can be integrated in one module. The above integrated module can be realized either in the form of hardware or in the form of a software function module. If the integrated module is realized in the form of a software function module and sold or used as an independent product, it can also be stored in a computer-readable storage medium.
The storage medium mentioned above can be a read-only memory, a magnetic disk, an optical disc, or the like.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example" or "some examples" means that a specific feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the particular features, structures, materials or characteristics described can be combined in an appropriate manner in any one or more embodiments or examples.
Although the embodiments of the present invention have been shown and described above, it is to be understood that the above embodiments are exemplary and cannot be construed as limitations of the present invention; those of ordinary skill in the art can change, modify, replace and vary the above embodiments within the scope of the present invention.

Claims (20)

1. A gesture recognition method, characterized by comprising:
obtaining sensor data corresponding to a current gesture from a sensor, wherein the sensor data characterizes the change of the gesture;
converting the sensor data into image data;
inputting the image data into a target recognition model, and determining the target gesture type of the current gesture through the target recognition model.
2. The gesture recognition method according to claim 1, characterized in that, before obtaining the sensor data corresponding to the current gesture from the sensor, the method comprises:
collecting sample sensor data for training from the sensor;
converting the sample sensor data into sample image data;
inputting the sample image data into a preset recognition model for gesture recognition training, so as to obtain the target recognition model.
3. The gesture recognition method according to claim 2, characterized in that inputting the sample image data into the preset recognition model for training, so as to obtain the target recognition model, comprises:
inputting the sample image data into the recognition model for gesture-type recognition training;
obtaining the misclassification rate of the recognition model;
if the misclassification rate is greater than a preset threshold, adjusting the parameters of the recognition model;
continuing the gesture-type recognition training of the adjusted recognition model based on the sample image data until the misclassification rate is below the threshold;
determining the recognition model whose misclassification rate is below the threshold as the target recognition model.
4. The gesture recognition method according to claim 3, characterized in that, when the recognition model is built on a neural network, adjusting the parameters of the recognition model comprises:
adjusting the number of network layers, the learning rate and the convolution kernels of the neural network in the recognition model.
5. The gesture recognition method according to claim 4, characterized in that obtaining the misclassification rate of the recognition model comprises:
obtaining the first type label corresponding to the sample image data;
obtaining the second type label of the sample image data identified by the recognition model;
computing the misclassification rate from the first type label and the second type label.
6. The gesture recognition method according to any one of claims 2-5, characterized in that the sensor data and/or the sample sensor data are converted into an image in the form of a curve diagram.
7. The gesture recognition method according to any one of claims 2-5, characterized in that the sensor data and/or the sample sensor data are converted into an image in the form of a histogram.
8. The gesture recognition method according to claim 6, characterized in that the image conversion in the form of a curve diagram comprises:
within a set period of time, forming a two-dimensional curve diagram with time as the horizontal axis and the sensor data and/or the sample sensor data as the vertical axis.
9. The gesture recognition method according to claim 7, characterized in that the image conversion in the form of a histogram comprises:
within a set period of time, forming a histogram with time as the horizontal axis and the sensor data and/or the sample sensor data as the vertical axis.
10. A gesture recognition device, characterized by comprising:
an acquisition module, configured to obtain sensor data corresponding to a current gesture from a sensor, wherein the sensor data characterizes the change of the gesture;
a conversion module, configured to convert the sensor data into image data;
a recognition module, configured to input the image data into a target recognition model and determine the target gesture type of the current gesture through the target recognition model.
11. The gesture recognition device according to claim 10, characterized by further comprising: a collection module and a training module;
the collection module is configured to collect sample sensor data for training from the sensor;
the conversion module is further configured to convert the sample sensor data into sample image data;
the training module is configured to input the sample image data into a preset recognition model for gesture-type recognition training, so as to obtain the target recognition model.
12. The gesture recognition device according to claim 11, characterized in that the training module comprises:
a training unit, configured to input the sample image data into the recognition model for gesture-type recognition training and to continue the gesture-type recognition training of the adjusted recognition model based on the sample image data until the misclassification rate is below the threshold;
an acquiring unit, configured to obtain the misclassification rate of the recognition model;
an adjustment unit, configured to adjust the parameters of the recognition model if the misclassification rate is greater than a preset threshold;
a determination unit, configured to determine the recognition model whose misclassification rate is below the threshold as the target recognition model.
13. The gesture recognition device according to claim 12, characterized in that, when the recognition model is built on a neural network, the adjustment unit is specifically configured to:
adjust the number of network layers, the learning rate and the convolution kernels of the neural network in the recognition model.
14. The gesture recognition device according to claim 13, characterized in that the acquiring unit is specifically configured to:
obtain the first type label corresponding to the sample image data;
obtain the second type label of the sample image data identified by the recognition model;
compute the misclassification rate from the first type label and the second type label.
15. The gesture recognition device according to any one of claims 11-14, characterized in that the conversion module is specifically configured to:
convert the sensor data and/or the sample sensor data into an image in the form of a curve diagram.
16. The gesture recognition device according to any one of claims 11-14, characterized in that the conversion module is specifically configured to:
convert the sensor data and/or the sample sensor data into an image in the form of a histogram.
17. The gesture recognition device according to claim 15, characterized in that the conversion module is specifically configured to:
within a set period of time, form a two-dimensional curve diagram with time as the horizontal axis and the sensor data and/or the sample sensor data as the vertical axis.
18. The gesture recognition device according to claim 16, characterized in that the conversion module is specifically configured to:
within a set period of time, form a histogram with time as the horizontal axis and the sensor data and/or the sample sensor data as the vertical axis.
19. A smart wearable terminal, characterized by comprising: the gesture recognition device according to any one of claims 10-18.
20. A server, characterized by comprising: the gesture recognition device according to any one of claims 10-18.
CN201611041278.9A 2016-11-22 2016-11-22 Gesture identification method and device, intelligence wearing terminal and server Pending CN108089693A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611041278.9A CN108089693A (en) 2016-11-22 2016-11-22 Gesture identification method and device, intelligence wearing terminal and server


Publications (1)

Publication Number Publication Date
CN108089693A true CN108089693A (en) 2018-05-29

Family

ID=62170940

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611041278.9A Pending CN108089693A (en) 2016-11-22 2016-11-22 Gesture identification method and device, intelligence wearing terminal and server

Country Status (1)

Country Link
CN (1) CN108089693A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109241925A (en) * 2018-09-18 2019-01-18 深圳市格莱科技有限公司 A kind of smart pen
CN109858380A (en) * 2019-01-04 2019-06-07 广州大学 Expansible gesture identification method, device, system, gesture identification terminal and medium
CN111695408A (en) * 2020-04-23 2020-09-22 西安电子科技大学 Intelligent gesture information recognition system and method and information data processing terminal
CN112383804A (en) * 2020-11-13 2021-02-19 四川长虹电器股份有限公司 Gesture recognition method based on empty mouse track

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102024151A (en) * 2010-12-02 2011-04-20 中国科学院计算技术研究所 Training method of gesture motion recognition model and gesture motion recognition method
CN102402680A (en) * 2010-09-13 2012-04-04 株式会社理光 Hand and indication point positioning method and gesture confirming method in man-machine interactive system
CN104656878A (en) * 2013-11-19 2015-05-27 华为技术有限公司 Method, device and system for recognizing gesture
US20150346833A1 (en) * 2014-06-03 2015-12-03 Beijing TransBorder Information Technology Co., Ltd. Gesture recognition system and gesture recognition method
CN105654037A (en) * 2015-12-21 2016-06-08 浙江大学 Myoelectric signal gesture recognition method based on depth learning and feature images


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
徐立波: ""基于MEMS惯性传感器的手势模式识别"", 《中国优秀硕士学位论文全文数据库 信息科技辑》 *
王伟栋: ""基于微惯性技术的数据手套研究"", 《中国优秀硕士学位论文全文数据库 信息科技辑》 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20180529