CN105068657B - Gesture recognition method and device - Google Patents
Gesture recognition method and device
- Publication number
- CN105068657B (application CN201510512311.0A)
- Authority
- CN
- China
- Prior art keywords
- motion trajectory
- content
- hand
- wearable device
- sampling point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- User Interface Of Digital Computer (AREA)
Abstract
The present invention provides a gesture recognition method and device. In embodiments of the present invention, a sensor device is used to obtain the motion trajectory of a wearable device worn on a user's hand or arm; the motion trajectory of the hand is then obtained from the motion trajectory of the wearable device, and the content of a specified language is determined from the motion trajectory of the hand, so that the content of the specified language can be output. Because no image acquisition device is needed to capture gesture images of the user, the prior-art problem that gesture images cannot be recognized because the collected images are unclear, or cannot be collected at all, is avoided, thereby improving the reliability of gesture recognition.
Description
[technical field]
The present invention relates to content processing technologies, and in particular to a gesture recognition method and device.
[background technique]
With the development of communication technology, terminals integrate more and more functions, so that their system function lists contain more and more applications (Application, APP). Some of these applications involve special services provided for particular populations, for example, gesture recognition services for deaf-mute users. Such applications need to use an image acquisition device, for example, the camera of a terminal, to capture arm posture images of the user, that is, gesture images, in real time, and then perform recognition processing on the collected gesture images to output corresponding speech signals.
However, the quality of the gesture images collected by an image acquisition device depends heavily on the lighting conditions of the environment. In some environments, for example, when the ambient light is dim, the collected gesture images are not clear, or gesture images of the user cannot be collected at all, so that recognition processing cannot be performed, which reduces the reliability of gesture recognition.
[summary of the invention]
Various aspects of the present invention provide a gesture recognition method and device, so as to improve the reliability of gesture recognition.
An aspect of the present invention provides a gesture recognition method, comprising:
using a sensor device, obtaining the motion trajectory of a wearable device worn on the hand or arm of a user, wherein the sensor device is arranged on the wearable device;
obtaining the motion trajectory of the hand according to the motion trajectory of the wearable device;
determining the content of a specified language according to the motion trajectory of the hand; and
outputting the content of the specified language.
In the aspect above and any possible implementation thereof, an implementation is further provided in which using the sensor device to obtain the motion trajectory of the wearable device worn on the hand or arm of the user comprises:
using the sensor device, obtaining the position information of M sampling points of the wearable device, M being an integer greater than or equal to 1; and
obtaining the motion trajectory according to the position information of the M sampling points.
In the aspect above and any possible implementation thereof, an implementation is further provided in which using the sensor device to obtain the position information of the M sampling points of the wearable device comprises:
using the sensor device, obtaining the spatial quaternion parameter of the wearable device at the m-th sampling point, m being an integer greater than or equal to 2 and less than or equal to M;
obtaining the acceleration in a specified direction according to the spatial quaternion parameter; and
obtaining the position information of the m-th sampling point according to the position information of the (m-1)-th sampling point, the sampling frequency, and the acceleration in the specified direction.
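The claimed position update amounts to one dead-reckoning step per sampling interval. The sketch below illustrates this under stated assumptions: the function name, the tuple representation of vectors, and the simple first-order integration scheme are illustrative choices, not details given by the patent.

```python
def next_position(prev_pos, prev_vel, accel, sample_freq):
    """One dead-reckoning step: over one sampling interval
    dt = 1 / sample_freq, integrate the acceleration into a
    velocity and the velocity into a position."""
    dt = 1.0 / sample_freq
    vel = tuple(v + a * dt for v, a in zip(prev_vel, accel))
    pos = tuple(p + v * dt for p, v in zip(prev_pos, vel))
    return pos, vel

# Starting at rest at the origin, with a constant 1 m/s^2
# acceleration along x, sampled at 100 Hz:
pos, vel = next_position((0.0, 0.0, 0.0), (0.0, 0.0, 0.0),
                         (1.0, 0.0, 0.0), 100.0)
pos, vel = next_position(pos, vel, (1.0, 0.0, 0.0), 100.0)
```

Repeating this step for m = 2, ..., M yields the sequence of sampling-point positions that forms the motion trajectory.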
In the aspect above and any possible implementation thereof, an implementation is further provided in which the sensor device includes at least one of a three-axis acceleration sensor, a three-axis gyroscope, and a three-axis magnetometer.
In the aspect above and any possible implementation thereof, an implementation is further provided in which determining the content of the specified language according to the motion trajectory of the hand comprises:
obtaining the content of the specified language according to the motion trajectory by using a learning model.
In the aspect above and any possible implementation thereof, an implementation is further provided in which obtaining the content of the specified language according to the motion trajectory by using the learning model comprises:
obtaining feature data in a specified plane according to the motion trajectory; and
obtaining the content of the specified language according to the feature data in the specified plane by using the learning model.
In the aspect above and any possible implementation thereof, an implementation is further provided in which outputting the content of the specified language comprises:
outputting a speech signal corresponding to the content of the specified language; or
outputting an operation instruction corresponding to the content of the specified language.
An aspect of the present invention provides a gesture recognition device, comprising:
an acquiring unit, configured to obtain, by using a sensor device, the motion trajectory of a wearable device worn on the hand or arm of a user, wherein the sensor device is arranged on the wearable device;
a processing unit, configured to obtain the motion trajectory of the hand according to the motion trajectory of the wearable device;
a determining unit, configured to determine the content of a specified language according to the motion trajectory of the hand; and
an output unit, configured to output the content of the specified language.
In the aspect above and any possible implementation thereof, an implementation is further provided in which the acquiring unit is specifically configured to:
obtain, by using the sensor device, the position information of M sampling points of the wearable device, M being an integer greater than or equal to 1; and
obtain the motion trajectory according to the position information of the M sampling points.
In the aspect above and any possible implementation thereof, an implementation is further provided in which the acquiring unit is specifically configured to:
obtain, by using the sensor device, the spatial quaternion parameter of the wearable device at the m-th sampling point, m being an integer greater than or equal to 2 and less than or equal to M;
obtain the acceleration in a specified direction according to the spatial quaternion parameter; and
obtain the position information of the m-th sampling point according to the position information of the (m-1)-th sampling point, the sampling frequency, and the acceleration in the specified direction.
In the aspect above and any possible implementation thereof, an implementation is further provided in which the sensor device includes at least one of a three-axis acceleration sensor, a three-axis gyroscope, and a three-axis magnetometer.
In the aspect above and any possible implementation thereof, an implementation is further provided in which the determining unit is specifically configured to:
obtain the content of the specified language according to the motion trajectory by using a learning model.
In the aspect above and any possible implementation thereof, an implementation is further provided in which the determining unit is specifically configured to:
obtain feature data in a specified plane according to the motion trajectory; and
obtain the content of the specified language according to the feature data in the specified plane by using the learning model.
In the aspect above and any possible implementation thereof, an implementation is further provided in which the output unit is specifically configured to:
output a speech signal corresponding to the content of the specified language; or
output an operation instruction corresponding to the content of the specified language.
As can be seen from the above technical solutions, embodiments of the present invention use a sensor device to obtain the motion trajectory of a wearable device worn on the hand or arm of a user, then obtain the motion trajectory of the hand according to the motion trajectory of the wearable device, and determine the content of a specified language according to the motion trajectory of the hand, so that the content of the specified language can be output. Because no image acquisition device is needed to capture gesture images of the user, the prior-art problem that gesture images cannot be recognized because the collected images are unclear, or cannot be collected at all, is avoided, thereby improving the reliability of gesture recognition.
In addition, because the technical solutions provided by the present invention involve no complicated image recognition process, they can effectively improve the real-time performance of gesture recognition.
In addition, because no image acquisition device is needed to capture gesture images of the user, there is no need to pay attention to ambient lighting conditions, so the prior-art problem that the terminal must supplement light for the environment and thus perform additional processing operations can be avoided, which effectively reduces the processing load of the terminal and improves its performance.
In addition, because the content of a specified language can be output from the motion trajectory of the wearable device worn on the hand or arm of the user, the utilization of the wearable device can be effectively improved, thereby improving the user experience.
[Detailed description of the invention]
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the accompanying drawings in the following description are merely some embodiments of the present invention, and those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a gesture recognition method provided by an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of a gesture recognition device provided by another embodiment of the present invention.
[specific embodiment]
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described below completely and clearly with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
It should be noted that the terminals involved in the embodiments of the present invention may include, but are not limited to, mobile phones, personal digital assistants (Personal Digital Assistant, PDA), wireless handheld devices, tablet computers (Tablet Computer), personal computers (Personal Computer, PC), MP3 players, MP4 players, wearable devices (for example, smart glasses, smartwatches, smart bracelets, etc.), and so on.
In addition, the term "and/or" herein only describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate the three cases of A alone, both A and B, and B alone. In addition, the character "/" herein generally indicates an "or" relationship between the objects before and after it.
Fig. 1 is a schematic flowchart of a gesture recognition method provided by an embodiment of the present invention, as shown in Fig. 1.
101. Using a sensor device, obtain the motion trajectory of a wearable device worn on the hand or arm of a user, the sensor device being arranged on the wearable device.
102. Obtain the motion trajectory of the hand according to the motion trajectory of the wearable device.
103. Determine the content of a specified language according to the motion trajectory of the hand.
104. Output the content of the specified language.
It should be noted that the executing subject of some or all of steps 101 to 104 may be an application located on the local terminal, or a functional unit such as a plug-in or a Software Development Kit (Software Development Kit, SDK) arranged in such an application, or a processing engine in a network-side server, or a distributed system on the network side; this embodiment does not particularly limit this.
It can be understood that the application may be a native program (nativeApp) installed on the terminal, or a web page program (webApp) of a browser on the terminal; this embodiment does not limit this.
In this way, a sensor device is used to obtain the motion trajectory of the wearable device worn on the hand or arm of the user, the motion trajectory of the hand is then obtained from the motion trajectory of the wearable device, and the content of a specified language is determined from the motion trajectory of the hand, so that the content of the specified language can be output. Because no image acquisition device is needed to capture gesture images of the user, the prior-art problem that gesture images cannot be recognized because the collected images are unclear, or cannot be collected at all, is avoided, thereby improving the reliability of gesture recognition.
A so-called wearable device is a portable device that is worn directly on the body or integrated into the user's clothes or accessories, for example, a smart bracelet, smartwatch, smart necklace, smart glasses, smart ring, smartphone, etc. A wearable device is not merely a piece of hardware; through software support, data interaction, and other technologies it realizes powerful functions, and wearable devices will bring great changes to our lives and perception.
Optionally, in a possible implementation of this embodiment, in 101, the sensor device used may be fixed on the wearable device, with its position relative to the wearable device unchanged. In this way, the sensing data output by the sensor device can accurately describe the state of the wearable device.
In a concrete implementation, the sensor device may be a first sensor device, namely an inertial measurement unit with a spatial quaternion synthesis function; specifically, the first sensor device may be used to obtain the spatial quaternion parameter of the wearable device worn by the user.
The first sensor device is an inertial measurement unit with a spatial quaternion synthesis function, and may include, but is not limited to, at least one of a three-axis acceleration sensor, a three-axis gyroscope, and a three-axis magnetometer. The inertial measurement unit may perform fusion processing on the collected sensing data to obtain the spatial quaternion parameter of the wearable device. Correspondingly, if the inertial measurement unit includes only one kind of sensor, the sensing data of the wearable device may be 3-axis sensing data; if it includes two kinds of sensors, the sensing data may be 6-axis sensing data; and if it includes three kinds of sensors, the sensing data may be 9-axis sensing data; this embodiment does not particularly limit this.
In another concrete implementation, the sensor device may be a second sensor device, namely an inertial measurement unit without a spatial quaternion synthesis function; specifically, the second sensor device may be used to obtain the sensing data of the wearable device worn by the user, and the spatial quaternion parameter may then be obtained from that sensing data, specifically by performing fusion processing on the collected sensing data. For a detailed description, reference may be made to related content in the prior art, which is not repeated here.
The second sensor device is an inertial measurement unit without a spatial quaternion synthesis function, and may include, but is not limited to, at least one of a three-axis acceleration sensor, a three-axis gyroscope, and a three-axis magnetometer. Correspondingly, if the inertial measurement unit includes only one kind of sensor, the sensing data of the wearable device may be 3-axis sensing data; if it includes two kinds of sensors, the sensing data may be 6-axis sensing data; and if it includes three kinds of sensors, the sensing data may be 9-axis sensing data; this embodiment does not particularly limit this.
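The 3-, 6-, and 9-axis cases above follow directly from counting the sensor kinds present, since each three-axis sensor contributes three axes of data. As a trivial illustration (the function and kind names are assumptions):

```python
def axis_count(sensor_kinds):
    """Each kind of three-axis sensor (accelerometer, gyroscope,
    magnetometer) contributes three axes of sensing data."""
    return 3 * len(set(sensor_kinds))

# One kind -> 3-axis data, two kinds -> 6-axis, three kinds -> 9-axis.
nine_axis = axis_count(["accel", "gyro", "mag"])
```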
It should be noted that in the above two concrete implementations, because the sensor device may be affected by the surrounding environment and by its displacement state, the sensing data produced by the sensor device needs adjustment processing, for example, calibration processing and filtering processing, before it participates in the generation of the spatial quaternion parameter and other calculations. In this way, the reliability of the sensing data can be guaranteed.
For example, for the acceleration sensor and the gyroscope, the wearable device may first be placed horizontally, multiple samples may then be taken continuously, and their average value may be computed and used as an offset; subsequently collected sensing data then has this offset subtracted to obtain the actual value. The magnetometer requires horizontal calibration and tilt compensation. Horizontal calibration may use the figure-eight calibration method; after horizontal calibration, tilt compensation is also needed: the tilt angle of the wearable device is obtained from the calibrated acceleration sensor, and compensation is then performed using that tilt angle.
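The offset calibration just described can be sketched as follows; this is a minimal illustration under stated assumptions (function names, tuple vectors, and the example readings are invented for illustration, and for an accelerometer the z offset here absorbs gravity so that only motion-induced acceleration remains):

```python
def calibrate_offset(samples):
    """Estimate a per-axis zero offset by averaging samples taken
    while the device lies flat and still."""
    n = len(samples)
    return tuple(sum(axis) / n for axis in zip(*samples))

def apply_offset(raw, offset):
    """Subtract the stored offset from a raw reading to recover
    the actual value, as described above."""
    return tuple(r - o for r, o in zip(raw, offset))

# Two still readings of a hypothetical accelerometer in m/s^2; at
# rest, x and y should read near zero and z near gravity.
offset = calibrate_offset([(0.02, -0.01, 9.80), (0.04, 0.01, 9.82)])
corrected = apply_offset((0.05, 0.02, 9.83), offset)
```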
As another example, the gyroscope's detection of rotation angle is instantaneous and very precise, but because the measurement reference of the gyroscope is itself, with no reference object outside the system, the accumulated error of the integrated angle grows rapidly over time, so the gyroscope can only operate over relatively short time scales. The acceleration sensor has a reference object outside the system, the "gravity axis": in the absence of external acceleration it can accurately output the forward/backward and left/right tilt angles without accumulated error, but its output becomes inaccurate when the device undergoes variable motion in three-dimensional space. Because gesture movement involves both linear motion and rotary motion, the gyroscope and the acceleration sensor must be used in combination; and to determine the actual physical heading of the wearable device during motion, the magnetometer is also needed for orientation. Therefore, to obtain more accurate calculation results, such as the spatial quaternion parameter, the three kinds of sensing data need to be combined and filtered: according to the error between the estimated data and the measured data, integral feedback and proportional feedback are used to adjust the obtained spatial quaternion parameter.
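The proportional-plus-integral feedback idea can be illustrated with a deliberately simplified scalar sketch; a full quaternion-based fusion filter (for example, a Mahony-style filter) is more involved, and every name and gain below is an assumption made for illustration:

```python
def pi_fuse(gyro_rates, accel_angles, dt, kp=1.0, ki=0.1):
    """Scalar sketch of proportional + integral feedback fusion:
    the integral of the gyro rate is continuously corrected by the
    error between the estimate and the accelerometer-derived angle."""
    angle = 0.0
    integral = 0.0
    for w, a in zip(gyro_rates, accel_angles):
        error = a - angle              # estimated vs. measured data
        integral += ki * error * dt    # integral feedback term
        angle += (w + kp * error + integral) * dt
    return angle

# With a silent gyro and a steady accelerometer angle of 1.0 rad,
# the estimate is gradually pulled toward the measurement.
fused = pi_fuse([0.0] * 1000, [1.0] * 1000, dt=0.1)
```

The proportional term corrects the estimate immediately, while the integral term drives the steady-state error toward zero, mirroring the adjustment of the quaternion parameter described above.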
Optionally, in a possible implementation of this embodiment, in 101, the sensor device may specifically be used to obtain the position information of M sampling points of the wearable device, M being an integer greater than or equal to 1, and the motion trajectory may then be obtained according to the position information of the M sampling points.
In a concrete implementation, the sensor device may specifically be used to obtain the spatial quaternion parameter of the wearable device at the m-th sampling point, m being an integer greater than or equal to 2 and less than or equal to M, and the acceleration in a specified direction may then be obtained according to the spatial quaternion parameter. The position information of the m-th sampling point may then be obtained according to the position information of the (m-1)-th sampling point, the sampling frequency, and the acceleration in the specified direction.
For example, the spatial quaternion parameter obtained for the wearable device at the m-th sampling point may be expressed as Q(q0, q1, q2, q3) = q0 + q1·i + q2·j + q3·k. From the spatial quaternion parameter, the coordinate transformation matrix for the rotation from the body coordinate system b (i.e., the coordinate system of the wearable device) to the navigation coordinate system R (i.e., the geographic coordinate system) can then be obtained. Because the coordinate system remains a rectangular coordinate system throughout the rotation from the R system to the b system, this rotation matrix is an orthogonal matrix; in the standard direction-cosine form of a unit quaternion it is

    | q0^2+q1^2-q2^2-q3^2   2(q1q2-q0q3)          2(q1q3+q0q2)        |
    | 2(q1q2+q0q3)          q0^2-q1^2+q2^2-q3^2   2(q2q3-q0q1)        |
    | 2(q1q3-q0q2)          2(q2q3+q0q1)          q0^2-q1^2-q2^2+q3^2 |

The obtained acceleration sensing data and the rotation matrix can then be used to obtain the acceleration in the specified direction. The specified direction may be the direction of the three coordinate axes of the body coordinate system b, which may generally be defined as a right-handed system composed of the right, forward, and up directions of the wearable device: the rightward axis may be denoted as the x-axis of the terminal, the forward axis as the y-axis, and the upward axis as the z-axis.
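The standard direction-cosine matrix of a unit quaternion can be computed and applied as in the sketch below; the helper names are assumptions, and the example simply checks the familiar fact that a 90-degree rotation about z maps the x axis onto the y axis:

```python
import math

def quat_to_matrix(q0, q1, q2, q3):
    """Direction-cosine matrix of the unit quaternion
    Q = q0 + q1*i + q2*j + q3*k, in the standard form."""
    return [
        [q0*q0 + q1*q1 - q2*q2 - q3*q3, 2*(q1*q2 - q0*q3), 2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3), q0*q0 - q1*q1 + q2*q2 - q3*q3, 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2), 2*(q2*q3 + q0*q1), q0*q0 - q1*q1 - q2*q2 + q3*q3],
    ]

def rotate(matrix, vec):
    """Apply the rotation matrix to a 3-vector."""
    return tuple(sum(m * v for m, v in zip(row, vec)) for row in matrix)

# A 90-degree rotation about the z axis: Q = (cos 45deg, 0, 0, sin 45deg).
c = math.cos(math.pi / 4.0)
R = quat_to_matrix(c, 0.0, 0.0, c)
rotated = rotate(R, (1.0, 0.0, 0.0))
```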
Then, according to the sampling frequency and the acceleration in the specified direction, the velocity in the specified direction can be obtained; from the velocity in the specified direction, the displacement in the specified direction can be obtained; and from the position information of the (m-1)-th sampling point and the obtained displacement in the specified direction, the position information of the m-th sampling point can be obtained. The position information of the (m-1)-th sampling point refers to the position of the sampling point preceding the current sampling point.
The position of each sampling point can be described by position information along the three coordinate axes of the body coordinate system b. The position information of the i-th sampling point can be expressed as (xi, yi, zi), i being an integer greater than or equal to 1 and less than or equal to M. The position information of the initial sampling point can be set to (0, 0, 0).
To reduce drift, high-pass filtering may further be applied to the obtained velocity and/or displacement in the specified direction. In this way, by continuously sampling and tracking the position information of each sampling point, the motion trajectory of the wearable device can be generated.
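A minimal sketch of the drift-reducing high-pass step, assuming a simple first-order filter (the filter order, coefficient, and names are illustrative assumptions, not choices specified by the patent):

```python
def high_pass(samples, alpha=0.9):
    """First-order high-pass filter: passes changes and suppresses
    the slowly drifting, near-constant component that integration
    errors produce in a velocity or displacement stream."""
    out = []
    prev_in = prev_out = 0.0
    for x in samples:
        y = alpha * (prev_out + x - prev_in)
        out.append(y)
        prev_in, prev_out = x, y
    return out

# A constant (pure drift) input decays toward zero at the output,
# while genuine motion, which changes sample to sample, passes through.
drift_response = high_pass([1.0] * 50)
```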
Optionally, in a possible implementation of this embodiment, in 102, the motion trajectory of the wearable device may specifically be adjusted according to the position where the wearable device is worn, to obtain the motion trajectory of the hand.
In a concrete implementation, if the position where the wearable device is worn is the hand or wrist of the human body, the motion trajectory of the wearable device can be used directly as the motion trajectory of the hand.
In another concrete implementation, if the position where the wearable device is worn is somewhere on the arm of the human body, an adjustment coefficient can be obtained according to association data between that position and the hand, for example, the line length along the arm or the angle between the line of the arm and a specified direction, and the adjustment coefficient can then be used to adjust the motion trajectory of the wearable device to obtain the motion trajectory of the hand.
Optionally, in a possible implementation of this embodiment, in 103, the content of the specified language may specifically be obtained according to the motion trajectory by using a learning model.
In a concrete implementation, the specified language may be a machine language. A so-called machine language refers to data that a machine, such as a computer, can interpret directly; the content of the machine language may then be various operation instructions based on the machine language.
In another concrete implementation, the specified language may be a natural language. A so-called natural language refers to a language used for communication between people, for example, Chinese, English, German, and other languages; the content of the natural language may then be one or more words or sentences of the natural language of one specified language or of multiple languages. A so-called word can be understood as all of the words, the words in a particular range, or the fixed phrases, etc., of a language; different languages may have different vocabularies, for example, Chinese vocabulary, English vocabulary, etc. A so-called sentence is the basic unit of linguistic expression; it is composed of words and phrases and can express a complete meaning.
It can be understood that the specified language may be one language, for example, a machine language or the natural language of any one language, or may be two or more languages; this embodiment does not particularly limit this.
Specifically, the feature data in a specified plane may be obtained according to the motion trajectory, and the content of the specified language may then be obtained according to the feature data in the specified plane by using the learning model.
For example, the specified plane may be a plane composed of any two of the three coordinate axes of the body coordinate system b, and may be one plane or multiple planes; this embodiment does not particularly limit this. The motion trajectory may then be projected onto one or more planes composed of any two of the three coordinate axes of the body coordinate system b to obtain the feature data in those planes. The obtained feature data can then be input into a learning model constructed in advance to obtain the content of the specified language.
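The projection step above can be sketched as follows; the function name, the plane labels, and concatenating projections into one feature sequence are illustrative assumptions:

```python
def project(points, plane="xy"):
    """Project a 3-D trajectory onto one coordinate plane of the
    body coordinate system b; the resulting 2-D point sequence
    serves as feature data for the learning model."""
    axes = {"xy": (0, 1), "xz": (0, 2), "yz": (1, 2)}[plane]
    return [(p[axes[0]], p[axes[1]]) for p in points]

# The same trajectory can be projected onto several planes and the
# projections combined into one feature sequence.
trajectory = [(0.0, 0.0, 0.0), (1.0, 2.0, 3.0)]
features = project(trajectory, "xy") + project(trajectory, "yz")
```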
It should be noted that the feature data input into the learning model may be obtained using one wearable device or using multiple wearable devices; this embodiment does not particularly limit this.
Specifically, a pre-specified training sample set may be used for training to construct the learning model for recognizing the content of the specified language. The training samples in the training sample set may all be known samples that have been labeled, in which case training can be performed directly on these known samples to construct the learning model. Alternatively, one part may be labeled known samples and another part unlabeled unknown samples; in that case, training may first be performed on the known samples to construct an initial learning model, the initial learning model may then be used to evaluate the unknown samples to obtain recognition results, and the unknown samples may then be labeled according to those recognition results to form new known samples. Training is then performed again on the newly added known samples together with the original known samples to construct a new learning model, until the constructed learning model or the known samples satisfy a cutoff condition, for example, the recognition accuracy is greater than or equal to a preset accuracy threshold, or the number of known samples is greater than or equal to a preset quantity threshold; this embodiment does not particularly limit this.
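The bootstrapping procedure just described can be sketched with a toy one-dimensional nearest-neighbour "model" standing in for the real learning model; everything below (the distance-based confidence test, the stopping rule, the sample values) is an illustrative assumption:

```python
def nearest(known, x):
    """Nearest labelled sample to x (toy 1-D stand-in for a model)."""
    return min(known, key=lambda s: abs(s[0] - x))

def self_train(known, unknown, max_dist=2.0):
    """Label the unknown samples whose recognition is confident
    enough (here: close enough to a labelled sample), promote them
    to known samples, retrain, and repeat until none qualify."""
    known = list(known)
    pending = list(unknown)
    while True:
        confident = [x for x in pending
                     if abs(nearest(known, x)[0] - x) <= max_dist]
        if not confident:
            return known, pending
        newly_labelled = [(x, nearest(known, x)[1]) for x in confident]
        known.extend(newly_labelled)
        pending = [x for x in pending if x not in confident]

# Two labelled gestures and three unlabelled ones (as 1-D feature
# values); 3.0 only becomes labelable after 1.5 has been promoted.
known, leftover = self_train([(0.0, "left"), (10.0, "right")],
                             [1.5, 3.0, 8.5])
```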
Optionally, in a possible implementation of this embodiment, in 104, the speech signal corresponding to the content of the specified language may specifically be output. Specifically, the content of the specified language may be converted into a speech signal, and the speech signal may be played. In this way, an ordinary person can communicate directly with a deaf-mute person without mastering the sign language used among deaf-mute people, which can effectively improve communication efficiency.
Optionally, in a possible implementation of this embodiment, in 104, the operation instruction corresponding to the content of the specified language may specifically be output. Specifically, the content of the specified language may be encapsulated as an operation instruction of a specified format, and the operation instruction may be sent, so that a receiving device, such as a smart home device, executes an operation according to the operation instruction. In this way, the gesture made by the user can be used to control the receiving device to execute the corresponding operation, which can effectively improve the control efficiency of the receiving device.
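A minimal sketch of encapsulating recognized content as an operation instruction; the JSON shape and all field names are assumptions for illustration, not a format defined by the patent:

```python
import json

def to_instruction(content, target="smart-home-device"):
    """Wrap recognized content in a hypothetical JSON operation
    instruction that a receiving device could parse and execute."""
    return json.dumps({"target": target, "action": content, "version": 1})

instruction = to_instruction("turn_on_light")
# The receiving end would parse the instruction and dispatch the action.
parsed = json.loads(instruction)
```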
In this embodiment, a sensor device is used to obtain the motion trajectory of a wearable device worn on the hand or arm of a user; the motion trajectory of the hand is then obtained according to the motion trajectory of the wearable device, and the content of a specified language is determined according to the motion trajectory of the hand, so that the content of the specified language can be output. Because no image collection device is needed to collect gesture images of the user, the prior-art problem that gesture-image recognition cannot be performed because the collected gesture images are unclear, or the gesture images of the user cannot be collected at all, can be avoided, thereby improving the reliability of gesture recognition.
In addition, with the technical solution provided by the present invention, because no complicated image recognition process is involved, the real-time performance of gesture recognition can be effectively improved.
In addition, with the technical solution provided by the present invention, because no image collection device is needed to collect gesture images of the user, the lighting conditions of the environment need not be considered. The prior-art problem that the terminal must perform additional processing operations because it needs to supplement light for the environment can therefore be avoided, which can effectively reduce the processing load of the terminal while improving the performance of the terminal.
In addition, with the technical solution provided by the present invention, the content of a specified language can be output according to the obtained motion trajectory of the wearable device worn on the hand or arm of the user, which can effectively improve the utilization of the wearable device and thereby improve user experience.
It should be noted that, for brevity, the foregoing method embodiments are described as a series of action combinations. However, a person skilled in the art should understand that the present invention is not limited by the described action sequence, because according to the present invention, some steps may be performed in other sequences or simultaneously. Moreover, a person skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present invention.
In the foregoing embodiments, the description of each embodiment has its own emphasis. For a part that is not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
Fig. 2 is a schematic structural diagram of a gesture recognition apparatus provided by another embodiment of the present invention, as shown in Fig. 2. The gesture recognition apparatus of this embodiment may include an acquiring unit 21, a processing unit 22, a determination unit 23 and an output unit 24. The acquiring unit 21 is configured to obtain, by using a sensor device, the motion trajectory of a wearable device worn on the hand or arm of a user; the sensor device is arranged on the wearable device. The processing unit 22 is configured to obtain the motion trajectory of the hand according to the motion trajectory of the wearable device. The determination unit 23 is configured to determine the content of a specified language according to the motion trajectory of the hand. The output unit 24 is configured to output the content of the specified language.
It should be noted that part or all of the gesture recognition apparatus provided by this embodiment may be an application located at a local terminal, or a functional unit such as a plug-in or a Software Development Kit (SDK) arranged in an application of the local terminal, or a processing engine in a network-side server, or a distributed system located at the network side; this embodiment places no particular limitation on this.
It can be understood that the application may be a native program (nativeApp) installed in the terminal, or may be a web page program (webApp) of a browser in the terminal; this embodiment places no limitation on this.
Optionally, in a possible implementation of this embodiment, the acquiring unit 21 may specifically be configured to obtain, by using the sensor device, the location information of M sampling point positions of the wearable device, where M is an integer greater than or equal to 1, and to obtain the motion trajectory according to the location information of the M sampling point positions.
Specifically, the acquiring unit 21 may be configured to obtain, by using the sensor device, a space quaternion parameter of the wearable device at the m-th sampling point position, where m is an integer greater than or equal to 2 and less than or equal to M; to obtain the acceleration in a specified direction according to the space quaternion parameter; and to obtain the location information of the m-th sampling point position according to the location information of the (m-1)-th sampling point position, the sampling frequency, and the acceleration in the specified direction.
Optionally, in a possible implementation of this embodiment, the sensor device used by the acquiring unit 21 may include, but is not limited to, at least one of a three-axis acceleration sensor, a three-axis gyroscope, and a three-axis magnetometric sensor.
Optionally, in a possible implementation of this embodiment, the determination unit 23 may specifically be configured to obtain the content of the specified language by using a learning model according to the motion trajectory.
Specifically, the determination unit 23 may be configured to obtain feature data in a given plane according to the motion trajectory, and to obtain the content of the specified language by using the learning model according to the feature data in the given plane.
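The determination step, projecting the 3-D trajectory onto a given plane, extracting feature data, and classifying with the learning model, can be sketched as follows. The xy-plane projection, direction-histogram features, and nearest-neighbour "model" are illustrative choices of mine; the patent does not specify the plane, the features, or the model type:

```python
import math

def plane_features(trajectory, bins=8):
    """Project a 3-D trajectory onto the xy-plane and summarize it as a
    normalized histogram of segment directions (a simple shape feature)."""
    hist = [0.0] * bins
    for (x0, y0, _), (x1, y1, _) in zip(trajectory, trajectory[1:]):
        angle = math.atan2(y1 - y0, x1 - x0) % (2 * math.pi)
        hist[int(angle / (2 * math.pi) * bins) % bins] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def classify(features, model):
    """Nearest-neighbour lookup standing in for the trained learning model;
    `model` maps a language-content label to a reference feature vector."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(model, key=lambda label: dist(features, model[label]))
```

A trained classifier (SVM, neural network, etc.) built from the known samples would replace the nearest-neighbour lookup in a real system; only the project-extract-classify pipeline is illustrated.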
Optionally, in a possible implementation of this embodiment, the output unit 24 may specifically be configured to output a voice signal corresponding to the content of the specified language, or to output an operation instruction corresponding to the content of the specified language.
It should be noted that the method in the embodiment corresponding to Fig. 1 may be implemented by the gesture recognition apparatus provided by this embodiment. For a detailed description, reference may be made to the related content in the embodiment corresponding to Fig. 1, and details are not described herein again.
In this embodiment, the acquiring unit obtains, by using a sensor device, the motion trajectory of a wearable device worn on the hand or arm of a user; the processing unit then obtains the motion trajectory of the hand according to the motion trajectory of the wearable device; and the determination unit determines the content of a specified language according to the motion trajectory of the hand, so that the output unit can output the content of the specified language. Because no image collection device is needed to collect gesture images of the user, the prior-art problem that gesture-image recognition cannot be performed because the collected gesture images are unclear, or the gesture images of the user cannot be collected at all, can be avoided, thereby improving the reliability of gesture recognition.
In addition, with the technical solution provided by the present invention, because no complicated image recognition process is involved, the real-time performance of gesture recognition can be effectively improved.
In addition, with the technical solution provided by the present invention, because no image collection device is needed to collect gesture images of the user, the lighting conditions of the environment need not be considered. The prior-art problem that the terminal must perform additional processing operations because it needs to supplement light for the environment can therefore be avoided, which can effectively reduce the processing load of the terminal while improving the performance of the terminal.
In addition, with the technical solution provided by the present invention, the content of a specified language can be output according to the obtained motion trajectory of the wearable device worn on the hand or arm of the user, which can effectively improve the utilization of the wearable device and thereby improve user experience.
A person skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the system, apparatus and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described herein.
In the several embodiments provided by the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely exemplary. The division of the units is only a logical function division; in actual implementation there may be another division manner. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the displayed or discussed mutual coupling, direct coupling or communication connection may be implemented through some interfaces; the indirect coupling or communication connection between apparatuses or units may be electrical, mechanical or of other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; they may be located in one place, or may be distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device or the like) or a processor to execute some of the steps of the methods of the embodiments of the present invention. The foregoing storage medium includes various media that can store program code, such as a USB flash disk, a mobile hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk or an optical disk.
Finally, it should be noted that the above embodiments are merely intended to describe the technical solutions of the present invention, rather than to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, a person skilled in the art should understand that modifications may still be made to the technical solutions described in the foregoing embodiments, or equivalent replacements may be made to some of the technical features; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (10)
1. A gesture recognition method, characterized by comprising:
obtaining, by using a sensor device, a motion trajectory of a wearable device worn on a hand or an arm of a user, wherein the sensor device is arranged on the wearable device;
obtaining a motion trajectory of the hand according to the motion trajectory of the wearable device;
determining content of a specified language according to the motion trajectory of the hand; and
outputting the content of the specified language;
wherein the obtaining, by using a sensor device, a motion trajectory of a wearable device worn on a hand or an arm of a user comprises:
adjusting sensing data generated by the sensor device, and obtaining the motion trajectory of the wearable device worn on the hand or the arm of the user;
and the determining content of a specified language according to the motion trajectory of the hand comprises:
obtaining feature data in a given plane according to the motion trajectory; and
obtaining the content of the specified language by using a learning model according to the feature data in the given plane.
2. The method according to claim 1, characterized in that the obtaining, by using a sensor device, a motion trajectory of a wearable device worn on a hand or an arm of a user comprises:
obtaining, by using the sensor device, location information of M sampling point positions of the wearable device, wherein M is an integer greater than or equal to 1; and
obtaining the motion trajectory according to the location information of the M sampling point positions.
3. The method according to claim 2, characterized in that the obtaining, by using the sensor device, location information of M sampling point positions of the wearable device comprises:
obtaining, by using the sensor device, a space quaternion parameter of the wearable device at an m-th sampling point position, wherein m is an integer greater than or equal to 2 and less than or equal to M;
obtaining an acceleration in a specified direction according to the space quaternion parameter; and
obtaining location information of the m-th sampling point position according to location information of an (m-1)-th sampling point position, a sampling frequency, and the acceleration in the specified direction.
4. The method according to claim 1, characterized in that the sensor device comprises at least one of a three-axis acceleration sensor, a three-axis gyroscope, and a three-axis magnetometric sensor.
5. The method according to any one of claims 1 to 4, characterized in that the outputting the content of the specified language comprises:
outputting a voice signal corresponding to the content of the specified language; or
outputting an operation instruction corresponding to the content of the specified language.
6. A gesture recognition apparatus, characterized by comprising:
an acquiring unit, configured to obtain, by using a sensor device, a motion trajectory of a wearable device worn on a hand or an arm of a user, wherein the sensor device is arranged on the wearable device;
a processing unit, configured to obtain a motion trajectory of the hand according to the motion trajectory of the wearable device;
a determination unit, configured to determine content of a specified language according to the motion trajectory of the hand; and
an output unit, configured to output the content of the specified language;
wherein the acquiring unit is specifically configured to:
adjust sensing data generated by the sensor device, and obtain the motion trajectory of the wearable device worn on the hand or the arm of the user;
and the determination unit is specifically configured to:
obtain feature data in a given plane according to the motion trajectory; and
obtain the content of the specified language by using a learning model according to the feature data in the given plane.
7. The apparatus according to claim 6, characterized in that the acquiring unit is specifically configured to: obtain, by using the sensor device, location information of M sampling point positions of the wearable device, wherein M is an integer greater than or equal to 1; and
obtain the motion trajectory according to the location information of the M sampling point positions.
8. The apparatus according to claim 7, characterized in that the acquiring unit is specifically configured to: obtain, by using the sensor device, a space quaternion parameter of the wearable device at an m-th sampling point position, wherein m is an integer greater than or equal to 2 and less than or equal to M;
obtain an acceleration in a specified direction according to the space quaternion parameter; and
obtain location information of the m-th sampling point position according to location information of an (m-1)-th sampling point position, a sampling frequency, and the acceleration in the specified direction.
9. The apparatus according to claim 6, characterized in that the sensor device comprises at least one of a three-axis acceleration sensor, a three-axis gyroscope, and a three-axis magnetometric sensor.
10. The apparatus according to any one of claims 6 to 9, characterized in that the output unit is specifically configured to:
output a voice signal corresponding to the content of the specified language; or
output an operation instruction corresponding to the content of the specified language.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510512311.0A CN105068657B (en) | 2015-08-19 | 2015-08-19 | The recognition methods of gesture and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510512311.0A CN105068657B (en) | 2015-08-19 | 2015-08-19 | The recognition methods of gesture and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105068657A CN105068657A (en) | 2015-11-18 |
CN105068657B true CN105068657B (en) | 2019-01-15 |
Family
ID=54498043
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510512311.0A Active CN105068657B (en) | 2015-08-19 | 2015-08-19 | The recognition methods of gesture and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105068657B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105666497A (en) * | 2016-04-21 | 2016-06-15 | 奇弩(北京)科技有限公司 | Universal mechanical arm with gesture learning function |
CN106569621A (en) * | 2016-10-31 | 2017-04-19 | 捷开通讯(深圳)有限公司 | Method for interacting wearable device with terminal, wearable device and terminal |
CN107016347A (en) * | 2017-03-09 | 2017-08-04 | 腾讯科技(深圳)有限公司 | A kind of body-sensing action identification method, device and system |
CN110136706A (en) * | 2019-04-11 | 2019-08-16 | 北京宙心科技有限公司 | A kind of wearable smart machine and its working method |
CN110490059A (en) * | 2019-07-10 | 2019-11-22 | 广州幻境科技有限公司 | A kind of gesture identification method, system and the device of wearable intelligent ring |
CN112947771B (en) * | 2021-01-11 | 2022-11-25 | 上海龙旗科技股份有限公司 | Method, device and equipment for realizing space trajectory input |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101539994A (en) * | 2009-04-16 | 2009-09-23 | 西安交通大学 | Mutually translating system and method of sign language and speech |
CN103116576A (en) * | 2013-01-29 | 2013-05-22 | 安徽安泰新型包装材料有限公司 | Voice and gesture interactive translation device and control method thereof |
CN103578329A (en) * | 2013-10-25 | 2014-02-12 | 西安理工大学 | Intelligent sign language interpretation device and usage method thereof |
CN103760967A (en) * | 2013-09-29 | 2014-04-30 | 中山大学 | Finger curvature avatar control sensor |
CN104049753A (en) * | 2014-06-09 | 2014-09-17 | 百度在线网络技术(北京)有限公司 | Method and device for realizing mutual conversion between sign language information and text information |
CN104317403A (en) * | 2014-10-27 | 2015-01-28 | 黄哲军 | Wearable equipment for sign language recognition |
CN104410883A (en) * | 2014-11-29 | 2015-03-11 | 华南理工大学 | Mobile wearable non-contact interaction system and method |
- 2015-08-19 CN CN201510512311.0A patent/CN105068657B/en active Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101539994A (en) * | 2009-04-16 | 2009-09-23 | 西安交通大学 | Mutually translating system and method of sign language and speech |
CN103116576A (en) * | 2013-01-29 | 2013-05-22 | 安徽安泰新型包装材料有限公司 | Voice and gesture interactive translation device and control method thereof |
CN103760967A (en) * | 2013-09-29 | 2014-04-30 | 中山大学 | Finger curvature avatar control sensor |
CN103578329A (en) * | 2013-10-25 | 2014-02-12 | 西安理工大学 | Intelligent sign language interpretation device and usage method thereof |
CN104049753A (en) * | 2014-06-09 | 2014-09-17 | 百度在线网络技术(北京)有限公司 | Method and device for realizing mutual conversion between sign language information and text information |
CN104317403A (en) * | 2014-10-27 | 2015-01-28 | 黄哲军 | Wearable equipment for sign language recognition |
CN104410883A (en) * | 2014-11-29 | 2015-03-11 | 华南理工大学 | Mobile wearable non-contact interaction system and method |
Also Published As
Publication number | Publication date |
---|---|
CN105068657A (en) | 2015-11-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105068657B (en) | The recognition methods of gesture and device | |
US10671842B2 (en) | Methods of determining handedness for virtual controllers | |
Wang et al. | Hear sign language: A real-time end-to-end sign language recognition system | |
CN105359054B (en) | Equipment is positioned and is orientated in space | |
AU2020273327A1 (en) | Systems and methods of swimming analysis | |
CN106648088B (en) | Motion Capture posture transient calibration method and its system | |
CN104536558B (en) | A kind of method of intelligence finger ring and control smart machine | |
CN110457414A (en) | Offline map processing, virtual objects display methods, device, medium and equipment | |
US20160077166A1 (en) | Systems and methods for orientation prediction | |
CN107368820B (en) | Refined gesture recognition method, device and equipment | |
CN210402266U (en) | Sign language translation system and sign language translation gloves | |
CN106970705A (en) | Motion capture method, device and electronic equipment | |
CN112633059B (en) | Fall remote monitoring system based on LabVIEW and MATLAB | |
CN109846487A (en) | Thigh measuring method for athletic posture and device based on MIMU/sEMG fusion | |
CN117523659A (en) | Skeleton-based multi-feature multi-stream real-time action recognition method, device and medium | |
CN105105757A (en) | Wearable human motion gesture track recording and assessment device | |
CN109871116A (en) | Device and method for identifying a gesture | |
Tsekleves et al. | Wii your health: a low-cost wireless system for home rehabilitation after stroke using Wii remotes with its expansions and blender | |
CN115904086A (en) | Sign language identification method based on wearable calculation | |
CN110236560A (en) | Six axis attitude detecting methods of intelligent wearable device, system | |
CN107301415A (en) | Gesture acquisition system | |
CN113873637A (en) | Positioning method, positioning device, terminal and storage medium | |
TW201830198A (en) | Sign language recognition method and system for converting user's sign language and gestures into sensed finger bending angle, hand posture and acceleration through data capturing gloves | |
CN207704451U (en) | Gesture acquisition system | |
Xia et al. | Real-time recognition of human daily motion with smartphone sensor |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |