CN107704190A - Gesture identification method, device, terminal and storage medium - Google Patents
- Publication number
- CN107704190A (application CN201711076834.0A)
- Authority
- CN
- China
- Prior art keywords
- gesture
- probability
- touch signal
- logistic regression
- current
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 47
- 230000001960 triggered effect Effects 0.000 claims abstract description 20
- 238000012549 training Methods 0.000 claims description 30
- 230000006870 function Effects 0.000 claims description 25
- 238000013139 quantization Methods 0.000 claims description 17
- 238000011478 gradient descent method Methods 0.000 claims description 10
- 238000007477 logistic regression Methods 0.000 claims description 7
- 230000008859 change Effects 0.000 claims description 5
- 238000004590 computer program Methods 0.000 claims description 4
- 230000005055 memory storage Effects 0.000 claims 1
- 238000005516 engineering process Methods 0.000 abstract description 9
- 230000008569 process Effects 0.000 abstract description 9
- 230000004069 differentiation Effects 0.000 abstract description 5
- 230000004044 response Effects 0.000 abstract description 5
- 238000012545 processing Methods 0.000 description 11
- 238000004891 communication Methods 0.000 description 8
- 238000010586 diagram Methods 0.000 description 5
- 230000003287 optical effect Effects 0.000 description 4
- 230000005236 sound signal Effects 0.000 description 4
- 230000000712 assembly Effects 0.000 description 3
- 238000000429 assembly Methods 0.000 description 3
- 230000001133 acceleration Effects 0.000 description 2
- 230000003993 interaction Effects 0.000 description 2
- 238000007726 management method Methods 0.000 description 2
- 230000002093 peripheral effect Effects 0.000 description 2
- 230000003068 static effect Effects 0.000 description 2
- 230000009471 action Effects 0.000 description 1
- 230000006399 behavior Effects 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000008901 benefit Effects 0.000 description 1
- 238000013500 data storage Methods 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 238000001514 detection method Methods 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 230000002452 interceptive effect Effects 0.000 description 1
- 239000004973 liquid crystal related substance Substances 0.000 description 1
- 239000012092 media component Substances 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/28—Recognition of hand or arm movements, e.g. recognition of deaf sign language
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Social Psychology (AREA)
- Psychiatry (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
This application discloses a gesture recognition method, device, terminal and storage medium, belonging to the field of terminal technology. The method includes: when a slide operation triggered by multiple touch points is detected in a designated area of the screen, acquiring a touch signal according to the sliding trace; based on the touch signal, determining a first gesture probability and a second gesture probability through a specified logistic regression model, where the first gesture probability is the probability that the current gesture is a lift gesture and the second gesture probability is the probability that the current gesture is a swipe-up gesture; and recognizing the current gesture based on the first gesture probability and the second gesture probability. That is, by acquiring the touch signals of multiple touch points during the slide and feeding them to the specified logistic regression model, the application distinguishes and recognizes swipe-up gestures and lift gestures, ensuring that the terminal responds correctly to the user's gesture operation.
Description
Technical field
The present application relates to the field of terminal technology, and in particular to a gesture recognition method, device, terminal and storage medium.
Background technology
At present, with the rapid development of terminal technology, the modes of interaction between users and terminals have become increasingly diverse. For example, a user can operate a terminal using various gestures, which are broadly divided into static gestures and dynamic gestures. Static gestures include, but are not limited to, tap gestures; dynamic gestures include, but are not limited to, swipe-up gestures and lift gestures. A swipe-up gesture is a gesture in which any part of the hand slides from the bottom of the screen toward the top, while a lift gesture usually refers to a gesture in which the edge region of the back of the hand slides along the diagonal of the screen starting from one corner.
Summary
The embodiments of the present application provide a gesture recognition method, device, terminal and storage medium that can be used to distinguish and recognize the two kinds of gestures, swipe-up and lift. The technical solution is as follows:
In a first aspect, a gesture recognition method is provided, the method comprising:
when a slide operation triggered by multiple touch points is detected in a designated area of the screen, acquiring a touch signal according to the sliding trace;
based on the touch signal, determining a first gesture probability and a second gesture probability through a specified logistic regression model, where the first gesture probability is the probability that the current gesture is a lift gesture and the second gesture probability is the probability that the current gesture is a swipe-up gesture;
recognizing the current gesture based on the first gesture probability and the second gesture probability.
Optionally, before determining the first gesture probability and the second gesture probability through the specified logistic regression model based on the touch signal, the method further comprises:
collecting the touch signals of different gestures multiple times to obtain multiple sample data, the different gestures including the swipe-up gesture and the lift gesture;
training a preset training model based on the multiple sample data to obtain the specified logistic regression model.
Optionally, the preset training model includes a loss function model and an initialized logistic regression model;
training the preset training model based on the multiple sample data to obtain the specified logistic regression model comprises:
quantizing the multiple sample data to obtain a sample quantization vector;
inputting the sample quantization vector into the loss function model, and determining an estimated weight by finding the minimum of the loss function model using gradient descent;
inputting the estimated weight into the initialized logistic regression model to obtain the specified logistic regression model.
Optionally, recognizing the current gesture based on the first gesture probability and the second gesture probability comprises:
determining the larger of the first gesture probability and the second gesture probability;
recognizing the current gesture as the gesture corresponding to the larger gesture probability.
Optionally, the touch signal includes mean position information or position difference information, where the mean position information is the average of all position information on the sliding trace, and the position difference information is the position change between the start position and the end position of the sliding trace.
In a second aspect, a gesture recognition device is provided, the device comprising:
an obtaining module, configured to acquire a touch signal according to the sliding trace when a slide operation triggered by multiple touch points is detected in a designated area of the screen;
a determining module, configured to determine, based on the touch signal, a first gesture probability and a second gesture probability through a specified logistic regression model, where the first gesture probability is the probability that the current gesture is a lift gesture and the second gesture probability is the probability that the current gesture is a swipe-up gesture;
a recognition module, configured to recognize the current gesture based on the first gesture probability and the second gesture probability.
Optionally, the device further comprises:
a collection module, configured to collect the touch signals of different gestures multiple times to obtain multiple sample data, the different gestures including the swipe-up gesture and the lift gesture;
a training module, configured to train a preset training model based on the multiple sample data to obtain the specified logistic regression model.
Optionally, the preset training model includes a loss function model and an initialized logistic regression model; the training module is configured to:
quantize the multiple sample data to obtain a sample quantization vector;
input the sample quantization vector into the loss function model, and determine an estimated weight by finding the minimum of the loss function model using gradient descent;
input the estimated weight into the initialized logistic regression model to obtain the specified logistic regression model.
In a third aspect, a terminal is provided, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor is configured to perform the steps of the gesture recognition method described in any one of the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, the computer-readable storage medium storing instructions that, when run on a computer, cause the computer to perform the steps of the gesture recognition method described in any one of the first aspect.
The technical solutions provided by the embodiments of the present application bring the following beneficial effects: when a slide operation triggered by multiple touch points is detected in a designated area of the screen, it may be a slide operation triggered by the edge region of the back of the user's hand, so a touch signal is acquired according to the sliding trace. Based on the touch signal, the specified logistic regression model determines the probability that the current gesture is a lift gesture and the probability that it is a swipe-up gesture, i.e. the first gesture probability and the second gesture probability, after which the current gesture is recognized based on the two probabilities. That is, by acquiring the touch signals of multiple touch points during the slide and feeding them to the specified logistic regression model, the application distinguishes and recognizes swipe-up gestures and lift gestures, ensuring that the terminal responds correctly to the user's gesture operation.
Brief description of the drawings
To explain the technical solutions in the embodiments of the present application more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flowchart of a gesture recognition method according to an exemplary embodiment;
Fig. 2A is a flowchart of a gesture recognition method according to another exemplary embodiment;
Fig. 2B is a schematic diagram of a designated area involved in the embodiment of Fig. 2A;
Fig. 2C is a schematic diagram of multiple touch points involved in the embodiment of Fig. 2A;
Fig. 2D is a schematic diagram of a lift gesture involved in the embodiment of Fig. 2A;
Fig. 3A is a structural block diagram of a gesture recognition device according to an exemplary embodiment;
Fig. 3B is a structural block diagram of a gesture recognition device according to another exemplary embodiment;
Fig. 4 is a block diagram of a gesture recognition device 400 according to an exemplary embodiment.
Detailed description
To make the purpose, technical solutions and advantages of the present application clearer, the embodiments of the present application are described in further detail below with reference to the accompanying drawings.
Before the gesture recognition method provided by the embodiments of the present application is described in detail, the application scenarios and implementation environment involved are briefly introduced.
First, the application scenarios involved in the present application are briefly introduced.
At present, terminals support a wide range of gesture operations. For example, in a practical application scenario, when a user wants to slide upward to view displayed content, a swipe-up gesture can be used. As another example, when a user wants to view confidential information in a public place, a lift gesture can be used, so that the terminal, based on the lift gesture, displays the content to be shown at a corner of the screen. However, because the swipe-up gesture and the lift gesture are similar, the two gestures must be distinguished and recognized in order to respond correctly to the user's actual operation. Therefore, the embodiments of the present application provide a gesture recognition method that can distinguish and recognize both swipe-up and lift gestures; its implementation is described below with reference to Fig. 1 and the embodiment shown in Fig. 2A.
Next, the implementation environment involved in the present application is briefly introduced.
The gesture recognition method provided by the present application can be applied to a terminal equipped with a touch screen that supports various gesture operations, including the swipe-up gesture and the lift gesture. In a practical application scenario, the terminal may be a device such as a mobile phone or a computer, which is not limited by the embodiments of the present application.
Referring to Fig. 1, which is a flowchart of a gesture recognition method according to an exemplary embodiment, the gesture recognition method can be applied to a terminal, and the method may include the following steps:
Step 101: when a slide operation triggered by multiple touch points is detected in a designated area of the screen, acquire a touch signal according to the sliding trace.
Step 102: based on the touch signal, determine a first gesture probability and a second gesture probability through a specified logistic regression model, where the first gesture probability is the probability that the current gesture is a lift gesture and the second gesture probability is the probability that the current gesture is a swipe-up gesture.
Step 103: recognize the current gesture based on the first gesture probability and the second gesture probability.
In the embodiments of the present application, when a slide operation triggered by multiple touch points is detected in a designated area of the screen, it may be a slide operation triggered by the edge region of the back of the user's hand, so a touch signal is acquired according to the sliding trace. Based on the touch signal, the specified logistic regression model determines the probability that the current gesture is a lift gesture and the probability that it is a swipe-up gesture, i.e. the first gesture probability and the second gesture probability, after which the current gesture is recognized based on the two probabilities. That is, by acquiring the touch signals of multiple touch points during the slide and feeding them to the specified logistic regression model, the application distinguishes and recognizes swipe-up gestures and lift gestures, ensuring that the terminal responds correctly to the user's gesture operation.
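The three steps above can be sketched end to end. This is only a minimal illustration, not the patented implementation: the feature layout, the toy weight values, and the helper names (`extract_features`, `classify_gesture`) are all assumptions for the example.

```python
import math

def extract_features(trace):
    """Reduce a sliding trace (list of (x, y) touch coordinates) to a
    small feature vector: mean position plus start-to-end displacement."""
    n = len(trace)
    mean_x = sum(p[0] for p in trace) / n
    mean_y = sum(p[1] for p in trace) / n
    dx = trace[-1][0] - trace[0][0]
    dy = trace[-1][1] - trace[0][1]
    return [mean_x, mean_y, dx, dy]

def classify_gesture(trace, weights, bias):
    """Score the trace with a logistic regression model: probability near 1
    means lift gesture, near 0 means swipe-up gesture."""
    feats = extract_features(trace)
    z = sum(w * f for w, f in zip(weights, feats)) + bias
    p_lift = 1.0 / (1.0 + math.exp(-z))   # first gesture probability
    p_swipe_up = 1.0 - p_lift             # second gesture probability
    return "lift" if p_lift > p_swipe_up else "swipe-up"

# A diagonal trace from the bottom-left corner, with toy weights that treat
# horizontal displacement (dx) as evidence of a lift gesture.
diagonal = [(0, 800), (100, 700), (200, 600), (300, 500)]
vertical = [(50, 800), (50, 700), (50, 600), (50, 500)]
weights, bias = [0.0, 0.0, 0.05, 0.0], 0.0
print(classify_gesture(diagonal, weights, bias))  # lift
print(classify_gesture(vertical, weights, bias))  # swipe-up
```

In practice the weights would come from the training procedure of steps 201-202 rather than being hand-picked as here.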
Optionally, before determining the first gesture probability and the second gesture probability through the specified logistic regression model based on the touch signal, the method further comprises:
collecting the touch signals of different gestures multiple times to obtain multiple sample data, the different gestures including the swipe-up gesture and the lift gesture;
training a preset training model based on the multiple sample data to obtain the specified logistic regression model.
Optionally, the preset training model includes a loss function model and an initialized logistic regression model;
training the preset training model based on the multiple sample data to obtain the specified logistic regression model comprises:
quantizing the multiple sample data to obtain a sample quantization vector;
inputting the sample quantization vector into the loss function model, and determining an estimated weight by finding the minimum of the loss function model using gradient descent;
inputting the estimated weight into the initialized logistic regression model to obtain the specified logistic regression model.
Optionally, recognizing the current gesture based on the first gesture probability and the second gesture probability comprises:
determining the larger of the first gesture probability and the second gesture probability;
recognizing the current gesture as the gesture corresponding to the larger gesture probability.
Optionally, the touch signal includes mean position information or position difference information, where the mean position information is the average of all position information on the sliding trace, and the position difference information is the position change between the start position and the end position of the sliding trace.
All of the above optional technical solutions can be combined in any manner to form optional embodiments of the present application, which are not repeated here one by one.
Referring to Fig. 2A, which is a flowchart of a gesture recognition method according to another exemplary embodiment, the gesture recognition method can be applied to a terminal, and the method may include the following steps:
Step 201: collect the touch signals of different gestures multiple times to obtain multiple sample data, the different gestures including the swipe-up gesture and the lift gesture.
In order to distinguish and recognize both swipe-up and lift gestures, model training must first be performed on the touch signals triggered by the two kinds of gestures, so as to obtain a specified logistic regression model that can subsequently recognize the corresponding gesture from the touch signal it triggers. The model training process is described in steps 201 and 202.
That is, the touch signals of the two different gestures must first be collected multiple times to obtain multiple sample data, so that model training can subsequently be performed on them.
The touch signal includes mean position information or position difference information, where the mean position information is the average of all position information on the sliding trace, and the position difference information is the position change between the start position and the end position of the sliding trace.
For example, in one possible implementation, the touch signal may include mean position information. When the terminal collects touch signals for different gestures, it can obtain the position information on the sliding trace of each gesture; specifically, each piece of position information may be the touch coordinate of a touch point at a given moment while the gesture is in contact with the screen. That is, the terminal can sample touch coordinates over time, for example obtaining the touch coordinate of a touch point at every preset time interval. The terminal then determines the average of the acquired touch coordinates to obtain the mean position information, and takes the mean position information as a sample datum.
The preset time interval can be set by the user according to actual requirements or set by default by the terminal, which is not limited by the embodiments of the present application. For example, the preset time interval may be 1 millisecond.
It should be noted that the coordinate system may take the center of the terminal screen as its origin, or a corner of the screen as its origin, which is not limited by the embodiments of the present application.
As another example, in another possible implementation, the touch signal may include position difference information. That is, for each operation performed with a different gesture, the terminal can, based on the touch signal triggered by the gesture, collect the position information corresponding to the start position and the end position of the sliding trace, then determine the position change between the two, and take the determined position change as a sample datum.
Of course, it should be noted that the touch signal including mean position information or position difference information is only an example; in a practical application scenario, the touch signal may also include information such as touch intensity, which can be obtained through a pressure sensor, and the embodiments of the present application do not limit this.
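As a concrete sketch of the two signal types, the snippet below samples a timestamped event stream at a preset interval and derives both the mean position and the position difference. The 1 ms interval follows the example in the text; the event format and helper names are assumptions.

```python
def sample_trace(events, interval_ms=1):
    """Keep one (x, y) coordinate per preset time interval from a list of
    (timestamp_ms, x, y) touch events."""
    sampled, next_t = [], None
    for t, x, y in events:
        if next_t is None or t >= next_t:
            sampled.append((x, y))
            next_t = t + interval_ms
    return sampled

def mean_position(trace):
    """Mean position information: average of all coordinates on the trace."""
    n = len(trace)
    return (sum(x for x, _ in trace) / n, sum(y for _, y in trace) / n)

def position_difference(trace):
    """Position difference information: change from start to end position."""
    (x0, y0), (x1, y1) = trace[0], trace[-1]
    return (x1 - x0, y1 - y0)

events = [(0, 10, 90), (0.4, 11, 88), (1, 14, 80), (2, 20, 70)]
trace = sample_trace(events)          # the 0.4 ms event is dropped
print(mean_position(trace))
print(position_difference(trace))     # (10, -20)
```

Either quantity (or both) can then serve as a sample datum for training, or as the touch signal at recognition time.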
Step 202: train a preset training model based on the multiple sample data to obtain the specified logistic regression model.
After the above multiple sample data are obtained, model training can be carried out with them. In a specific implementation, the preset training model may include a loss function model and an initialized logistic regression model; in that case, training the preset training model based on the multiple sample data to obtain the specified logistic regression model may include the following steps (1)-(3):
(1) Quantize the multiple sample data to obtain a sample quantization vector.
In a specific implementation, the multiple sample data can be quantized, for example, into a sample quantization vector X = {x_i}, where i = 1, 2, 3...n and n is a preset positive integer that can be configured in advance by the user; for example, n may be set to 10. After the multiple sample data are quantized, an n-dimensional sample quantization vector is obtained.
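The text leaves the quantization scheme itself open. One common way to turn variable-length sample data into a fixed n-dimensional vector X = {x_i} is index-based resampling, sketched below; the n = 10 default follows the example in the text, while the resampling rule is purely an assumption.

```python
def quantize(samples, n=10):
    """Map a variable-length list of sample values to a fixed n-dimensional
    quantization vector: pick n roughly evenly spaced samples, or pad by
    repeating the last sample when fewer than n are available."""
    if len(samples) >= n:
        return [samples[i * len(samples) // n] for i in range(n)]
    return samples + [samples[-1]] * (n - len(samples))

print(quantize([1, 2, 3], n=5))         # [1, 2, 3, 3, 3]
print(quantize(list(range(20)), n=10))  # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

Whatever scheme is chosen, the same quantization must be applied at training time and at recognition time so the model sees consistent inputs.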
(2) Input the sample quantization vector into the loss function model, and determine an estimated weight by finding the minimum of the loss function model using gradient descent.
The loss function model can be as shown in formula (1):
where y_j ∈ {0,1}, j = 1, 2, and y_j represents the two classification labels; for example, y_j = 1 indicates that the classification is the lift gesture, and y_j = 0 indicates that the classification is the swipe-up gesture. w is the unknown parameter to be estimated.
Afterwards, L(w) is minimized by gradient descent; when the minimum of L(w) is found, the estimated weight w is obtained. During the minimization of L(w) by gradient descent, training is carried out continuously on the sample data until L(w) is minimal; that is, the process of minimizing L(w) by gradient descent is in fact a process of continuous training.
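The image for formula (1) is not reproduced here; for two-class logistic regression the loss L(w) is conventionally the negative log-likelihood, and gradient descent on it yields the estimated weight w. A minimal sketch under that standard-form assumption (the learning rate, iteration count, and toy data are arbitrary):

```python
import math

def train_logistic(X, y, lr=0.1, steps=2000):
    """Estimate weights w (and a bias) by gradient descent on the
    negative log-likelihood L(w) of a logistic regression model."""
    dim = len(X[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(steps):
        grad_w, grad_b = [0.0] * dim, 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))    # P(Y=1 | x)
            for j in range(dim):
                grad_w[j] += (p - yi) * xi[j] # dL/dw_j for this sample
            grad_b += p - yi
        w = [wj - lr * g / len(X) for wj, g in zip(w, grad_w)]
        b -= lr * grad_b / len(X)
    return w, b

# Toy samples: label 1 (lift) has large horizontal displacement,
# label 0 (swipe-up) stays near zero horizontal displacement.
X = [[3.0], [2.5], [2.8], [0.1], [-0.2], [0.0]]
y = [1, 1, 1, 0, 0, 0]
w, b = train_logistic(X, y)
p = 1.0 / (1.0 + math.exp(-(w[0] * 2.9 + b)))
print(p > 0.5)  # True: a clearly lift-like sample is classified as lift
```

The loop over all samples per step is batch gradient descent; a production implementation would typically use a library optimizer instead.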
(3) Input the estimated weight into the initialized logistic regression model to obtain the specified logistic regression model.
In a specific implementation, the initialized logistic regression model can be as shown in formulas (2) and (3); after the terminal substitutes the estimated weight w into the initialized logistic regression models (2) and (3), the specified logistic regression model shown in formulas (4) and (5) is obtained:
where P(Y=1|x) and P(Y=0|x) represent the gesture probabilities corresponding to the lift gesture and the swipe-up gesture, respectively.
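The images for formulas (2)-(5) are not reproduced here; in the standard form of logistic regression the model is the complementary sigmoid pair below, and substituting the estimated weight w turns the initialized model into the specified one. A hedged sketch of that standard form:

```python
import math

def specified_model(w, b):
    """Build the pair of gesture probabilities from an estimated weight.
    P(Y=1|x) is the lift-gesture probability, P(Y=0|x) the swipe-up one."""
    def p_y1(x):
        z = sum(wj * xj for wj, xj in zip(w, x)) + b
        return 1.0 / (1.0 + math.exp(-z))
    def p_y0(x):
        return 1.0 - p_y1(x)
    return p_y1, p_y0

p_lift, p_swipe = specified_model([1.0, -0.5], 0.0)
x = [2.0, 1.0]
print(round(p_lift(x) + p_swipe(x), 10))  # 1.0 -- the two probabilities sum to one
```

Because the two probabilities always sum to one, comparing them is equivalent to checking whether P(Y=1|x) exceeds 0.5.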
Further, the terminal can store the trained specified logistic regression model in advance, so that subsequently the different gestures can be recognized through the specified logistic regression model based on the touch signals they trigger.
It should also be noted that the terminal performing model training itself is only an example; in a practical application scenario, model training can also be carried out by another terminal with the same function, after which the trained specified logistic regression model is migrated into the terminal.
After the terminal has stored the above specified logistic regression model, gesture recognition can subsequently be performed, which may specifically include the following steps 203 to 205.
Step 203: when a slide operation triggered by multiple touch points is detected in a designated area of the screen, acquire a touch signal according to the sliding trace.
In a practical implementation, the designated area can be configured by the user according to actual requirements; for example, the designated area may be the lower-left region, lower-right region, upper-left region or upper-right region of the screen, which is not limited by the embodiments of the present application.
Referring to Fig. 2B, the designated area is illustrated here as the lower-left region 21 of the screen. That is, when the terminal detects a slide operation triggered by multiple touch points in the lower-left region 21 of the screen, the user may be sliding the edge region of the back of the hand upward; in order to recognize whether the gesture is a lift gesture or a swipe-up gesture, the terminal acquires a touch signal according to the sliding trace.
The multiple touch points may be the touch points formed by the multiple knuckles in the edge region of the back of the user's hand touching the screen, for example as shown at 22 in Fig. 2C.
Step 204: based on the touch signal, determine a first gesture probability and a second gesture probability through the specified logistic regression model, where the first gesture probability is the probability that the current gesture is a lift gesture and the second gesture probability is the probability that the current gesture is a swipe-up gesture.
In a specific implementation, the touch signal can be quantized to obtain a quantization vector, after which the terminal can input the obtained quantization vector into the specified logistic regression model shown in formulas (4) and (5) above to determine the gesture probabilities corresponding to the two kinds of gestures, namely the first gesture probability P(Y=1|x) and the second gesture probability P(Y=0|x).
Further, in practical implementations, since both the lift-up gesture and the swipe-up gesture have the characteristic of sliding upward, before determining the first gesture probability and the second gesture probability through the specified logistic regression model based on the touch signal, it may also be determined whether the current gesture is one of the lift-up gesture and the swipe-up gesture, that is, whether the gesture is an upward-sliding gesture.
In a specific implementation, whether the gesture is an upward-sliding gesture may be determined based on the sliding trace. For example, if the initial position of the sliding trace is located in the lower region of the screen and the end position is located in the upper region of the screen, it may be determined that the gesture is an upward-sliding gesture.
It should be noted that the above merely takes determining whether the gesture is an upward-sliding gesture based on the sliding trace as an example. In another embodiment, other manners may also be used to determine whether the gesture is an upward-sliding gesture, for example based on a position variation; the embodiments of the present application do not limit this.
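The trace-based check described above can be sketched as follows. The trace representation and the choice of half the screen height as the boundary between "lower region" and "upper region" are illustrative assumptions; the patent only requires that the start lies in the lower region and the end in the upper region:

```python
def is_upward_slide(trace, screen_height):
    """Rough pre-check that a sliding trace moves upward: the start point
    lies in the lower region of the screen and the end point in the upper
    region. trace is a list of (x, y) points with y increasing downward,
    as is conventional for touch coordinates; the half-height threshold
    is an illustrative choice, not taken from the patent.
    """
    (_, y_start), (_, y_end) = trace[0], trace[-1]
    return y_start > screen_height / 2 and y_end < screen_height / 2
```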
Step 205: Identify the current gesture based on the first gesture probability and the second gesture probability.
In a specific implementation, identifying the current gesture based on the first gesture probability and the second gesture probability may include: determining the larger of the first gesture probability and the second gesture probability, and identifying the current gesture as the gesture corresponding to that larger probability.
It can be understood that the larger a gesture probability is, the more likely the current gesture is the gesture corresponding to it. For example, if the first gesture probability is greater than the second gesture probability, then after the above identification the probability that the current gesture is a lift-up gesture is greater than the probability that it is a swipe-up gesture, so the current gesture may be identified as a lift-up gesture. Conversely, if the first gesture probability is less than the second gesture probability, the current gesture may be identified as a swipe-up gesture.
For example, when the first gesture probability is 70% and the second gesture probability is 30%, the probability that the current gesture is a lift-up gesture is greater than the probability that it is a swipe-up gesture, so the current gesture may be identified as a lift-up gesture, for example the lift-up gesture shown in Fig. 2D.
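The decision rule of step 205 reduces to picking the class with the larger probability. A minimal sketch, with illustrative label strings:

```python
def classify(p_lift, p_swipe):
    """Identify the current gesture as the one with the larger probability,
    as described in step 205. Label strings are illustrative."""
    return "lift-up" if p_lift > p_swipe else "swipe-up"
```

With the 70%/30% example above, this returns the lift-up label.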
In the embodiments of the present application, when a sliding operation triggered by multiple touch points is detected in the designated area of the screen, it indicates that the operation may be a sliding operation triggered by the user with the edge region of the back of the hand; at this point, a touch signal is obtained according to the sliding trace. Based on the touch signal, the probability that the current gesture is a lift-up gesture and the probability that it is a swipe-up gesture, that is, the first gesture probability and the second gesture probability, are determined through the specified logistic regression model, after which the current gesture is identified based on these two probabilities. In other words, the present application obtains the touch signal of multiple touch points during the sliding process and, based on the touch signal, distinguishes and identifies the swipe-up gesture and the lift-up gesture through the specified logistic regression model, so as to ensure a correct response to the user's gesture operation.
Referring to Fig. 3A, Fig. 3A is a structural block diagram of a gesture recognition apparatus according to an exemplary embodiment. The apparatus may be implemented by software, hardware, or a combination of both, and may include:
an acquisition module 301, configured to obtain a touch signal according to a sliding trace when a sliding operation triggered by multiple touch points is detected in the designated area of the screen;
a determining module 302, configured to determine a first gesture probability and a second gesture probability through a specified logistic regression model based on the touch signal, where the first gesture probability refers to the probability that the current gesture is a lift-up gesture and the second gesture probability refers to the probability that the current gesture is a swipe-up gesture;
an identification module 303, configured to identify the current gesture based on the first gesture probability and the second gesture probability.
Optionally, referring to Fig. 3B, the apparatus further includes:
a collection module 304, configured to collect the touch signals of different gestures multiple times to obtain multiple sample data, the different gestures including the swipe-up gesture and the lift-up gesture;
a training module 305, configured to perform training through a preset training model based on the multiple sample data to obtain the specified logistic regression model.
Optionally, the preset training model includes a loss function model and an initialized logistic regression model; the training module 305 is configured to:
quantize the multiple sample data to obtain sample quantization vectors;
input the sample quantization vectors into the loss function model, and determine an estimated weight by determining the minimum value of the loss function model using a gradient descent method;
input the estimated weight into the initialized logistic regression model to obtain the specified logistic regression model.
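The training step above (quantized sample vectors fed to a loss function, gradient descent finding the weights that minimize it) can be sketched as follows. The patent does not fix the loss function or the hyperparameters, so cross-entropy loss, plain batch gradient descent, the learning rate, and the epoch count are all assumptions of this sketch:

```python
import math

def train_logistic_regression(samples, labels, lr=0.1, epochs=500):
    """Minimal training sketch: find weights w and bias b that minimize
    the cross-entropy loss over quantized sample vectors using batch
    gradient descent. Hyperparameters and the choice of loss are
    illustrative assumptions, not taken from the patent.
    """
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    m = len(samples)
    for _ in range(epochs):
        grad_w = [0.0] * n
        grad_b = 0.0
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - y  # derivative of cross-entropy w.r.t. z
            for i, xi in enumerate(x):
                grad_w[i] += err * xi
            grad_b += err
        w = [wi - lr * gi / m for wi, gi in zip(w, grad_w)]
        b -= lr * grad_b / m
    return w, b
```

The returned estimated weights would then be plugged into the initialized logistic regression model to obtain the specified model used at recognition time.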
In the embodiments of the present application, when a sliding operation triggered by multiple touch points is detected in the designated area of the screen, it indicates that the operation may be a sliding operation triggered by the user with the edge region of the back of the hand; at this point, a touch signal is obtained according to the sliding trace. Based on the touch signal, the probability that the current gesture is a lift-up gesture and the probability that it is a swipe-up gesture, that is, the first gesture probability and the second gesture probability, are determined through the specified logistic regression model, after which the current gesture is identified based on these two probabilities. In other words, the present application obtains the touch signal of multiple touch points during the sliding process and, based on the touch signal, distinguishes and identifies the swipe-up gesture and the lift-up gesture through the specified logistic regression model, so as to ensure a correct response to the user's gesture operation.
Fig. 4 is a block diagram of a gesture recognition apparatus 400 according to an exemplary embodiment. For example, the apparatus 400 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.
Referring to Fig. 4, the apparatus 400 may include one or more of the following components: a processing component 402, a memory 404, a power component 406, a multimedia component 408, an audio component 410, an input/output (I/O) interface 412, a sensor component 414, and a communication component 416.
The processing component 402 generally controls the overall operation of the apparatus 400, such as operations associated with display, telephone calls, data communication, camera operations, and recording operations. The processing component 402 may include one or more processors 420 to execute instructions so as to complete all or part of the steps of the above method. In addition, the processing component 402 may include one or more modules to facilitate interaction between the processing component 402 and other components; for example, the processing component 402 may include a multimedia module to facilitate interaction between the multimedia component 408 and the processing component 402.
The memory 404 is configured to store various types of data to support the operation of the apparatus 400. Examples of such data include instructions of any application or method operated on the apparatus 400, contact data, phonebook data, messages, pictures, video, and so on. The memory 404 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disc.
The power component 406 provides power for the various components of the apparatus 400. The power component 406 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 400.
The multimedia component 408 includes a screen providing an output interface between the apparatus 400 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 408 includes a front camera and/or a rear camera. When the apparatus 400 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 410 is configured to output and/or input audio signals. For example, the audio component 410 includes a microphone (MIC), which is configured to receive external audio signals when the apparatus 400 is in an operation mode, such as a call mode, a recording mode, or a speech recognition mode. The received audio signal may be further stored in the memory 404 or sent via the communication component 416. In some embodiments, the audio component 410 also includes a speaker for outputting audio signals.
The I/O interface 412 provides an interface between the processing component 402 and peripheral interface modules, which may be keyboards, click wheels, buttons, and so on. These buttons may include, but are not limited to, a home button, volume buttons, a start button, and a lock button.
The sensor component 414 includes one or more sensors for providing state assessments of various aspects of the apparatus 400. For example, the sensor component 414 may detect the open/closed state of the apparatus 400 and the relative positioning of components, for example the display and keypad of the apparatus 400; the sensor component 414 may also detect a change in position of the apparatus 400 or a component of the apparatus 400, the presence or absence of user contact with the apparatus 400, the orientation or acceleration/deceleration of the apparatus 400, and a change in temperature of the apparatus 400. The sensor component 414 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 414 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 414 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 416 is configured to facilitate wired or wireless communication between the apparatus 400 and other devices. The apparatus 400 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 416 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 416 also includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 400 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the gesture recognition method provided by the embodiments shown in Fig. 1 or Fig. 2A above.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, for example the memory 404 including instructions, which may be executed by the processor 420 of the apparatus 400 to complete the above method. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer-readable storage medium: when the instructions in the storage medium are executed by the processor of a mobile terminal, the mobile terminal is enabled to perform the gesture recognition method provided by the embodiments shown in Fig. 1 or Fig. 2A above.
A computer program product including instructions: when the product runs on a computer, the computer performs the gesture recognition method provided by the embodiments shown in Fig. 1 or Fig. 2A above.
It should be noted that when the gesture recognition apparatus provided by the above embodiments performs the gesture recognition method, the division into the above functional modules is only used as an example. In practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the gesture recognition apparatus provided by the above embodiments belongs to the same concept as the gesture recognition method embodiments; for its specific implementation process, refer to the method embodiments, which will not be repeated here.
A person of ordinary skill in the art can understand that all or part of the steps of the above embodiments may be completed by hardware, or by a program instructing the relevant hardware. The program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing are only preferred embodiments of the present application and are not intended to limit the present application. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall be included within the protection scope of the present application.
Claims (10)
1. A gesture recognition method, characterized in that the method comprises:
when a sliding operation triggered by multiple touch points is detected in a designated area of a screen, obtaining a touch signal according to a sliding trace;
based on the touch signal, determining a first gesture probability and a second gesture probability through a specified logistic regression model, wherein the first gesture probability refers to the probability that the current gesture is a lift-up gesture and the second gesture probability refers to the probability that the current gesture is a swipe-up gesture;
identifying the current gesture based on the first gesture probability and the second gesture probability.
2. The method according to claim 1, characterized in that before determining the first gesture probability and the second gesture probability through the specified logistic regression model based on the touch signal, the method further comprises:
collecting the touch signals of different gestures multiple times to obtain multiple sample data, the different gestures including the swipe-up gesture and the lift-up gesture;
performing training through a preset training model based on the multiple sample data to obtain the specified logistic regression model.
3. The method according to claim 2, characterized in that the preset training model includes a loss function model and an initialized logistic regression model;
the performing training through the preset training model based on the multiple sample data to obtain the specified logistic regression model comprises:
quantizing the multiple sample data to obtain sample quantization vectors;
inputting the sample quantization vectors into the loss function model, and determining an estimated weight by determining the minimum value of the loss function model using a gradient descent method;
inputting the estimated weight into the initialized logistic regression model to obtain the specified logistic regression model.
4. The method according to claim 1, characterized in that the identifying the current gesture based on the first gesture probability and the second gesture probability comprises:
determining the larger gesture probability of the first gesture probability and the second gesture probability;
identifying the current gesture as the gesture corresponding to the larger gesture probability.
5. The method according to claim 1, characterized in that the touch signal includes average position information or position difference information, wherein the average position information refers to the average value of all position information on the sliding trace, and the position difference information refers to the position variation between the initial position and the end position of the sliding trace.
6. A gesture recognition apparatus, characterized in that the apparatus comprises:
an acquisition module, configured to obtain a touch signal according to a sliding trace when a sliding operation triggered by multiple touch points is detected in a designated area of a screen;
a determining module, configured to determine a first gesture probability and a second gesture probability through a specified logistic regression model based on the touch signal, wherein the first gesture probability refers to the probability that the current gesture is a lift-up gesture and the second gesture probability refers to the probability that the current gesture is a swipe-up gesture;
an identification module, configured to identify the current gesture based on the first gesture probability and the second gesture probability.
7. The apparatus according to claim 6, characterized in that the apparatus further comprises:
a collection module, configured to collect the touch signals of different gestures multiple times to obtain multiple sample data, the different gestures including the swipe-up gesture and the lift-up gesture;
a training module, configured to perform training through a preset training model based on the multiple sample data to obtain the specified logistic regression model.
8. The apparatus according to claim 7, characterized in that the preset training model includes a loss function model and an initialized logistic regression model; the training module is configured to:
quantize the multiple sample data to obtain sample quantization vectors;
input the sample quantization vectors into the loss function model, and determine an estimated weight by determining the minimum value of the loss function model using a gradient descent method;
input the estimated weight into the initialized logistic regression model to obtain the specified logistic regression model.
9. A terminal, characterized in that the terminal comprises a processor and a memory, the memory storing one or more computer programs, wherein the processor, when executing the computer programs, implements the method according to any one of claims 1-5.
10. A computer-readable storage medium, characterized in that instructions are stored on the computer-readable storage medium, and when run on a computer, the instructions cause the computer to perform the method according to any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711076834.0A CN107704190B (en) | 2017-11-06 | 2017-11-06 | Gesture recognition method and device, terminal and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711076834.0A CN107704190B (en) | 2017-11-06 | 2017-11-06 | Gesture recognition method and device, terminal and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107704190A true CN107704190A (en) | 2018-02-16 |
CN107704190B CN107704190B (en) | 2020-07-10 |
Family
ID=61177907
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711076834.0A Expired - Fee Related CN107704190B (en) | 2017-11-06 | 2017-11-06 | Gesture recognition method and device, terminal and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107704190B (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109766043A (en) * | 2018-12-29 | 2019-05-17 | 华为技术有限公司 | The operating method and electronic equipment of electronic equipment |
CN110532755A (en) * | 2019-08-09 | 2019-12-03 | 北京三快在线科技有限公司 | A kind of method and device of computer implemented risk identification |
CN110688039A (en) * | 2019-09-25 | 2020-01-14 | 大众问问(北京)信息科技有限公司 | Control method, device and equipment for vehicle-mounted application and storage medium |
CN110703919A (en) * | 2019-10-11 | 2020-01-17 | 大众问问(北京)信息科技有限公司 | Method, device, equipment and storage medium for starting vehicle-mounted application |
CN114578959A (en) * | 2021-12-30 | 2022-06-03 | 惠州华阳通用智慧车载系统开发有限公司 | Gesture recognition method and system based on touch pad |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120225719A1 (en) * | 2011-03-04 | 2012-09-06 | Microsoft Corporation | Gesture Detection and Recognition
CN103577793A (en) * | 2012-07-27 | 2014-02-12 | 中兴通讯股份有限公司 | Gesture recognition method and device |
CN105988583A (en) * | 2015-11-18 | 2016-10-05 | 乐视致新电子科技(天津)有限公司 | Gesture control method and virtual reality display output device |
CN106203380A (en) * | 2016-07-20 | 2016-12-07 | 中国科学院计算技术研究所 | Ultrasound wave gesture identification method and system |
CN103870199B (en) * | 2014-03-31 | 2017-09-29 | 华为技术有限公司 | The recognition methods of user operation mode and handheld device in handheld device |
2017
- 2017-11-06: CN application CN201711076834.0A filed, granted as patent CN107704190B (status: not active, Expired - Fee Related)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120225719A1 (en) * | 2011-03-04 | 2012-09-06 | Microsoft Corporation | Gesture Detection and Recognition
CN103577793A (en) * | 2012-07-27 | 2014-02-12 | 中兴通讯股份有限公司 | Gesture recognition method and device |
CN103870199B (en) * | 2014-03-31 | 2017-09-29 | 华为技术有限公司 | The recognition methods of user operation mode and handheld device in handheld device |
CN105988583A (en) * | 2015-11-18 | 2016-10-05 | 乐视致新电子科技(天津)有限公司 | Gesture control method and virtual reality display output device |
CN106203380A (en) * | 2016-07-20 | 2016-12-07 | 中国科学院计算技术研究所 | Ultrasound wave gesture identification method and system |
Non-Patent Citations (1)
Title |
---|
王龙: "特征提取和卷积神经网络在手势识别中的应用研究", 《中国优秀硕士学位论文全文数据库 信息科技辑》 * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109766043A (en) * | 2018-12-29 | 2019-05-17 | 华为技术有限公司 | The operating method and electronic equipment of electronic equipment |
CN110532755A (en) * | 2019-08-09 | 2019-12-03 | 北京三快在线科技有限公司 | A kind of method and device of computer implemented risk identification |
CN110688039A (en) * | 2019-09-25 | 2020-01-14 | 大众问问(北京)信息科技有限公司 | Control method, device and equipment for vehicle-mounted application and storage medium |
CN110703919A (en) * | 2019-10-11 | 2020-01-17 | 大众问问(北京)信息科技有限公司 | Method, device, equipment and storage medium for starting vehicle-mounted application |
CN114578959A (en) * | 2021-12-30 | 2022-06-03 | 惠州华阳通用智慧车载系统开发有限公司 | Gesture recognition method and system based on touch pad |
CN114578959B (en) * | 2021-12-30 | 2024-03-29 | 惠州华阳通用智慧车载系统开发有限公司 | Gesture recognition method and system based on touch pad |
Also Published As
Publication number | Publication date |
---|---|
CN107704190B (en) | 2020-07-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106951884A (en) | Gather method, device and the electronic equipment of fingerprint | |
CN104503689B (en) | Application interface display methods and device | |
CN107704190A (en) | Gesture identification method, device, terminal and storage medium | |
CN106572299A (en) | Camera switching-on method and device | |
CN104036240B (en) | The localization method and device of human face characteristic point | |
CN107832741A (en) | The method, apparatus and computer-readable recording medium of facial modeling | |
CN106547466A (en) | Display control method and device | |
CN107688781A (en) | Face identification method and device | |
CN106778531A (en) | Face detection method and device | |
CN108319886A (en) | Fingerprint identification method and device | |
CN107992257A (en) | Split screen method and device | |
CN107562349A (en) | A kind of method and apparatus for performing processing | |
CN107241495A (en) | The split screen treating method and apparatus of terminal device | |
CN106802808A (en) | Suspension button control method and device | |
CN107529699A (en) | Control method of electronic device and device | |
CN106201108B (en) | Gloves control mode touch mode control method and device and electronic equipment | |
CN106527928A (en) | Screen capturing control device and method and intelligent terminal | |
CN107330391A (en) | Product information reminding method and device | |
CN104216969B (en) | Read flag method and device | |
CN107958239A (en) | Fingerprint identification method and device | |
CN104883603B (en) | Control method for playing back, system and terminal device | |
CN104902318B (en) | Control method for playing back and terminal device | |
CN104240274B (en) | Face image processing process and device | |
CN106775210A (en) | The method and apparatus that wallpaper is changed | |
JP2014206837A (en) | Electronic equipment, control method therefor and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information | ||
CB02 | Change of applicant information |
Address after: Changan town in Guangdong province Dongguan 523860 usha Beach Road No. 18 Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd. Address before: Changan town in Guangdong province Dongguan 523860 usha Beach Road No. 18 Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd. |
|
GR01 | Patent grant | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20200710 |