CN105573545A - Gesture correction method, apparatus and gesture input processing method - Google Patents


Info

Publication number
CN105573545A
CN105573545A (application CN201510846257.3A)
Authority
CN
China
Prior art keywords
gesture
edge
edge gesture
user
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510846257.3A
Other languages
Chinese (zh)
Inventor
Li Xin (李鑫)
Zhu Bing (朱冰)
Current Assignee
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Nubia Technology Co Ltd filed Critical Nubia Technology Co Ltd
Priority to CN201510846257.3A priority Critical patent/CN105573545A/en
Publication of CN105573545A publication Critical patent/CN105573545A/en
Priority to PCT/CN2016/106167 priority patent/WO2017088694A1/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 — Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041 — Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/0416 — Control or interface arrangements specially adapted for digitisers
    • G06F3/0418 — Control or interface arrangements specially adapted for digitisers for error correction or compensation, e.g. based on parallax, calibration or alignment

Abstract

The invention discloses a gesture calibration method and apparatus and a gesture input processing method, applied to edge interaction on a mobile terminal. The gesture calibration method comprises the steps of: starting an edge gesture calibration mode; repeatedly inputting a given edge gesture operation a preset number of times; collecting and recording the feature values of each input edge gesture operation; and calculating the average of the recorded feature values of each input edge gesture operation to obtain calibration data. The method has the following beneficial effects: it solves the problem of non-standard gesture operation caused by individual differences, and calibrates the user's gestures according to the collected gesture features or user habits, making edge input operations more convenient for the user and improving the user experience.

Description

Gesture calibration method, apparatus, and gesture input processing method
Technical field
The present invention relates to the technical field of mobile terminals, and more particularly to a gesture calibration method, apparatus, and gesture input processing method.
Background technology
With the rapid development of communication technology, mobile terminals such as mobile phones offer more and more functions; most of the entertainment functions a computer can provide are now available on mobile terminals, where people can watch films, play games, browse web pages, video-chat, and so on. To improve the visual effect of the mobile terminal, terminals increasingly trend toward larger screens, but given the portability requirement their size cannot grow without limit, so the physical dimensions of the terminal must be fully exploited to increase screen utilization. This has led to narrow-bezel and even bezel-less mobile terminals.
Narrow-bezel and bezel-less terminals make full use of the terminal's physical dimensions, greatly extend the screen size, satisfy users' demand for large screens, and diversify edge input operations. However, because of individual differences among users (such as hand size and finger-press force), the area touched at the screen edge varies greatly when gripping a narrow-bezel or bezel-less terminal, causing non-standard edge gestures and a low recognition rate.
Summary of the invention
The technical problem to be solved by the present invention is to calibrate the edge interaction gestures of a mobile terminal, and to this end the invention provides a gesture calibration method, apparatus, and gesture input processing method. The method comprises the steps of:
starting an edge gesture calibration mode;
repeatedly inputting a given edge gesture operation a preset number of times, and collecting and recording the feature values of each input edge gesture operation;
calculating the average of the recorded feature values of each input edge gesture operation to obtain calibration data.
Optionally, obtaining the input edge gesture and collecting and recording its corresponding feature values a preset number of times comprises the steps of:
the driver layer obtains the gesture input event and reports it to the application framework layer; the application framework layer judges whether the gesture input event is an edge gesture and, if so, reports the judgment result to the application layer;
the application layer collects the user's edge gesture according to the judgment result, and collects the feature values of the edge gesture a preset number of times.
Optionally, the edge gesture operation is a grip operation, the feature values of the edge gesture operation comprise the coordinate values corresponding to the fingers, and calculating the average of the recorded feature values to obtain calibration data comprises the steps of:
obtaining the coordinate value corresponding to each finger in the edge gesture operation;
calculating the average of the coordinate values corresponding to each finger to obtain the calibration data of the edge gesture.
Optionally, the edge gesture operation is a sliding operation, the feature values of the edge gesture operation comprise the start-point and end-point coordinate values of the slide, and calculating the average of the recorded feature values to obtain calibration data comprises the steps of:
obtaining the start-point and end-point coordinate values of the finger's sliding operation in the edge gesture operation;
calculating the averages of the start-point and end-point coordinate values respectively to obtain the calibration data of the edge gesture.
Optionally, after calculating the average of the recorded feature values to obtain calibration data, the method further comprises the step of:
determining, according to the calibration data of the edge gesture, the hot-spot region of the corresponding edge gesture operation, storing the calibration data in a database, and establishing its correspondence with the user.
The present invention also proposes a gesture calibration apparatus applied to mobile terminal edge interaction, characterized by comprising a start-up module, a collection module, and a processing module, wherein:
the start-up module is used to open the edge gesture calibration mode on the mobile terminal;
the collection module is used to repeatedly input a given edge gesture operation a preset number of times, and to collect and record the feature values of each input edge gesture operation;
the processing module is used to calculate the average of the recorded feature values of the edge gesture operation to obtain calibration data.
Optionally, the collection module further comprises:
a driver layer for obtaining the gesture input event and reporting it to the application framework layer, the application framework layer judging whether the gesture input event is an edge gesture and, if so, reporting the judgment result to the application layer;
an application layer for collecting the user's edge gesture according to the judgment result, and collecting the feature values of the edge gesture a preset number of times.
Optionally, the processing module further comprises:
a first processing unit for obtaining the feature value corresponding to each finger in the edge gesture;
a second processing unit for calculating the average of the feature values corresponding to each finger to obtain the calibration data of the edge gesture.
Optionally, the apparatus further comprises:
a storage module for determining, according to the calibration data of the edge gesture, the hot-spot region of the corresponding edge gesture operation, storing the calibration data in a database, and establishing its correspondence with the user.
The present invention also proposes a gesture input processing method applied to mobile terminal edge interaction, characterized by involving an input device, a driver layer, an application framework layer, and an application layer, wherein:
the driver layer obtains the gesture input event the user produces through the input device, and reports it to the application framework layer;
the application framework layer judges whether the gesture input event is an edge gesture and, if so, reports the judgment result to the application layer;
the application layer collects the user's edge gesture according to the judgment result, and collects the input information of the edge gesture a preset number of times.
The gesture calibration method provided by the present invention has the following beneficial effects: it solves the problem of a low recognition rate for edge input gestures caused by individual differences or personal operating habits, accurately recognizes gestures within the user's gesture-operation hot zone, makes edge input operations convenient for the user, and improves the user experience.
Brief description of the drawings
The invention will be further described below in conjunction with the accompanying drawings and embodiments, in which:
Fig. 1 is a schematic diagram of the screen area division of the mobile terminal of an embodiment of the present invention;
Fig. 2 is a flow chart of the gesture calibration method of an embodiment of the present invention;
Fig. 3 is a flow chart of the one-hand-grip gesture calibration method in an embodiment of the present invention;
Fig. 4 is a schematic diagram of the one-hand-grip gesture information collection interface of an embodiment of the present invention;
Fig. 5 is a flow chart of the right-edge slide-down gesture calibration method of an embodiment of the present invention;
Fig. 6 is a schematic diagram of the right-edge slide-down gesture information collection interface of an embodiment of the present invention;
Fig. 7 is a structural block diagram of the gesture calibration apparatus of an embodiment of the present invention;
Fig. 8 is a schematic diagram of the software architecture of the mobile terminal of an embodiment of the present invention;
Fig. 9 is a flow chart of the input processing method of an embodiment of the present invention;
Fig. 10 is a schematic diagram of the hardware configuration of a user equipment provided by an embodiment of the present invention.
Embodiments
It should be understood that the specific embodiments described herein are intended only to explain the present invention, not to limit it.
The mobile terminal implementing each embodiment of the present invention will now be described with reference to the accompanying drawings. In the following description, suffixes such as "module", "part", or "unit" are used only to facilitate the description of the invention and have no specific meaning in themselves; "module" and "part" may therefore be used interchangeably.
Mobile terminals may be implemented in various forms. For example, the terminals described in the present invention may comprise mobile terminals such as mobile phones, smart phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable media players), and navigation devices, as well as fixed terminals such as digital TVs and desktop computers. Below, the terminal is assumed to be a mobile terminal; however, those skilled in the art will appreciate that, apart from elements specific to mobile use, the structure according to the embodiments of the present invention can also be applied to fixed terminals.
A detailed description follows by way of specific embodiments.
Fig. 1 is a schematic diagram of the screen area division of the mobile terminal of one embodiment of the invention. The mobile terminal 100 comprises: region C (the gray area), the edge input region; region A, the normal touch region; and region B, the non-touch region.
In an embodiment of the present invention, touch operations in region A are processed in the existing normal way; for example, clicking an application icon in region A opens that application. Touch operations in region C may be defined as edge touch processing; for example, sliding down on both edges of region C may be defined to trigger terminal acceleration. Region B is a non-touch region and may, for example, hold a key area, a receiver, and the like.
In an embodiment of the present invention, region C can be divided in a fixed or a user-defined manner. In fixed division, regions of fixed length and fixed width are set on the screen of the mobile terminal as region C. Region C can comprise sub-regions on the left and right sides of the screen, fixed at both edges of the mobile terminal, as shown in Fig. 1; of course, region C may also be placed along only one edge of the terminal.
In user-defined division, the number, position, and size of the sub-regions of region C can be customized: they can be set by the user, or adjusted by the mobile terminal according to its own needs. Usually the basic shape of region C is designed as a rectangle, so the coordinates of two diagonal vertices suffice to determine its position and size.
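The rectangular C-region description above can be sketched as follows; the screen size, strip widths, and region names are illustrative assumptions, not values taken from the patent.

```python
# Sketch of the C-region division described above: a rectangular edge region is
# fully determined by two diagonal vertex coordinates, and a touch point can be
# classified by a simple containment test. All concrete numbers are assumptions.

def make_region(x1, y1, x2, y2):
    """Return a rectangle (left, top, right, bottom) from two diagonal vertices."""
    return (min(x1, x2), min(y1, y2), max(x1, x2), max(y1, y2))

def contains(region, x, y):
    left, top, right, bottom = region
    return left <= x <= right and top <= y <= bottom

# Assumed 1080x1920 screen with 40-px-wide edge strips on both sides.
left_c = make_region(0, 0, 40, 1920)
right_c = make_region(1040, 0, 1080, 1920)

def classify(x, y):
    """Classify a touch point as edge region C or normal region A."""
    if contains(left_c, x, y) or contains(right_c, x, y):
        return "C"  # edge input region
    return "A"      # normal touch region

print(classify(20, 500))   # a touch inside the left edge strip
print(classify(540, 800))  # a touch in the middle of the screen
```

With this representation, user-defined division amounts to replacing the two vertex pairs per scenario, which matches the text's point that two diagonal vertices suffice.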
To suit different users' habits in different applications, multiple C-region layouts for different application scenarios can also be configured. For example, on the system desktop, where icons occupy much of the screen, the C regions on both sides are set relatively narrow; after tapping the camera icon to enter the camera application, the number, position, and size of the C regions for that scenario can be set, and the C-region width can be made relatively wide as long as focusing is not affected.
The embodiment of the present invention places no restriction on how region C is divided or configured.
Fig. 2 is a flow chart of a gesture calibration method provided by an embodiment of the present invention. The gesture calibration method of the embodiment comprises:
S10: start the edge gesture calibration mode.
In one embodiment, the mobile terminal has an edge gesture calibration function which is normally off; once the edge gesture calibration mode is started, the terminal can collect the user's edge gestures. Before the user performs gesture operations for the first time, the input information of the user's gestures needs to be collected according to the user's operating habits. User gesture types include, but are not limited to: left-edge slide up, right-edge slide up, left-edge slide down, right-edge slide down, both-edge slide down, both-edge slide up, gripping a corner of the terminal screen, sliding back and forth on one edge, and one-hand grip.
S20: repeatedly input a given edge gesture operation a preset number of times, and collect and record the feature values of each input edge gesture operation.
In one embodiment, the driver layer obtains the input events the user produces through an input device, for example input operations performed through the touch screen. In an embodiment of the present invention, input events comprise normal input events (region-A events) and edge input events (region-C events). Normal input events include clicks, double-clicks, slides, and other input operations performed in region A. Edge input events include left-edge slide up, left-edge slide down, right-edge slide up, right-edge slide down, both-edge slide up, both-edge slide down, gripping a corner of the phone, sliding back and forth on one edge, holding, one-hand grip, and other input operations performed in region C. The application framework layer judges whether a gesture input event is an edge gesture and, if so, reports the judgment result to the application layer; the application layer then collects the user's edge gesture according to the judgment result, and collects its feature values the preset number of times.
It should be understood that the gesture input event can also be judged as follows: the driver layer obtains the input event the user produces through the input device and judges whether it is an edge input event (a region-C event); if it is, the driver layer reports it to the application framework layer, which in turn reports it to the application layer; the application layer then collects the user's edge gesture according to the judgment result and collects its feature values the preset number of times.
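Step S20's repeat-and-record loop can be sketched as a small collector; the preset count and the feature-value field names are illustrative assumptions.

```python
# Sketch of step S20 above: the same edge gesture is entered a preset number of
# times, and the feature values of each repetition are recorded for later
# averaging. Preset count and feature names are illustrative assumptions.

class EdgeGestureCollector:
    def __init__(self, preset_times=5):
        self.preset_times = preset_times
        self.samples = []

    def record(self, feature_values):
        """Record one repetition; return True once enough samples exist."""
        if len(self.samples) < self.preset_times:
            self.samples.append(feature_values)
        return len(self.samples) >= self.preset_times

collector = EdgeGestureCollector(preset_times=3)
done = False
for rep in [{"x": 10, "y": 300}, {"x": 12, "y": 310}, {"x": 14, "y": 290}]:
    done = collector.record(rep)
print(done, len(collector.samples))
```

Whichever layer performs the edge judgment, the application layer ends up driving a collector like this until the preset number of repetitions is reached.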
S30: calculate the average of the recorded feature values of the edge gesture operation to obtain calibration data.
In one embodiment, when the user starts inputting an edge gesture at the edge of the mobile terminal, the feature value corresponding to each finger in the edge gesture is obtained, and the average of each finger's feature values is calculated to obtain the calibration data of the edge gesture. The edge gesture feature values comprise the coordinate value of the finger's contact position with the screen edge, and/or the pressure value and duration of the finger press. According to the calibration data of the edge gesture, the hot-spot region of the corresponding edge gesture operation is determined, and the calibration data is stored in a database, establishing its correspondence with the user. When the hot-spot region receives a gesture operation signal, the detection conditions for that region are relaxed, for example by lowering the pixel detection threshold or the press-pressure detection threshold.
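The hot-spot idea above can be sketched as a threshold lookup; the threshold values and the hot-spot radius are illustrative assumptions, not values from the patent.

```python
# Sketch of the hot-spot mechanism described above: once calibration data
# exists, the detection threshold inside the user's hot-spot region is relaxed
# relative to the rest of the edge strip. All numeric values are assumptions.

DEFAULT_PRESSURE_THRESHOLD = 0.50  # assumed normalized pressure threshold
HOTSPOT_PRESSURE_THRESHOLD = 0.30  # relaxed threshold inside the hot spot
HOTSPOT_RADIUS = 50                # assumed radius around calibrated points

def in_hotspot(x, y, calibrated_points):
    return any(abs(x - cx) <= HOTSPOT_RADIUS and abs(y - cy) <= HOTSPOT_RADIUS
               for cx, cy in calibrated_points)

def accept_touch(x, y, pressure, calibrated_points):
    """Accept a touch if its pressure clears the applicable threshold."""
    if in_hotspot(x, y, calibrated_points):
        threshold = HOTSPOT_PRESSURE_THRESHOLD
    else:
        threshold = DEFAULT_PRESSURE_THRESHOLD
    return pressure >= threshold

points = [(12.0, 300.0)]  # a calibrated finger position from averaging
print(accept_touch(15, 310, 0.35, points))  # inside hot spot: relaxed threshold
print(accept_touch(15, 900, 0.35, points))  # outside: stricter threshold
```

This shows why a light press that would be rejected elsewhere still registers inside the user's calibrated hot spot, which is the recognition-rate gain the text claims.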
Fig. 3 is a flow chart of the one-hand-grip gesture calibration method of an embodiment of the present invention. The gesture calibration method of the embodiment comprises:
S11: start the edge gesture calibration mode.
In one embodiment, the mobile terminal has an edge gesture calibration function which is normally off; once the edge gesture calibration mode is started, the terminal can collect the user's edge gestures. Before the user performs one-hand gesture operations for the first time, the input information of the user's gestures needs to be collected according to the user's operating habits.
S12: obtain the user's one-hand-grip gesture, and collect and record the corresponding feature values a preset number of times.
In one embodiment, taking the one-hand-grip gesture as an example, the position coordinates of each of the user's fingers at its contact point with the screen edge are collected and recorded for each repetition: C1(downX1, downY1), C2(downX2, downY2), C3(downX3, downY3), C4(downX4, downY4), C5(downX5, downY5). Fig. 4 is a schematic diagram of the one-hand-grip gesture information collection interface of the embodiment. The mobile terminal shows the user a settings interface; the user taps it to enter the gesture information settings, and the gesture feature values are then collected according to the user's individual touch area or grip. The shaded part of region C (the gray area) represents the touch area of the user's one-hand grip. The user performs the one-hand-grip operation repeatedly as prompted by the terminal, and each time the terminal receives the grip signal it records the position each finger presses in the touch area.
Further, the input information of the gesture also comprises the touch-point areas SC1, SC2, SC3, SC4, SC5, and information such as the pressure value and duration of each touch point's press. Likewise, calibration data for the touch-point areas SCi can be obtained, and the calibration data for each touch point's pressure and duration can be derived from the averages of the finger-press pressure values and durations over the repetitions of the one-hand grip.
S13: calculate the average of the recorded one-hand-grip feature values to obtain the calibration data of the one-hand-grip gesture.
The average Ci (i = 1, 2, 3, 4, 5, i.e. the average press-position coordinate of each finger) of the user's finger-press position coordinates over the one-hand grips is taken as the calibration data of the one-hand-grip finger-press positions, and the calibration data produced by this gesture serves as the user's one-hand-grip calibration data. According to this calibration data, the hot-spot region of this edge gesture operation is determined, and the calibration data is stored in a database, establishing its correspondence with the user. When the hot-spot region receives a gesture operation signal, the detection conditions for that region are relaxed, for example by lowering the pixel detection threshold or the press-pressure detection threshold.
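The per-finger averaging in S13 can be sketched directly; the sample coordinates below are illustrative assumptions.

```python
# Sketch of the one-hand-grip calibration described above: the grip is repeated
# a preset number of times, the contact coordinate of each of the five fingers
# is recorded per repetition, and the per-finger mean is taken as the
# calibration data. Sample values are illustrative assumptions.

def calibrate_grip(samples):
    """samples: list of repetitions; each repetition is a list of five (x, y)
    finger-contact coordinates. Returns the per-finger mean coordinates."""
    n = len(samples)
    fingers = len(samples[0])
    means = []
    for i in range(fingers):
        mx = sum(rep[i][0] for rep in samples) / n
        my = sum(rep[i][1] for rep in samples) / n
        means.append((mx, my))
    return means

# Three assumed repetitions of a five-finger grip (thumb on the left edge).
reps = [
    [(10, 300), (1070, 280), (1072, 420), (1071, 560), (1069, 700)],
    [(12, 310), (1068, 290), (1070, 430), (1069, 570), (1071, 710)],
    [(14, 290), (1072, 270), (1074, 410), (1073, 550), (1067, 690)],
]
print(calibrate_grip(reps)[0])  # mean position of the first finger
```

The same averaging applies unchanged to the touch-point areas, pressure values, and durations mentioned above, since each is just another per-finger scalar sequence.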
The embodiment of the present invention places no restriction on the number of finger press areas.
Based on the gesture calibration method provided by the embodiment of the present invention, the mobile terminal edge input region is customized according to the user's gesture-operation habits. As shown in Fig. 4, the position, area, and press pressure of the shaded part of region C (the gray area) differ because of individual differences or personal operating habits. By collecting the user's gesture features or habits, the hot-spot region of the user's gesture operations is determined, and the gesture recognition rate is improved by relaxing the phone's detection conditions in the hot-spot region, making edge input operations convenient for the user and improving the user experience.
Fig. 5 is a flow chart of the right-edge slide-down gesture calibration method of an embodiment of the present invention. The gesture calibration method of the embodiment comprises:
S21: start the edge gesture calibration mode.
In one embodiment, the mobile terminal has an edge gesture calibration function which is normally off; once the edge gesture calibration mode is started, the terminal can collect the user's edge gestures. Before the user performs the right-edge slide-down gesture for the first time, the input information of the user's gestures needs to be collected according to the user's operating habits.
S22: obtain the user's right-edge slide-down gesture, and collect and record the corresponding feature values a preset number of times.
In one embodiment, the collection of gesture input information is illustrated with the right-edge slide-down gesture. As shown in Fig. 6, the shaded part of region C (the gray area) represents the user's sliding area. The user repeats the right-edge slide-down operation as prompted by the mobile terminal; each time the terminal receives the right-edge slide signal, it records the start position coordinates A(downX, downY) of the finger press in the touch area, the end position coordinates B(currentX, currentY), and the sliding time downTime.
S23: calculate the average of the recorded right-edge slide-down feature values to obtain the calibration data of the right-edge slide-down gesture.
In one embodiment, for the right-edge slide gesture, the collected feature values comprise the start position coordinates A(downX, downY) of the finger's contact with the right screen edge, the end position coordinates B(currentX, currentY), and the sliding time downTime. The sliding distance L is calculated from the start and end positions of the slide, giving the sliding speed v = L/downTime. The distance can be calculated in either of the following two ways:
L = √((downX − currentX)² + (downY − currentY)²), or
L = |currentY − downY|.
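The two distance formulas and the derived speed can be checked with a short sketch; the sample coordinates and time value are illustrative assumptions.

```python
# Sketch of the two distance formulas above and the derived sliding speed
# v = L / downTime. The Euclidean form treats the stroke as a straight line;
# the vertical-only form assumes the x-coordinate barely changes along the
# edge. Sample values are illustrative assumptions.
import math

def distance_euclidean(downX, downY, currentX, currentY):
    return math.hypot(downX - currentX, downY - currentY)

def distance_vertical(downY, currentY):
    return abs(currentY - downY)

def slide_speed(L, downTime):
    """Average speed over the stroke; downTime in the same time unit as L/v."""
    return L / downTime

# A purely vertical slide along the right edge: both formulas agree.
L1 = distance_euclidean(1070, 200, 1070, 700)
L2 = distance_vertical(200, 700)
print(L1, L2, slide_speed(L2, 250))
```

For a stroke that drifts horizontally, the Euclidean form is slightly larger; the vertical form is the cheaper approximation the text offers as an alternative.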
The average of the start position coordinates of the user's slides is taken as the calibration data of the slide start position; likewise, the calibration data of the slide end position can be derived, and the calibration data of the sliding speed is derived from the average of the sliding speeds. The calibration data produced by this gesture serves as the calibration data of the right-edge slide-down gesture. According to this calibration data, the hot-spot region of this edge gesture operation is determined, and the calibration data is stored in a database, establishing its correspondence with the user. When the hot-spot region receives a gesture operation signal, the detection conditions for that region are relaxed, for example by lowering the pixel detection threshold or the press-pressure detection threshold.
Based on the gesture calibration method provided by the embodiment of the present invention, the mobile terminal edge input region is customized according to the user's gesture-operation habits. As shown in Fig. 6, the shaded part of region C (the gray area) may differ because of individual differences or personal operating habits; by collecting the user's gesture features or habits, the user's gestures are calibrated, making edge input operations convenient for the user and improving the user experience.
Fig. 7 is a structural diagram of the gesture calibration apparatus of an embodiment of the present invention. The gesture calibration apparatus of the embodiment comprises:
a start-up module 11 for opening the edge gesture calibration mode on the mobile terminal.
In one embodiment, the mobile terminal has an edge gesture calibration function which is normally off; once the edge gesture calibration mode is started, the terminal can collect the user's edge gestures. Before the user performs gesture operations for the first time, the input information of the user's gestures needs to be collected according to the user's operating habits. User gesture types include, but are not limited to: left-edge slide up, right-edge slide up, left-edge slide down, right-edge slide down, both-edge slide down, both-edge slide up, gripping a corner of the terminal screen, sliding back and forth on one edge, and one-hand grip.
a collection module 12 for repeatedly inputting a given edge gesture operation a preset number of times, and collecting and recording the feature values of each input edge gesture operation.
In one embodiment, the driver layer obtains the input events the user produces through an input device, for example input operations performed through the touch screen. In an embodiment of the present invention, input events comprise normal input events (region-A events) and edge input events (region-C events). Normal input events include clicks, double-clicks, slides, and other input operations performed in region A. Edge input events include left-edge slide up, left-edge slide down, right-edge slide up, right-edge slide down, both-edge slide up, both-edge slide down, gripping a corner of the phone, sliding back and forth on one edge, holding, one-hand grip, and other input operations performed in region C. The application framework layer judges whether a gesture input event is an edge gesture and, if so, reports the judgment result to the application layer; the application layer then collects the user's edge gesture according to the judgment result, and collects its feature values the preset number of times.
Processing module 13, the eigenwert for the described edge gesture operation according to each record calculates its mean value, draws calibration data.Processing module 13 also comprises:
First processing unit 130, for obtaining each finger characteristic of correspondence value in described edge gesture respectively.
To be hold by one hand gesture, to gather the data being hold by one hand the input information of gesture and comprise finger presses position coordinates C 1(downX 1, downY 1), C 2(downX 2, downY 2), C 3(downX 3, downY 3), C 4(downX 4, downY 4), C 5(downX 5, downY 5).
Second processing unit 131, for each finger characteristic of correspondence value calculating mean value, draws the calibration data of described edge gesture.
In one embodiment, each user is hold by one hand the mean value of finger presses position coordinates namely be considered as the calibration data being hold by one hand finger presses position, above-mentioned calibration data is stored in database, for setting up the corresponding relation of gesture and using forestland according to the operating habit of user.
Further, the feature values of a gesture may also comprise the touch point areas SC1, SC2, SC3, SC4, SC5, and information such as the press force of each touch point. Similarly, calibration data for the touch point areas SCi can be obtained, and calibration data for the press force of each touch point can be derived from the mean finger press force over the repeated one-handed grips.
Memory module 14 is configured to determine the hot-spot region of the corresponding edge gesture operation according to the calibration data of the edge gesture, to store the calibration data in a database, and to establish a correspondence with the user.
Referring to FIG. 8, which is a schematic diagram of the software architecture of the mobile terminal of the embodiment of the present invention. The software architecture of the mobile terminal comprises: input device 201, driver layer 202, application framework layer 203 and application layer 204.
The input device 201 receives the user's input operation, converts the physical input into an electrical signal TP, and passes TP to the driver layer 202. The driver layer 202 parses the position of the input to obtain parameters such as the concrete coordinates and duration of the touch point, and uploads these parameters to the application framework layer 203; the application framework layer 203 communicates with the driver layer 202 through a corresponding interface. The application framework layer 203 receives the parameters reported by the driver layer 202, parses them, distinguishes edge input events from normal input events, and passes valid input upward to the concrete application in the application layer 204, so that the application layer 204 executes different input instructions according to the different input operations.
Concretely, the driver layer is configured to obtain the input events produced by the user through the input device and report them to the application framework layer.
The application framework layer is configured to judge whether an input event is an edge input event or a normal input event; if it is a normal input event, the normal input event is processed and recognized and the recognition result is reported to the application layer; if it is an edge input event, the edge input event is processed and recognized and the recognition result is reported to the application layer.
The application layer is configured to execute the corresponding input instruction according to the reported recognition result.
In the mobile terminal of the embodiment of the present invention, since the operation of distinguishing the A zone and the C zone is performed only at the application framework layer, and the virtual device is established at the application framework layer, the dependence on hardware caused by distinguishing the A zone and the C zone at the driver layer is avoided.
Referring to FIG. 9, which is a flowchart of the input processing method of the embodiment of the present invention, comprising the following steps:
S1: the driver layer obtains the input events produced by the user through the input device and reports them to the application framework layer.
Concretely, the input device receives the user's input operation (i.e. the input event), converts the physical input into an electrical signal, and transfers the electrical signal to the driver layer. In embodiments of the present invention, input events comprise A-zone input events and C-zone input events. A-zone input events include input operations performed in the A zone such as click, double-click and slide. C-zone input events include input operations performed in the C zone such as left-edge slide up, left-edge slide down, right-edge slide up, right-edge slide down, bilateral slide up, bilateral slide down, single-edge back-and-forth slide, grip, and one-handed grip.
The driver layer parses the input position according to the received electrical signal to obtain relevant parameters such as the concrete coordinates and duration of the touch point, and reports these relevant parameters to the application framework layer.
In addition, if the driver layer adopts protocol A to report input events, this step S1 further comprises:
assigning each touch point a number (ID) for distinguishing fingers.
Thus, if the driver layer adopts protocol A to report input events, the reported data comprise the above relevant parameters together with the numbers of the touch points.
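As a hypothetical illustration of this numbering (the patent does not specify a data structure), the layer that receives the raw touch points could assign sequential IDs and store every element of each point, so that later stages can tell fingers apart:

```python
# Illustrative sketch only: assign each reported touch point a sequential
# ID and store its element information (coordinates, duration, ...).
# The class and field names are assumptions, not taken from the patent.

import itertools

class TouchTracker:
    def __init__(self):
        self._ids = itertools.count(0)
        self.points = {}  # ID -> stored element info for the touch point

    def report(self, x, y, duration_ms):
        """Record one touch point and return its assigned number (ID)."""
        touch_id = next(self._ids)
        self.points[touch_id] = {"x": x, "y": y, "duration_ms": duration_ms}
        return touch_id

tracker = TouchTracker()
print(tracker.report(15, 320, 0))    # first finger gets ID 0
print(tracker.report(1065, 340, 0))  # second finger gets ID 1
```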
S2: the application framework layer judges whether the input event is an edge input event or a normal input event; if it is a normal input event, step S3 is executed; if it is an edge input event, step S4 is executed.
Concretely, the application framework layer can judge whether an event is an edge input event or a normal input event according to the coordinates in the relevant parameters of the input event. First the horizontal-axis coordinate of the touch point is obtained; then the horizontal-axis coordinate (i.e. X-axis coordinate) x of the touch point is compared with the C-zone width Wc and the touch-screen width W. If Wc < x < (W - Wc), the touch point lies in the A zone and the event is a normal input event; otherwise the event is an edge input event. If the driver layer adopts protocol B to report input events, step S2 further comprises: assigning each touch point a number (ID) for distinguishing fingers, and storing all element information of the touch point (coordinates, duration, number, etc.).
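The comparison in step S2 can be written as a small predicate. This is a sketch only; the pixel values assumed for W and Wc below are illustrative and not from the patent.

```python
# Sketch of the zone test in step S2: a touch whose X coordinate x satisfies
# Wc < x < (W - Wc) lies in the normal A zone; otherwise it lies in an edge
# C zone. The W and Wc values below are assumed for the example.

def classify_touch(x, screen_width, edge_width):
    """Return 'normal' for an A-zone touch, 'edge' for a C-zone touch."""
    if edge_width < x < (screen_width - edge_width):
        return "normal"
    return "edge"

W, Wc = 1080, 60  # assumed touch-screen width and C-zone strip width, pixels
print(classify_touch(540, W, Wc))   # centre of the screen -> normal
print(classify_touch(20, W, Wc))    # left edge strip      -> edge
print(classify_touch(1060, W, Wc))  # right edge strip     -> edge
```

Note that the boundary coordinates x = Wc and x = W - Wc fall outside the strict inequality and are therefore treated as edge input.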
Thus, by setting touch point numbers, the embodiment of the present invention can distinguish fingers and is compatible with both protocol A and protocol B; and storing all elements of the touch points (coordinates, numbers, etc. of the touch points) facilitates subsequent edge-input judgments (for example, FIT).
In one embodiment, edge input events and normal input events are reported through different channels; edge input events adopt a designated channel.
S3: the application framework layer processes and recognizes the normal input event, and reports the recognition result to the application layer.
S4: the application framework layer processes and recognizes the edge input event, and reports the recognition result to the application layer.
Concretely, processing and recognition comprises: performing processing and recognition according to the touch point coordinates, duration, numbers, etc. of the input operation, to determine the input operation. For example, according to the coordinates, duration and numbers of the touch points, the operation can be recognized as an A-zone input operation such as a click or slide, or a C-zone input operation such as a single-edge back-and-forth slide.
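As a hypothetical illustration of this processing and recognition step, a single touch track could be classified as a click or a slide from its coordinates and duration. The thresholds below are assumptions made for the example, not values from the patent.

```python
# Illustrative sketch only: classify one touch track as a click or a slide
# from its start/end coordinates and its duration. The 10-pixel displacement
# and 300-ms duration thresholds are assumed, not taken from the patent.

def recognize(track, click_max_move=10.0, click_max_ms=300):
    (x0, y0), (x1, y1) = track["start"], track["end"]
    moved = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5  # displacement, pixels
    if moved <= click_max_move and track["duration_ms"] <= click_max_ms:
        return "click"
    return "slide"

print(recognize({"start": (100, 200), "end": (102, 201), "duration_ms": 120}))
print(recognize({"start": (30, 600), "end": (30, 200), "duration_ms": 250}))
```

A real implementation would also use the touch point numbers to recognize multi-finger operations such as the bilateral slide, but the single-track case above shows the shape of the decision.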
S5: the application layer executes the corresponding input instruction according to the reported recognition result.
Concretely, the application layer comprises applications such as the camera, gallery and lock screen. Input operations in the embodiment of the present invention are divided into application-level and system-level, and system-level gesture processing is also classified into the application layer. Application-level operations manipulate application programs, for example opening, closing and volume control; system-level operations manipulate the mobile terminal, for example starting up, acceleration, switching between applications, and global return.
In one embodiment, the mobile terminal sets and stores the input instructions corresponding to different input operations, including the input instructions corresponding to edge input operations and the input instructions corresponding to normal input operations. When the application layer receives the recognition result of a reported edge input event, it calls the corresponding input instruction according to the edge input operation to respond to that edge input operation; when the application layer receives the recognition result of a reported normal input event, it calls the corresponding input instruction according to the normal input operation to respond to that normal input operation.
It should be understood that the input events of the embodiment of the present invention comprise input operations only in the A zone, input operations only in the C zone, and input operations produced in the A zone and the C zone simultaneously. Accordingly, the input instructions also comprise the instructions corresponding to these three classes of input events. The embodiment of the present invention can control the mobile terminal through combinations of A-zone and C-zone input operations. For example, if the input operation is simultaneously clicking corresponding positions in the A zone and the C zone, and the corresponding input instruction is to close a certain application, then the application can be closed by simultaneously clicking the corresponding positions in the A zone and the C zone.
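The mapping from recognized operations to input instructions can be pictured as a lookup table. The operation and instruction names below are invented for illustration and do not appear in the patent.

```python
# Hedged sketch of the instruction lookup described above: the terminal keeps
# a table from recognized input operations (A-zone only, C-zone only, or a
# combined A-zone + C-zone operation) to input instructions, and the
# application layer invokes the matching instruction. All names are invented.

INSTRUCTIONS = {
    ("a_click",): "open_application",
    ("c_single_edge_back_and_forth",): "accelerate",
    ("a_click", "c_click"): "close_application",  # combined A + C zone taps
}

def dispatch(recognized_ops):
    """Look up the instruction for a set of simultaneously recognized ops."""
    return INSTRUCTIONS.get(tuple(sorted(recognized_ops)), "ignore")

print(dispatch(["c_click", "a_click"]))  # combined tap closes an application
```

Sorting the recognized operations before the lookup makes the combined A-zone + C-zone entry order-independent.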
The mobile terminal of the embodiment of the present invention can be implemented in a variety of forms. For example, the terminal described in the present invention may comprise mobile terminals such as mobile phones, smart phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable media players) and navigation devices, as well as fixed terminals such as digital TVs and desktop computers.
Accordingly, the embodiment of the present invention also provides a user equipment; referring to FIG. 10, which is a schematic diagram of its hardware configuration. Referring to FIG. 10, the user equipment 1000 comprises a touch screen 100, controller 200, storage device 310, GPS chip 320, communicator 330, video processor 340, audio processor 350, buttons 360, microphone 370, camera 380, loudspeaker 390 and motion sensor 400.
The touch screen 100 may be divided into the A zone, B zone and C zone as described above, or into the A zone, B zone, C zone and T zone. The touch screen 100 can be implemented as various types of display, such as an LCD (liquid crystal display), an OLED (organic light-emitting diode) display or a PDP (plasma display panel). The touch screen 100 may comprise a driving circuit, which can be implemented as, for example, an a-si TFT, an LTPS (low temperature poly-silicon) TFT or an OTFT (organic TFT), and a backlight unit.
Meanwhile, the touch screen 100 may comprise a touch sensor for sensing the user's touch gestures. The touch sensor can be implemented as various types of sensors, such as capacitive, resistive or piezoelectric. A capacitive sensor calculates touch coordinate values by sensing the micro-current induced through the user's body when a part of the user's body (for example, the user's finger) touches the surface of the touch screen, which is coated with a conductive material. A resistive touch screen comprises two electrode plates; when the user touches the screen, touch coordinate values are calculated by sensing the current that flows when the upper plate at the touch point contacts the lower plate. In addition, when the user equipment 1000 supports pen input, the touch screen 100 can sense user gestures made with an input means such as a pen in addition to the user's finger. When the input means is a stylus pen comprising a coil, the user equipment 1000 may comprise a magnetic sensor (not shown) for sensing a magnetic field that changes according to the proximity of the coil inside the stylus to the magnetic sensor. Thus, in addition to sensing touch gestures, the user equipment 1000 can also sense proximity gestures, i.e. the stylus hovering above the user equipment 1000.
The storage device 310 can store the various programs and data needed for the operation of the user equipment 1000. For example, the storage device 310 can store programs and data for forming the various screens to be displayed in each zone (for example, the A zone and C zone).
The controller 200 displays content in each zone of the touch screen 100 by using the programs and data stored in the storage device 310.
The controller 200 comprises a RAM 210, ROM 220, CPU 230, GPU (graphics processing unit) 240 and bus 250. The RAM 210, ROM 220, CPU 230 and GPU 240 can be connected to each other through the bus 250.
The CPU (processor) 230 accesses the storage device 310 and performs startup by using the operating system (OS) stored in the storage device 310. Moreover, the CPU 230 performs various operations by using the various programs, content and data stored in the storage device 310.
The ROM 220 stores a command set for system startup. When a start command is input and power is supplied, the CPU 230 copies the OS stored in the storage device 310 to the RAM 210 according to the command set stored in the ROM 220, and starts the system by running the OS. When startup is complete, the CPU 230 copies the various programs stored in the storage device 310 to the RAM 210, and performs various operations by running the copied programs in the RAM 210. Specifically, the GPU 240 can generate screens comprising various objects such as icons, images and text by using a calculator (not shown) and a renderer (not shown). The calculator calculates feature values such as the coordinate values, form, size and color with which each object is to be marked according to the layout of the screen.
The GPS chip 320 is a unit that receives GPS signals from GPS (global positioning system) satellites and calculates the current position of the user equipment 1000. When a navigation program is used or the user's current position is requested, the controller 200 can calculate the user's position by using the GPS chip 320.
The communicator 330 is a unit that performs communication with various types of external devices according to various types of communication methods. The communicator 330 comprises a WiFi chip 331, Bluetooth chip 332, wireless communication chip 333 and NFC chip 334. The controller 200 performs communication with various external devices by using the communicator 330.
The WiFi chip 331 and the Bluetooth chip 332 perform communication according to the WiFi method and the Bluetooth method respectively. When the WiFi chip 331 or the Bluetooth chip 332 is used, various link information such as a service set identifier (SSID) and a session key can first be transceived, communication can be connected by using the link information, and various information can then be transceived. The wireless communication chip 333 is a chip that performs communication according to various communication standards such as IEEE, Zigbee, 3G (third generation), 3GPP (third generation partnership project) and LTE (long term evolution). The NFC chip 334 is a chip that operates according to the NFC (near field communication) method using the 13.56 MHz band from among various RF-ID frequency bands such as 135 kHz, 13.56 MHz, 433 MHz, 860~960 MHz and 2.45 GHz.
The video processor 340 is a unit that processes the video data included in content received through the communicator 330 or in content stored in the storage device 310. The video processor 340 can perform various image processing on the video data, such as decoding, scaling, noise filtering, frame rate conversion and resolution conversion.
The audio processor 350 is a unit that processes the audio data included in content received through the communicator 330 or in content stored in the storage device 310. The audio processor 350 can perform various processing on the audio data, such as decoding, amplification and noise filtering.
When a playback program for multimedia content is run, the controller 200 can reproduce the corresponding content by driving the video processor 340 and the audio processor 350.
The loudspeaker 390 outputs the audio data generated in the audio processor 350.
The buttons 360 can be various types of buttons, such as mechanical buttons, or touch pads or touch wheels formed on certain regions of the front, side or back of the main body of the user equipment 1000.
The microphone 370 is a unit that receives the user's voice or other sounds and transforms them into audio data. The controller 200 can use the user's voice input through the microphone 370 during a call procedure, or transform it into audio data and store it in the storage device 310.
The camera 380 is a unit that captures still images or video images under the user's control. The camera 380 can be implemented as multiple units, such as a front camera and a back camera. As described below, the camera 380 can be used as a means of obtaining user images in exemplary embodiments that track the user's gaze.
When the camera 380 and the microphone 370 are provided, the controller 200 can execute control functions according to the user's voice input through the microphone 370 or the user's actions recognized through the camera 380. Therefore, the user equipment 1000 can operate in a motion control mode or a voice control mode. When operating in the motion control mode, the controller 200 photographs the user by activating the camera 380, tracks changes in the user's actions, and performs the corresponding operations. When operating in the voice control mode, the controller 200 can operate in a speech recognition mode to analyze the voice input through the microphone 370 and execute control functions according to the analyzed user voice.
In user equipment 1000 supporting the motion control mode or the voice control mode, speech recognition technology or motion recognition technology is used in the various exemplary embodiments described above. For example, when the user performs an action such as selecting an object marked on the home screen, or says a voice command corresponding to an object, it can be determined that the corresponding object has been selected, and the control operation matching this object can be performed.
The motion sensor 400 is a unit that senses the movement of the main body of the user equipment 1000. The user equipment 1000 can rotate or tilt in various directions. The motion sensor 400 can sense movement characteristics such as rotation direction, angle and slope by using one or more of various sensors such as a geomagnetic sensor, a gyro sensor and an acceleration sensor.
Moreover, although not shown in FIG. 10, according to exemplary embodiments the user equipment 1000 can also comprise a USB port to which a USB connector can be connected, various input ports for connecting various external components such as earphones, a mouse and a LAN, a DMB chip for receiving and processing DMB (digital multimedia broadcasting) signals, and various other sensors.
As mentioned above, the storage device 310 can store various programs.
In the user equipment shown in FIG. 10, the touch screen is configured to receive the user's input operation and convert the physical input into an electrical signal to produce an input event;
the processor comprises: a driver module, an application framework module and an application module;
wherein the driver module is configured to obtain the input events produced by the user through the input device and report them to the application framework module;
the application framework module is configured to judge whether an input event is an edge input event or a normal input event; if it is a normal input event, to process and recognize the normal input event and report the recognition result to the application module; if it is an edge input event, to process and recognize the edge input event and report the recognition result to the application module;
the application module is configured to execute the corresponding input instruction according to the reported recognition result.
It should be understood that the principles and details of the processing of edge input events and normal input events by the mobile terminal of the above embodiments are equally applicable to the user equipment of the embodiment of the present invention.
In the mobile terminal, input processing method and user equipment of the embodiments of the present invention, since the operation of distinguishing the A zone and the C zone is performed only at the application framework layer, and the virtual device is established at the application framework layer, the dependence on hardware caused by distinguishing the A zone and the C zone at the driver layer is avoided. By setting touch point numbers, fingers can be distinguished and both protocol A and protocol B are supported. The scheme can be integrated into the operating system of the mobile terminal, is applicable to different hardware and different types of mobile terminal, and has good portability. All elements of the touch points (coordinates, numbers, etc. of the touch points) are stored, which facilitates subsequent edge-input judgments (for example, FIT).
Those skilled in the art will further recognize that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are performed in hardware or in software depends on the particular application and the design constraints of the technical scheme. Skilled persons can use different methods to realize the described functions for each particular application, but such realization should not be considered to exceed the scope of the present invention.
The steps of the methods described in connection with the embodiments disclosed herein can be implemented with hardware, with software modules executed by a processor, or with a combination of the two. Software modules can be placed in random access memory (RAM), internal memory, read-only memory (ROM), electrically programmable ROM, electrically erasable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above embodiments, which are only illustrative rather than restrictive. Under the inspiration of the present invention, those of ordinary skill in the art can make many further forms without departing from the scope protected by the inventive concept and the claims, all of which belong to the protection of the present invention.

Claims (10)

1. A gesture calibration method, applied to edge interaction of a mobile terminal, characterized in that the method comprises:
starting an edge gesture calibration mode;
repeatedly inputting a certain edge gesture operation according to a preset number of times, and collecting and recording the feature values of the edge gesture operation of each input;
calculating the mean value of the feature values of the edge gesture operation recorded each time, to obtain calibration data.
2. The gesture calibration method according to claim 1, characterized in that obtaining the input edge gesture, and collecting and recording the feature values corresponding to the edge gesture according to the preset number of times, comprises the steps of:
the driver layer obtaining a gesture input event and reporting it to the application framework layer; the application framework layer judging whether the gesture input event is an edge gesture and, when the gesture input event is an edge gesture, reporting the judgment result to the application layer;
the application layer collecting the user's edge gesture according to the judgment result, and collecting the feature values of the edge gesture according to the preset number of times.
3. The gesture calibration method according to claim 1, characterized in that the edge gesture operation is a grip operation, the feature values of the edge gesture operation comprise the coordinate values corresponding to the fingers, and calculating the mean value of the feature values of the edge gesture operation recorded each time to obtain calibration data comprises the steps of:
obtaining respectively the coordinate value corresponding to each finger in the edge gesture operation;
calculating the mean value of the coordinate values corresponding to each finger, to obtain the calibration data of the edge gesture.
4. The gesture calibration method according to claim 1, characterized in that the edge gesture operation is a slide operation, the feature values of the edge gesture operation comprise the starting point coordinate value and the end point coordinate value of the slide operation, and calculating the mean value of the feature values of the edge gesture operation recorded each time to obtain calibration data comprises the steps of:
obtaining respectively the starting point coordinate value and the end point coordinate value of the finger slide operation in the edge gesture operation;
calculating the mean value of the starting point coordinate values and of the end point coordinate values respectively, to obtain the calibration data of the edge gesture.
5. The gesture calibration method according to claim 3 or 4, characterized in that, after calculating the mean value of the feature values corresponding to the edge gesture recorded each time to obtain calibration data, the method further comprises the step of:
determining the hot-spot region of the corresponding edge gesture operation according to the calibration data of the edge gesture, storing the calibration data in a database, and establishing a correspondence with the user.
6. A gesture calibration apparatus, applied to edge interaction of a mobile terminal, characterized by comprising: a start module, an acquisition module and a processing module, wherein,
the start module is configured to open the edge gesture calibration mode of the mobile terminal;
the acquisition module is configured to repeatedly input a certain edge gesture operation according to a preset number of times, and collect and record the feature values of the edge gesture operation of each input;
the processing module is configured to calculate the mean value of the feature values of the edge gesture operation recorded each time, to obtain calibration data.
7. The gesture calibration apparatus according to claim 6, characterized in that the acquisition module further comprises:
a driver layer, configured to obtain a gesture input event and report it to the application framework layer, the application framework layer judging whether the gesture input event is an edge gesture and, when the gesture input event is an edge gesture, reporting the judgment result to the application layer;
an application layer, configured to collect the user's edge gesture according to the judgment result, and to collect the feature values of the edge gesture according to the preset number of times.
8. The gesture calibration apparatus according to claim 6, characterized in that the processing module further comprises:
a first processing unit, configured to obtain respectively the feature value corresponding to each finger in the edge gesture;
a second processing unit, configured to calculate the mean value of the feature values corresponding to each finger, to obtain the calibration data of the edge gesture.
9. The gesture calibration apparatus according to claim 6, characterized in that the apparatus further comprises:
a memory module, configured to determine the hot-spot region of the corresponding edge gesture operation according to the calibration data of the edge gesture, to store the calibration data in a database, and to establish a correspondence with the user.
10. A gesture input processing method, applied to edge interaction of a mobile terminal, characterized by involving: an input device, a driver layer, an application framework layer and an application layer, wherein,
the driver layer obtains a gesture input event produced by the user through the input device and reports it to the application framework layer;
the application framework layer judges whether the gesture input event is an edge gesture and, when the gesture input event is an edge gesture, reports the judgment result to the application layer;
the application layer collects the user's edge gesture according to the judgment result, and collects the input information of the edge gesture according to the preset number of times.
CN201510846257.3A 2015-11-27 2015-11-27 Gesture correction method, apparatus and gesture input processing method Pending CN105573545A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201510846257.3A CN105573545A (en) 2015-11-27 2015-11-27 Gesture correction method, apparatus and gesture input processing method
PCT/CN2016/106167 WO2017088694A1 (en) 2015-11-27 2016-11-16 Gesture calibration method and apparatus, gesture input processing method and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510846257.3A CN105573545A (en) 2015-11-27 2015-11-27 Gesture correction method, apparatus and gesture input processing method

Publications (1)

Publication Number Publication Date
CN105573545A true CN105573545A (en) 2016-05-11

Family

ID=55883759

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510846257.3A Pending CN105573545A (en) 2015-11-27 2015-11-27 Gesture correction method, apparatus and gesture input processing method

Country Status (2)

Country Link
CN (1) CN105573545A (en)
WO (1) WO2017088694A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106527953A (en) * 2016-11-28 2017-03-22 努比亚技术有限公司 Mobile terminal and frame gesture operation method
WO2017088694A1 (en) * 2015-11-27 2017-06-01 努比亚技术有限公司 Gesture calibration method and apparatus, gesture input processing method and computer storage medium
CN109002215A (en) * 2018-07-27 2018-12-14 青岛海信移动通信技术股份有限公司 A kind of terminal with touch screen determines the method and terminal of touch initial position
WO2022199312A1 (en) * 2021-03-24 2022-09-29 Oppo广东移动通信有限公司 Gesture data acquisition method and apparatus, terminal, and storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113919390A (en) * 2021-09-29 2022-01-11 华为技术有限公司 Method for identifying touch operation and electronic equipment

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101482785A (en) * 2008-01-04 2009-07-15 苹果公司 Selective rejection of touch contacts in an edge region of a touch surface
CN101853133A (en) * 2010-05-31 2010-10-06 中兴通讯股份有限公司 Method and mobile terminal for automatically recognizing gestures
CN101882043A (en) * 2010-06-08 2010-11-10 苏州瀚瑞微电子有限公司 Method for improving touch precision of edge of capacitance type touch screen
CN102591567A (en) * 2011-01-06 2012-07-18 捷讯研究有限公司 Electronic device and method of controlling same
CN103034367A (en) * 2012-12-27 2013-04-10 杭州士兰微电子股份有限公司 Calibration method for touch screen
WO2013054516A1 (en) * 2011-10-14 2013-04-18 パナソニック株式会社 Input device, information terminal, input control method, and input control program
CN103558951A (en) * 2012-05-07 2014-02-05 瑟克公司 Method for distinguishing between edge swipe gestures that enter a touch sensor from an edge and other similar but non-edge swipe actions
CN103562838A (en) * 2011-05-27 2014-02-05 微软公司 Edge gesture
CN103809735A (en) * 2012-11-12 2014-05-21 腾讯科技(深圳)有限公司 Gesture recognition method and gesture recognition device
CN104335150A (en) * 2012-04-25 2015-02-04 Fogale纳米技术公司 Method for interacting with an apparatus implementing a capacitive control surface, interface and apparatus implementing this method
CN104601791A (en) * 2013-10-31 2015-05-06 大连易维立方技术有限公司 Method for identifying mobile phone operation gestures
CN104735256A (en) * 2015-03-27 2015-06-24 努比亚技术有限公司 Method and device for judging holding mode of mobile terminal
CN104777948A (en) * 2014-01-13 2015-07-15 上海和辉光电有限公司 Method and device for improving accuracy of edge coordinates of projection-type capacitive touch panel
CN105487705A (en) * 2015-11-20 2016-04-13 努比亚技术有限公司 Mobile terminal, input processing method and user equipment

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5594469A (en) * 1995-02-21 1997-01-14 Mitsubishi Electric Information Technology Center America Inc. Hand gesture machine control system
KR102358378B1 (en) * 2005-03-04 2022-02-08 애플 인크. Multi-functional hand-held device
US8432368B2 (en) * 2010-01-06 2013-04-30 Qualcomm Incorporated User interface methods and systems for providing force-sensitive input
CN102622225B (en) * 2012-02-24 2015-01-14 Hefei University of Technology Multipoint touch application program development method supporting user defined gestures
CN105573545A (en) * 2015-11-27 2016-05-11 努比亚技术有限公司 Gesture correction method, apparatus and gesture input processing method

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017088694A1 (en) * 2015-11-27 2017-06-01 努比亚技术有限公司 Gesture calibration method and apparatus, gesture input processing method and computer storage medium
CN106527953A (en) * 2016-11-28 2017-03-22 Nubia Technology Co Ltd Mobile terminal and bezel gesture operation method
CN109002215A (en) * 2018-07-27 2018-12-14 Hisense Mobile Communications Technology Co Ltd Method for determining touch initial position of terminal with touch screen and terminal
CN109002215B (en) * 2018-07-27 2021-03-19 Hisense Mobile Communications Technology Co Ltd Method for determining touch initial position of terminal with touch screen and terminal
WO2022199312A1 (en) * 2021-03-24 2022-09-29 Guangdong Oppo Mobile Telecommunications Corp Ltd Gesture data acquisition method and apparatus, terminal, and storage medium

Also Published As

Publication number Publication date
WO2017088694A1 (en) 2017-06-01

Similar Documents

Publication Publication Date Title
US20180364865A1 (en) Touch control method, user equipment, input processing method, mobile terminal and intelligent terminal
US10397649B2 (en) Method of zooming video images and mobile display terminal
US20170322713A1 (en) Display apparatus and method for controlling the same and computer-readable recording medium
CN105487705B (en) Mobile terminal, input processing method and user equipment
US20200371685A1 (en) Graphical User Interface Display Method And Electronic Device
CN105573545A (en) Gesture correction method, apparatus and gesture input processing method
CN107045420A Method for switching application programs, mobile terminal, and storage medium
KR101990567B1 (en) Mobile apparatus coupled with external input device and control method thereof
CN107562361 Message processing method, device and terminal
US20150143291A1 (en) System and method for controlling data items displayed on a user interface
US10067666B2 (en) User terminal device and method for controlling the same
KR20170124933A (en) Display apparatus and method for controlling the same and computer-readable recording medium
CN107704189A Method for controlling a terminal, terminal and computer-readable storage medium
CN102385452A (en) Information processing apparatus, parameter setting method, and program
CN104954549B Electronic device and communication method
US20180181288A1 (en) Method for displaying information, and terminal equipment
CN103927080A (en) Method and device for controlling control operation
CN106648427A (en) Setting device and method for single hand mode of terminal
WO2012095058A1 (en) Touch screen and input control method
CN106445354A (en) Terminal equipment touch control method and terminal equipment touch control device
US11853543B2 (en) Method and apparatus for controlling display of video call interface, storage medium and device
CN108228034 Control method of mobile terminal, mobile terminal and computer-readable storage medium
CN109800045A Display method and terminal
CN107704190A (en) Gesture identification method, device, terminal and storage medium
CN106572207A Device and method for identifying one-handed mode of terminal

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20160511)