WO2017088694A1 - Gesture calibration method and apparatus, gesture input processing method and computer storage medium - Google Patents

Gesture calibration method and apparatus, gesture input processing method and computer storage medium

Info

Publication number
WO2017088694A1
Authority
WO
WIPO (PCT)
Prior art keywords
gesture
edge
edge gesture
input
calibration data
Prior art date
Application number
PCT/CN2016/106167
Other languages
French (fr)
Chinese (zh)
Inventor
李鑫
朱冰
Original Assignee
努比亚技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 努比亚技术有限公司 filed Critical 努比亚技术有限公司
Publication of WO2017088694A1 publication Critical patent/WO2017088694A1/en

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/041 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F 3/0416 - Control or interface arrangements specially adapted for digitisers
    • G06F 3/0418 - Control or interface arrangements specially adapted for digitisers for error correction or compensation, e.g. based on parallax, calibration or alignment

Definitions

  • the present invention relates to the field of mobile terminal technologies, and in particular, to a gesture calibration method and apparatus, a gesture input processing method, a terminal, and a computer storage medium.
  • mobile phones and other mobile terminals now offer more and more functions.
  • Most of the entertainment functions available on computers can also be realized on mobile terminals: people can watch movies, play games, browse web pages, video chat, and more on them.
  • mobile terminals therefore tend toward larger and larger screens.
  • however, the physical size of a mobile terminal cannot be increased indefinitely, so its external dimensions must be used fully to increase screen utilization; hence narrow-border and even borderless mobile terminals have appeared.
  • the narrow-border or borderless mobile terminal makes full use of the external dimensions of the device, greatly expands its screen size, satisfies the user's demand for a large screen, and at the same time diversifies edge input operations.
  • because of individual differences between users, such as palm size and finger pressing strength, the range of contact with the edge of the screen varies greatly when a narrow-border or borderless mobile terminal is held, resulting in non-standard edge gestures and a low recognition rate.
  • the technical problem to be solved by the embodiments of the present invention is to calibrate the edge interaction gestures of a mobile terminal. To this end, a gesture calibration method, an apparatus, a gesture input processing method, a terminal, and a computer storage medium are provided, the method including:
  • the average value is calculated from the feature values of the edge gesture operation recorded each time, and calibration data is obtained.
  • acquiring the input edge gesture, and collecting and recording the feature values corresponding to the edge gesture according to a preset number of times, includes:
  • the driver layer obtains the gesture input event and reports it to the application framework layer; the application framework layer determines whether the gesture input event is an edge gesture, and when the gesture input event is an edge gesture, the judgment result is reported to the application layer;
  • the application layer collects the edge gesture of the user according to the judgment result, and collects the feature value of the edge gesture according to a preset number of times.
  • the edge gesture operation is a holding operation
  • the feature value of the edge gesture operation includes a coordinate value corresponding to the finger
  • calculating an average value according to the feature values of the edge gesture operation recorded each time to obtain calibration data includes:
  • An average value is calculated for the coordinate values corresponding to each finger to obtain calibration data of the edge gesture.
  • the edge gesture operation is a sliding operation
  • the feature values of the edge gesture operation include a start point coordinate value and an end point coordinate value of the slide operation
  • calculating an average value according to the feature values of the edge gesture operation recorded each time to obtain calibration data includes:
  • the average value is calculated for the starting point coordinate value and the ending coordinate value, respectively, and the calibration data of the edge gesture is obtained.
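Both averaging steps in the claims above (per-finger coordinates for a grip, start and end coordinates for a slide) reduce to the same arithmetic. The sketch below illustrates the grip case; the function name and sample layout are assumptions for illustration, not anything specified in the patent, and the slide variant would average the start and end coordinate pairs the same way.

```python
from statistics import mean

def calibrate_grip(samples):
    """Average each finger's contact coordinates over repeated grip inputs.

    samples: list of grip recordings; each recording is a list of (x, y)
    contact points, one per finger, in a consistent finger order.
    Returns one averaged (x, y) point per finger -- the calibration data.
    """
    n_fingers = len(samples[0])
    return [
        (mean(rec[i][0] for rec in samples), mean(rec[i][1] for rec in samples))
        for i in range(n_fingers)
    ]
```

Averaging over repetitions smooths out the natural variation in where the user's fingers land, which is exactly what makes the resulting values usable as calibration data.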
  • the method further includes:
  • the embodiment of the invention further provides a gesture calibration device, which is applied to the edge interaction of the mobile terminal.
  • the apparatus includes: a startup module, an acquisition module, and a processing module, where
  • the startup module is configured to enable the mobile terminal to turn on an edge gesture calibration mode
  • the acquisition module is configured to acquire the edge gesture operation repeatedly input according to a preset number of times, and to collect and record the feature values of the edge gesture operation input each time;
  • a processing module configured to calculate an average value according to the feature value of the edge gesture operation recorded each time to obtain calibration data.
  • the acquiring module further includes:
  • the driver layer is configured to obtain a gesture input event and report it to the application framework layer; the application framework layer determines whether the gesture input event is an edge gesture and, when it is, reports the judgment result to the application layer;
  • the application layer is configured to collect an edge gesture of the user according to the determination result, and collect the feature value of the edge gesture according to a preset number of times.
  • the processing module further includes:
  • a first processing unit configured to respectively acquire feature values corresponding to each finger in the edge gesture
  • the second processing unit is configured to calculate an average value of the feature values corresponding to each finger to obtain calibration data of the edge gesture.
  • the apparatus further includes:
  • a storage module configured to determine a hotspot area corresponding to the edge gesture operation according to the calibration data of the edge gesture, and store the calibration data in a database to establish a correspondence relationship with the user.
  • when performing processing, the startup module, the acquisition module, the processing module, the first processing unit, the second processing unit, and the storage module may be implemented by a central processing unit (CPU), a digital signal processor (DSP), or a field-programmable gate array (FPGA).
  • the embodiment of the present invention further provides a gesture input processing method, which is applied to an edge interaction of a mobile terminal, and includes: an input device, a driver layer, an application framework layer, and an application layer, where
  • the driver layer acquires a gesture input event generated by the user through the input device, and reports the event to the application framework layer;
  • the application framework layer determines whether the gesture input event is an edge gesture, and when the gesture input event is an edge gesture, the judgment result is reported to the application layer;
  • the application layer collects the edge gesture of the user according to the judgment result, and collects the input information of the edge gesture according to a preset number of times.
  • the embodiment of the invention further provides a terminal, where the terminal includes:
  • a storage medium configured to store computer executable instructions
  • a processor configured to execute computer executable instructions stored on the storage medium, the computer executable instructions comprising:
  • the average value is calculated from the feature values of the edge gesture operation recorded each time, and calibration data is obtained.
  • the processor is configured to execute computer executable instructions stored on the storage medium, the computer executable instructions further comprising:
  • the driver layer obtains the gesture input event and reports it to the application framework layer; the application framework layer determines whether the gesture input event is an edge gesture, and when the gesture input event is an edge gesture, the judgment result is reported to the application layer;
  • the application layer collects the edge gesture of the user according to the judgment result, and collects the feature value of the edge gesture according to a preset number of times.
  • the processor is configured to execute computer executable instructions stored on the storage medium, the computer executable instructions further comprising:
  • An average value is calculated for the coordinate values corresponding to each finger to obtain calibration data of the edge gesture.
  • the processor is configured to execute computer executable instructions stored on the storage medium, the computer executable instructions further comprising:
  • the average value is calculated for the starting point coordinate value and the ending coordinate value, respectively, and the calibration data of the edge gesture is obtained.
  • the processor is configured to execute computer executable instructions stored on the storage medium, the computer executable instructions further comprising:
  • the embodiment of the present invention further provides a computer storage medium, where the computer storage medium stores computer executable instructions, and the computer executable instructions include:
  • the average value is calculated from the feature values of the edge gesture operation recorded each time, and calibration data is obtained.
  • the computer executable instructions further include:
  • the driver layer obtains the gesture input event and reports it to the application framework layer; the application framework layer determines whether the gesture input event is an edge gesture, and when the gesture input event is an edge gesture, the judgment result is reported to the application layer;
  • the application layer collects the edge gesture of the user according to the judgment result, and collects the feature value of the edge gesture according to a preset number of times.
  • the computer executable instructions further include:
  • An average value is calculated for the coordinate values corresponding to each finger to obtain calibration data of the edge gesture.
  • the computer executable instructions further include:
  • the average value is calculated for the starting point coordinate value and the ending coordinate value, respectively, and the calibration data of the edge gesture is obtained.
  • the computer executable instructions further include:
  • the gesture calibration method provided by the embodiments of the present invention has the following beneficial effects: it solves the problem that the recognition rate of edge input gestures is low due to individual differences or personal operating habits, allows gestures to be recognized accurately within the user's gesture-operation hot zone, facilitates the user's edge input operations, and enhances the user experience.
  • FIG. 1 is a schematic diagram of screen area division of a mobile terminal according to an embodiment of the present invention.
  • FIG. 2 is a flowchart of a gesture calibration method according to an embodiment of the present invention.
  • FIG. 3 is a flow chart of a method for calibrating a one-handed gesture in an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of a single-hand grip gesture information collection interface according to an embodiment of the present invention.
  • FIG. 5 is a flowchart of a method for calibrating a right edge sliding gesture according to an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of a right edge sliding gesture information collection interface according to an embodiment of the present invention.
  • FIG. 7 is a structural block diagram of a gesture calibration apparatus according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of a software architecture of a mobile terminal according to an embodiment of the present invention.
  • FIG. 9 is a flowchart of an input processing method according to an embodiment of the present invention.
  • FIG. 10 is a schematic structural diagram of hardware of a user equipment according to an embodiment of the present invention.
  • the mobile terminal can be implemented in various forms.
  • the terminals described in the present invention may include mobile terminals such as mobile phones, smart phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs), and navigation devices, as well as fixed terminals such as digital TVs and desktop computers.
  • the terminal is a mobile terminal.
  • configurations in accordance with embodiments of the present invention can also be applied to fixed-type terminals, except for components intended specifically for mobile use.
  • FIG. 1 is a schematic diagram of screen area division of a mobile terminal according to an embodiment of the present invention.
  • FIG. 1 shows a narrow-border mobile terminal.
  • the screen of the mobile terminal 100 is divided as follows: the C area (gray area) is the edge input area, the A area is the normal touch area, and the B area is a non-display area.
  • FIG. 1 shows only one specific application scenario; in practice, the embodiments are also applicable to a borderless mobile terminal. Because a borderless terminal has no border, compared with the narrow-border case its edge input area extends the A area outward to the side of the terminal, the side being the edge of the terminal housing itself.
  • that is, the edge input area is an area extending from the screen area to the side of the terminal edge. Because the embodiments of the invention are applicable to narrow-border or borderless mobile terminals, they make full use of the external dimensions of the mobile terminal, greatly expand its screen size, and satisfy the user's demand for a large screen, while the gesture calibration of the edge input area diversifies edge input operations.
  • depending on the applicable scenario, the B area may be either a non-touch area or a touch area; in both cases it is collectively referred to as the non-display area.
  • when the B area is a touch area, it differs from the A and C areas only in its operation mode: the B area displays no image, but virtual function keys, such as a home key, can be set there. When the B area is a non-touch area, physical function buttons can be set in it.
  • the touch operation in the area A is processed according to the existing normal processing manner. For example, clicking an application icon in the area A opens the application.
  • touch operations in the C area can be defined as edge touch processing; for example, bilateral sliding in the C area can be defined to trigger terminal acceleration.
  • the B area is a non-display area.
  • the B area may be provided with a button area, an earpiece, and the like.
  • the C area may be divided in a fixed manner or in a customized manner.
  • Fixed division: a fixed-length, fixed-width area of the mobile terminal's screen is set as the C area.
  • the C area may include a partial area on the left side of the screen and a partial area on the right side, fixedly located at both side edges of the mobile terminal, as shown in FIG. 1.
  • the C area can also be divided at only one side edge of the mobile terminal.
  • Customized division: the number, position, and size of the C areas can be customized, for example set by the user, or adjusted by the mobile terminal according to its own needs.
  • the basic shape of a C area is a rectangle, and its position and size can be determined from the coordinates of two diagonal vertices of the rectangle.
  • the embodiment of the present invention does not limit the division and setting manner of the C area.
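As an illustration of the custom division just described, a C-area rectangle can be built from the coordinates of two diagonal vertices and used as a hit test for touch points. This is a hedged sketch; the helper name and the screen and strip dimensions in the example are invented for illustration.

```python
def make_edge_region(vertex_a, vertex_b):
    """Build a rectangular C area from two diagonal vertices and return
    a predicate that tests whether a touch point lies inside it."""
    (x1, y1), (x2, y2) = vertex_a, vertex_b
    left, right = min(x1, x2), max(x1, x2)
    top, bottom = min(y1, y2), max(y1, y2)

    def contains(x, y):
        return left <= x <= right and top <= y <= bottom

    return contains

# Example: a 40-px-wide right-edge strip on a 1080x1920 screen
# (screen and strip sizes are purely illustrative).
in_right_c = make_edge_region((1040, 0), (1080, 1920))
```

Returning a closure keeps the division data (the rectangle) separate from the event-handling code that only needs the yes/no answer.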
  • FIG. 2 is a flowchart of a gesture calibration method according to an embodiment of the present invention.
  • the gesture calibration method according to an embodiment of the present invention includes:
  • the edge gesture calibration mode is activated.
  • the mobile terminal device has an edge gesture calibration function. Under normal circumstances, the edge gesture calibration function is in an off state.
  • when the edge gesture calibration mode is activated, the mobile terminal can collect the user's edge gestures. Before the user first uses gesture operations, the user's input gesture information needs to be collected according to the user's operating habits.
  • the user's gesture types include, but are not limited to: left edge slide up, left edge slide down, right edge slide up, right edge slide down, bilateral slide up, bilateral slide down, four-corner grip of the terminal screen, single-edge back-and-forth slide, one-handed grip, and so on.
  • S20 repeatedly inputting an edge gesture operation according to a preset number of times, and collecting feature values of the edge gesture operation input each time and recording.
  • the driver layer retrieves input events generated by the user through the input device, such as input operation events through the touch screen.
  • the input events include a normal input event (A zone input event) and an edge input event (C zone input event).
  • Normal input events include input operations such as click, double click, and slide in Area A.
  • Edge input events include input operations in the C area such as: left edge slide up, left edge slide down, right edge slide up, right edge slide down, bilateral slide up, bilateral slide down, four-corner grip of the phone, single-edge back-and-forth slide, and one-handed grip.
  • the application framework layer determines whether the gesture input event is an edge gesture. When it is, the judgment result is reported to the application layer; the application layer then collects the user's edge gesture according to the judgment result and collects the feature values of the edge gesture according to the preset number of times.
  • specifically, the driver layer acquires an input event generated by the user through the input device and determines whether it is an edge input event (C-area input event); if it is an edge gesture input event, it is reported to the application framework layer and then forwarded to the application layer.
  • the application layer collects the user's edge gesture according to the judgment result, and collects the feature value of the edge gesture according to the preset number of times.
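The application-layer collection step above (repeat a gesture a preset number of times and record the feature values of each input) can be sketched as follows. The callable used to obtain one confirmed gesture is a hypothetical stand-in for the event path through the driver and framework layers, not an API the patent defines.

```python
def collect_calibration_samples(read_edge_gesture, preset_count=5):
    """Record the feature values of `preset_count` repetitions of an
    edge gesture, mirroring the application-layer collection step.

    read_edge_gesture: callable returning the feature values of one
    confirmed edge-gesture input (hypothetical injection point for the
    judgment result reported by the application framework layer).
    """
    return [read_edge_gesture() for _ in range(preset_count)]
```

The recorded list is then handed to the averaging step to produce calibration data.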
  • the edge gesture feature values include: the coordinate values of the contact positions of the fingers with the edge of the screen, and/or the pressure value and time value of each finger press. A hotspot area corresponding to the edge gesture operation is then determined according to the calibration data of the edge gesture, and the calibration data is stored in a database to establish a correspondence with the user.
  • when the hotspot area receives a gesture operation signal, the detection conditions of the area are relaxed; for example, the detection threshold for detected pixel points or the detection threshold for pressing force is lowered.
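One way the lowered detection condition could look in code is sketched below; the threshold values and the simple pressure model are illustrative assumptions, not values from the patent.

```python
def accept_press(point, pressure, hotspot_contains,
                 normal_threshold=0.5, hotspot_threshold=0.3):
    """Accept or reject a press, relaxing the pressure threshold inside
    the calibrated hotspot area.  Thresholds are illustrative
    placeholders; hotspot_contains is a predicate over (x, y) points."""
    threshold = hotspot_threshold if hotspot_contains(point) else normal_threshold
    return pressure >= threshold
```

Lowering the threshold only inside the calibrated hotspot is what raises the recognition rate for that user without making the whole screen over-sensitive.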
  • FIG. 3 is a flowchart of a method for calibrating a one-handed gesture according to an embodiment of the present invention.
  • the gesture calibration method of the embodiment of the invention includes:
  • the edge gesture calibration mode is activated.
  • the mobile terminal device has an edge gesture calibration function. Under normal circumstances, the edge gesture calibration function is in an off state. When the edge gesture calibration mode is activated, the mobile terminal can collect the user's edge gesture. Before the user first uses the one-hand gesture operation, the input information of the user gesture needs to be collected according to the user's operating habits.
  • S12 Acquire a one-handed gesture of the user, and collect the feature value corresponding to the one-handed gesture according to the preset number of times and record.
  • FIG. 4 is a schematic diagram of a single-hand grip gesture information collection interface according to an embodiment of the present invention.
  • the mobile terminal presents a setting interface to the user; the user clicks the setting interface to enter gesture information setting, and gesture feature values are then collected according to the user's personal touch area or holding manner.
  • the shaded portion of the C area indicates the touch area when the user holds the one hand.
  • the user performs a one-hand grip operation according to the prompt of the mobile terminal and repeats the steps.
  • each time the mobile terminal receives the one-hand grip gesture signal, it records the positions where the user's fingers press the touch area.
  • the input information of the gesture also includes the touch point areas S_C1, S_C2, S_C3, ...
  • by averaging over the repeated grips, the calibration data of each touch point area S_Ci can be obtained, and the calibration data of the pressure value and time value of each touch point is obtained from the average of the pressure values and time values of the finger presses recorded each time the terminal is held with one hand.
  • S13 Calculate an average value according to the feature values corresponding to the one-handed grip gesture recorded each time, to obtain the calibration data of the one-handed grip gesture.
  • the hotspot area of the edge gesture operation is determined according to the calibration data of the one-handed gesture, and the calibration data is stored in a database to establish a correspondence with the user.
  • when the hotspot area receives a gesture operation signal, the detection conditions of the area are relaxed; for example, the detection threshold for detected pixel points or the detection threshold for pressing force is lowered.
  • the embodiment of the present invention does not limit the number of finger pressing regions.
  • the edge input area of the mobile terminal is customized according to the gesture operation habit of the user.
  • because of individual differences or personal operating habits, the position, area, and pressing force of the shaded portion of the C area differ from user to user.
  • by collecting the user's gesture features or habits, the hot spot area of the user's gesture operation is determined, and the gesture recognition rate is improved by relaxing the detection conditions of the hot spot area, thereby facilitating the user's edge input operations and enhancing the user experience.
  • FIG. 5 is a flowchart of a method for calibrating a right edge sliding gesture according to an embodiment of the present invention.
  • the gesture calibration method of the embodiment of the invention includes:
  • the mobile terminal device has an edge gesture calibration function; under normal circumstances, the edge gesture calibration function is in an off state.
  • when the edge gesture calibration mode is activated, the mobile terminal can collect the user's edge gestures. Before the user first uses the right edge sliding gesture operation, the user's input gesture information needs to be collected according to the user's operating habits.
  • S22 Acquire a right edge sliding gesture of the user, and collect feature values corresponding to the right edge sliding gesture according to the preset number of times and record.
  • the right edge sliding gesture is taken as an example to illustrate the process of collecting gesture input information.
  • the shaded portion of the C area represents the user's sliding area. The user repeats the right edge sliding operation according to the prompt of the mobile terminal; each time the mobile terminal receives the right edge sliding gesture signal, it records the start position coordinate A (downX, downY), the end position coordinate B (currentX, currentY), and the sliding time downTime of the user's finger press in the touch area.
  • the feature values of the right edge sliding gesture are collected: the starting position coordinate A (downX, downY) and the ending position coordinate B (currentX, currentY) of the finger contacting the right edge of the screen.
  • the average of the start position coordinates over all of the user's sliding operations is taken as the calibration data of the start position of the sliding operation.
  • similarly, the calibration data of the end position of the sliding operation is obtained, and the calibration data of the sliding speed is obtained from the average of the speed of each sliding operation.
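The start-position, end-position, and speed averaging just described can be sketched as follows. The record layout reuses the field names downX, downY, currentX, currentY, and downTime from the text; treating downTime as the slide duration in seconds is an assumption made for the example.

```python
from math import hypot
from statistics import mean

def slide_calibration(samples):
    """Derive calibration data for an edge sliding gesture from repeated
    inputs.  Each sample is (downX, downY, currentX, currentY, downTime).
    Returns the averaged start point, end point, and sliding speed."""
    start = (mean(s[0] for s in samples), mean(s[1] for s in samples))
    end = (mean(s[2] for s in samples), mean(s[3] for s in samples))
    # Per-sample speed: straight-line slide distance divided by duration.
    speed = mean(hypot(s[2] - s[0], s[3] - s[1]) / s[4] for s in samples)
    return start, end, speed
```

The three averaged quantities together characterize the user's habitual right-edge slide and can be stored as that user's calibration record.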
  • the calibration data generated by the gesture is used as calibration data of the right edge sliding gesture, and the hotspot region of the edge gesture operation is determined according to the calibration data of the right edge sliding gesture, and the calibration data is stored in a database to establish a correspondence with the user. .
  • the detection condition of the area is lowered, for example, the detection threshold of the detection pixel point or the detection threshold of the pressing force is lowered.
  • the edge input area of the mobile terminal is customized according to the gesture operation habits of the user. As shown in FIG. 6, the shaded portion of the C area (gray area) may differ because of individual user differences or personal operating habits. By collecting the user's gesture features or habits, the user's gestures are calibrated, which facilitates the user's edge input operations and enhances the user experience.
  • FIG. 7 is a structural diagram of a gesture calibration apparatus according to an embodiment of the present invention.
  • the gesture calibration apparatus of the embodiment of the invention includes:
  • the startup module 11 is configured to turn on the edge gesture calibration mode of the mobile terminal.
  • the mobile terminal device has an edge gesture calibration function. Under normal circumstances, the edge gesture calibration function is in an off state.
  • the edge gesture calibration mode is activated, the mobile terminal can collect the user's edge gesture. Before the user first uses the gesture operation, the user input gesture information needs to be collected according to the user's operation habits.
  • the user's gesture types include, but are not limited to: left edge slide up, left edge slide down, right edge slide up, right edge slide down, bilateral slide up, bilateral slide down, four-corner grip of the terminal screen, single-edge back-and-forth slide, one-handed grip, and so on.
  • the obtaining module 12 is configured to repeatedly input an edge gesture operation according to a preset number of times, and collect feature values of the edge gesture operation input each time and record.
  • the driver layer retrieves input events generated by the user through the input device, such as input operation events through the touch screen.
  • the input events include a normal input event (A zone input event) and an edge input event (C zone input event).
  • Normal input events include input operations such as click, double click, and slide in Area A.
  • Edge input events include input operations in the C area such as: left edge slide up, left edge slide down, right edge slide up, right edge slide down, bilateral slide up, bilateral slide down, four-corner grip of the phone, single-edge back-and-forth slide, and one-handed grip.
  • the application framework layer determines whether the gesture input event is an edge gesture; when it is, the judgment result is reported to the application layer, and the application layer collects the user's edge gesture according to the judgment result and collects the feature values of the edge gesture according to the preset number of times.
  • the processing module 13 is configured to calculate an average value according to the feature value of the edge gesture operation recorded each time to obtain calibration data.
  • the processing module 13 further includes:
  • the first processing unit 130 is configured to acquire feature values corresponding to each finger in the edge gesture respectively.
  • the data of the input information of the one-handed grip gesture includes the finger press position coordinates C1(downX1, downY1), C2(downX2, downY2), C3(downX3, downY3), C4(downX4, downY4), C5(downX5, downY5).
  • the second processing unit 131 is configured to calculate an average value of the feature values corresponding to each finger to obtain calibration data of the edge gesture.
  • the average of the finger press position coordinates recorded for each one-handed grip is taken as the calibration data of the one-handed finger press positions; the calibration data is stored in the database and is configured to establish a correspondence between the gesture and the usage mode according to the user's operating habits.
  • the feature values of the gesture may also include the touch point areas SC1, SC2, SC3, and so on; the calibration data of each touch point area SCi can be obtained in the same way, and the calibration data of the pressing force of each touch point is obtained from the average of the finger pressing forces over the repeated one-handed holds.
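A hedged sketch of the per-finger averaging just described, assuming each recorded one-handed hold yields the five press coordinates C1..C5 as (downX, downY) pairs (the function name is hypothetical); touch-point area and pressing force can be averaged in the same way:

```python
def calibrate_hold(records):
    """records: list of recorded one-handed holds, each a list of five
    (downX, downY) tuples, one per finger C1..C5. Returns the averaged
    press position per finger, i.e. the calibration data."""
    n = len(records)
    calibration = []
    for finger in range(5):  # fingers C1..C5
        avg_x = sum(rec[finger][0] for rec in records) / n
        avg_y = sum(rec[finger][1] for rec in records) / n
        calibration.append((avg_x, avg_y))
    return calibration
```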
  • the storage module 14 is configured to determine a hotspot area corresponding to the edge gesture operation according to the calibration data of the edge gesture, and store the calibration data in a database to establish a correspondence relationship with the user.
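One possible shape of this storage step, sketched under assumptions not stated in the patent (the hotspot area is taken here as a bounding box around the calibrated press positions with an arbitrary tolerance, and SQLite stands in for the database):

```python
import sqlite3

TOLERANCE = 15  # assumed slack (pixels) around the calibrated positions

def hotspot_area(calibration):
    """calibration: list of averaged (x, y) press positions.
    Returns a bounding box (x_min, y_min, x_max, y_max)."""
    xs = [p[0] for p in calibration]
    ys = [p[1] for p in calibration]
    return (min(xs) - TOLERANCE, min(ys) - TOLERANCE,
            max(xs) + TOLERANCE, max(ys) + TOLERANCE)

def store_calibration(db, user_id, gesture, calibration):
    """Persist the calibration data keyed by user, establishing the
    user-to-calibration correspondence in a database table."""
    db.execute("CREATE TABLE IF NOT EXISTS calibration "
               "(user_id TEXT, gesture TEXT, data TEXT)")
    db.execute("INSERT INTO calibration VALUES (?, ?, ?)",
               (user_id, gesture, repr(calibration)))
```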
  • the software architecture of the mobile terminal of the embodiment of the present invention includes: an input device 201, a driver layer 202, an application framework layer 203, and an application layer 204.
  • the input device 201 receives the user's input operation, converts the physical input into an electrical signal (TP signal), and transmits it to the driver layer 202; the driver layer 202 analyzes the input position to obtain parameters such as the specific coordinates and duration of the touched point, and uploads these parameters to the application framework layer 203. Communication between the application framework layer 203 and the driver layer 202 can be implemented through a corresponding interface.
  • the application framework layer 203 receives the parameters reported by the driver layer 202, parses them, distinguishes edge input events from normal input events, and passes the valid input to the specific application of the application layer 204, so that the application layer 204 can execute different input instructions according to the different input operations.
  • the driver layer is configured to obtain an input event generated by the user through the input device, and report to the application framework layer.
  • the application framework layer is configured to determine whether the input event is an edge input event or a normal input event; if it is a normal input event, the normal input event is processed and recognized, and the recognition result is reported to the application layer; if it is an edge input event, the edge input event is processed and recognized, and the recognition result is reported to the application layer.
  • the application layer is configured to execute a corresponding input instruction based on the reported recognition result.
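The division of labour among the three layers can be sketched as follows (class and callback names are illustrative assumptions, not from the patent):

```python
class ApplicationFrameworkLayer:
    """Receives events from the driver layer, classifies them as edge or
    normal input, recognizes them, and reports the result to the
    application layer, which executes the corresponding instruction."""

    def __init__(self, app_layer, is_edge, recognize):
        self.app_layer = app_layer
        self.is_edge = is_edge        # event -> bool
        self.recognize = recognize    # (event, kind) -> recognition result

    def on_event(self, event):        # called by the driver layer
        kind = "edge" if self.is_edge(event) else "normal"
        result = self.recognize(event, kind)
        self.app_layer.execute(result)  # report to the application layer
```

A stub application layer that records the results it receives is enough to exercise the flow.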
  • the mobile terminal in the embodiment of the present invention performs the operation of distinguishing the A area and the C area in the application framework layer, establishing a virtual device in the application framework layer, thereby avoiding the driver layer's dependence on hardware for distinguishing the A area and the C area.
  • FIG. 9 is a flowchart of an input processing method according to an embodiment of the present invention, including the following steps:
  • the driver layer acquires an input event generated by the user through the input device, and reports it to the application framework layer.
  • the input device receives an input operation (ie, an input event) of the user, converts the physical input into an electrical signal, and transmits the electrical signal to the driving layer.
  • the input events include an A zone input event and a C zone input event.
  • Input events in Zone A include input operations such as click, double click, and slide in Zone A.
  • the input events in Zone C include input operations such as sliding up along the left edge, sliding down along the left edge, sliding up along the right edge, sliding down along the right edge, sliding up along both edges, sliding down along both edges, sliding back and forth along one edge, a squeeze grip, and a one-handed hold.
  • the driving layer analyzes the input position according to the received electrical signal to obtain related parameters such as specific coordinates and duration of the touched point.
  • the relevant parameters are reported to the application framework layer.
  • step S1 further includes:
  • a number (ID) for distinguishing the finger is assigned to each touch point.
  • the driver layer reports the input event by using the A protocol.
  • the reported data includes the above related parameters and the number of the touched point.
  • the application framework layer determines whether the input event is an edge input event or a normal input event; if it is a normal input event, step S3 is performed, and if it is an edge input event, step S4 is performed.
  • the application framework layer can determine whether an event is an edge input event or a normal input event according to the coordinates in the relevant parameters of the input event. First, the horizontal axis coordinate (i.e., the X-axis coordinate, x) of the touched point is acquired, and then it is compared with the C-zone width (Wc) and the touchscreen width (W). If Wc ≤ x ≤ (W - Wc), the touch point is in the A area and the event is a normal input event; otherwise, the event is an edge input event.
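A small sketch of this decision rule; the width values are illustrative assumptions, not values from the patent:

```python
W = 1080   # touchscreen width in pixels (assumed)
WC = 40    # C-zone (edge) width in pixels (assumed)

def classify_event(x, w=W, wc=WC):
    """A touch whose X coordinate satisfies Wc <= x <= W - Wc lies in the
    A zone (normal input event); otherwise it lies in a C zone (edge
    input event)."""
    return "normal" if wc <= x <= w - wc else "edge"
```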
  • the step S2 further includes: assigning a number (ID) for distinguishing the finger to each touch point; and storing all the element information (coordinates, duration, number, etc.) of the touch point.
  • by setting the touch point numbers, the fingers can be distinguished and compatibility with both the A protocol and the B protocol is achieved; and storing all the elements of each touch point (its coordinates, number, and the like) facilitates the subsequent determination of edge input (for example, FIT).
  • edge input events are not reported through the same channel as normal input events; edge input events use a dedicated channel.
  • the application framework layer processes and identifies the normal input event, and reports the recognition result to the application layer.
  • the application framework layer processes and recognizes the edge input event, and reports the recognition result to the application layer.
  • the process identification includes: performing identification according to the touch point coordinates, duration, number, and the like of the input operation to determine the input operation. For example, based on the coordinates, duration, and numbers of the touch points, it can be identified whether an input operation in the A area is a click or a slide, or whether an input operation in the C area is a one-sided back-and-forth slide.
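The click-versus-slide distinction can be sketched with the stored touch point elements; the thresholds below are illustrative assumptions, not values from the patent:

```python
TAP_MAX_MOVE = 20       # maximum displacement in pixels for a click (assumed)
TAP_MAX_DURATION = 300  # maximum duration in milliseconds for a click (assumed)

def identify(start, end, duration_ms):
    """start/end: (x, y) coordinates of a touch point's first and last
    samples. A short press with little movement is a click; a larger
    displacement is a slide."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    moved = (dx * dx + dy * dy) ** 0.5
    if moved <= TAP_MAX_MOVE and duration_ms <= TAP_MAX_DURATION:
        return "click"
    return "slide"
```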
  • the application layer executes a corresponding input instruction according to the reported recognition result.
  • the application layer includes applications such as a camera, a gallery, and a lock screen.
  • the input operations in the embodiments of the present invention include an application level and a system level; system-level gesture processing is likewise classified under the application layer.
  • the application level is the manipulation of the application, for example, on, off, volume control, and the like.
  • the system level is the manipulation of the mobile terminal, for example, power on, acceleration, inter-application switching, global return, and the like.
  • the mobile terminal sets and stores input commands corresponding to different input operations, including input commands corresponding to edge input operations and input commands corresponding to normal input operations.
  • when the application layer receives the recognition result of a reported edge input event, it invokes the corresponding input instruction according to the edge input operation to respond to the edge input operation; when the application layer receives the recognition result of a reported normal input event, it invokes the corresponding input instruction according to the normal input operation to respond to the normal input operation.
  • the input events of the embodiments of the present invention include input operations only in the A zone, input operations only in the C zone, and input operations simultaneously generated in the A zone and the C zone.
  • the input command also includes input commands corresponding to the three types of input events.
  • the embodiment of the present invention can implement the combination of the input operations of the A zone and the C zone to control the mobile terminal.
  • for example, if the input operation is simultaneously clicking the corresponding positions in the A zone and the C zone, and the corresponding input instruction is to close an application, then the application can be closed by simultaneously clicking the corresponding positions in the A zone and the C zone.
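A minimal sketch of such an instruction table, including a combined A-zone and C-zone operation; all entries are illustrative assumptions:

```python
# Hypothetical mapping from (zone, recognized operation) to input instruction,
# covering A-zone-only, C-zone-only, and combined A+C operations.
COMMANDS = {
    ("A", "click"): "activate_item",
    ("C", "slide_up"): "adjust_volume",
    ("A+C", "simultaneous_click"): "close_application",
}

def dispatch(zone, operation):
    """Return the input instruction for a recognized event, or ignore it."""
    return COMMANDS.get((zone, operation), "ignore")
```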
  • the mobile terminal of the embodiment of the present invention can be implemented in various forms.
  • the terminal described in the present invention may include a mobile terminal such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a PDA, a PAD, a PMP, a navigation device, and the like, and a fixed terminal such as a digital TV, a desktop computer, or the like.
  • the embodiment of the present invention further provides a user equipment
  • FIG. 10 is a schematic diagram of a hardware structure thereof.
  • the user equipment 1000 includes a touch screen 100, a controller 200, a storage device 310, a GPS chip 320, a communicator 330, a video processor 340, an audio processor 350, a button 360, a microphone 370, a camera 380, a speaker 390, and a motion sensor 400.
  • the touch screen 100 may be the A zone, the B zone, and the C zone, or the A zone, the B zone, the C zone, and the T zone as described above.
  • the touch screen 100 can be implemented as various types of displays such as a liquid crystal display (LCD), an organic light emitting diode (OLED) display, and a plasma display panel (PDP).
  • the touch screen 100 may include a driving circuit, which can be implemented, for example, as an a-Si TFT, a low temperature polysilicon (LTPS) TFT, or an organic TFT (OTFT), and a backlight unit.
  • the touch screen 100 may include a touch sensor for sensing a touch gesture of a user.
  • the touch sensor can be implemented as various types of sensors, such as a capacitor type, a resistance type, or a piezoelectric type.
  • the capacitance type calculates the touch coordinate value by sensing the micro current excited by the user's body when a part of the user's body (e.g., a finger) touches the surface of the touch screen coated with the conductive material.
  • the resistance type touch screen includes two electrode plates, and the touch coordinate value is calculated by sensing the current that flows when the upper and lower plates come into contact at the touched point as the user touches the screen.
  • the touch screen 100 may sense a user gesture for using an input device such as a pen other than the user's finger.
  • the input device is a stylus pen including a coil
  • the user device 1000 may include a magnetic sensor (not shown) for sensing a magnetic field that changes according to the proximity of the coil within the stylus to the magnetic sensor .
  • the user device 1000 can also sense a proximity gesture, i.e., the stylus hovering over the user device 1000.
  • the storage device 310 can store various programs and data required for the operation of the user device 1000.
  • the storage device 310 can store programs and data for constructing various screens to be displayed on the respective areas (for example, the A area, the C area).
  • the controller 200 displays content on each area of the touch screen 100 by using programs and data stored in the storage device 310.
  • the controller 200 includes a RAM 210, a ROM 220, a CPU 230, a GPU (Graphics Processing Unit) 240, and a bus 250.
  • the RAM 210, the ROM 220, the CPU 230, and the GPU 240 may be connected to each other through a bus 250.
  • a processor (CPU) 230 accesses the storage device 310 and performs booting using an operating system (OS) stored in the storage device 310. Moreover, the CPU 230 performs various operations by using various programs, contents, and data stored in the storage device 310.
  • the ROM 220 stores a set of commands for system startup.
  • the CPU 230 copies the OS stored in the storage device 310 to the RAM 210 according to the command set stored in the ROM 220, and starts the system by running the OS.
  • the CPU 230 copies the various programs stored in the storage device 310 to the RAM 210, and performs various operations by running the copy program in the RAM 210.
  • the GPU 240 can generate a screen including various objects such as icons, images, and text by using a calculator (not shown) and a renderer (not shown).
  • the calculator calculates attribute values such as the coordinate values, shape, size, and color with which each object is to be displayed according to the layout of the screen.
  • the GPS chip 320 is a unit that receives GPS signals from a GPS (Global Positioning System) satellite, and calculates the current location of the user equipment 1000.
  • the controller 200 can calculate the location of the user by using the GPS chip 320.
  • the communicator 330 is a unit that performs communication with various types of external devices in accordance with various types of communication methods.
  • the communicator 330 includes a WiFi chip 331, a Bluetooth chip 332, a wireless communication chip 333, and an NFC chip 334.
  • the controller 200 performs communication with various external devices by using the communicator 330.
  • the WiFi chip 331 and the Bluetooth chip 332 perform communication according to the WiFi method and the Bluetooth method, respectively.
  • in these methods, various connection information such as a service set identifier (SSID) and a session key is first transmitted and received, a communication connection is established by using the connection information, and various kinds of information can then be transmitted and received.
  • the wireless communication chip 333 is a chip that performs communication in accordance with various communication standards such as IEEE, Zigbee, 3G (third generation), 3GPP (3rd Generation Partnership Project), and LTE (Long Term Evolution).
  • the NFC chip 334 is a chip that operates according to the NFC (Near Field Communication) method using the 13.56 MHz band among various RF-ID frequency bands such as 135 kHz, 13.56 MHz, 433 MHz, 860-960 MHz, and 2.45 GHz.
  • the video processor 340 is a unit that processes video data included in content received through the communicator 330 or content stored in the storage device 310.
  • Video processor 340 can perform various image processing for video data, such as decoding, scaling, noise filtering, frame rate conversion, and resolution conversion.
  • the audio processor 350 is a unit that processes audio data included in content received through the communicator 330 or content stored in the storage device 310.
  • the audio processor 350 can perform various processing for audio data, such as decoding, amplification, and noise filtering.
  • the controller 200 can reproduce the corresponding content by driving the video processor 340 and the audio processor 350 when the reproduction program is run for the multimedia content.
  • the speaker 390 outputs the audio data generated in the audio processor 350.
  • the button 360 can be various types of buttons, such as mechanical buttons or touch pads or touch wheels formed on some areas such as the front, side or back of the main outer body of the user device 1000.
  • the microphone 370 is a unit that receives user voice or other sounds and converts them into audio data.
  • the controller 200 can use user voices input through the microphone 370 during the call process, or convert them into audio data and store them in the storage device 310.
  • the camera 380 is a unit that captures a still image or a video image according to a user's control.
  • Camera 380 can be implemented as a plurality of units, such as a front camera and a rear camera. As described below, camera 380 can be used as a means of obtaining a user image in an exemplary embodiment that tracks the user's gaze.
  • the controller 200 can perform a control operation according to a user's voice input through the microphone 370 or a user motion recognized by the camera 380. Therefore, the user equipment 1000 can operate in an action control mode or a voice control mode.
  • the controller 200 photographs the user by activating the camera 380, tracks changes in user actions, and performs corresponding operations.
  • the controller 200 can operate in the voice recognition mode, analyzing the voice input through the microphone 370 and performing control operations according to the analyzed user voice.
  • a voice recognition technology or a motion recognition technology is used in the above various exemplary embodiments. For example, when the user performs an action such as selecting an object marked on the home screen or speaking a voice command corresponding to the object, it may be determined that the corresponding object is selected and a control operation matching the object may be performed.
  • the motion sensor 400 is a unit that senses the movement of the body of the user device 1000.
  • User device 1000 can be rotated or tilted in various directions.
  • the motion sensor 400 can sense moving features such as a rotational direction, an angle, and a slope by using one or more of various sensors such as a geomagnetic sensor, a gyro sensor, and an acceleration sensor.
  • the user device 1000 may further include a USB port connectable to a USB connector, various input ports for connecting various external components such as a headphone, a mouse, and a LAN, a DMB chip for receiving and processing digital multimedia broadcasting (DMB) signals, and various other sensors.
  • the storage device 310 can store various programs.
  • the touch screen is configured to receive an input operation of the user, and convert the physical input into an electrical signal to generate an input event
  • the processor includes: a driving module, an application framework module, and an application module;
  • the driving module is configured to acquire an input event generated by the user through the input device, and report the event to the application framework module.
  • the application framework module is configured to determine whether the input event is an edge input event or a normal input event; if the input event is a normal input event, the normal input event is processed and recognized, and the recognition result is reported to the application module; if it is an edge input event, the edge input event is processed and recognized, and the recognition result is reported to the application module;
  • the application module is configured to execute a corresponding input instruction according to the reported recognition result.
  • the mobile terminal, the input processing method, and the user equipment in the embodiments of the present invention perform the distinction between the A area and the C area in the application framework layer by establishing a virtual device there, thereby avoiding the driver layer's hardware dependence for distinguishing the A area and the C area. By setting touch point numbers, the fingers can be distinguished and compatibility with the A protocol and the B protocol is achieved; the scheme can be integrated into the operating system of the mobile terminal, can be applied to different hardware and different types of mobile terminals, and is readily portable. All the elements of each touch point (its coordinates, number, and the like) are stored, which facilitates the subsequent determination of edge input (for example, FIT).
  • the steps of a method or method described in connection with the embodiments disclosed herein may be implemented in hardware, a software module executed by a processor, or a combination of both.
  • the software module can be placed in random access memory (RAM), internal memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
  • the gesture calibration scheme of the embodiments of the present invention starts the edge gesture calibration mode, repeatedly inputs an edge gesture operation a preset number of times, collects and records the feature values of the edge gesture operation input each time, and calculates the average of the feature values recorded each time to obtain the calibration data.

Abstract

A gesture calibration method and apparatus, a gesture input processing method, a terminal and a computer storage medium. The method comprises: starting an edge gesture calibration mode (S10); repeatedly inputting a certain edge gesture operation a pre-set number of times, and collecting and recording the characteristic values of the edge gesture operation input each time (S20); and calculating the average of the characteristic values of the edge gesture operation recorded each time to obtain calibration data (S30).

Description

Gesture calibration method, device and gesture input processing method, computer storage medium

Technical field

The present invention relates to the field of mobile terminal technologies, and in particular, to a gesture calibration method and apparatus, a gesture input processing method, a terminal, and a computer storage medium.

Background art
With the rapid development of communication technology, mobile terminals such as mobile phones offer more and more functions; most of the entertainment functions available on computers can now be realized on mobile terminals, so people can watch movies, play games, browse web pages, video chat, and more on them. To improve the visual experience, mobile terminals trend toward ever larger screens; however, given the portability requirements of a mobile terminal, its size cannot grow indefinitely, so the external dimensions must be used as fully as possible to increase screen utilization. Narrow-bezel and even bezel-less mobile terminals have therefore appeared.
A narrow-bezel or bezel-less mobile terminal makes full use of the external dimensions of the device, greatly expanding the screen size, satisfying the user's demand for a large screen, and diversifying edge input operations. However, because of individual differences between users (such as palm size and finger pressing force), the region of the screen edge contacted when holding a narrow-bezel or bezel-less mobile terminal varies greatly, so edge gestures are non-standard and the recognition rate is low.
Summary of the invention

The technical problem to be solved by the embodiments of the present invention is to calibrate the edge interaction gestures involved in a mobile terminal, and to provide a gesture calibration method and apparatus, a gesture input processing method, a terminal, and a computer storage medium. The method includes:
starting an edge gesture calibration mode;

repeatedly inputting a certain edge gesture operation a preset number of times, and collecting and recording the feature values of the edge gesture operation input each time;

calculating the average of the feature values of the edge gesture operation recorded each time to obtain calibration data.
In an implementation of the embodiments of the present invention, acquiring the input edge gesture, and collecting and recording the feature values corresponding to the edge gesture a preset number of times, includes:

the driver layer acquiring the gesture input event and reporting it to the application framework layer; the application framework layer determining whether the gesture input event is an edge gesture and, when the gesture input event is an edge gesture, reporting the judgment result to the application layer;

the application layer collecting the user's edge gesture according to the judgment result, and collecting the feature values of the edge gesture a preset number of times.
In an implementation of the embodiments of the present invention, the edge gesture operation is a holding operation, the feature values of the edge gesture operation include the coordinate values corresponding to the fingers, and calculating the average of the feature values of the edge gesture operation recorded each time to obtain calibration data includes:

acquiring the coordinate values corresponding to each finger in the edge gesture operation respectively;

calculating the average of the coordinate values corresponding to each finger to obtain the calibration data of the edge gesture.
In an implementation of the embodiments of the present invention, the edge gesture operation is a sliding operation, the feature values of the edge gesture operation include the start coordinate value and the end coordinate value of the sliding operation, and calculating the average of the feature values of the edge gesture operation recorded each time to obtain calibration data includes:

acquiring the start coordinate value and the end coordinate value of the finger sliding operation in the edge gesture operation respectively;

calculating the averages of the start coordinate values and the end coordinate values respectively to obtain the calibration data of the edge gesture.
In an implementation of the embodiments of the present invention, after calculating the average of the feature values corresponding to the edge gesture recorded each time to obtain the calibration data, the method further includes:

determining the hotspot area corresponding to the edge gesture operation according to the calibration data of the edge gesture, storing the calibration data in a database, and establishing a correspondence with the user.
An embodiment of the present invention further provides a gesture calibration apparatus applied to mobile terminal edge interaction, including a startup module, an acquisition module, and a processing module, wherein:

the startup module is configured to put the mobile terminal into the edge gesture calibration mode;

the acquisition module is configured to have a certain edge gesture operation input repeatedly a preset number of times, and to collect and record the feature values of the edge gesture operation input each time;

the processing module is configured to calculate the average of the feature values of the edge gesture operation recorded each time to obtain calibration data.
In an implementation of the embodiments of the present invention, the acquisition module further includes:

a driver layer configured to acquire the gesture input event and report it to the application framework layer, the application framework layer determining whether the gesture input event is an edge gesture and, when the gesture input event is an edge gesture, reporting the judgment result to the application layer; and

an application layer configured to collect the user's edge gesture according to the judgment result, and to collect the feature values of the edge gesture a preset number of times.
In an implementation of the embodiments of the present invention, the processing module further includes:

a first processing unit configured to acquire the feature values corresponding to each finger in the edge gesture respectively; and

a second processing unit configured to calculate the average of the feature values corresponding to each finger to obtain the calibration data of the edge gesture.
In an implementation of the embodiments of the present invention, the apparatus further includes:

a storage module configured to determine the hotspot area corresponding to the edge gesture operation according to the calibration data of the edge gesture, to store the calibration data in a database, and to establish a correspondence with the user.

When performing processing, the startup module, the acquisition module, the processing module, the first processing unit, the second processing unit, and the storage module may be implemented by a central processing unit (CPU), a digital signal processor (DSP), or a field-programmable gate array (FPGA).
An embodiment of the present invention further provides a gesture input processing method applied to mobile terminal edge interaction, involving an input device, a driver layer, an application framework layer, and an application layer, wherein:

the driver layer acquires a gesture input event generated by the user through the input device, and reports it to the application framework layer;

the application framework layer determines whether the gesture input event is an edge gesture and, when the gesture input event is an edge gesture, reports the judgment result to the application layer; and

the application layer collects the user's edge gesture according to the judgment result, and collects the input information of the edge gesture a preset number of times.
An embodiment of the present invention further provides a terminal, the terminal including:
a storage medium, configured to store computer-executable instructions; and
a processor, configured to execute the computer-executable instructions stored on the storage medium, the computer-executable instructions comprising:
starting an edge gesture calibration mode;
repeatedly inputting a given edge gesture operation a preset number of times, and collecting and recording the feature values of the edge gesture operation for each input; and
calculating the average of the feature values recorded for each input of the edge gesture operation to obtain calibration data.
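As a rough illustrative sketch (not part of the patent's disclosure), the calibration step described above, repeating a gesture a preset number of times and averaging the recorded feature values into calibration data, could look like the following; the feature names are hypothetical:

```python
def calibrate(samples):
    """Average per-feature values over repeated inputs of one edge gesture.

    samples: one dict per repetition, mapping a feature name
    (e.g. "downX", "downY") to a numeric value.
    Returns the calibration data: the per-feature mean.
    """
    if not samples:
        raise ValueError("no gesture samples collected")
    n = len(samples)
    return {feature: sum(s[feature] for s in samples) / n
            for feature in samples[0]}

# Three repetitions of the same edge gesture:
samples = [
    {"downX": 10.0, "downY": 100.0},
    {"downX": 12.0, "downY": 104.0},
    {"downX": 11.0, "downY": 96.0},
]
print(calibrate(samples))  # {'downX': 11.0, 'downY': 100.0}
```

The same averaging applies unchanged to any scalar feature value the text mentions (coordinates, pressure, press duration).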
In an implementation of this embodiment of the present invention, the processor is configured to execute computer-executable instructions stored on the storage medium, the computer-executable instructions further comprising:
the driver layer acquiring a gesture input event and reporting it to the application framework layer; the application framework layer determining whether the gesture input event is an edge gesture and, when it is, reporting the determination result to the application layer; and
the application layer collecting the user's edge gesture according to the determination result, collecting the feature values of the edge gesture a preset number of times.
In an implementation of this embodiment of the present invention, the processor is configured to execute computer-executable instructions stored on the storage medium, the computer-executable instructions further comprising:
acquiring the coordinate value corresponding to each finger in the edge gesture operation; and
calculating the average of the coordinate values corresponding to each finger to obtain the calibration data of the edge gesture.
In an implementation of this embodiment of the present invention, the processor is configured to execute computer-executable instructions stored on the storage medium, the computer-executable instructions further comprising:
acquiring the start-point coordinate value and end-point coordinate value of the finger sliding operation in the edge gesture operation; and
calculating the averages of the start-point coordinate values and the end-point coordinate values respectively to obtain the calibration data of the edge gesture.
In an implementation of this embodiment of the present invention, the processor is configured to execute computer-executable instructions stored on the storage medium, the computer-executable instructions further comprising:
determining the hotspot area for the corresponding edge gesture operation according to the calibration data of the edge gesture, storing the calibration data in a database, and establishing a correspondence with the user.
An embodiment of the present invention further provides a computer storage medium storing computer-executable instructions, the computer-executable instructions comprising:
starting an edge gesture calibration mode;
repeatedly inputting a given edge gesture operation a preset number of times, and collecting and recording the feature values of the edge gesture operation for each input; and
calculating the average of the feature values recorded for each input of the edge gesture operation to obtain calibration data.
In an implementation of this embodiment of the present invention, the computer-executable instructions further comprise:
the driver layer acquiring a gesture input event and reporting it to the application framework layer; the application framework layer determining whether the gesture input event is an edge gesture and, when it is, reporting the determination result to the application layer; and
the application layer collecting the user's edge gesture according to the determination result, collecting the feature values of the edge gesture a preset number of times.
In an implementation of this embodiment of the present invention, the computer-executable instructions further comprise:
acquiring the coordinate value corresponding to each finger in the edge gesture operation; and
calculating the average of the coordinate values corresponding to each finger to obtain the calibration data of the edge gesture.
In an implementation of this embodiment of the present invention, the computer-executable instructions further comprise:
acquiring the start-point coordinate value and end-point coordinate value of the finger sliding operation in the edge gesture operation; and
calculating the averages of the start-point coordinate values and the end-point coordinate values respectively to obtain the calibration data of the edge gesture.
In an implementation of this embodiment of the present invention, the computer-executable instructions further comprise:
determining the hotspot area for the corresponding edge gesture operation according to the calibration data of the edge gesture, storing the calibration data in a database, and establishing a correspondence with the user.
The gesture calibration method provided by the embodiments of the present invention has the following beneficial effects: it solves the problem of a low recognition rate for edge input gestures caused by individual differences or personal operating habits, allows gestures to be recognized accurately within the user's gesture-operation hot zone, facilitates the user's edge input operations, and improves the user experience.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention is further described below with reference to the accompanying drawings and embodiments, in which:
FIG. 1 is a schematic diagram of the screen area division of a mobile terminal according to an embodiment of the present invention;
FIG. 2 is a flowchart of a gesture calibration method according to an embodiment of the present invention;
FIG. 3 is a flowchart of a one-handed grip gesture calibration method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a one-handed grip gesture information collection interface according to an embodiment of the present invention;
FIG. 5 is a flowchart of a right-edge downward-slide gesture calibration method according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a right-edge downward-slide gesture information collection interface according to an embodiment of the present invention;
FIG. 7 is a structural block diagram of a gesture calibration apparatus according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of the software architecture of a mobile terminal according to an embodiment of the present invention;
FIG. 9 is a flowchart of an input processing method according to an embodiment of the present invention; and
FIG. 10 is a schematic diagram of the hardware structure of a user equipment according to an embodiment of the present invention.
DETAILED DESCRIPTION
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.
Mobile terminals implementing various embodiments of the present invention will now be described with reference to the accompanying drawings. In the following description, suffixes such as "module", "component", or "unit" used to denote elements are provided merely to facilitate the description of the invention and have no specific meaning in themselves; therefore, "module" and "component" may be used interchangeably.
Mobile terminals may be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, smartphones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs), and navigation devices, as well as fixed terminals such as digital TVs and desktop computers. In the following, the terminal is assumed to be a mobile terminal; however, those skilled in the art will understand that, apart from elements intended specifically for mobile purposes, the configurations according to the embodiments of the present invention can also be applied to fixed-type terminals.
Detailed descriptions are given below by way of specific embodiments.
Referring to FIG. 1, a schematic diagram of the screen area division of a mobile terminal according to an embodiment of the present invention. FIG. 1 shows a narrow-bezel mobile terminal 100, in which region C (the gray region) is the edge input region, region A is the normal touch region, and region B is the non-display region. It should be noted that FIG. 1 shows only one specific application scenario; in practice, the scheme disclosed in the embodiments of the present invention is equally applicable to bezel-less mobile terminals. In a bezel-less terminal there is no bezel, so compared with the narrow-bezel case above, the edge input region extends region A outward to the side edge of the terminal, the side edge being the trim of the terminal housing. It can be seen that the embodiments of the present invention apply to a very wide range of scenarios: the edge input region is the region extending outward from the screen area to the side edge of the terminal. Because the embodiments of the present invention are applicable to narrow-bezel or bezel-less mobile terminals, they make full use of the terminal's physical dimensions, greatly expanding the effective screen size and meeting users' demand for large screens, while the gesture calibration operation in the edge input region diversifies edge input operations.
In addition, depending on the applicable scenario, region B may be either a non-touch region or a touch region; herein, region B is referred to collectively as the non-display region. When region B is a touch region, only its operation mode differs from regions A and C: region B displays no image, but virtual function keys, such as a home key, can be configured there. When region B is a non-touch region, physical function keys can be provided in region B.
In the embodiments of the present invention, touch operations in region A are processed in the existing, normal manner; for example, tapping an application icon in region A opens that application. Touch operations in region C may be defined as edge-touch processing; for example, a bilateral slide in region C may be defined to trigger terminal speed-up. Region B is a non-display region; for example, region B may be provided with a key area, an earpiece, and the like.
In the embodiments of the present invention, region C may be divided in a fixed manner or in a user-defined manner. In fixed division, an area of fixed length and fixed width in the screen area of the mobile terminal is set as region C. Region C may include a partial area on the left side of the mobile terminal's screen and a partial area on the right side, fixed at the two side edges of the mobile terminal, as shown in FIG. 1. Of course, region C may also be divided at only one side edge of the mobile terminal.
In user-defined division, the number, positions, and sizes of the areas of region C are configurable; for example, they may be set by the user, or the mobile terminal may adjust them according to its own needs. Typically, the basic shape of region C is designed as a rectangle, so the position and size of region C can be determined by inputting the coordinates of two diagonal vertices.
To accommodate different users' habits with different applications, multiple sets of region C configurations may also be provided for different application scenarios. For example, on the system desktop, because icons occupy much of the space, the region C strips on both sides are set relatively narrow; when the camera icon is tapped to enter the camera application, the number, positions, and sizes of the region C areas in that scene can be set, and region C can be made relatively wide without affecting focusing.
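The per-scenario region C configurations described above could be represented, purely as an illustrative sketch (the scenario names, screen dimensions, and field layout are assumptions, not part of the disclosure), by a lookup keyed on the foreground application:

```python
# Each region-C rectangle is given, as the text describes, by the
# coordinates of two diagonal vertices: (x1, y1, x2, y2).
EDGE_REGION_PRESETS = {
    # Narrow edge strips on the system desktop, where icons need room.
    "desktop": [(0, 0, 20, 1280), (700, 0, 720, 1280)],
    # Wider strips inside the camera application.
    "camera":  [(0, 0, 60, 1280), (660, 0, 720, 1280)],
}

def edge_regions_for(app, presets=EDGE_REGION_PRESETS):
    """Return the region-C rectangles for the current scenario,
    falling back to the desktop preset for unknown applications."""
    return presets.get(app, presets["desktop"])

print(edge_regions_for("camera"))   # wide strips
print(edge_regions_for("browser"))  # desktop fallback
```

A real implementation would persist such presets per user alongside the calibration database the text introduces later.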
The embodiments of the present invention place no restriction on how region C is divided or configured.
Referring to FIG. 2, a flowchart of a gesture calibration method according to an embodiment of the present invention. The gesture calibration method of this embodiment includes:
S10: starting the edge gesture calibration mode.
In one embodiment, the mobile terminal has an edge gesture calibration function that is normally off; once the edge gesture calibration mode is started, the mobile terminal can collect the user's edge gestures. Before the user first uses gesture operations, the input information of the user's gestures needs to be collected according to the user's operating habits. The user's gesture types include, but are not limited to: upward slide on the left edge, upward slide on the right edge, downward slide on the left edge, downward slide on the right edge, bilateral downward slide, bilateral upward slide, gripping the four corners of the terminal screen, back-and-forth slide on one edge, one-handed grip, and so on.
S20: repeatedly inputting a given edge gesture operation a preset number of times, and collecting and recording the feature values of the edge gesture operation for each input.
In one embodiment, the driver layer acquires input events generated by the user through an input device, for example, input operation events made through the touch screen. In the embodiments of the present invention, input events include normal input events (region A input events) and edge input events (region C input events). Normal input events include input operations performed in region A such as tap, double-tap, and slide. Edge input events include input operations performed in region C such as upward slide on the left edge, downward slide on the left edge, upward slide on the right edge, downward slide on the right edge, bilateral upward slide, bilateral downward slide, gripping the four corners of the phone, back-and-forth slide on one edge, squeeze, and one-handed grip. The application framework layer determines whether a gesture input event is an edge gesture; when it is, the determination result is reported to the application layer, and the application layer collects the user's edge gesture according to the determination result, collecting the feature values of the edge gesture a preset number of times.
It should be understood that the gesture input event may also be judged as follows: the driver layer acquires the input event generated by the user through the input device and determines whether it is an edge input event (region C input event); if it is an edge gesture input event, it is reported to the application framework layer and then, through the application framework layer, to the application layer, which collects the user's edge gesture according to the determination result, collecting the feature values of the edge gesture a preset number of times.
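The layered judgment just described reduces, at its core, to classifying a touch event as an edge (region C) or normal (region A) event before passing it upward. The following is only a sketch under an assumed screen geometry, not the actual input pipeline of any platform:

```python
SCREEN_WIDTH = 720
EDGE_WIDTH = 40  # assumed width of region C on each side

def classify(event):
    """Label a touch event as an edge (region C) or normal (region A)
    input event, mimicking the edge-gesture judgment in the text."""
    x = event["x"]
    if x < EDGE_WIDTH or x > SCREEN_WIDTH - EDGE_WIDTH:
        return "edge"
    return "normal"

def dispatch(event, on_edge, on_normal):
    """Report the event to the edge-gesture handler (the application
    layer's collection path) only when it is an edge event."""
    if classify(event) == "edge":
        return on_edge(event)
    return on_normal(event)

print(classify({"x": 10, "y": 500}))   # edge
print(classify({"x": 360, "y": 500}))  # normal
```

Whether the classification runs in the driver layer or the framework layer, as the two variants in the text allow, only moves where `classify` is called; the decision itself is the same.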
S30: calculating the average of the feature values recorded for each input of the edge gesture operation to obtain calibration data.
In one embodiment, when the user starts inputting an edge gesture at the edge of the mobile terminal, the feature value corresponding to each finger in the edge gesture is acquired, and the average of the feature values corresponding to each finger is calculated to obtain the calibration data of the edge gesture. The edge gesture feature values include: the coordinate value of the position at which the finger contacts the screen edge, and/or the pressure value and duration of the finger press. According to the calibration data of the edge gesture, the hotspot area for the corresponding edge gesture operation is determined, and the calibration data is stored in a database, establishing a correspondence with the user. When the hotspot area receives a gesture operation signal, the detection conditions for that area are relaxed, for example, by lowering the detection threshold on the number of detected pixels or on the pressing force.
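The hotspot mechanism (derive a hotspot rectangle around the user's calibrated touch points, then relax detection thresholds inside it) might be sketched as follows; the margin and threshold values are invented for illustration and are not part of the disclosure:

```python
def hotspot_from_calibration(points, margin=15):
    """Build a bounding rectangle around calibrated touch points,
    expanded by a margin, to serve as the gesture hotspot area."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs) - margin, min(ys) - margin,
            max(xs) + margin, max(ys) + margin)

def detection_threshold(x, y, hotspot, normal=1.0, relaxed=0.6):
    """Lower the detection threshold (e.g. on pressure or detected
    pixel count) for touches landing inside the hotspot area."""
    x1, y1, x2, y2 = hotspot
    if x1 <= x <= x2 and y1 <= y <= y2:
        return relaxed
    return normal

hs = hotspot_from_calibration([(5, 100), (6, 220), (4, 340)])
print(detection_threshold(5, 200, hs))    # relaxed: inside hotspot
print(detection_threshold(300, 200, hs))  # normal: outside hotspot
```

The per-user correspondence in the text would simply key such hotspot rectangles on a user identifier in the calibration database.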
Referring to FIG. 3, a flowchart of a one-handed grip gesture calibration method according to an embodiment of the present invention. The gesture calibration method of this embodiment includes:
S11: starting the edge gesture calibration mode.
In one embodiment, the mobile terminal has an edge gesture calibration function that is normally off; once the edge gesture calibration mode is started, the mobile terminal can collect the user's edge gestures. Before the user first uses one-handed gesture operations, the input information of the user's gestures needs to be collected according to the user's operating habits.
S12: acquiring the user's one-handed grip gesture, and collecting and recording the feature values corresponding to the one-handed grip gesture a preset number of times.
In one embodiment, taking the one-handed grip gesture as an example, the position coordinates of the contact point of each of the user's fingers with the screen edge of the mobile terminal, C1(downX1, downY1), C2(downX2, downY2), C3(downX3, downY3), C4(downX4, downY4), and C5(downX5, downY5), are collected and recorded on each repetition. Referring to FIG. 4, a schematic diagram of the one-handed grip gesture information collection interface according to an embodiment of the present invention: the mobile terminal presents a settings interface to the user; the user taps it to enter the gesture information settings, after which gesture feature values are collected according to the user's personal touch area or grip style. The shaded portion of region C (the gray region) indicates the touch area when the user grips the terminal with one hand. The user performs the one-handed grip operation as prompted by the mobile terminal and repeats it; each time the mobile terminal receives a one-handed grip gesture signal, it records the positions at which the user's fingers press the touch area.
Further, the data of the gesture's input information also includes the touch-point areas SC1, SC2, SC3, SC4, and SC5, as well as the pressure value and duration of each touch point's press. Calibration data for the touch-point areas SCi is obtained in the same way, and calibration data for the pressure value and duration of each touch point's press is obtained from the averages of the pressure values and durations recorded on each one-handed grip.
S13: calculating the average of the feature values recorded for the one-handed grip gesture on each repetition to obtain the calibration data of the one-handed grip gesture.
The average of the user's one-handed-grip finger press position coordinates over n repetitions,
C̄_i = ( (1/n)·Σ_{k=1..n} downX_i(k), (1/n)·Σ_{k=1..n} downY_i(k) ), for i = 1, 2, 3, 4, 5
(i.e., the average press position coordinate of each finger),
is taken as the calibration data of the one-handed-grip finger press positions. The calibration data produced by this gesture serves as the user's one-handed grip gesture calibration data: according to it, the hotspot area of this edge gesture operation is determined, and the calibration data is stored in a database, establishing a correspondence with the user. When the hotspot area receives a gesture operation signal, the detection conditions for that area are relaxed, for example, by lowering the detection threshold on the number of detected pixels or on the pressing force.
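The per-finger averaging above can be made concrete numerically: each repetition records one (x, y) press coordinate per finger, and the calibrated position of finger i is the mean over repetitions. This is a sketch, not the patent's implementation; two fingers are shown for brevity:

```python
def finger_calibration(repetitions):
    """repetitions: list of grips; each grip is a list of per-finger
    (x, y) press coordinates in a fixed finger order.
    Returns the mean press position per finger (the calibration data)."""
    n = len(repetitions)
    fingers = len(repetitions[0])
    result = []
    for i in range(fingers):
        mx = sum(rep[i][0] for rep in repetitions) / n
        my = sum(rep[i][1] for rep in repetitions) / n
        result.append((mx, my))
    return result

grips = [
    [(3, 100), (4, 200)],  # first grip
    [(5, 110), (6, 190)],  # second grip
]
print(finger_calibration(grips))  # [(4.0, 105.0), (5.0, 195.0)]
```

With five fingers per grip, the same loop yields the five calibrated positions C̄_1 through C̄_5.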
The embodiments of the present invention place no restriction on the number of finger press areas.
Based on the gesture calibration method provided by this embodiment of the present invention, the edge input area of the mobile terminal is customized according to the user's gesture operating habits. As shown in FIG. 4, the position, area, and pressing force of the shaded portion of region C (the gray region) differ because of individual differences or personal operating habits. By collecting the user's gesture features or habits, the hotspot area of the user's gesture operations is determined, and the gesture recognition rate is improved by relaxing the terminal's detection conditions for the hotspot area, facilitating the user's edge input operations and improving the user experience.
Referring to FIG. 5, a flowchart of a right-edge downward-slide gesture calibration method according to an embodiment of the present invention. The gesture calibration method of this embodiment includes:
S21: starting the edge gesture calibration mode.
In one embodiment, the mobile terminal has an edge gesture calibration function that is normally off; once the edge gesture calibration mode is started, the mobile terminal can collect the user's edge gestures. Before the user first uses the right-edge downward-slide gesture, the input information of the user's gesture needs to be collected according to the user's operating habits.
S22: acquiring the user's right-edge downward-slide gesture, and collecting and recording the feature values corresponding to the right-edge downward-slide gesture a preset number of times.
In one embodiment, the right-edge downward-slide gesture is taken as an example to describe the collection of gesture input information. As shown in FIG. 6, the shaded portion of region C (the gray region) indicates the user's slide area. The user repeats the right-edge downward-slide operation as prompted by the mobile terminal; each time the mobile terminal receives a right-edge slide gesture signal, it records the start position coordinate A(downX, downY) and end position coordinate B(currentX, currentY) at which the user's finger presses the touch area, as well as the slide duration downTime.
S23: calculating the average of the feature values recorded for the right-edge downward-slide gesture on each repetition to obtain the calibration data of the right-edge downward-slide gesture.
In one embodiment, taking the right-edge slide gesture as an example, the collected feature values of the right-edge downward-slide gesture include the start position coordinate A(downX, downY) and end position coordinate B(currentX, currentY) of the finger's contact with the right edge of the screen, and the slide duration downTime. From the start and end positions of the sliding operation, the slide distance L and the slide duration downTime yield the slide speed v = L/downTime. The distance can be calculated in either of two ways:
L = √( (currentX − downX)² + (currentY − downY)² ),
or
L = |currentY − downY|.
The average of the start position coordinates over the user's n sliding operations,
Ā = ( (1/n)·Σ_{k=1..n} downX(k), (1/n)·Σ_{k=1..n} downY(k) ),
is taken as the calibration data of the sliding operation's start position; the calibration data of the sliding operation's end position is obtained in the same way. From the average speed over the sliding operations,
v̄ = (1/n)·Σ_{k=1..n} v(k),
the calibration data of the slide speed is obtained. The calibration data produced by this gesture serves as the calibration data of the right-edge downward-slide gesture: according to it, the hotspot area of this edge gesture operation is determined, and the calibration data is stored in a database, establishing a correspondence with the user. When the hotspot area receives a gesture operation signal, the detection conditions for that area are relaxed, for example, by lowering the detection threshold on the number of detected pixels or on the pressing force.
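The slide-distance, speed, and averaging steps can be sketched as follows (an illustrative sketch only; sample coordinates and times are invented):

```python
import math

def slide_features(downX, downY, currentX, currentY, downTime):
    """Compute both distance variants described in the text and the
    slide speed v = L / downTime (using the Euclidean variant)."""
    L_euclid = math.hypot(currentX - downX, currentY - downY)
    L_axis = abs(currentY - downY)
    return L_euclid, L_axis, L_euclid / downTime

def slide_calibration(swipes):
    """swipes: list of (downX, downY, currentX, currentY, downTime).
    Returns the mean start point, mean end point, and mean speed,
    i.e. the calibration data of the slide gesture."""
    n = len(swipes)
    start = (sum(s[0] for s in swipes) / n, sum(s[1] for s in swipes) / n)
    end = (sum(s[2] for s in swipes) / n, sum(s[3] for s in swipes) / n)
    speed = sum(slide_features(*s)[2] for s in swipes) / n
    return start, end, speed

swipes = [(700, 100, 700, 400, 0.3), (702, 110, 702, 390, 0.2)]
start, end, speed = slide_calibration(swipes)
print(start, end)  # (701.0, 105.0) (701.0, 395.0)
```

Whether the Euclidean or the vertical-only distance is used only changes which value feeds the speed; for a purely vertical edge slide the two coincide.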
Based on the gesture calibration method provided by this embodiment of the present invention, the edge input area of the mobile terminal is customized according to the user's gesture operating habits. As shown in FIG. 6, the shaded portion of region C (the gray region) may differ because of individual differences or personal operating habits; by collecting the user's gesture features or habits, the user's gestures are calibrated, facilitating the user's edge input operations and improving the user experience.
参见图7,为本发明实施例的手势校准装置的结构图。本发明实施例的手势校准装置包括:FIG. 7 is a structural diagram of a gesture calibration apparatus according to an embodiment of the present invention. The gesture calibration apparatus of the embodiment of the invention includes:
启动模块11,配置为将移动终端开启边缘手势校准模式。The startup module 11 is configured to turn on the edge gesture calibration mode of the mobile terminal.
在一个实施例中,移动终端设备具有边缘手势校准功能,在正常情况下,该边缘手势校准功能处于关闭状态,当启动边缘手势校准模式后,则移动终端可以采集用户的边缘手势。用户初次使用手势操作之前,需要根据用户的操作习惯采集用户手势的输入信息。用户的手势类型包括但不限于:左侧边缘上滑,右侧边缘上滑,左侧边缘下滑,右侧边缘下滑,双边下滑,双边上滑,握持终端屏幕四角,单边来回滑,单手握持等等。In one embodiment, the mobile terminal device has an edge gesture calibration function. Under normal circumstances, the edge gesture calibration function is in an off state. When the edge gesture calibration mode is activated, the mobile terminal can collect the user's edge gesture. Before the user first uses the gesture operation, the user input gesture information needs to be collected according to the user's operation habits. The user's gesture types include, but are not limited to, the left edge is slid, the right edge is slid, the left edge is slid, the right edge is slid, the bilateral is slid, the bilateral is slid, the terminal screen is held at four corners, and the unilateral back and forth slides. Hand hold and so on.
获取模块12,配置为按照预设次数重复输入某一边缘手势操作,采集每次输入的所述边缘手势操作的特征值并记录。The obtaining module 12 is configured to repeatedly input an edge gesture operation according to a preset number of times, and collect feature values of the edge gesture operation input each time and record.
In one embodiment, the driver layer obtains input events generated by the user through an input device, for example, input operation events on the touch screen. In embodiments of the present invention, input events include normal input events (area A input events) and edge input events (area C input events). Normal input events include taps, double taps, swipes, and other input operations performed in area A. Edge input events include operations performed in area C: sliding up or down along the left edge, sliding up or down along the right edge, sliding up or down along both edges, gripping the four corners of the phone, sliding back and forth along one edge, a squeeze grip, a one-handed grip, and so on. The application framework layer determines whether a gesture input event is an edge gesture; when it is, the result is reported to the application layer, which collects the user's edge gesture accordingly and gathers its feature values over the preset number of repetitions.
A processing module 13, configured to average the feature values of the edge gesture operation recorded on each repetition to obtain calibration data. The processing module 13 further includes:
A first processing unit 130, configured to obtain the feature values corresponding to each finger in the edge gesture.
Taking a one-handed grip gesture as an example, the input data collected for the gesture includes the finger press-position coordinates C1(downX1, downY1), C2(downX2, downY2), C3(downX3, downY3), C4(downX4, downY4), and C5(downX5, downY5).
A second processing unit 131, configured to average the feature values corresponding to each finger to obtain the calibration data of the edge gesture.
In one embodiment, the average of the finger press-position coordinates over the user's repeated one-handed grips — for example, for finger i across n calibration inputs,

C̄i = ( (downXi(1) + … + downXi(n)) / n , (downYi(1) + … + downYi(n)) / n )

— is taken as the calibration data for the one-handed grip press positions. This calibration data is stored in a database and used to establish a correspondence between gestures and usage patterns according to the user's operating habits.
Further, the feature values of a gesture may also include the touch point areas SC1, SC2, SC3, SC4, and SC5, as well as the press pressure of each touch point. Similarly, calibration data for each touch point area SCi can be obtained, and calibration data for the press pressure of each touch point is derived from the average finger pressure across the repeated one-handed grips.
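The per-finger averaging described above can be sketched as follows. This is a non-limiting illustration only — the embodiment does not prescribe an implementation, and all names (`calibrate_grip`, the dictionary keys) are hypothetical:

```python
def calibrate_grip(samples):
    """Average per-finger feature values over repeated calibration inputs.

    samples: list of calibration passes; each pass is a list of per-finger
    dicts like {"x": ..., "y": ..., "area": ..., "pressure": ...}.
    Returns the per-finger averages, i.e. the calibration data.
    """
    n = len(samples)
    fingers = len(samples[0])
    calib = []
    for i in range(fingers):
        calib.append({
            "x": sum(p[i]["x"] for p in samples) / n,
            "y": sum(p[i]["y"] for p in samples) / n,
            "area": sum(p[i]["area"] for p in samples) / n,
            "pressure": sum(p[i]["pressure"] for p in samples) / n,
        })
    return calib
```

The same averaging covers press positions, touch point areas, and press pressures uniformly, matching the text's treatment of SCi and pressure as additional feature values.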
A storage module 14, configured to determine the hotspot area of the corresponding edge gesture operation according to the calibration data of the edge gesture, store the calibration data in a database, and establish a correspondence with the user.
Referring to FIG. 8, which is a schematic diagram of the software architecture of a mobile terminal according to an embodiment of the present invention, the software architecture includes: an input device 201, a driver layer 202, an application framework layer 203, and an application layer 204.
The input device 201 receives the user's input operation, converts the physical input into an electrical signal (TP), and passes it to the driver layer 202. The driver layer 202 parses the input position to obtain parameters such as the specific coordinates and duration of the touch point, and uploads these parameters to the application framework layer 203; communication between the application framework layer 203 and the driver layer 202 can be implemented through a corresponding interface. The application framework layer 203 receives and parses the parameters reported by the driver layer 202, distinguishes edge input events from normal input events, and passes valid input up to the specific application in the application layer 204, so that the application layer 204 can execute different input instructions for different input operations.
Specifically, the driver layer is configured to obtain input events generated by the user through the input device and report them to the application framework layer.
The application framework layer is configured to determine whether an input event is an edge input event or a normal input event; if it is a normal input event, the normal input event is processed and recognized and the recognition result is reported to the application layer; if it is an edge input event, the edge input event is processed and recognized and the recognition result is reported to the application layer.
The application layer is configured to execute the corresponding input instruction according to the reported recognition result.
In the mobile terminal of this embodiment of the present invention, because the distinction between area A and area C is made only at the application framework layer, and the virtual device is established at the application framework layer, dependence on hardware for distinguishing area A from area C at the driver layer is avoided.
Referring to FIG. 9, which is a flowchart of an input processing method according to an embodiment of the present invention, the method includes the following steps:
S1: The driver layer obtains an input event generated by the user through the input device and reports it to the application framework layer.
Specifically, the input device receives the user's input operation (i.e., the input event), converts the physical input into an electrical signal, and passes the electrical signal to the driver layer. In this embodiment of the present invention, input events include area A input events and area C input events. Area A input events include taps, double taps, swipes, and other input operations performed in area A. Area C input events include sliding up or down along the left edge, sliding up or down along the right edge, sliding up or down along both edges, sliding back and forth along one edge, a squeeze grip, a one-handed grip, and other input operations performed in area C.
The driver layer parses the input position according to the received electrical signal to obtain related parameters such as the specific coordinates and duration of the touch point. These parameters are reported to the application framework layer.
In addition, if the driver layer reports input events using protocol A, step S1 further includes:
Assigning each touch point a number (ID) used to distinguish fingers.
Thus, if the driver layer reports input events using protocol A, the reported data includes the above related parameters as well as the touch point numbers.
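The ID assignment can be sketched as follows. The embodiment does not specify how a new point is matched to an existing finger, so the nearest-neighbor matching below is an assumption made for illustration, and all names (`TouchTracker`, `max_dist`) are hypothetical:

```python
class TouchTracker:
    """Assign a stable finger ID to each reported touch point.

    Protocol-A style reports carry no per-finger IDs, so each incoming
    point is matched to the nearest previously tracked point (within
    max_dist); unmatched points are treated as new fingers and get
    fresh IDs.
    """

    def __init__(self, max_dist=80):
        self.next_id = 0
        self.points = {}  # id -> (x, y) of the last frame
        self.max_dist = max_dist

    def update(self, coords):
        """coords: list of (x, y) touch points in the current frame.
        Returns a dict mapping finger ID -> (x, y)."""
        assigned = {}
        remaining = dict(self.points)
        for (x, y) in coords:
            best = None
            for pid, (px, py) in remaining.items():
                d = ((x - px) ** 2 + (y - py) ** 2) ** 0.5
                if d <= self.max_dist and (best is None or d < best[1]):
                    best = (pid, d)
            if best is not None:
                pid = best[0]
                del remaining[pid]  # a finger can match only once
            else:
                pid = self.next_id
                self.next_id += 1
            assigned[pid] = (x, y)
        self.points = assigned
        return assigned
```

With such numbering in place, the framework layer can store all element information (coordinates, duration, number) per finger, which is what makes the scheme compatible with both protocol A and protocol B.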
S2: The application framework layer determines whether the input event is an edge input event or a normal input event; if it is a normal input event, step S3 is performed, and if it is an edge input event, step S4 is performed.
Specifically, the application framework layer can determine whether an event is an edge input event or a normal input event from the coordinates among the event's parameters. It first obtains the horizontal-axis (X-axis) coordinate x of the touch point, then compares it with the width Wc of area C and the width W of the touch screen. If Wc < x < (W − Wc), the touch point lies in area A and the event is a normal input event; otherwise, the event is an edge input event. If the driver layer reports input events using protocol B, step S2 further includes: assigning each touch point a number (ID) used to distinguish fingers, and storing all element information of the touch point (coordinates, duration, number, etc.).
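The width test in step S2 can be expressed directly in code. This is a minimal sketch under the assumptions that x, Wc, and W share the same units and that the function name is invented for illustration:

```python
def classify_touch(x, w_c, w_screen):
    """Classify a touch by its horizontal coordinate, per step S2.

    A touch at X-axis coordinate x is a normal (area A) event iff
    Wc < x < W - Wc; otherwise it falls in an edge strip (area C).
    """
    if w_c < x < w_screen - w_c:
        return "normal"
    return "edge"
```

For example, with a 1080-pixel-wide screen and a 60-pixel edge strip, a touch at x = 30 or x = 1050 would be classified as an edge input event, and one at x = 500 as a normal input event.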
Thus, by assigning touch point numbers, embodiments of the present invention can distinguish fingers while remaining compatible with both protocol A and protocol B; and because all elements of a touch point (its coordinates, number, etc.) are stored, subsequent edge input decisions (for example, FIT) are made easier.
In one embodiment, edge input events and normal input events are reported over different channels, with edge input events using a dedicated channel.
S3: The application framework layer processes and recognizes the normal input event and reports the recognition result to the application layer.
S4: The application framework layer processes and recognizes the edge input event and reports the recognition result to the application layer.
Specifically, processing and recognition consist of identifying the input operation from the touch point coordinates, duration, number, and so on. For example, from a touch point's coordinates, duration, and number it can be determined whether the operation is a tap or swipe in area A, or a back-and-forth slide along one edge in area C.
S5: The application layer executes the corresponding input instruction according to the reported recognition result.
Specifically, the application layer includes applications such as the camera, gallery, and lock screen. Input operations in embodiments of the present invention include application-level and system-level operations; system-level gesture handling is also classified under the application layer. Application-level operations control an application, for example, opening, closing, or volume control. System-level operations control the mobile terminal, for example, powering on, acceleration, switching between applications, or global back.
In one embodiment, the mobile terminal sets and stores input instructions corresponding to different input operations, including input instructions corresponding to edge input operations and input instructions corresponding to normal input operations. When the application layer receives the recognition result of a reported edge input event, it invokes the corresponding input instruction to respond to the edge input operation; when it receives the recognition result of a reported normal input event, it invokes the corresponding input instruction to respond to the normal input operation.
It should be understood that the input events of embodiments of the present invention include input operations performed only in area A, input operations performed only in area C, and input operations performed simultaneously in areas A and C; the input instructions likewise include instructions corresponding to these three types of input events. Embodiments of the present invention can therefore control the mobile terminal through combinations of area A and area C input operations. For example, if the input operation is simultaneously tapping corresponding positions in area A and area C, and the corresponding input instruction is to close an application, then the application can be closed by tapping those positions in area A and area C at the same time.
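The mapping from stored input operations (including A+C combinations) to input instructions can be sketched as a small dispatch table. This is purely illustrative — the embodiment does not specify a data structure, and the operation and command names are hypothetical:

```python
def dispatch(events, command_table):
    """Look up the input instruction for a set of simultaneous operations.

    events: a set of (area, operation) tuples observed at the same time,
    e.g. {("A", "tap"), ("C", "tap")}.
    """
    key = frozenset(events)
    return command_table.get(key, "no-op")


# Hypothetical table: A-only, C-only, and combined A+C entries.
commands = {
    frozenset({("A", "tap"), ("C", "tap")}): "close_app",
    frozenset({("C", "slide_up_left")}): "volume_up",
    frozenset({("A", "double_tap")}): "open_gallery",
}
```

A `frozenset` key makes the lookup order-independent, so it does not matter which layer reports the area A or area C part of a combined gesture first.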
The mobile terminal of embodiments of the present invention can be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, smartphones, notebook computers, digital broadcast receivers, PDAs, PADs, PMPs, and navigation devices, as well as fixed terminals such as digital TVs and desktop computers.
Correspondingly, an embodiment of the present invention further provides user equipment; FIG. 10 is a schematic diagram of its hardware structure. Referring to FIG. 10, the user equipment 1000 includes a touch screen 100, a controller 200, a storage device 310, a GPS chip 320, a communicator 330, a video processor 340, an audio processor 350, a button 360, a microphone 370, a camera 380, a speaker 390, and a motion sensor 400.
The touch screen 100 may include areas A, B, and C as described above, or areas A, B, C, and T. The touch screen 100 can be implemented as various types of displays, such as a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or a plasma display panel (PDP). The touch screen 100 may include a driving circuit, which can be implemented, for example, as an a-Si TFT, a low-temperature polysilicon (LTPS) TFT, or an organic TFT (OTFT), and a backlight unit.
The touch screen 100 may also include a touch sensor for sensing the user's touch gestures. The touch sensor can be implemented as various types of sensors, such as capacitive, resistive, or piezoelectric. A capacitive sensor calculates touch coordinate values by sensing the small current excited by the user's body when a part of it (for example, a finger) touches the surface of a touch screen coated with a conductive material. A resistive touch screen includes two electrode plates, and calculates touch coordinate values by sensing the current that flows when the upper and lower plates come into contact at the touch point as the user touches the screen. In addition, when the user equipment 1000 supports a pen input function, the touch screen 100 may sense user gestures made with an input device other than the user's finger, such as a pen. When the input device is a stylus pen containing a coil, the user equipment 1000 may include a magnetic sensor (not shown) for sensing the magnetic field, which changes with the proximity of the coil in the stylus to the magnetic sensor. Thus, in addition to touch gestures, the user equipment 1000 can also sense proximity gestures, i.e., the stylus hovering above the user equipment 1000.
The storage device 310 can store the various programs and data required for the operation of the user equipment 1000. For example, the storage device 310 can store programs and data for constructing the various screens to be displayed in each area (for example, area A and area C).
The controller 200 displays content in each area of the touch screen 100 using the programs and data stored in the storage device 310.
The controller 200 includes a RAM 210, a ROM 220, a CPU 230, a GPU (graphics processing unit) 240, and a bus 250. The RAM 210, ROM 220, CPU 230, and GPU 240 can be connected to one another through the bus 250.
The processor (CPU) 230 accesses the storage device 310 and boots using the operating system (OS) stored in the storage device 310. The CPU 230 also performs various operations using the various programs, content, and data stored in the storage device 310.
The ROM 220 stores the command set for system startup. When a power-on command is input and power is supplied, the CPU 230 copies the OS stored in the storage device 310 to the RAM 210 according to the command set stored in the ROM 220, and starts the system by running the OS. When startup is complete, the CPU 230 copies the various programs stored in the storage device 310 to the RAM 210 and performs various operations by running the copied programs in the RAM 210. Specifically, the GPU 240 can generate a screen that includes various objects such as icons, images, and text by using a calculator (not shown) and a renderer (not shown). The calculator computes feature values such as coordinate values, format, size, and color, with which the objects are marked according to the layout of the screen.
The GPS chip 320 is a unit that receives GPS signals from GPS (Global Positioning System) satellites and calculates the current position of the user equipment 1000. When a navigation program is used, or when the user's current position is requested, the controller 200 can calculate the user's position using the GPS chip 320.
The communicator 330 is a unit that communicates with various types of external devices according to various communication methods. The communicator 330 includes a WiFi chip 331, a Bluetooth chip 332, a wireless communication chip 333, and an NFC chip 334. The controller 200 communicates with various external devices through the communicator 330.
The WiFi chip 331 and the Bluetooth chip 332 communicate according to the WiFi method and the Bluetooth method, respectively. When the WiFi chip 331 or the Bluetooth chip 332 is used, various connection information such as a service set identifier (SSID) and a session key may first be exchanged, a communication connection may be established using that connection information, and various kinds of information may then be transmitted and received. The wireless communication chip 333 is a chip that communicates according to various communication standards such as IEEE, Zigbee, 3G (third generation), 3GPP (3rd Generation Partnership Project), and LTE (Long Term Evolution). The NFC chip 334 is a chip that operates according to the NFC (Near Field Communication) method using the 13.56 MHz bandwidth among the various RF-ID bandwidths, such as 135 kHz, 13.56 MHz, 433 MHz, 860–960 MHz, and 2.45 GHz.
The video processor 340 is a unit that processes video data included in content received through the communicator 330 or stored in the storage device 310. The video processor 340 can perform various image processing operations on video data, such as decoding, scaling, noise filtering, frame rate conversion, and resolution conversion.
The audio processor 350 is a unit that processes audio data included in content received through the communicator 330 or stored in the storage device 310. The audio processor 350 can perform various processing operations on audio data, such as decoding, amplification, and noise filtering.
When a playback program is run for multimedia content, the controller 200 can play the corresponding content by driving the video processor 340 and the audio processor 350.
The speaker 390 outputs the audio data generated by the audio processor 350.
The button 360 can be any of various types of buttons, such as a mechanical button, or a touch pad or touch wheel formed on some area of the main outer body of the user equipment 1000, such as its front, side, or back.
The microphone 370 is a unit that receives the user's voice or other sounds and converts them into audio data. The controller 200 can use the user's voice input through the microphone 370 during a call, or convert it into audio data and store it in the storage device 310.
The camera 380 is a unit that captures still images or video under the user's control. The camera 380 can be implemented as multiple units, such as a front camera and a rear camera. As described below, the camera 380 can serve as a means of obtaining user images in exemplary embodiments that track the user's gaze.
When the camera 380 and the microphone 370 are provided, the controller 200 can perform control operations according to the user's voice input through the microphone 370 or user motions recognized by the camera 380. The user equipment 1000 can therefore operate in a motion control mode or a voice control mode. When operating in the motion control mode, the controller 200 activates the camera 380 to photograph the user, tracks changes in the user's motion, and performs the corresponding operations. When operating in the voice control mode, the controller 200 can operate in a speech recognition mode, analyzing the voice input through the microphone 370 and performing control operations according to the analyzed user voice.
In user equipment 1000 that supports the motion control mode or the voice control mode, speech recognition technology or motion recognition technology is used in the various exemplary embodiments described above. For example, when the user performs an action such as selecting an object marked on the home screen, or speaks a voice command corresponding to an object, it can be determined that the corresponding object has been selected, and the control operation matched to that object can be performed.
The motion sensor 400 is a unit that senses movement of the body of the user equipment 1000, which can be rotated or tilted in various directions. The motion sensor 400 can sense movement characteristics such as rotation direction, angle, and slope by using one or more of various sensors, such as a geomagnetic sensor, a gyroscope sensor, and an acceleration sensor.
Moreover, although not shown in FIG. 10, according to exemplary embodiments the user equipment 1000 may further include a USB port connectable to a USB connector, various input ports for connecting various external components such as headphones, a mouse, a LAN, and a DMB chip for receiving and processing digital multimedia broadcasting (DMB) signals, as well as various other sensors.
As described above, the storage device 310 can store various programs.
In the user equipment shown in FIG. 10, the touch screen is configured to receive the user's input operation and convert the physical input into an electrical signal to generate an input event;
the processor includes a driving module, an application framework module, and an application module;
where the driving module is configured to obtain input events generated by the user through the input device and report them to the application framework module;
the application framework module is configured to determine whether an input event is an edge input event or a normal input event; if it is a normal input event, the normal input event is processed and recognized and the recognition result is reported to the application module; if it is an edge input event, the edge input event is processed and recognized and the recognition result is reported to the application module;
and the application module is configured to execute the corresponding input instruction according to the reported recognition result.
It should be understood that the principles and details with which the mobile terminal of the above embodiments processes edge input events and normal input events also apply to the user equipment of embodiments of the present invention.
In the mobile terminal, input processing method, and user equipment of embodiments of the present invention, because the distinction between area A and area C is made only at the application framework layer, and the virtual device is established at the application framework layer, dependence on hardware for distinguishing area A from area C at the driver layer is avoided. By assigning touch point numbers, fingers can be distinguished and both protocol A and protocol B are supported; the scheme can be integrated into the operating system of the mobile terminal, applies to different hardware and different kinds of mobile terminals, and is highly portable. All elements of a touch point (its coordinates, number, etc.) are stored, which makes subsequent edge input decisions (for example, FIT) easier.
A person skilled in the art will also appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate this interchangeability of hardware and software, the composition and steps of the examples have been described above generally in terms of their functions. Whether these functions are performed in hardware or in software depends on the specific application and the design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementations should not be considered beyond the scope of the present invention.
The steps of the methods described in connection with the embodiments disclosed herein can be implemented in hardware, in a software module executed by a processor, or in a combination of the two. A software module can reside in random access memory (RAM), internal memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium.
The embodiments of the present invention have been described above with reference to the drawings, but the present invention is not limited to the specific embodiments described, which are merely illustrative rather than restrictive. In light of the present invention, those of ordinary skill in the art may devise many further forms without departing from the spirit of the invention and the scope protected by the claims, all of which fall within the protection of the present invention.
Industrial Applicability
In the gesture calibration scheme of the embodiments of the present invention, an edge gesture calibration mode is started, a given edge gesture operation is input repeatedly a preset number of times, the feature values of the edge gesture operation are collected and recorded for each input, and the average of the recorded feature values is calculated to obtain calibration data. This solves the problem of a low edge-gesture recognition rate caused by individual user differences or personal operating habits, enables accurate gesture recognition within the user's gesture hot zone, facilitates the user's edge input operations, and improves the user experience.
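The averaging step described above can be sketched as follows. This is a minimal illustration only, not the patented implementation; the function name, the data layout (a list of per-repetition recordings, each holding one (x, y) coordinate per finger, as in the grip-operation variant of claim 3), and the example values are all assumptions made for the sketch:

```python
from statistics import mean

def calibrate_edge_gesture(samples):
    """Average repeated edge-gesture recordings into calibration data.

    `samples` is a list of recordings, one per repetition; each recording
    is a list of (x, y) coordinate values, one per finger touch point.
    Returns the per-finger average coordinates, i.e. the calibration data.
    """
    if not samples:
        raise ValueError("at least one recorded sample is required")
    finger_count = len(samples[0])
    calibration = []
    for i in range(finger_count):  # average each finger across repetitions
        xs = [rec[i][0] for rec in samples]
        ys = [rec[i][1] for rec in samples]
        calibration.append((mean(xs), mean(ys)))
    return calibration

# Three repetitions of a four-finger grip along the screen edge
# (coordinates are invented for the example):
samples = [
    [(5, 100), (5, 200), (5, 300), (5, 400)],
    [(7, 104), (6, 198), (5, 302), (6, 396)],
    [(6, 102), (7, 202), (5, 304), (5, 404)],
]
print(calibrate_edge_gesture(samples))
```

Because each finger is averaged independently across repetitions, a single shaky input is smoothed out rather than shifting the whole calibration, which is what allows the resulting hot zone to track the user's habitual grip.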

Claims (20)

  1. A gesture calibration method applied to edge interaction of a mobile terminal, the method comprising:
    starting an edge gesture calibration mode;
    repeatedly inputting a given edge gesture operation a preset number of times, and collecting and recording feature values of the edge gesture operation for each input;
    calculating an average of the feature values of the edge gesture operation recorded each time to obtain calibration data.
  2. The gesture calibration method according to claim 1, wherein the acquiring of the input edge gesture and the collecting and recording of the feature values corresponding to the edge gesture a preset number of times comprise:
    a driver layer acquiring a gesture input event and reporting it to an application framework layer; the application framework layer determining whether the gesture input event is an edge gesture and, when the gesture input event is an edge gesture, reporting the determination result to an application layer;
    the application layer collecting the user's edge gesture according to the determination result, and collecting the feature values of the edge gesture a preset number of times.
  3. The gesture calibration method according to claim 1, wherein the edge gesture operation is a grip operation, the feature values of the edge gesture operation comprise coordinate values corresponding to the fingers, and the calculating of the average of the feature values of the edge gesture operation recorded each time to obtain calibration data comprises:
    separately acquiring the coordinate value corresponding to each finger in the edge gesture operation;
    calculating an average of the coordinate values corresponding to each finger to obtain the calibration data of the edge gesture.
  4. The gesture calibration method according to claim 1, wherein the edge gesture operation is a slide operation, the feature values of the edge gesture operation comprise a start-point coordinate value and an end-point coordinate value of the slide operation, and the calculating of the average of the feature values of the edge gesture operation recorded each time to obtain calibration data comprises:
    separately acquiring the start-point coordinate value and the end-point coordinate value of the finger slide operation in the edge gesture operation;
    calculating averages of the start-point coordinate values and of the end-point coordinate values respectively to obtain the calibration data of the edge gesture.
  5. The gesture calibration method according to claim 3 or 4, further comprising, after the calculating of the average of the feature values corresponding to the edge gesture recorded each time to obtain calibration data:
    determining, according to the calibration data of the edge gesture, a hot zone corresponding to the edge gesture operation, storing the calibration data in a database, and establishing a correspondence with the user.
  6. A gesture calibration apparatus applied to edge interaction of a mobile terminal, comprising a startup module, an acquisition module, and a processing module, wherein:
    the startup module is configured to place the mobile terminal in an edge gesture calibration mode;
    the acquisition module is configured to, as a given edge gesture operation is repeatedly input a preset number of times, collect and record feature values of the edge gesture operation for each input;
    the processing module is configured to calculate an average of the feature values of the edge gesture operation recorded each time to obtain calibration data.
  7. The gesture calibration apparatus according to claim 6, wherein the acquisition module further comprises:
    a driver layer configured to acquire a gesture input event and report it to an application framework layer, the application framework layer determining whether the gesture input event is an edge gesture and, when the gesture input event is an edge gesture, reporting the determination result to an application layer;
    an application layer configured to collect the user's edge gesture according to the determination result, and to collect the feature values of the edge gesture a preset number of times.
  8. The gesture calibration apparatus according to claim 6, wherein the processing module further comprises:
    a first processing unit configured to separately acquire the feature value corresponding to each finger in the edge gesture;
    a second processing unit configured to calculate an average of the feature values corresponding to each finger to obtain the calibration data of the edge gesture.
  9. The gesture calibration apparatus according to claim 6, further comprising:
    a storage module configured to determine, according to the calibration data of the edge gesture, a hot zone corresponding to the edge gesture operation, to store the calibration data in a database, and to establish a correspondence with the user.
  10. A gesture input processing method applied to edge interaction of a mobile terminal, involving an input device, a driver layer, an application framework layer, and an application layer, wherein:
    the driver layer acquires a gesture input event generated by the user through the input device, and reports it to the application framework layer;
    the application framework layer determines whether the gesture input event is an edge gesture and, when the gesture input event is an edge gesture, reports the determination result to the application layer;
    the application layer collects the user's edge gesture according to the determination result, and collects input information of the edge gesture a preset number of times.
  11. A terminal, comprising:
    a storage medium configured to store computer-executable instructions; and
    a processor configured to execute the computer-executable instructions stored on the storage medium, the computer-executable instructions comprising:
    starting an edge gesture calibration mode;
    repeatedly inputting a given edge gesture operation a preset number of times, and collecting and recording feature values of the edge gesture operation for each input;
    calculating an average of the feature values of the edge gesture operation recorded each time to obtain calibration data.
  12. The terminal according to claim 11, wherein the processor is configured to execute the computer-executable instructions stored on the storage medium, the computer-executable instructions further comprising:
    a driver layer acquiring a gesture input event and reporting it to an application framework layer; the application framework layer determining whether the gesture input event is an edge gesture and, when the gesture input event is an edge gesture, reporting the determination result to an application layer;
    the application layer collecting the user's edge gesture according to the determination result, and collecting the feature values of the edge gesture a preset number of times.
  13. The terminal according to claim 11, wherein the processor is configured to execute the computer-executable instructions stored on the storage medium, the computer-executable instructions further comprising:
    separately acquiring the coordinate value corresponding to each finger in the edge gesture operation;
    calculating an average of the coordinate values corresponding to each finger to obtain the calibration data of the edge gesture.
  14. The terminal according to claim 11, wherein the processor is configured to execute the computer-executable instructions stored on the storage medium, the computer-executable instructions further comprising:
    separately acquiring the start-point coordinate value and the end-point coordinate value of the finger slide operation in the edge gesture operation;
    calculating averages of the start-point coordinate values and of the end-point coordinate values respectively to obtain the calibration data of the edge gesture.
  15. The terminal according to claim 13 or 14, wherein the processor is configured to execute the computer-executable instructions stored on the storage medium, the computer-executable instructions further comprising:
    determining, according to the calibration data of the edge gesture, a hot zone corresponding to the edge gesture operation, storing the calibration data in a database, and establishing a correspondence with the user.
  16. A computer storage medium storing computer-executable instructions, the computer-executable instructions comprising:
    starting an edge gesture calibration mode;
    repeatedly inputting a given edge gesture operation a preset number of times, and collecting and recording feature values of the edge gesture operation for each input;
    calculating an average of the feature values of the edge gesture operation recorded each time to obtain calibration data.
  17. The computer storage medium according to claim 16, wherein the computer-executable instructions further comprise:
    a driver layer acquiring a gesture input event and reporting it to an application framework layer; the application framework layer determining whether the gesture input event is an edge gesture and, when the gesture input event is an edge gesture, reporting the determination result to an application layer;
    the application layer collecting the user's edge gesture according to the determination result, and collecting the feature values of the edge gesture a preset number of times.
  18. The computer storage medium according to claim 16, wherein the computer-executable instructions further comprise:
    separately acquiring the coordinate value corresponding to each finger in the edge gesture operation;
    calculating an average of the coordinate values corresponding to each finger to obtain the calibration data of the edge gesture.
  19. The computer storage medium according to claim 16, wherein the computer-executable instructions further comprise:
    separately acquiring the start-point coordinate value and the end-point coordinate value of the finger slide operation in the edge gesture operation;
    calculating averages of the start-point coordinate values and of the end-point coordinate values respectively to obtain the calibration data of the edge gesture.
  20. The computer storage medium according to claim 18 or 19, wherein the computer-executable instructions further comprise:
    determining, according to the calibration data of the edge gesture, a hot zone corresponding to the edge gesture operation, storing the calibration data in a database, and establishing a correspondence with the user.
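The slide-operation variant of the claims (averaging start and end points of repeated slides, then deriving a hot zone from the calibration data) can likewise be sketched. This is an illustration under stated assumptions: the patent does not specify how the hot zone is constructed from the calibration data, so the rectangular bounding box with a fixed pixel margin used here, along with all names and example coordinates, is invented for the sketch:

```python
from statistics import mean

def calibrate_slide_gesture(slides, margin=10):
    """Average repeated edge-slide recordings and derive a hot zone.

    `slides` is a list of ((x0, y0), (x1, y1)) start/end coordinate pairs,
    one per repetition. Returns the averaged start point, the averaged end
    point, and a (left, top, right, bottom) hot-zone rectangle that bounds
    the two averaged points, expanded by `margin` pixels on each side.
    """
    if not slides:
        raise ValueError("at least one recorded slide is required")
    start = (mean(s[0][0] for s in slides), mean(s[0][1] for s in slides))
    end = (mean(s[1][0] for s in slides), mean(s[1][1] for s in slides))
    xs = (start[0], end[0])
    ys = (start[1], end[1])
    hot_zone = (min(xs) - margin, min(ys) - margin,
                max(xs) + margin, max(ys) + margin)
    return start, end, hot_zone

# Three repetitions of a downward slide along the right screen edge
# (coordinates are invented for the example):
slides = [((718, 200), (720, 600)),
          ((720, 210), (718, 610)),
          ((722, 190), (719, 590))]
start, end, zone = calibrate_slide_gesture(slides)
print(start, end, zone)
```

Storing `zone` per user, as claim 5 suggests with its database correspondence, would let subsequent slides be matched against the calibrated region rather than a fixed factory default.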
PCT/CN2016/106167 2015-11-27 2016-11-16 Gesture calibration method and apparatus, gesture input processing method and computer storage medium WO2017088694A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510846257.3A CN105573545A (en) 2015-11-27 2015-11-27 Gesture correction method, apparatus and gesture input processing method
CN201510846257.3 2015-11-27

Publications (1)

Publication Number Publication Date
WO2017088694A1 true WO2017088694A1 (en) 2017-06-01

Family

ID=55883759

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/106167 WO2017088694A1 (en) 2015-11-27 2016-11-16 Gesture calibration method and apparatus, gesture input processing method and computer storage medium

Country Status (2)

Country Link
CN (1) CN105573545A (en)
WO (1) WO2017088694A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023051411A1 (en) * 2021-09-29 2023-04-06 华为技术有限公司 Method for recognizing touch operation, and electronic device

Families Citing this family (4)

Publication number Priority date Publication date Assignee Title
CN105573545A (en) * 2015-11-27 2016-05-11 努比亚技术有限公司 Gesture correction method, apparatus and gesture input processing method
CN106527953A (en) * 2016-11-28 2017-03-22 努比亚技术有限公司 Mobile terminal and frame gesture operation method
CN109002215B (en) * 2018-07-27 2021-03-19 青岛海信移动通信技术股份有限公司 Method for determining touch initial position of terminal with touch screen and terminal
CN113031775B (en) * 2021-03-24 2023-02-03 Oppo广东移动通信有限公司 Gesture data acquisition method and device, terminal and storage medium

Citations (5)

Publication number Priority date Publication date Assignee Title
US5594469A (en) * 1995-02-21 1997-01-14 Mitsubishi Electric Information Technology Center America Inc. Hand gesture machine control system
CN101432677A (en) * 2005-03-04 2009-05-13 苹果公司 Electronic device having display and surrounding touch sensitive bezel for user interface and control
CN102622225A (en) * 2012-02-24 2012-08-01 合肥工业大学 Multipoint touch application program development method supporting user defined gestures
CN102687100A (en) * 2010-01-06 2012-09-19 高通股份有限公司 User interface methods and systems for providing force-sensitive input
CN105573545A (en) * 2015-11-27 2016-05-11 努比亚技术有限公司 Gesture correction method, apparatus and gesture input processing method

Family Cites Families (14)

Publication number Priority date Publication date Assignee Title
US20090174679A1 (en) * 2008-01-04 2009-07-09 Wayne Carl Westerman Selective Rejection of Touch Contacts in an Edge Region of a Touch Surface
CN101853133B (en) * 2010-05-31 2013-03-20 中兴通讯股份有限公司 Method and mobile terminal for automatically recognizing gestures
CN101882043A (en) * 2010-06-08 2010-11-10 苏州瀚瑞微电子有限公司 Method for improving touch precision of edge of capacitance type touch screen
EP2474894A1 (en) * 2011-01-06 2012-07-11 Research In Motion Limited Electronic device and method of controlling same
US20120304107A1 (en) * 2011-05-27 2012-11-29 Jennifer Nan Edge gesture
JP5497722B2 (en) * 2011-10-14 2014-05-21 パナソニック株式会社 Input device, information terminal, input control method, and input control program
FR2990020B1 (en) * 2012-04-25 2014-05-16 Fogale Nanotech CAPACITIVE DETECTION DEVICE WITH ARRANGEMENT OF CONNECTION TRACKS, AND METHOD USING SUCH A DEVICE.
US9395852B2 (en) * 2012-05-07 2016-07-19 Cirque Corporation Method for distinguishing between edge swipe gestures that enter a touch sensor from an edge and other similar but non-edge swipe actions
CN103809735B (en) * 2012-11-12 2017-06-30 腾讯科技(深圳)有限公司 A kind of method and device of gesture identification
CN103034367A (en) * 2012-12-27 2013-04-10 杭州士兰微电子股份有限公司 Calibration method for touch screen
CN104601791A (en) * 2013-10-31 2015-05-06 大连易维立方技术有限公司 Method for identifying mobile phone operation gestures
CN104777948B (en) * 2014-01-13 2018-04-17 上海和辉光电有限公司 Improve the method and device of Projected capacitive touch panel edge coordinate accuracy
CN104735256B (en) * 2015-03-27 2016-05-18 努比亚技术有限公司 Holding mode determination methods and the device of mobile terminal
CN105487705B (en) * 2015-11-20 2019-08-30 努比亚技术有限公司 Mobile terminal, input processing method and user equipment

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
US5594469A (en) * 1995-02-21 1997-01-14 Mitsubishi Electric Information Technology Center America Inc. Hand gesture machine control system
CN101432677A (en) * 2005-03-04 2009-05-13 苹果公司 Electronic device having display and surrounding touch sensitive bezel for user interface and control
CN102687100A (en) * 2010-01-06 2012-09-19 高通股份有限公司 User interface methods and systems for providing force-sensitive input
CN102622225A (en) * 2012-02-24 2012-08-01 合肥工业大学 Multipoint touch application program development method supporting user defined gestures
CN105573545A (en) * 2015-11-27 2016-05-11 努比亚技术有限公司 Gesture correction method, apparatus and gesture input processing method


Also Published As

Publication number Publication date
CN105573545A (en) 2016-05-11

Similar Documents

Publication Publication Date Title
US11749151B2 (en) Display apparatus and method for displaying
WO2017097097A1 (en) Touch control method, user equipment, input processing method, mobile terminal and intelligent terminal
US11054988B2 (en) Graphical user interface display method and electronic device
KR102519800B1 (en) Electronic device
TWI722995B (en) Electronic device with bent display and method for controlling thereof
WO2018103525A1 (en) Method and device for tracking facial key point, and storage medium
WO2017084470A1 (en) Mobile terminal, input processing method and user equipment, and computer storage medium
KR101995278B1 (en) Method and apparatus for displaying ui of touch device
WO2017088694A1 (en) Gesture calibration method and apparatus, gesture input processing method and computer storage medium
WO2017088131A1 (en) Method and apparatus for rapidly dividing screen, electronic device, display interface and storage medium
WO2016165568A1 (en) Method for scaling video image, and mobile terminal
CN109428969B (en) Edge touch method and device of double-screen terminal and computer readable storage medium
US10928948B2 (en) User terminal apparatus and control method thereof
US11157127B2 (en) User terminal apparatus and controlling method thereof
WO2019000287A1 (en) Icon display method and device
KR20150094479A (en) User terminal device and method for displaying thereof
WO2020119486A1 (en) Method for payment function switching and electronic device
KR102308201B1 (en) User terminal apparatus and control method thereof
US10095384B2 (en) Method of receiving user input by detecting movement of user and apparatus therefor
WO2015014135A1 (en) Mouse pointer control method and apparatus, and terminal device
WO2019047129A1 (en) Method for moving application icons, and terminal
KR102351634B1 (en) Terminal apparatus, audio system and method for controlling sound volume of external speaker thereof
WO2017084469A1 (en) Touch control method, user equipment, input processing method and mobile terminal
WO2022063034A1 (en) Input interface display method and terminal
WO2017035740A9 (en) Method for selecting text

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16867917

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16867917

Country of ref document: EP

Kind code of ref document: A1