WO2017084470A1 - Mobile terminal, input processing method, user equipment, and computer storage medium - Google Patents

Mobile terminal, input processing method, user equipment, and computer storage medium

Info

Publication number
WO2017084470A1
WO2017084470A1 (PCT/CN2016/102779)
Authority
WO
WIPO (PCT)
Prior art keywords
input
event
input event
application
layer
Prior art date
Application number
PCT/CN2016/102779
Other languages
English (en)
French (fr)
Inventor
宁耀东 (Ning Yaodong)
李鑫 (Li Xin)
Original Assignee
努比亚技术有限公司 (Nubia Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 努比亚技术有限公司 (Nubia Technology Co., Ltd.)
Publication of WO2017084470A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/0416Control or interface arrangements specially adapted for digitisers
    • G06F3/0418Control or interface arrangements specially adapted for digitisers for error correction or compensation, e.g. based on parallax, calibration or alignment
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048Indexing scheme relating to G06F3/048
    • G06F2203/04803Split screen, i.e. subdividing the display area or the window area into separate subareas

Definitions

  • the present invention relates to the field of communications, and more particularly to a mobile terminal, an input processing method, a user equipment, and a computer storage medium.
  • With the development of mobile terminal technology, terminal bezels are becoming narrower and narrower.
  • To improve the user's input experience, edge input technology (for example, edge touch) has emerged.
  • the driver layer determines whether the touch occurs in the edge input region according to the touch point information.
  • In practice, because input chips vary widely, each driver's method of acquiring touch point information is highly chip-specific; when judging the event type (whether an event is an edge input event), each input chip therefore requires its own modifications and porting, which entails a large workload and is error-prone.
  • When the driver layer reports an event, it can select either of two implementations: protocol A or protocol B.
  • Protocol B distinguishes finger IDs.
  • The implementation of edge input relies on finger IDs, which are used during multi-point input to compare the data of two successive touches of the same finger. Therefore, the prior-art input scheme can support only protocol B, and drivers using protocol A cannot be supported.
  • Therefore, the prior-art input scheme is strongly hardware-dependent and cannot support protocol A and protocol B at the same time; it needs improvement.
  • The technical problem to be solved by the present invention is, in view of the strong hardware dependence of the prior-art mobile terminal input scheme, to provide a mobile terminal, an input processing method, a user equipment, and a computer storage medium.
  • a mobile terminal including:
  • the driver layer is configured to obtain an input event generated by the user through the input device, and report the event to the application framework layer;
  • the application framework layer is configured to determine whether an input event is an edge input event or a normal input event; if it is a normal input event, the normal input event is processed and recognized and the recognition result is reported to the application layer; if it is an edge input event, the edge input event is processed and recognized and the recognition result is reported to the application layer;
  • the application layer is configured to execute a corresponding input instruction according to the reported recognition result.
  • the normal input event corresponds to a first input device object having a first device identifier;
  • the application framework layer is further configured to set up a second input device object having a second device identifier to correspond to the edge input event.
  • the driver layer reports input events using protocol A or protocol B; if input events are reported using protocol A, the event acquisition module is further configured to assign each touch point a number for distinguishing fingers;
  • if input events are reported using protocol B, the application framework layer is further configured to assign each touch point a number for distinguishing fingers.
  • the driver layer includes an event acquisition module configured to obtain an input event generated by a user through an input device.
  • the application framework layer includes an input reader
  • the mobile terminal further includes a device node disposed between the driver layer and the input reader, configured to notify the input reader to acquire input events;
  • the input reader is configured to traverse the device nodes, acquire input events, and report them.
  • the application framework layer further includes: a first event processing module configured to perform coordinate calculation on the input event reported by the input reader and report the result;
  • the first judging module is configured to determine, according to the coordinate values reported by the first event processing module, whether the input event is an edge input event, and if not, to report the input event.
  • the application framework layer further includes:
  • a second event processing module configured to perform coordinate calculation on the input event reported by the input reader and report the result
  • the second judging module is configured to determine, according to the coordinate values reported by the second event processing module, whether the input event is an edge input event, and if so, to report the input event.
  • the application framework layer further includes:
  • the event dispatching module is configured to report the events reported by the second judging module and the first judging module.
  • the application framework layer further includes:
  • the third judging module is configured to determine, according to the device identifier contained in the event reported by the event dispatching module, whether the event is an edge input event; if so, the event is reported to the first application module, and otherwise to the second application module.
  • the first application module is configured to identify a normal input event according to a relevant parameter of the normal input event, and report the recognition result to the application layer;
  • the second application module is configured to identify an edge input event according to a related parameter of the edge input event and report the recognition result to the application layer.
  • the input device is a touch screen of the mobile terminal
  • the touch screen includes at least one edge input area and at least one normal input area.
  • the input device is a touch screen of the mobile terminal
  • the touch screen includes at least one edge input zone, at least one normal input zone, and at least one transition zone.
  • an input processing method including:
  • the driver layer acquires an input event generated by the user through the input device, and reports it to the application framework layer;
  • the application framework layer determines whether the input event is an edge input event or a normal input event; if it is a normal input event, the normal input event is processed and recognized and the recognition result is reported to the application layer; if it is an edge input event, the edge input event is processed and recognized and the recognition result is reported to the application layer;
  • the application layer executes the corresponding input instruction according to the reported recognition result.
  • the method further includes:
  • creating an input device object having a device identifier for each input event includes:
  • making the normal input event correspond to a touch screen having a first device identifier; the application framework layer sets up a second input device object having a second device identifier to correspond to the edge input event.
  • the driver layer acquiring an input event generated by the user through the input device and reporting it to the application framework layer includes:
  • the driver layer assigns each touch point a number for distinguishing fingers and reports the input event using protocol A.
  • the driver layer acquiring an input event generated by the user through the input device and reporting it to the application framework layer includes:
  • the driver layer reports the input event using protocol B;
  • the method further includes:
  • the application framework layer assigns each touch point in the input event a number for distinguishing fingers.
  • the method further includes:
  • the application framework layer converts the coordinates in the relevant parameters of the edge input event and reports them; it also converts the coordinates in the relevant parameters of the normal input event, acquires the current state of the mobile terminal, adjusts the converted coordinates according to the current state, and then reports them;
  • the application framework layer determines, according to the device identifier, whether the input event is an edge input event; if not, the normal input event is recognized according to its relevant parameters and the recognition result is reported to the application layer; if so, the edge input event is recognized according to its relevant parameters and the recognition result is reported to the application layer.
  • the application framework layer determines whether the input event is an edge input event or a normal input event includes:
  • a user equipment including:
  • An input device configured to receive the user's input operation and convert the physical input into an electrical signal to generate an input event;
  • the processor includes: a driving module, an application framework module, and an application module;
  • the driving module is configured to acquire an input event generated by the user through the input device, and report the event to the application framework module.
  • the application framework module is configured to determine whether the input event is an edge input event or a normal input event; if it is a normal input event, the normal input event is processed and recognized and the recognition result is reported to the application module; if it is an edge input event, the edge input event is processed and recognized and the recognition result is reported to the application module;
  • the application module is configured to execute a corresponding input instruction according to the reported recognition result.
  • a computer storage medium storing a computer program for executing the input processing method described above.
  • In the mobile terminal, input processing method, user equipment, and computer storage medium embodying the present invention, the operations of distinguishing the A area and the C area are performed at the application framework layer, and the virtual device is created in the application framework layer, so the driver layer's dependence on hardware for distinguishing the A area and the C area is avoided. By setting touch point numbers, fingers can be distinguished, making the scheme compatible with both protocol A and protocol B. The scheme can be integrated into the operating system of the mobile terminal, applies to different hardware and different kinds of mobile terminals, and is highly portable. All element information of a touch point (coordinates, number, and so on) is stored, which facilitates subsequent determination of edge inputs (for example, FIT).
  • FIG. 1 is a schematic diagram of screen area division of a mobile terminal according to a first embodiment of the present invention
  • FIG. 2 is a schematic diagram of a software architecture of a mobile terminal according to an embodiment of the present invention.
  • FIG. 3 is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention.
  • FIG. 4 is a schematic flow chart of determining an edge input event in an embodiment of the present invention.
  • FIG. 5 is a schematic flowchart of determining an input event according to a device identifier according to an embodiment of the present invention
  • FIG. 6 is a flowchart of an input processing method according to an embodiment of the present invention.
  • FIG. 7 is a schematic diagram of an effect of opening a camera application of a mobile terminal by using an input processing method according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of screen area division of a mobile terminal according to a second embodiment of the present invention.
  • FIG. 9 is a schematic diagram showing the hardware structure of a user equipment according to an embodiment of the present invention.
  • FIG. 1 is a schematic diagram of screen area division of a mobile terminal according to a first embodiment of the present invention.
  • the C area 101 is an edge input area
  • the A area 100 is a normal input area
  • the B area 102 is a non-input area.
  • An input operation in the A area is processed in the existing normal manner; for example, clicking an application icon in the A area 100 starts the corresponding application.
  • For an input operation in the C area 101, an edge input processing mode can be defined.
  • For example, a bilateral slide in the C area 101 can be defined to trigger terminal acceleration.
  • the B area 102 is a non-input area.
  • the B area 102 may be provided with a button area, an earpiece, and the like.
  • The C area may be divided in a fixed manner or in a custom manner.
  • Fixed division: an area of fixed length and fixed width in the screen area of the mobile terminal is set as the C area 101.
  • For example, the C area 101 may include a partial area on the left side and a partial area on the right side of the screen of the mobile terminal, fixedly disposed at both side edges of the mobile terminal, as shown in FIG. 1.
  • The C area 101 may also be provided at only one side edge of the mobile terminal.
  • Custom division: the number, position, and size of the C area 101 can be defined as required.
  • The basic shape of the C area 101 is a rectangle, and the position and size of a C area can be determined by inputting the coordinates of the two vertices of a diagonal of the rectangle.
  • the embodiment of the present invention does not limit the division and setting manner of the C area.
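To make the custom division concrete, the following is a minimal sketch of a rectangular C area defined by the two diagonal vertices the description mentions. The language (C++) and all names are illustrative assumptions, not the patent's.

```cpp
#include <algorithm>

// A rectangular edge-input (C) area defined by two diagonal vertices.
struct EdgeRegion {
    int x1, y1;  // first diagonal vertex
    int x2, y2;  // opposite diagonal vertex

    // True when the touch point (x, y) falls inside this C area.
    bool contains(int x, int y) const {
        return x >= std::min(x1, x2) && x <= std::max(x1, x2) &&
               y >= std::min(y1, y2) && y <= std::max(y1, y2);
    }
};
```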
  • As shown in FIG. 2, the software architecture of the mobile terminal in the embodiment of the present invention includes an input device 201, a driver layer 202, an application framework layer 203, and an application layer 204.
  • The input device 201 receives the user's input operation, converts the physical input into an electrical signal (TP), and transmits it to the driver layer 202; the driver layer 202 analyzes the input position to obtain parameters such as the specific coordinates and duration of the touch point and uploads these parameters to the application framework layer 203. Communication between the application framework layer 203 and the driver layer 202 can be implemented through a corresponding interface.
  • The application framework layer 203 receives the parameters reported by the driver layer 202, parses them, distinguishes edge input events from normal input events, and passes valid input to the specific application of the application layer 204, so that the application layer 204 executes different input operation instructions according to different input operations.
  • the driver layer 202 is configured to acquire an input event generated by the user through the input device, and report the event to the application framework layer 203.
  • the application framework layer 203 is configured to determine whether the input event is an edge input event or a normal input event; if it is a normal input event, the normal input event is processed and recognized and the recognition result is reported to the application layer; if it is an edge input event, the edge input event is processed and recognized and the recognition result is reported to the application layer 204.
  • the application layer 204 is configured to execute a corresponding input instruction based on the reported recognition result.
  • The mobile terminal in the embodiment of the present invention performs the operations of distinguishing the A area and the C area at the application framework layer and creates the virtual device in the application framework layer, thereby avoiding the driver layer's dependence on hardware for distinguishing the A area and the C area.
  • FIG. 3 is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention.
  • the input device 201 is configured to receive input from a user.
  • the input device 201 can be a touch screen, a touch sensor panel (a touch panel provided with discrete capacitive sensors, resistive sensors, force sensors, optical sensors, or the like), a non-touch input device (e.g., an infrared input device), and the like.
  • the input device includes a touch screen 2010.
  • the driver layer 202 includes an event acquisition module 2020.
  • a device node 2021 is provided between the driver layer 202 and the application framework layer 203.
  • the application framework layer 203 includes an input reader 2030, a first event processing module 2031, a second event processing module 2032, a first judging module 2033, a second judging module 2034, an event dispatching module 2035, and a third judging module 2036.
  • the driving layer 202 includes an event obtaining module 2001 configured to acquire an input event generated by the user through the input device 201, for example, an input operation event through the touch screen.
  • the input events include a normal input event (A zone input event) and an edge input event (C zone input event).
  • Normal input events include input operations such as click, double click, and slide in Area A.
  • Edge input events include input operations in the C area such as sliding up or down along the left edge, sliding up or down along the right edge, bilateral sliding up or down, holding the four corners of the phone, sliding back and forth along one side, and a one-handed grip.
  • The event acquisition module 2001 is further configured to acquire relevant parameters of the touch point of the input operation, such as its coordinates and duration. If input events are reported using protocol A, the event acquisition module 2001 is further configured to assign each touch point a number (ID) for distinguishing fingers. Therefore, if an input event is reported using protocol A, the reported data includes parameters such as the coordinates of the touch point, the duration, and the number of the touch point.
  • A device node 2011 is disposed between the driver layer 202 and the input reader 2030 and is configured to notify the input reader 2030 of the application framework layer 203 to acquire input events.
  • The input reader 2030 is configured to traverse the device nodes, acquire input events, and report them. If the driver layer 202 reports input events using protocol B, the input reader 2030 is further configured to assign each touch point a number (ID) for distinguishing fingers. In an embodiment of the invention, the input reader 2030 is further configured to store all element information of a touch point (coordinates, duration, number, and so on).
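The patent requires that touch points carry numbers distinguishing fingers but does not specify an assignment algorithm. The sketch below shows one plausible approach, nearest-neighbour matching against the previous frame, purely as an assumption; the struct, function, and threshold are ours.

```cpp
#include <vector>

struct TouchPoint {
    float x, y;
    long durationMs;
    int id;  // finger-distinguishing number (ID)
};

// Match each point in the current frame to the closest point of the previous
// frame; unmatched points get a fresh ID. A production implementation would
// also enforce one-to-one matching between frames.
void assignFingerIds(std::vector<TouchPoint>& current,
                     const std::vector<TouchPoint>& previous,
                     int& nextId) {
    const float kMaxMatchDistSq = 50.0f * 50.0f;  // assumed radius, px^2
    for (auto& pt : current) {
        int bestId = -1;
        float bestDistSq = kMaxMatchDistSq;
        for (const auto& prev : previous) {
            float dx = pt.x - prev.x, dy = pt.y - prev.y;
            float distSq = dx * dx + dy * dy;
            if (distSq < bestDistSq) { bestDistSq = distSq; bestId = prev.id; }
        }
        pt.id = (bestId >= 0) ? bestId : nextId++;  // reuse or allocate
    }
}
```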
  • For each input event, an input device object having a device identifier is created.
  • For example, a first input device object having a first identifier can be created for normal input events; the first input device object corresponds to the actual hardware touch screen.
  • The application framework layer 203 further includes a second input device object (for example, an EDGE device), which is a virtual device, that is, an empty device, having a second identifier for corresponding to edge input events.
  • It should be understood that, alternatively, edge input events may correspond to the first input device object having the first identifier while normal input events correspond to the second input device object having the second identifier.
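A minimal sketch of the two device objects just described: a real touch-screen object and a virtual, empty EDGE object whose only role is to give edge events a distinguishing device identifier. The identifier values and names are assumptions.

```cpp
#include <string>
#include <vector>

struct InputDeviceObject {
    int deviceId;    // device identifier carried by reported events
    std::string name;
    bool isVirtual;  // true for the empty EDGE device
};

std::vector<InputDeviceObject> registerInputDevices() {
    return {
        {1, "touchscreen", false},  // first device object: actual hardware
        {2, "EDGE", true},          // second device object: virtual/empty
    };
}
```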
  • the first event processing module 2031 is configured to process an input event reported by the input reader 2030, for example, coordinate calculation of a touch point.
  • the second event processing module 2032 is configured to process an input event reported by the input reader 2030, for example, coordinate calculation of a touch point.
  • the first determining module 2033 is configured to determine whether the event is an edge input event based on the coordinate value (X value), and if not, upload the event to the event dispatching module 2035.
  • the second determination module 2034 is configured to determine whether the event is an edge input event based on the coordinate value (X value), and if so, upload the event to the event dispatch module 2035.
  • As shown in FIG. 4, when determining whether an event is an edge input event, the first judging module 2033 acquires the horizontal-axis coordinate of the touch point and compares the horizontal-axis coordinate (that is, the X-axis coordinate) x of the touch point with the C-area width Wc and the touch-screen width W. Specifically, if Wc < x < (W - Wc), the touch point is located in the A area and the event is a normal input event; otherwise, the event is an edge input event. If the event is not an edge input event (that is, it is a normal input event), the event is reported to the event dispatching module 2035. Similarly, the second judging module 2034 determines whether an event is an edge input event in the manner shown in FIG. 4, and if the result is that the event is an edge input event, the event is reported to the event dispatching module 2035.
  • The judgment flow shown in FIG. 4 is based on a touch screen of the mobile terminal as shown in FIG. 1, that is, a mobile terminal that includes C areas 101 at the left and right edges and an A area 100 in the middle. Therefore, with coordinates set along the coordinate system shown in FIG. 1, Wc < x < (W - Wc) determines that the touch point is in the A area.
  • The judgment formula (Wc < x < (W - Wc)) may be adjusted according to the area division of the mobile terminal. For example, if the mobile terminal includes only one C area 101 at the left edge, of width Wc, then when Wc < x ≤ W the touch point is located in the A area; otherwise, the touch point is located in the C area. If the mobile terminal includes only one C area 101 at the right edge, of width Wc, then when x < (W - Wc) the touch point is located in the A area; otherwise, the touch point is located in the C area.
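The judgment of FIG. 4 reduces to one comparison, sketched here under the assumption of C areas of width Wc on both edges; strict inequalities are our reading, since the comparison operators are garbled in the translation.

```cpp
enum class Zone { EdgeC, NormalA };

// A touch at horizontal coordinate x is in the A area when Wc < x < (W - Wc).
Zone classifyTouch(float x, float screenWidthW, float edgeWidthWc) {
    if (x > edgeWidthWc && x < screenWidthW - edgeWidthWc) {
        return Zone::NormalA;  // middle of the screen: normal input event
    }
    return Zone::EdgeC;        // within Wc of either edge: edge input event
}
```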
  • the event dispatch module 2035 is configured to report the edge input event and/or the A zone input event to the third determination module 2036.
  • The channel used for reporting edge input events is not the same as that used for A-area input events; edge input events are reported on a dedicated channel.
  • The event dispatching module 2035 is further configured to convert the coordinates in the relevant parameters of the edge input event and report them, and to convert the coordinates in the relevant parameters of the normal input event, acquire the current state of the mobile terminal, adjust the converted coordinates according to the current state, and then report them.
  • Converting the coordinates includes mapping the coordinate transformation of the touch screen to the coordinates of the display of the mobile terminal.
  • In some cases the coordinates of the A area are adjusted. Specifically, acquiring the current state of the mobile terminal and adjusting the converted coordinates according to the current state includes the following:
  • In the one-handed operation state, the coordinates are reduced by a certain ratio and shifted compared with the coordinates in the normal state; therefore, the converted coordinates are scaled down and shifted accordingly.
  • In the landscape or portrait state, the horizontal and vertical coordinates are swapped compared with the coordinates in the normal state; therefore, the horizontal and vertical components of the converted coordinates are swapped.
  • In the split-screen state, the coordinates are proportionally converted into two or more sets of coordinates compared with the coordinates in the normal state; therefore, the converted coordinates are converted accordingly.
  • In other words, the parameters of the input event are adjusted according to the detected state of the mobile terminal (for example, landscape/portrait, one-handed operation, or split screen). For example, in one-handed operation the coordinates are scaled down.
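The following sketch illustrates the state-dependent adjustment described above. The concrete scale factor, offset, and rotation convention are assumptions; the description only says that coordinates are reduced by a certain ratio and moved, or have their horizontal and vertical components swapped.

```cpp
struct Point { float x, y; };

enum class TerminalState { Normal, OneHanded, Landscape };

Point adjustForState(Point p, TerminalState state, float screenW = 1080.0f) {
    switch (state) {
        case TerminalState::OneHanded: {
            const float kScale = 0.75f;           // assumed reduction ratio
            const Point kShift = {0.0f, 600.0f};  // assumed window shift
            return {p.x * kScale + kShift.x, p.y * kScale + kShift.y};
        }
        case TerminalState::Landscape:
            // swap horizontal and vertical coordinates (one rotation choice)
            return {p.y, screenW - p.x};
        default:
            return p;
    }
}
```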
  • The event dispatching module 2035 is implemented by InputDispatcher::dispatchMotion().
  • The third judging module 2036 is configured to determine, according to the device identifier (ID), whether the event is an edge input event; if so, the event is reported to the first application module 2037, and otherwise to the second application module 2038.
  • Specifically, the third judging module 2036 first obtains the device identifier and determines, according to the device identifier, whether the device is a touch-screen type device; if so, it further determines whether the device identifier is the C-area device identifier, that is, the identifier of the second input device object; if so, the event is determined to be an edge input event, and if not, a normal input event.
  • Alternatively, it may be determined whether the device identifier is the A-area device identifier, that is, the identifier corresponding to the first input device object; if so, the event is determined to be a normal input event, and if not, an edge input event.
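Routing by device identifier might look like the sketch below, which follows the module descriptions (first application module for the A area, second for the C area). The identifiers reuse the illustrative values of the registration sketch above; the handlers are placeholders.

```cpp
#include <cstdio>

struct Touch { float x, y; };

void handleNormalInput(const Touch& t) {  // first application module (A area)
    std::printf("A-area event at x=%.0f\n", t.x);
}
void handleEdgeInput(const Touch& t) {    // second application module (C area)
    std::printf("C-area event at x=%.0f\n", t.x);
}

constexpr int kTouchscreenDeviceId = 1;  // first (real) device object
constexpr int kEdgeDeviceId = 2;         // second (virtual) device object

void routeEvent(int deviceId, const Touch& t) {
    if (deviceId == kEdgeDeviceId) {
        handleEdgeInput(t);    // edge input event
    } else {
        handleNormalInput(t);  // normal input event
    }
}
```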
  • The first application module 2037 is configured to process input events related to A-area input. Specifically, the processing includes performing recognition according to the touch point coordinates, duration, number, and other parameters of the input operation, and reporting the recognition result to the application layer.
  • The second application module 2038 is configured to process input events related to C-area input. Specifically, the processing includes performing recognition according to the touch point coordinates, duration, and number of the input operation, and reporting the recognition result to the application layer. For example, according to the coordinates, duration, and number of the touch point, it can be recognized whether the input operation is a click or slide in the A area, or a single-sided back-and-forth slide in the C area.
  • The application layer 204 includes applications (Application 1, Application 2, ...) such as the camera, gallery, and lock screen.
  • The input operations in the embodiments of the present invention include application-level and system-level operations; system-level gesture processing is likewise classified under the application layer here.
  • the application level is the manipulation of the application, for example, on, off, volume control, and the like.
  • the system level is the manipulation of the mobile terminal, for example, power on, acceleration, inter-application switching, global return, and the like.
  • The application layer can obtain C-area input events by registering a Listener for C-area events, and can obtain A-area input events by registering a Listener for A-area events.
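The patent names no concrete listener API, so the callback-based registry below is purely an assumption, sketched to show how the application layer could subscribe to recognized C-area events.

```cpp
#include <functional>
#include <vector>

struct EdgeEvent { int gestureType; };  // a recognized C-area gesture

class EdgeEventRegistry {
public:
    using Listener = std::function<void(const EdgeEvent&)>;

    // Called by an application to subscribe to C-area events.
    void registerEdgeListener(Listener l) { listeners_.push_back(std::move(l)); }

    // Called by the framework after a C-area event has been recognized.
    void dispatch(const EdgeEvent& e) {
        for (auto& l : listeners_) l(e);
    }

private:
    std::vector<Listener> listeners_;
};
```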
  • the mobile terminal sets and stores input commands corresponding to different input operations, including input commands corresponding to edge input operations and input commands corresponding to normal input operations.
  • When the application layer receives the recognition result of a reported edge input event, it responds to the edge input operation by invoking the input instruction corresponding to that edge input operation.
  • When the application layer receives the recognition result of a reported normal input event, it responds to the normal input operation by invoking the input instruction corresponding to that normal input operation.
  • the input events of the embodiments of the present invention include input operations only in the A zone, input operations only in the C zone, and input operations simultaneously generated in the A zone and the C zone.
  • Correspondingly, the input instructions include input instructions for all three types of input events.
  • The embodiment of the present invention can thus control the mobile terminal through combinations of A-area and C-area input operations.
  • For example, if the input operation is simultaneously clicking corresponding positions in the A area and the C area, and the corresponding input instruction is to close an application, then the application can be closed by simultaneously clicking the corresponding positions in the A area and the C area.
  • The mobile terminal in the embodiment of the present invention performs the operations of distinguishing the A area and the C area at the application framework layer and creates the virtual device there, avoiding the driver layer's dependence on hardware for distinguishing the A area and the C area.
  • By setting touch point numbers, fingers can be distinguished, making the scheme compatible with both protocol A and protocol B. Because the functions of the input reader 2030, the first event processing module 2031, the second event processing module 2032, the first judging module 2033, the second judging module 2034, the event dispatching module 2035, the third judging module 2036, the first application module 2037, the second application module 2038, and so on can be integrated into the operating system of the mobile terminal, the scheme can be applied to different hardware and different kinds of mobile terminals and is highly portable. The input reader 2030 automatically stores all element information of a touch point (coordinates, number, and so on), which facilitates subsequent determination of edge inputs (for example, FIT).
  • FIG. 6 is a flowchart of an input processing method according to an embodiment of the present invention, including the following steps:
  • S1: the driver layer acquires an input event generated by the user through the input device and reports it to the application framework layer.
  • The input device receives the user's input operation (i.e., an input event), converts the physical input into an electrical signal, and transmits the electrical signal to the driver layer.
  • the input events include an A zone input event and a C zone input event.
  • Input events in Zone A include input operations such as click, double click, and slide in Zone A.
  • Input events in the C area include input operations such as sliding up or down along the left edge, sliding up or down along the right edge, bilateral sliding up or down, sliding back and forth along one side, and a one-handed grip.
  • the driving layer analyzes the input position according to the received electrical signal to obtain related parameters such as specific coordinates and duration of the touched point.
  • the relevant parameters are reported to the application framework layer.
  • In one embodiment, step S1 further includes: assigning each touch point a number (ID) for distinguishing fingers.
  • In this case, the driver layer reports the input event using protocol A, and the reported data includes the above relevant parameters and the number of the touch point.
  • S2: the application framework layer determines whether the input event is an edge input event or a normal input event; if it is a normal input event, step S3 is performed, and if it is an edge input event, step S4 is performed.
  • the application framework layer can determine whether it is an edge input event or a normal input event according to the coordinates in the relevant parameters of the input event.
  • Specifically, the horizontal-axis coordinate of the touch point is first acquired, and then the horizontal-axis coordinate (that is, the X-axis coordinate) x of the touch point is compared with the C-area width Wc and the touch-screen width W. If Wc < x < (W - Wc), the touch point is in the A area and the event is a normal input event; otherwise, the event is an edge input event.
  • In one embodiment, step S2 further includes: assigning each touch point a number (ID) for distinguishing fingers, and storing all element information of the touch point (coordinates, duration, number, and so on).
  • By setting touch point numbers, fingers can be distinguished and both protocol A and protocol B are supported; and storing all elements of the touch point (coordinates, number, and so on) facilitates subsequent determination of edge inputs (for example, FIT).
  • The channel used for reporting edge input events is not the same as that used for normal input events; edge input events use a dedicated channel.
  • S3: the application framework layer processes and recognizes the normal input event and reports the recognition result to the application layer.
  • S4: the application framework layer processes and recognizes the edge input event and reports the recognition result to the application layer.
  • The processing and recognition include: performing recognition according to the touch point coordinates, duration, number, and other parameters of the input operation to determine the input operation. For example, according to the coordinates, duration, and number of the touch point, it can be recognized whether the input operation is a click or a slide in the A area, or a single-sided back-and-forth slide in the C area.
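As a sketch of this recognition step, a touch sequence can be classified as a click or a slide from its displacement and duration; the thresholds are illustrative assumptions, since the patent gives no values.

```cpp
#include <cmath>

enum class Gesture { Click, Slide };

Gesture recognize(float x0, float y0, float x1, float y1, long durationMs) {
    const float kSlideDistPx = 24.0f;  // assumed movement threshold
    const long kClickMaxMs = 300;      // assumed click duration limit
    float dist = std::hypot(x1 - x0, y1 - y0);
    return (dist < kSlideDistPx && durationMs <= kClickMaxMs)
               ? Gesture::Click
               : Gesture::Slide;
}
```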
  • the application layer executes a corresponding input instruction according to the reported recognition result.
  • the application layer includes applications such as a camera, a gallery, and a lock screen.
  • The input operations in the embodiments of the present invention include application-level and system-level operations; system-level gesture processing is likewise classified under the application layer here.
  • the application level is the manipulation of the application, for example, on, off, volume control, and the like.
  • the system level is the manipulation of the mobile terminal, for example, power on, acceleration, inter-application switching, global return, and the like.
  • the mobile terminal sets and stores input commands corresponding to different input operations, including input commands corresponding to edge input operations and input commands corresponding to normal input operations.
  • When the application layer receives the recognition result of a reported edge input event, it invokes the corresponding input instruction according to the edge input operation to respond to it; when the application layer receives the recognition result of a reported normal input event, it invokes the corresponding input instruction according to the normal input operation to respond to it.
  • the input events of the embodiments of the present invention include input operations only in the A zone, input operations only in the C zone, and input operations simultaneously generated in the A zone and the C zone.
  • Correspondingly, the input instructions include input instructions for all three types of input events.
  • The embodiment of the present invention can thus control the mobile terminal through combinations of A-area and C-area input operations.
  • For example, if the input operation is simultaneously clicking corresponding positions in the A area and the C area, and the corresponding input instruction is to close an application, then the application can be closed by simultaneously clicking the corresponding positions in the A area and the C area.
  • the input processing method of the embodiment of the present invention further includes:
  • In one embodiment, a first input device object having a first identifier can be created for normal input events.
  • The first input device object corresponds to the touch screen of the input device.
  • The application framework layer sets up a second input device object.
  • The second input device object (for example, a FIT device) is a virtual device, that is, an empty device, having a second identifier for corresponding to edge input events. It should be understood that, alternatively, edge input events may correspond to the first input device object having the first identifier while normal input events correspond to the second input device object having the second identifier.
  • the input processing method of the embodiment of the present invention further includes:
  • The current state of the mobile terminal includes landscape/portrait orientation, one-handed operation, split screen, and the like.
  • Landscape and portrait orientation can be detected by a gyroscope or the like in the mobile terminal.
  • One-handed operation and split screen can be detected by reading the relevant setting parameters of the mobile terminal.
  • Converting the coordinates includes mapping the coordinate transformation of the touch screen to the coordinates of the display of the mobile terminal.
  • In some cases the coordinates of the A area are adjusted. Specifically, acquiring the current state of the mobile terminal and adjusting the converted coordinates according to the current state includes the following:
  • In the one-handed operation state, the coordinates are reduced by a certain ratio and shifted compared with the coordinates in the normal state; therefore, the converted coordinates are scaled down and shifted accordingly.
  • In the landscape or portrait state, the horizontal and vertical coordinates are swapped compared with the coordinates in the normal state; therefore, the horizontal and vertical components of the converted coordinates are swapped.
  • In the split-screen state, the coordinates are proportionally converted into two or more sets of coordinates compared with the coordinates in the normal state; therefore, the converted coordinates are converted accordingly.
  • Step S21 can be implemented by InputDispatcher::dispatchMotion().
  • Step S22: determine, according to the device identifier, whether the input event is an edge input event; if so, execute step S4, and if not, execute step S3.
  • Specifically, it is determined whether the device identifier is the identifier of the second input device object; if so, the event is determined to be an edge input event, and if not, a normal input event.
  • Alternatively, it may be determined whether the device identifier is the A-area device identifier, that is, the identifier corresponding to the first input device object; if so, the event is determined to be a normal input event, and if not, an edge input event.
  • In the input processing method of the embodiment of the present invention, the operations of distinguishing the A area and the C area are performed at the application framework layer, avoiding the driver layer's dependence on hardware for distinguishing the A area and the C area.
  • By setting touch point numbers, fingers can be distinguished and both protocol A and protocol B are supported; the method can be integrated into the operating system of the mobile terminal, applies to different hardware and different kinds of mobile terminals, and is highly portable. All elements of a touch point (coordinates, number, and so on) are stored, which facilitates subsequent determination of edge inputs (for example, FIT).
  • FIG. 7 is a schematic diagram of the effect of opening the camera application of a mobile terminal by using the input processing method according to an embodiment of the present invention.
  • The figure on the left side of FIG. 7 is a schematic diagram of the main interface of the mobile terminal, in which area 1010 is a touch point preset in the edge input area (C area 101) for enabling the input operation of turning on the camera. Specifically, clicking area 1010 turns on the camera. Accordingly, the mobile terminal stores the input instruction "turn on the camera" in correspondence with the input operation of clicking area 1010.
  • When the user clicks area 1010 of the touch screen, the driver layer acquires the input event and reports it to the application framework layer.
  • the application framework layer can determine that the input event is an edge input event according to the coordinates of the touch point.
  • The application framework layer processes and recognizes the edge input event and, according to the touch point coordinates, duration, and number, recognizes the input operation as a click on area 1010.
  • the application framework layer reports the recognition result to the application layer, and the application layer executes an input instruction to turn on the camera.
  • FIG. 8 is a schematic diagram of screen division of a mobile terminal according to a second embodiment of the present invention.
  • In this embodiment, a transition region 103 (T area) is added at the edge of the screen of the mobile terminal.
  • If an input event starts in the C area and remains within the C area or the T area, the current slide is considered an edge gesture; if the input event starts in the C area and deviates into the A area, the edge gesture is considered ended and the current slide is treated as a normal input event; if the input event starts in the T area or the A area, the current slide is considered a normal input event regardless of which area of the screen it slides into.
  • The reporting process of input events in this embodiment is the same as in the input processing method described in the foregoing embodiment; the only difference is that, when the application framework layer processes and recognizes an edge input event, it must judge according to the above three conditions to determine the exact input event.
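The three conditions reduce to tracking where a gesture started and whether it ever entered the A area, as in this sketch; the enum and state names are ours.

```cpp
enum class Area { A, C, T };

// A gesture is an edge gesture only if it starts in the C area and never
// deviates into the A area (moving through the T area keeps it alive).
struct GestureState {
    Area startArea;
    bool enteredA = false;

    void onMove(Area current) {
        if (current == Area::A) enteredA = true;
    }
    bool isEdgeGesture() const {
        return startArea == Area::C && !enteredA;
    }
};
```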
  • the mobile terminal of the embodiment of the present invention can be implemented in various forms.
  • The terminal described in the present invention may include mobile terminals such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a personal digital assistant (PDA), a tablet computer (PAD), a portable multimedia player (PMP), and a navigation device, as well as fixed terminals such as a digital TV and a desktop computer.
  • the embodiment of the present invention further provides a user equipment
  • FIG. 9 is a schematic diagram of its hardware structure.
  • the user equipment 1000 includes a touch screen 100, a controller 200, a storage device 310, a global positioning system (GPS) chip 320, a communicator 330, a video processor 340, an audio processor 350, a button 360, a microphone 370, and a camera 380.
  • As described above, the touch screen 100 can be divided into an A area, a B area, and a C area, or into an A area, a B area, a C area, and a T area.
  • the touch screen 100 can be implemented as various types of displays such as a liquid crystal display (LCD), an organic light emitting diode (OLED) display, and a plasma display panel (PDP).
  • the touch screen 100 may include a driving circuit, which can be implemented, for example, as an a-Si TFT, a low-temperature polysilicon (LTPS) TFT, or an organic TFT (OTFT), and a backlight unit.
  • the touch screen 100 may include a touch sensor for sensing a touch gesture of a user.
  • the touch sensor can be implemented as various types of sensors, such as a capacitor type, a resistance type, or a piezoelectric type.
  • The capacitive type calculates the touch coordinate value by sensing the micro-current excited by the user's body when a part of the user's body (e.g., a finger) touches the surface of the touch screen, which is coated with a conductive material.
  • The resistive type touch screen includes two electrode plates and calculates the touch coordinate value by sensing the current that flows when the upper and lower plates at the touch point come into contact as the user touches the screen.
  • The touch screen 100 may also sense a user gesture made with an input device other than the user's finger, such as a pen.
  • If the input device is a stylus pen including a coil, the user device 1000 may include a magnetic sensor (not shown) for sensing a magnetic field that changes as the coil within the stylus approaches the magnetic sensor.
  • Accordingly, the user device 1000 can also sense a proximity gesture, i.e., the stylus hovering over the user device 1000.
  • the storage device 310 can store various programs and data required for the operation of the user device 1000.
  • the storage device 310 can store programs and data for constructing various screens to be displayed on the respective areas (for example, the A area, the C area).
  • the controller 200 displays content on each area of the touch screen 100 by using programs and data stored in the storage device 310.
  • the controller 200 includes a RAM 210, a ROM 220, a processor (CPU) 230, a graphics processing unit (GPU) 240, and a bus 250.
  • the RAM 210, the ROM 220, the CPU 230, and the GPU 240 may be connected to each other through a bus 250.
  • The CPU 230 accesses the storage device 310 and performs startup by using the operating system (OS) stored in the storage device 310.
  • the CPU 230 performs various operations by using various programs, contents, and data stored in the storage device 310.
  • the ROM 220 stores a set of commands for system startup.
  • the CPU 230 copies the OS stored in the storage device 310 to the RAM 210 according to the command set stored in the ROM 220, and starts the system by running the OS.
  • the CPU 230 copies the various programs stored in the storage device 310 to the RAM 210, and performs various operations by running the copy program in the RAM 210.
  • The GPU 240 can generate a screen including various objects such as icons, images, and text by using a calculator (not shown) and a renderer (not shown).
  • The calculator calculates feature values such as the coordinate values, format, size, and color with which each object is to be displayed according to the layout of the screen.
  • the GPS chip 320 is a unit that receives GPS signals from GPS satellites and calculates the current location of the user equipment 1000. When the navigation program is used or when the current location of the user is requested, the controller 200 can calculate the location of the user by using the GPS chip 320.
  • the communicator 330 is a unit that performs communication with various types of external devices in accordance with various types of communication methods.
  • the communicator 330 includes a WiFi chip 331, a Bluetooth chip 332, a wireless communication chip 333, and an NFC chip 334.
  • the controller 200 performs communication with various external devices by using the communicator 330.
  • the WiFi chip 331 and the Bluetooth chip 332 perform communication according to the WiFi method and the Bluetooth method, respectively.
  • In this case, various connection information such as a service set identifier (SSID) and a session key may first be transmitted and received, a communication connection may be established by using the connection information, and then various information may be transmitted and received.
  • the wireless communication chip 333 is a chip that performs communication in accordance with various communication standards such as IEEE, Zigbee, Third Generation (3G), Third Generation Partnership Project (3GPP), and Long Term Evolution (LTE).
  • The NFC chip 334 is a chip that operates according to the Near Field Communication (NFC) method using the 13.56 MHz band from among various RF-ID frequency bands such as 135 kHz, 13.56 MHz, 433 MHz, 860-960 MHz, and 2.45 GHz.
  • the video processor 340 is a unit that processes video data included in content received through the communicator 330 or content stored in the storage device 310.
  • Video processor 340 can perform various image processing for video data, such as decoding, scaling, noise filtering, frame rate conversion, and resolution conversion.
  • the audio processor 350 is a unit that processes audio data included in content received through the communicator 330 or content stored in the storage device 310.
  • the audio processor 350 can perform various processing for audio data, such as decoding, amplification, and noise filtering.
  • When a reproduction program for multimedia content is run, the controller 200 can reproduce the corresponding content by driving the video processor 340 and the audio processor 350.
  • the speaker 390 outputs the audio data generated in the audio processor 350.
  • the button 360 can be various types of buttons, such as mechanical buttons or touch pads or touch wheels formed on some areas such as the front, side or back of the main outer body of the user device 1000.
  • the microphone 370 is a unit that receives user voice or other sounds and converts them into audio data.
  • The controller 200 can use the user's voice input through the microphone 370 during a call, or convert it into audio data and store it in the storage device 310.
  • the camera 380 is a unit that captures a still image or a video image according to a user's control.
  • Camera 380 can be implemented as a plurality of units, such as a front camera and a rear camera. As described below, camera 380 can be used as a means of obtaining a user image in an exemplary embodiment that tracks the user's gaze.
  • The controller 200 can perform a control operation according to the user's voice input through the microphone 370 or a user motion recognized by the camera 380. Accordingly, the user equipment 1000 can operate in a motion control mode or a voice control mode.
  • When operating in the motion control mode, the controller 200 photographs the user by activating the camera 380, tracks changes in the user's motion, and performs the corresponding operation.
  • When operating in the voice control mode, the controller 200 analyzes the voice input through the microphone 370 and performs a control operation based on the analyzed user voice.
  • In a user device 1000 in which voice recognition technology or motion recognition technology is used, as in the various exemplary embodiments above, when the user performs an action such as selecting an object marked on the home screen, or speaks a voice command corresponding to the object, it may be determined that the corresponding object is selected, and a control operation matching the object may be performed.
  • the motion sensor 400 is a unit that senses the movement of the body of the user device 1000.
  • User device 1000 can be rotated or tilted in various directions.
  • the motion sensor 400 can sense moving features such as a rotational direction, an angle, and a slope by using one or more of various sensors such as a geomagnetic sensor, a gyro sensor, and an acceleration sensor.
  • The user device 1000 may further include a USB port connectable to a USB connector, various input ports for connecting various external components such as a headphone, a mouse, and a LAN, a DMB chip for receiving and processing digital multimedia broadcasting (DMB) signals, and various other sensors.
  • the storage device 310 can store various programs.
  • The touch screen is configured to receive the user's input operation and convert the physical input into an electrical signal to generate an input event;
  • the processor includes: a driving module, an application framework module, and an application module;
  • the driving module is configured to acquire an input event generated by the user through the input device, and report the event to the application framework module.
  • the application framework module is configured to determine whether the input event is an edge input event or a normal input event; if it is a normal input event, the normal input event is processed and recognized and the recognition result is reported to the application module; if it is an edge input event, the edge input event is processed and recognized and the recognition result is reported to the application module;
  • the application module is configured to execute a corresponding input instruction according to the reported recognition result.
  • The mobile terminal, the input processing method, and the user equipment in the embodiments of the present invention perform the operations of distinguishing the A area and the C area at the application framework layer and create the virtual device in the application framework layer, thereby avoiding the driver layer's dependence on hardware for distinguishing the A area and the C area.
  • By setting touch point numbers, fingers can be distinguished, making the scheme compatible with both protocol A and protocol B; the scheme can be integrated into the operating system of the mobile terminal, applies to different hardware and different kinds of mobile terminals, and is highly portable. All elements of a touch point (coordinates, number, and so on) are stored, which facilitates subsequent determination of edge inputs (for example, FIT).
  • The functions performed by the above input processing method in the embodiments of the present invention may, if implemented in the form of software function modules and sold or used as a stand-alone product, be stored in a computer-readable storage medium.
  • Based on this understanding, the technical solution of the embodiments of the present invention may, in essence, be embodied in the form of a software product stored in a storage medium and including a number of instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the various embodiments of the present invention.
  • The foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disk.
  • An embodiment of the present invention further provides a computer storage medium in which a computer program is stored, the computer program being used to execute the input processing method of the embodiments of the present invention.
  • Any process or method description in the flowcharts or otherwise described in the embodiments of the invention may be understood to represent code that includes one or more executable instructions for implementing the steps of a particular logical function or process. Modules, segments or portions, and the scope of the embodiments of the invention includes additional implementations, in which the functions may be performed in a substantially simultaneous manner or in an inverse order depending on the functions involved, in the order shown or discussed. This should be the invention It will be understood by those skilled in the art of the embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Telephone Function (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A mobile terminal, an input processing method, a user equipment, and a computer storage medium. The mobile terminal includes: an input device (201); a driver layer (202), configured to acquire an input event generated by a user through the input device and report it to an application framework layer (203); the application framework layer (203), configured to determine whether the input event is an edge input event or a normal input event, and if it is a normal input event, to process and recognize the normal input event and report the recognition result to an application layer (204), or if it is an edge input event, to process and recognize the edge input event and report the recognition result to the application layer (204); and the application layer (204), configured to execute a corresponding input instruction according to the reported recognition result. Since the A area and the C area are distinguished only at the application framework layer (203), and the virtual device is established at the application framework layer (203), the hardware dependence of distinguishing the A area from the C area at the driver layer (202) is avoided; by assigning numbers to touch points, fingers can be distinguished, and both protocol A and protocol B are supported.

Description

Mobile terminal, input processing method, user equipment, and computer storage medium
Technical Field
The present invention relates to the field of communications, and more particularly to a mobile terminal, an input processing method, a user equipment, and a computer storage medium.
Background
With the development of mobile terminal technology, terminal bezels are becoming ever narrower. To improve the user's input experience, edge input technology (for example, edge touch) has emerged.
In prior-art edge input, once touch point information (touch info) is detected, the driver layer determines directly from that information whether the touch occurred in the edge input area.
In practice, however, input chips are diverse, and the methods by which the driver layer acquires touch point information are highly chip-specific. As a result, determining the event type (whether an event is an edge input event) requires differentiated modification and porting for each input chip, which involves a heavy workload and is error-prone.
On the other hand, when reporting events the driver layer may choose between two implementations, protocol A and protocol B, of which protocol B distinguishes finger IDs. The implementation of edge input depends on finger IDs, which are used in multi-point input to compare the data of two successive touches of the same finger. Prior-art input solutions can therefore only support protocol B, and drivers using protocol A cannot be supported.
Prior-art input solutions thus suffer from strong hardware dependence and cannot support protocol A and protocol B at the same time, and need to be improved.
Summary
The technical problem to be solved by the present invention is, in view of the above-mentioned defect of strong hardware dependence in prior-art mobile terminal input solutions, to provide a mobile terminal, an input processing method, a user equipment, and a computer storage medium.
The technical solutions adopted by the present invention to solve this technical problem are as follows:
In a first aspect, a mobile terminal is provided, including:
an input device;
a driver layer, configured to acquire an input event generated by a user through the input device and report it to an application framework layer;
the application framework layer, configured to determine whether the input event is an edge input event or a normal input event; if it is a normal input event, to process and recognize the normal input event and report the recognition result to an application layer; and if it is an edge input event, to process and recognize the edge input event and report the recognition result to the application layer;
the application layer, configured to execute a corresponding input instruction according to the reported recognition result.
In one embodiment, the normal input event corresponds to a first input device object having a first device identifier;
the application framework layer is further configured to set up a second input device object having a second device identifier, for corresponding to the edge input event.
In one embodiment, the driver layer reports input events using protocol A or protocol B; if protocol A is used to report input events, the event acquisition module is further configured to assign each touch point a number used to distinguish fingers;
if protocol B is used to report input events, the application framework layer is further configured to assign each touch point a number used to distinguish fingers.
In one embodiment, the driver layer includes an event acquisition module configured to acquire an input event generated by a user through the input device.
In one embodiment, the application framework layer includes an input reader;
the mobile terminal further includes a device node arranged between the driver layer and the input reader, configured to notify the input reader to acquire input events;
the input reader is configured to traverse device nodes, acquire input events, and report them.
In one embodiment, the application framework layer further includes: a first event processing module, configured to perform coordinate calculation on the input events reported by the input reader and then report them;
a first judgment module, configured to determine, according to the coordinate values reported by the first event processing module, whether an input event is an edge input event, and if not, to report the input event.
In one embodiment, the application framework layer further includes:
a second event processing module, configured to perform coordinate calculation on the input events reported by the input reader and then report them;
a second judgment module, configured to determine, according to the coordinate values reported by the second event processing module, whether an input event is an edge input event, and if so, to report the input event.
In one embodiment, the application framework layer further includes:
an event dispatch module, configured to report the events reported by the second judgment module and the first judgment module.
In one embodiment, the application framework layer further includes:
a first application module;
a second application module;
a third judgment module, configured to determine, according to the device identifier contained in an event reported by the event dispatch module, whether the event is an edge input event, and if so, to report it to the second application module; otherwise, to report it to the first application module;
the first application module, configured to recognize normal input events according to the relevant parameters of the normal input events and to report the recognition results to the application layer;
the second application module, configured to recognize edge input events according to the relevant parameters of the edge input events and to report the recognition results to the application layer.
In one embodiment, the input device is a touch screen of the mobile terminal;
the touch screen includes at least one edge input area and at least one normal input area.
In one embodiment, the input device is a touch screen of the mobile terminal;
the touch screen includes at least one edge input area, at least one normal input area, and at least one transition area.
In a second aspect, an input processing method is provided, including:
acquiring, by a driver layer, an input event generated by a user through an input device, and reporting it to an application framework layer;
determining, by the application framework layer, whether the input event is an edge input event or a normal input event; if it is a normal input event, processing and recognizing the normal input event and reporting the recognition result to an application layer; if it is an edge input event, processing and recognizing the edge input event and reporting the recognition result to the application layer;
executing, by the application layer, a corresponding input instruction according to the reported recognition result.
In one embodiment, the method further includes:
creating, for each input event, an input device object having a device identifier.
In one embodiment, creating, for each input event, an input device object having a device identifier includes:
making the normal input event correspond to a touch screen having a first device identifier; and setting up, by the application framework layer, a second input device object having a second device identifier to correspond to the edge input event.
In one embodiment, acquiring, by the driver layer, an input event generated by the user through the input device and reporting it to the application framework layer includes:
assigning, by the driver layer, each touch point a number used to distinguish fingers, and reporting the input event using protocol A.
In one embodiment, acquiring, by the driver layer, an input event generated by the user through the input device and reporting it to the application framework layer includes:
reporting, by the driver layer, the input event using protocol B;
the method further including:
assigning, by the application framework layer, each touch point in the input event a number used to distinguish fingers.
In one embodiment, the method further includes:
converting, by the application framework layer, the coordinates in the relevant parameters of the edge input event and then reporting them; and converting the coordinates in the relevant parameters of the normal input event, acquiring the current state of the mobile terminal, adjusting the converted coordinates according to the current state, and then reporting them;
determining, by the application framework layer according to the device identifier, whether the input event is an edge input event; if so, recognizing the edge input event according to its relevant parameters and reporting the recognition result to the application layer; if not, recognizing the normal input event according to its relevant parameters and reporting the recognition result to the application layer.
In one embodiment, determining, by the application framework layer, whether the input event is an edge input event or a normal input event includes:
acquiring the horizontal-axis coordinate of the touch point from the relevant parameters of the input event reported by the driver layer;
comparing the horizontal-axis coordinate x of the touch point with the width Wc of the edge input area and the width W of the touch screen; if Wc < x < (W - Wc), the touch point is located in the normal input area and the input event is a normal input event; otherwise, the input event is an edge input event.
In a third aspect, a user equipment is provided, including:
an input device, configured to receive a user's input operation and convert the physical input into an electrical signal to generate an input event;
a processor, including a driver module, an application framework module, and an application module;
wherein the driver module is configured to acquire an input event generated by the user through the input device and report it to the application framework module;
the application framework module is configured to determine whether the input event is an edge input event or a normal input event; if it is a normal input event, to process and recognize the normal input event and report the recognition result to the application module; and if it is an edge input event, to process and recognize the edge input event and report the recognition result to the application module;
the application module is configured to execute a corresponding input instruction according to the reported recognition result.
In a fourth aspect, a computer storage medium is provided, the computer storage medium storing a computer program for executing the above input processing method.
With the mobile terminal, input processing method, user equipment, and computer storage medium of the present invention, the A area and the C area are distinguished only at the application framework layer, and the virtual device is established at the application framework layer, so the hardware dependence of distinguishing the A area from the C area at the driver layer is avoided; by assigning touch point numbers, fingers can be distinguished and both protocol A and protocol B are supported; the solution can be integrated into the operating system of the mobile terminal, is applicable to mobile terminals with different hardware and of different types, and has good portability; and all elements of a touch point (its coordinates, number, and so on) are stored, which facilitates subsequent edge input determination (for example, FIT).
Brief Description of the Drawings
The present invention is further described below with reference to the accompanying drawings and embodiments, in which:
Fig. 1 is a schematic diagram of the screen area division of a mobile terminal according to a first embodiment of the present invention;
Fig. 2 is a schematic diagram of the software architecture of a mobile terminal according to an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention;
Fig. 4 is a schematic flowchart of determining an edge input event in an embodiment of the present invention;
Fig. 5 is a schematic flowchart of determining an input event according to a device identifier in an embodiment of the present invention;
Fig. 6 is a flowchart of an input processing method according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of the effect of opening the camera application of a mobile terminal by means of the input processing method of an embodiment of the present invention;
Fig. 8 is a schematic diagram of the screen area division of a mobile terminal according to a second embodiment of the present invention;
Fig. 9 is a schematic diagram of the hardware structure of a user equipment according to an embodiment of the present invention.
Detailed Description
For a clearer understanding of the technical features, objects, and effects of the present invention, specific embodiments of the present invention are now described in detail with reference to the accompanying drawings.
Referring to Fig. 1, which shows the screen area division of a mobile terminal according to the first embodiment of the present invention, the C area 101 is an edge input area, the A area 100 is a normal input area, and the B area 102 is a non-input area.
In embodiments of the present invention, input operations in the A area are processed in the existing normal manner; for example, clicking an application icon in the A area 100 opens that application. Input operations in the C area 101 may be given edge input semantics; for example, a bilateral slide in the C area 101 may be defined to trigger terminal acceleration. The B area 102 is a non-input area; for example, the B area 102 may accommodate a button area, an earpiece, and the like.
In embodiments of the present invention, the C area may be divided in a fixed or a user-defined manner. Fixed division means that an area of fixed length and fixed width on the screen of the mobile terminal is set as the C area 101. The C area 101 may include a partial area on the left side and a partial area on the right side of the mobile terminal screen, fixed at the two side edges of the mobile terminal, as shown in Fig. 1. Of course, the C area 101 may also be defined at only one side edge of the mobile terminal.
User-defined division means that the number, positions, and sizes of the C areas 101 can be set freely; for example, they may be set by the user, or the mobile terminal may adjust the number, positions, and sizes of the C areas 101 according to its own needs. Typically, the basic shape of the C area 101 is designed as a rectangle, so that the position and size of a C area can be determined simply by inputting the coordinates of two diagonal vertices of the shape.
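Since a C area is fully specified by two diagonal vertices, its bounds can be normalized and hit-tested with a few lines of code. The following is a minimal C++ sketch of that idea; the type and function names are illustrative and not taken from the patent.

```cpp
#include <algorithm>

// A C-area rectangle specified by any two diagonal vertices, as in the
// user-defined division scheme described above.
struct EdgeRegion {
    int left, top, right, bottom;

    // Normalize so the rectangle is valid regardless of which two
    // diagonal corners were supplied.
    EdgeRegion(int x1, int y1, int x2, int y2)
        : left(std::min(x1, x2)), top(std::min(y1, y2)),
          right(std::max(x1, x2)), bottom(std::max(y1, y2)) {}

    // True if the touch point (x, y) falls inside this C area.
    bool contains(int x, int y) const {
        return x >= left && x <= right && y >= top && y <= bottom;
    }
};
```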
To accommodate different users' habits with different applications, multiple sets of C-area configurations for different application scenarios may also be provided. For example, on the system desktop, where icons occupy much of the screen, the C areas on both sides are set relatively narrow; after the camera icon is clicked to enter the camera application, the number, positions, and sizes of the C areas for that scenario can be configured so that, without affecting focusing, the C areas can be made relatively wide.
Embodiments of the present invention place no restriction on how the C area is divided or configured.
Referring to Fig. 2, a schematic diagram of the software architecture of a mobile terminal according to an embodiment of the present invention, the software architecture includes: an input device 201, a driver layer 202, an application framework layer 203, and an application layer 204.
The input device 201 receives the user's input operation, converts the physical input into an electrical signal TP, and passes the TP to the driver layer 202. The driver layer 202 parses the input position to obtain parameters such as the specific coordinates and duration of the touch point, and uploads these parameters to the application framework layer 203; communication between the application framework layer 203 and the driver layer 202 can be implemented through corresponding interfaces. The application framework layer 203 receives and parses the parameters reported by the driver layer 202, distinguishes edge input events from normal input events, and passes valid input upward to the specific application in the application layer 204, so that the application layer 204 can execute different input instructions according to different input operations.
Specifically, the driver layer 202 is configured to acquire an input event generated by the user through the input device and report it to the application framework layer 203.
The application framework layer 203 is configured to determine whether the input event is an edge input event or a normal input event; if it is a normal input event, the normal input event is processed and recognized and the recognition result is reported to the application layer; if it is an edge input event, the edge input event is processed and recognized and the recognition result is reported to the application layer 204.
The application layer 204 is configured to execute a corresponding input instruction according to the reported recognition result.
In the mobile terminal of this embodiment of the present invention, since the A area and the C area are distinguished only at the application framework layer, and the virtual device is established at the application framework layer, the hardware dependence of distinguishing the A area from the C area at the driver layer is avoided.
Referring to Fig. 3, a schematic structural diagram of a mobile terminal according to an embodiment of the present invention: in this embodiment, the input device 201 is configured to receive the user's input. The input device 201 may be a touch screen, a touch sensor panel (a touch panel provided with discrete capacitive, resistive, force, optical, or similar sensors), a non-touch input device (for example, an infrared input device), or the like.
In one embodiment of the present invention, the input device includes a touch screen 2010. The driver layer 202 includes an event acquisition module 2020. A device node 2021 is arranged between the driver layer 202 and the application framework layer 203. The application framework layer 203 includes an input reader 2030, a first event processing module 2031, a second event processing module 2032, a first judgment module 2033, a second judgment module 2034, an event dispatch module 2035, a third judgment module 2036, a first application module 2037, a second application module 2038, and so on.
The event acquisition module 2020 of the driver layer 202 is configured to acquire input events generated by the user through the input device 201, for example, input operation events performed through the touch screen. In embodiments of the present invention, input events include normal input events (A-area input events) and edge input events (C-area input events). Normal input events include input operations performed in the A area, such as clicks, double clicks, and slides. Edge input events include input operations performed in the C area, such as a slide up or down along the left edge, a slide up or down along the right edge, a bilateral slide up or down, gripping the four corners of the handset, a back-and-forth slide along one edge, a squeeze, and a one-handed grip.
In addition, the event acquisition module 2020 is further configured to acquire relevant parameters such as the coordinates and duration of the touch points of the input operation. If protocol A is used to report input events, the event acquisition module 2020 is also configured to assign each touch point a number (ID) used to distinguish fingers. Thus, when protocol A is used to report input events, the reported data include parameters such as the coordinates and duration of the touch points, as well as the touch point numbers.
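Because protocol A reports anonymous touch points each frame, giving every touch point a stable finger number requires matching the points of the current frame against those of the previous frame. One common way to do this is nearest-neighbor matching between frames; the C++ sketch below illustrates a simple greedy variant under that assumption. All names, the distance gate, and the matching strategy are illustrative, not taken from the patent, and a production driver would also handle touch-up and ID recycling.

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

struct TouchPoint {
    float x, y;
    int32_t id = -1;  // finger number to be assigned
};

// Greedily give each new anonymous point the ID of the nearest unclaimed
// point from the previous frame; unmatched points receive fresh IDs.
void assignFingerIds(const std::vector<TouchPoint>& prev,
                     std::vector<TouchPoint>& cur,
                     int32_t& nextId) {
    std::vector<bool> used(prev.size(), false);
    for (auto& p : cur) {
        float best = 1e9f;
        int match = -1;
        for (size_t i = 0; i < prev.size(); ++i) {
            if (used[i]) continue;
            float d = std::hypot(p.x - prev[i].x, p.y - prev[i].y);
            if (d < best) { best = d; match = static_cast<int>(i); }
        }
        if (match >= 0 && best < 100.0f) {  // distance gate in pixels (assumed)
            p.id = prev[match].id;          // same finger as last frame
            used[match] = true;
        } else {
            p.id = nextId++;                // a new finger touched down
        }
    }
}
```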
A device node 2021 is arranged between the driver layer 202 and the input reader 2030, configured to notify the input reader 2030 of the application framework layer 203 to acquire input events.
The input reader 2030 is configured to traverse the device nodes, acquire input events, and report them. If the driver layer 202 reports input events using protocol B, the input reader 2030 is also configured to assign each touch point a number (ID) used to distinguish fingers. In embodiments of the present invention, the input reader 2030 is further configured to store all element information of the touch points (coordinates, duration, number, and so on).
In embodiments of the present invention, to make it easy for the application layer 204 to distinguish different input events when responding, an input device object having a device identifier is created for each input event. In one embodiment, a first input device object having a first identifier may be created for normal input events; the first input device object corresponds to the actual hardware touch screen.
In addition, the application framework layer 203 further includes a second input device object. This second input device object (for example, an edge input device, or FIT device) is a virtual device, that is, an empty device, which has a second identifier and corresponds to edge input events. It should be understood that, alternatively, edge input events may correspond to the first input device object having the first identifier, while normal input events correspond to the second input device object having the second identifier.
The first event processing module 2031 is configured to process the input events reported by the input reader 2030, for example, to calculate touch point coordinates.
The second event processing module 2032 is configured to process the input events reported by the input reader 2030, for example, to calculate touch point coordinates.
The first judgment module 2033 is configured to determine, according to the coordinate value (X value), whether an event is an edge input event, and if not, to pass the event up to the event dispatch module 2035.
The second judgment module 2034 is configured to determine, according to the coordinate value (X value), whether an event is an edge input event, and if so, to pass the event up to the event dispatch module 2035.
Referring to Fig. 4, when determining whether an event is an edge input event, the first judgment module 2033 acquires the horizontal-axis coordinate of the touch point and compares this horizontal-axis (X-axis) coordinate x with the C-area width Wc and the touch screen width W. Specifically, if Wc < x < (W - Wc), the touch point is located in the A area and the event is a normal input event; otherwise, the event is an edge input event. If the event is not an edge input event (that is, it is a normal input event), the event is reported to the event dispatch module 2035. Likewise, the second judgment module 2034 follows the procedure shown in Fig. 4 when determining whether an event is an edge input event, and if the result is that the event is an edge input event, it reports the event to the event dispatch module 2035.
It should be understood that the judgment flow shown in Fig. 4 is based on the touch screen of the mobile terminal shown in Fig. 1, that is, a terminal that includes C areas 101 at both the left and right edges and an A area 100 in the middle. Therefore, with coordinates set along the coordinate system shown in Fig. 1, the touch point is determined to be in the A area if Wc < x < (W - Wc). In other embodiments, the judgment formula (Wc < x < (W - Wc)) can be adapted to the terminal's area division. For example, if the mobile terminal includes only one C area 101 at the left edge, of width Wc, then the touch point is in the A area when Wc < x < W, and otherwise in the C area. If the mobile terminal includes only one C area 101 at the right edge, of width Wc, then the touch point is in the A area when x < (W - Wc), and otherwise in the C area.
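The comparison of Fig. 4, together with the one-sided variants just described, reduces to a small pure function. Below is a C++ sketch; the enum and parameter names are illustrative.

```cpp
enum class Region { EdgeC, NormalA };
enum class Layout { BothEdges, LeftEdgeOnly, RightEdgeOnly };

// Classify a touch by its X coordinate, following Fig. 4 and the
// single-edge variants: Wc < x < (W - Wc) for two edge areas,
// Wc < x < W for a left-only C area, and x < (W - Wc) for a right-only one.
Region classify(float x, float W, float Wc, Layout layout) {
    switch (layout) {
        case Layout::BothEdges:
            return (x > Wc && x < W - Wc) ? Region::NormalA : Region::EdgeC;
        case Layout::LeftEdgeOnly:
            return (x > Wc && x < W) ? Region::NormalA : Region::EdgeC;
        case Layout::RightEdgeOnly:
            return (x < W - Wc) ? Region::NormalA : Region::EdgeC;
    }
    return Region::NormalA;  // unreachable, keeps compilers quiet
}
```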
The event dispatch module 2035 is configured to report edge input events and/or A-area input events to the third judgment module 2036. In one embodiment, edge input events and A-area input events are reported through different channels, with edge input events reported through a dedicated channel.
In addition, the event dispatch module 2035 is further configured to convert the coordinates in the relevant parameters of edge input events and then report them, and to convert the coordinates in the relevant parameters of normal input events, acquire the current state of the mobile terminal, adjust the converted coordinates according to the current state, and then report them.
Converting the coordinates includes mapping the coordinates of the touch screen to the coordinates of the display screen of the mobile terminal.
In this embodiment of the present invention, only the coordinates of the A area are adjusted. Specifically, acquiring the current state of the mobile terminal and adjusting the converted coordinates according to the current state includes the following:
in the one-handed operation state, the coordinates are scaled down and shifted by a certain ratio relative to the coordinates of the normal state; therefore, the converted coordinates are scaled down and shifted proportionally;
in the landscape state, the horizontal and vertical coordinates are swapped relative to the coordinates of the normal state; therefore, the horizontal and vertical components of the converted coordinates are swapped;
in the split-screen state, the coordinates are proportionally converted into two or more coordinates relative to the coordinates of the normal state; therefore, the converted coordinates are converted accordingly.
The parameters of the input event are thus adjusted according to the detected state of the mobile terminal (for example, landscape or portrait orientation, one-handed operation, or split screen); for example, in one-handed operation the coordinates are scaled down proportionally. In one embodiment, the event dispatch module 2035 is implemented by inputdispatcher::dispatchmotion().
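The state-dependent adjustment of A-area coordinates can be pictured as a small transform applied after the raw touch coordinates are mapped to display coordinates. The C++ sketch below covers the one-handed and landscape cases; the structure, the scale factor, and the offset are assumptions for illustration (the split-screen case is omitted, since it maps one point to several), not the patent's implementation.

```cpp
struct Point { float x, y; };

enum class TerminalState { Normal, OneHanded, Landscape };

// Adjust a display-space point according to the terminal state.
// oneHandScale/oneHandOffset model the proportional shrink-and-shift of
// one-handed mode; landscape simply swaps the axes, as described above.
Point adjustForState(Point p, TerminalState state,
                     float oneHandScale = 0.75f,
                     Point oneHandOffset = {0.0f, 300.0f}) {
    switch (state) {
        case TerminalState::OneHanded:
            return { p.x * oneHandScale + oneHandOffset.x,
                     p.y * oneHandScale + oneHandOffset.y };
        case TerminalState::Landscape:
            return { p.y, p.x };  // swap horizontal and vertical coordinates
        case TerminalState::Normal:
        default:
            return p;
    }
}
```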
The third judgment module 2036 is configured to determine, according to the device identifier (ID), whether an event is an edge input event, and if so, to report it to the second application module 2038; otherwise, to report it to the first application module 2037.
Specifically, referring to Fig. 5, the third judgment module 2036 first acquires the device identifier and determines from it whether the device is a touch screen type device; if so, it further determines whether the device identifier is the C-area device identifier, that is, the identifier of the above second input device object. If so, the event is determined to be an edge input event; if not, a normal input event. It should be understood that, alternatively, after the device is determined to be a touch screen type device, it may be further determined whether the device identifier is the A-area device identifier, that is, the identifier corresponding to the above first input device object; if so, the event is determined to be a normal input event, and if not, an edge input event.
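Because each event carries the identifier of the (real or virtual) input device it was attributed to, the routing of Fig. 5 is a two-step check on that identifier. A minimal C++ sketch, with hypothetical constants standing in for the real device IDs:

```cpp
#include <cstdint>

// Hypothetical identifiers: the real touch screen (A area) and the
// virtual edge-input device (C area, e.g. the FIT device).
constexpr int32_t kTouchScreenDeviceId = 1;
constexpr int32_t kEdgeVirtualDeviceId = 2;

bool isTouchDevice(int32_t deviceId) {
    return deviceId == kTouchScreenDeviceId || deviceId == kEdgeVirtualDeviceId;
}

enum class Route { EdgeApp, NormalApp, Ignore };

// Route per Fig. 5: touch-type events whose device ID matches the
// virtual C-area device are edge input events; the rest are normal.
Route routeEvent(int32_t deviceId) {
    if (!isTouchDevice(deviceId)) return Route::Ignore;
    return (deviceId == kEdgeVirtualDeviceId) ? Route::EdgeApp
                                              : Route::NormalApp;
}
```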
In embodiments of the present invention, the first application module 2037 is configured to process input events related to A-area input; specifically, this processing includes recognizing the input according to the touch point coordinates, duration, number, and so on of the input operation, and reporting the recognition result to the application layer. The second application module 2038 is configured to process input events related to C-area input; specifically, this processing includes recognizing the input according to the touch point coordinates, duration, and number of the input operation, and reporting the recognition result to the application layer. For example, from the coordinates, duration, and number of the touch points it can be recognized whether the input operation is a click or slide in the A area, or a back-and-forth slide along one edge in the C area, and so on.
The application layer 204 includes applications such as the camera, gallery, and lock screen (application 1, application 2, ...). Input operations in embodiments of the present invention include application-level and system-level operations, and system-level gesture handling is also classified under the application layer. Application-level operations control application programs, for example, opening, closing, and volume control. System-level operations control the mobile terminal, for example, power-on, acceleration, switching between applications, and global return. The application layer can obtain and handle C-area input events by registering a listener for C-area events, and can likewise obtain and handle A-area input events by registering a listener for A-area events.
In one embodiment, the mobile terminal sets up and stores input instructions corresponding to different input operations, including input instructions corresponding to edge input operations and input instructions corresponding to normal input operations. On receiving the reported recognition result of an edge input event, the application layer invokes the corresponding input instruction according to the edge input operation to respond to it. On receiving the reported recognition result of a normal input event, the application layer invokes the corresponding input instruction according to the normal input operation to respond to it.
It should be understood that input events in embodiments of the present invention include input operations performed only in the A area, input operations performed only in the C area, and input operations spanning both the A area and the C area. Accordingly, input instructions include instructions corresponding to these three kinds of input events. Embodiments of the present invention can therefore control the mobile terminal through combinations of A-area and C-area input operations; for example, if the input operation is clicking corresponding positions in the A area and the C area at the same time and the corresponding input instruction is to close a certain application, the application can be closed by that simultaneous click.
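One simple way to realize the stored correspondence between recognized operations (including combined A-area + C-area operations) and input instructions is a dispatch table keyed by gesture type. The sketch below assumes hypothetical gesture names and placeholder handlers; it is an illustration of the mapping idea, not the patent's implementation.

```cpp
#include <functional>
#include <unordered_map>

// Hypothetical recognized operations, including a combined A+C gesture.
enum class Gesture { AClick, ASlide, CEdgeSlideUp, CBackAndForth, AandCClick };

using Handler = std::function<void()>;

// The terminal's stored mapping from input operations to input instructions.
std::unordered_map<Gesture, Handler> makeInstructionTable() {
    return {
        { Gesture::CEdgeSlideUp, []{ /* e.g. accelerate the terminal */ } },
        { Gesture::AandCClick,   []{ /* e.g. close the current application */ } },
    };
}

// Invoke the instruction stored for a recognized operation, if any.
void onRecognized(Gesture g,
                  const std::unordered_map<Gesture, Handler>& table) {
    if (auto it = table.find(g); it != table.end()) it->second();
}
```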
In the mobile terminal of this embodiment of the present invention, the A area and the C area are distinguished only at the application framework layer, and the virtual device is established at the application framework layer, so the hardware dependence of distinguishing the A area from the C area at the driver layer is avoided; by assigning touch point numbers, fingers can be distinguished and both protocol A and protocol B are supported; since the functions of the input reader 2030, the first event processing module 2031, the second event processing module 2032, the first judgment module 2033, the second judgment module 2034, the event dispatch module 2035, the third judgment module 2036, the first application module 2037, the second application module 2038, and so on can be integrated into the operating system of the mobile terminal, the solution is applicable to mobile terminals with different hardware and of different types and has good portability; and the input reader automatically saves all elements of a touch point (its coordinates, number, and so on), which facilitates subsequent edge input determination (for example, FIT).
Referring to Fig. 6, a flowchart of the input processing method of an embodiment of the present invention, the method includes the following steps.
S1: the driver layer acquires an input event generated by the user through the input device and reports it to the application framework layer.
Specifically, the input device receives the user's input operation (that is, an input event), converts the physical input into an electrical signal, and passes the electrical signal to the driver layer. In this embodiment of the present invention, input events include A-area input events and C-area input events. A-area input events include input operations performed in the A area, such as clicks, double clicks, and slides. C-area input events include input operations performed in the C area, such as a slide up or down along the left edge, a slide up or down along the right edge, a bilateral slide up or down, a back-and-forth slide along one edge, a squeeze, and a one-handed grip.
The driver layer parses the input position according to the received electrical signal to obtain relevant parameters such as the specific coordinates and duration of the touch points. These relevant parameters are reported to the application framework layer.
In addition, if the driver layer reports input events using protocol A, step S1 further includes:
assigning each touch point a number (ID) used to distinguish fingers.
Thus, if the driver layer reports input events using protocol A, the reported data include the above relevant parameters as well as the touch point numbers.
S2: the application framework layer determines whether the input event is an edge input event or a normal input event; if it is a normal input event, step S3 is executed; if it is an edge input event, step S4 is executed.
Specifically, the application framework layer can determine from the coordinates in the relevant parameters of the input event whether it is an edge input event or a normal input event. Referring to Fig. 4 above, the horizontal-axis coordinate of the touch point is first acquired, and then this horizontal-axis (X-axis) coordinate x is compared with the C-area width Wc and the touch screen width W. If Wc < x < (W - Wc), the touch point is in the A area and the event is a normal input event; otherwise, the event is an edge input event. If the driver layer reports input events using protocol B, step S2 further includes: assigning each touch point a number (ID) used to distinguish fingers, and storing all element information of the touch points (coordinates, duration, number, and so on).
Thus, by assigning touch point numbers, this embodiment of the present invention can distinguish fingers and is compatible with both protocol A and protocol B; and all elements of a touch point (its coordinates, number, and so on) are stored, which facilitates subsequent edge input determination (for example, FIT).
In one embodiment, edge input events and normal input events are reported through different channels, with edge input events using a dedicated channel.
S3: the application framework layer processes and recognizes the normal input event and reports the recognition result to the application layer.
S4: the application framework layer processes and recognizes the edge input event and reports the recognition result to the application layer.
Specifically, the processing and recognition include recognizing the input according to the touch point coordinates, duration, number, and so on of the input operation, so as to determine the input operation. For example, from the coordinates, duration, and number of the touch points it can be recognized whether the input is a click or slide in the A area, or a back-and-forth slide along one edge in the C area, and so on.
S5: the application layer executes a corresponding input instruction according to the reported recognition result.
Specifically, the application layer includes applications such as the camera, gallery, and lock screen. Input operations in this embodiment include application-level and system-level operations, and system-level gesture handling is also classified under the application layer. Application-level operations control application programs, for example, opening, closing, and volume control. System-level operations control the mobile terminal, for example, power-on, acceleration, switching between applications, and global return.
In one embodiment, the mobile terminal sets up and stores input instructions corresponding to different input operations, including input instructions corresponding to edge input operations and input instructions corresponding to normal input operations. On receiving the reported recognition result of an edge input event, the application layer invokes the corresponding input instruction according to the edge input operation to respond to it; on receiving the reported recognition result of a normal input event, the application layer invokes the corresponding input instruction according to the normal input operation to respond to it.
It should be understood that input events in this embodiment include input operations performed only in the A area, input operations performed only in the C area, and input operations spanning both the A area and the C area. Accordingly, input instructions include instructions corresponding to these three kinds of input events. Combinations of A-area and C-area input operations can therefore control the mobile terminal; for example, if the input operation is clicking corresponding positions in the A area and the C area simultaneously and the corresponding input instruction is to close a certain application, the application can be closed by that simultaneous click.
In one embodiment, the input processing method of this embodiment of the present invention further includes:
S11: creating, for each input event, an input device object having a device identifier.
Specifically, in one embodiment, a first input device object having a first identifier may be created for normal input events; the first input device object corresponds to the touch screen input device. The application framework layer sets up a second input device object. This second input device object (for example, a FIT device) is a virtual device, that is, an empty device, which has a second identifier and corresponds to edge input events. It should be understood that, alternatively, edge input events may correspond to the first input device object having the first identifier, while normal input events correspond to the second input device object having the second identifier.
In one embodiment, the input processing method of this embodiment of the present invention further includes:
S21: converting the coordinates in the relevant parameters of edge input events and then reporting them; and converting the coordinates in the relevant parameters of normal input events, acquiring the current state of the mobile terminal, adjusting the converted coordinates according to the current state, and then reporting them.
Specifically, the current state of the mobile terminal includes landscape or portrait orientation, one-handed operation, split screen, and so on. Landscape or portrait orientation can be detected by a gyroscope or the like in the mobile terminal; one-handed operation and split screen can be detected by reading the relevant configuration parameters of the mobile terminal.
Converting the coordinates includes mapping the coordinates of the touch screen to the coordinates of the display screen of the mobile terminal.
In this embodiment of the present invention, only the coordinates of the A area are adjusted. Specifically, acquiring the current state of the mobile terminal and adjusting the converted coordinates according to the current state includes the following:
in the one-handed operation state, the coordinates are scaled down and shifted by a certain ratio relative to the coordinates of the normal state; therefore, the converted coordinates are scaled down and shifted proportionally;
in the landscape state, the horizontal and vertical coordinates are swapped relative to the coordinates of the normal state; therefore, the horizontal and vertical components of the converted coordinates are swapped;
in the split-screen state, the coordinates are proportionally converted into two or more coordinates relative to the coordinates of the normal state; therefore, the converted coordinates are converted accordingly.
In one embodiment, step S21 can be implemented by inputdispatcher::dispatchmotion().
S22: determining, according to the device identifier, whether the input event is an edge input event; if so, step S4 is executed; if not, step S3 is executed.
Specifically, referring to Fig. 5 above, when determining from the device identifier whether an input event is an edge input event, the device identifier is first acquired and it is determined from it whether the device is a touch screen type device; if so, it is further determined whether the device identifier is the C-area device identifier, that is, the identifier of the above second input device object. If so, the event is determined to be an edge input event; if not, a normal input event. It should be understood that, alternatively, after the device is determined to be a touch screen type device, it may be further determined whether the device identifier is the A-area device identifier, that is, the identifier corresponding to the above first input device object; if so, the event is determined to be a normal input event, and if not, an edge input event.
With the input processing method of this embodiment of the present invention, the A area and the C area are distinguished only at the application framework layer, and the virtual device is established at the application framework layer, so the hardware dependence of distinguishing the A area from the C area at the driver layer is avoided; by assigning touch point numbers, fingers can be distinguished and both protocol A and protocol B are supported; the method can be integrated into the operating system of the mobile terminal, is applicable to mobile terminals with different hardware and of different types, and has good portability; and all elements of a touch point (its coordinates, number, and so on) are stored, which facilitates subsequent edge input determination (for example, FIT).
Referring to Fig. 7, a schematic diagram of the effect of opening the camera application of a mobile terminal by means of the input processing method of an embodiment of the present invention: the left part of Fig. 7 shows the main interface of the mobile terminal, in which area 1010 is a touch point preset in the edge input area (C area 101) for the input operation that opens the camera. Specifically, clicking area 1010 opens the camera. The mobile terminal therefore stores the input instruction "open the camera" in correspondence with the input operation of clicking area 1010.
When the camera is needed, the user clicks area 1010 of the touch screen; the driver layer acquires this input event and reports it to the application framework layer. From the coordinates of the touch point, the application framework layer determines that the input event is an edge input event. The application framework layer processes and recognizes this edge input event and, from the touch point coordinates, duration, and number, recognizes the input operation as a click on area 1010. The application framework layer reports the recognition result to the application layer, and the application layer executes the input instruction to open the camera.
Referring to Fig. 8, a schematic diagram of the screen division of a mobile terminal according to a second embodiment of the present invention: in this embodiment, to prevent a loss of accuracy when the user's input drifts away from the area where it started, a transition area 103 (T area) is added at the screen edge of the mobile terminal.
In this embodiment, if an input event starts in the C area and drifts into the T area, the slide is still considered an edge gesture; if an input event starts in the C area and drifts into the A area, the edge gesture is considered ended and a normal input event begins; and if an input event starts in the T area or the A area, the slide is considered a normal input event regardless of which screen area it subsequently moves into.
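The three T-area rules above amount to latching the gesture's classification at touch-down and demoting an edge gesture only when it drifts all the way into the A area. A minimal C++ sketch of that state machine follows; the names are illustrative.

```cpp
enum class Zone { A, C, T };

class EdgeGestureTracker {
    bool edgeGesture_ = false;
public:
    // The classification is decided by where the gesture starts:
    // only gestures starting in the C area are edge gestures.
    void onDown(Zone z) { edgeGesture_ = (z == Zone::C); }

    // Drifting into T keeps an edge gesture alive; drifting into A
    // ends it, and a normal input event begins. Gestures that started
    // in T or A stay normal no matter where they move afterwards.
    void onMove(Zone z) {
        if (edgeGesture_ && z == Zone::A) edgeGesture_ = false;
    }

    bool isEdgeGesture() const { return edgeGesture_; }
};
```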
The event reporting flow of this embodiment is the same as the input processing method described in the above embodiments; the only difference is that, when processing and recognizing edge input events, the application framework layer must make its determination according to the above three cases in order to identify the input event accurately.
The mobile terminal of embodiments of the present invention can be implemented in various forms. For example, the terminal described in the present invention may include mobile terminals such as handsets, mobile phones, smartphones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs), and navigation devices, as well as fixed terminals such as digital TVs and desktop computers.
Accordingly, an embodiment of the present invention further provides a user equipment; Fig. 9 is a schematic diagram of its hardware structure. Referring to Fig. 9, the user equipment 1000 includes a touch screen 100, a controller 200, a storage device 310, a global positioning system (GPS) chip 320, a communicator 330, a video processor 340, an audio processor 350, a button 360, a microphone 370, a camera 380, a speaker 390, and a motion sensor 400.
The touch screen 100 may be divided, as described above, into an A area, a B area, and a C area, or into an A area, a B area, a C area, and a T area. The touch screen 100 may be implemented as various types of displays, such as a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, and a plasma display panel (PDP). The touch screen 100 may include a driving circuit, which can be implemented as, for example, an a-Si TFT, a low-temperature polysilicon (LTPS) TFT, or an organic TFT (OTFT), and a backlight unit.
Meanwhile, the touch screen 100 may include a touch sensor for sensing the user's touch gestures. The touch sensor may be implemented as various types of sensors, such as capacitive, resistive, or piezoelectric. The capacitive type calculates touch coordinate values by sensing the micro-current excited by the user's body when a part of the user's body (for example, a finger) touches the surface of a touch screen coated with a conductive material. In the resistive type, the touch screen includes two electrode plates, and touch coordinate values are calculated by sensing the current that flows when the upper and lower plates at the touch point come into contact as the user touches the screen. Furthermore, when the user equipment 1000 supports a pen input function, the touch screen 100 can sense user gestures made with an input means other than the user's finger, such as a pen. When the input means is a stylus pen containing a coil, the user equipment 1000 may include a magnetic sensor (not shown) for sensing a magnetic field that changes according to the proximity of the coil inside the stylus pen to the magnetic sensor. Thus, in addition to sensing touch gestures, the user equipment 1000 can also sense proximity gestures, that is, the stylus pen hovering above the user equipment 1000.
The storage device 310 may store various programs and data required for the operation of the user equipment 1000. For example, the storage device 310 may store programs and data for composing the various screens to be displayed in the respective areas (for example, the A area and the C area).
The controller 200 displays content in the respective areas of the touch screen 100 by using the programs and data stored in the storage device 310.
The controller 200 includes a RAM 210, a ROM 220, a processor (CPU) 230, a graphics processing unit (GPU) 240, and a bus 250. The RAM 210, ROM 220, CPU 230, and GPU 240 may be connected to one another through the bus 250.
The CPU 230 accesses the storage device 310 and performs booting using the operating system (OS) stored in the storage device 310. Moreover, the CPU 230 performs various operations by using the various programs, content, and data stored in the storage device 310.
The ROM 220 stores a command set for system booting. When a power-on command is input and power is supplied, the CPU 230 copies the OS stored in the storage device 310 to the RAM 210 according to the command set stored in the ROM 220, and boots the system by running the OS. When booting is completed, the CPU 230 copies the various programs stored in the storage device 310 to the RAM 210, and performs various operations by running the copied programs in the RAM 210. Specifically, the GPU 240 can generate screens including various objects such as icons, images, and text by using a calculator (not shown) and a renderer (not shown). The calculator calculates feature values such as the coordinate values, format, size, and color with which the objects are to be marked according to the layout of the screen.
The GPS chip 320 is a unit that receives GPS signals from GPS satellites and calculates the current position of the user equipment 1000. When a navigation program is used or when the user's current position is requested, the controller 200 can calculate the user's position by using the GPS chip 320.
The communicator 330 is a unit that performs communication with various types of external devices according to various types of communication methods. The communicator 330 includes a WiFi chip 331, a Bluetooth chip 332, a wireless communication chip 333, and an NFC chip 334. The controller 200 performs communication with various external devices by using the communicator 330.
The WiFi chip 331 and the Bluetooth chip 332 perform communication according to the WiFi method and the Bluetooth method, respectively. When the WiFi chip 331 or the Bluetooth chip 332 is used, various connection information such as a service set identifier (SSID) and a session key may first be transceived, communication may be connected by using the connection information, and various information may then be transceived. The wireless communication chip 333 is a chip that performs communication according to various communication standards such as IEEE, Zigbee, third generation (3G), third generation partnership project (3GPP), and long term evolution (LTE). The NFC chip 334 is a chip that operates according to the near field communication (NFC) method using the 13.56 MHz bandwidth among various RF-ID frequency bandwidths such as 135 kHz, 13.56 MHz, 433 MHz, 860-960 MHz, and 2.45 GHz.
The video processor 340 is a unit that processes video data included in content received through the communicator 330 or content stored in the storage device 310. The video processor 340 can perform various image processing on the video data, such as decoding, scaling, noise filtering, frame rate conversion, and resolution conversion.
The audio processor 350 is a unit that processes audio data included in content received through the communicator 330 or content stored in the storage device 310. The audio processor 350 can perform various processing on the audio data, such as decoding, amplification, and noise filtering.
When a reproduction program is run for multimedia content, the controller 200 can reproduce the corresponding content by driving the video processor 340 and the audio processor 350.
The speaker 390 outputs the audio data generated in the audio processor 350.
The button 360 may be any of various types of buttons, such as a mechanical button, or a touch pad or touch wheel formed on some area of the main outer body of the user equipment 1000, such as its front, side, or back.
The microphone 370 is a unit that receives the user's voice or other sounds and converts them into audio data. The controller 200 may use the user's voice input through the microphone 370 during a call process, or convert it into audio data and store it in the storage device 310.
The camera 380 is a unit that captures still images or video images under the user's control. The camera 380 may be implemented as multiple units, such as a front camera and a back camera. As described below, the camera 380 may serve as a means of obtaining user images in an exemplary embodiment that tracks the user's gaze.
When the camera 380 and the microphone 370 are provided, the controller 200 can perform control operations according to the user's voice input through the microphone 370 or the user's motion recognized by the camera 380. Accordingly, the user equipment 1000 can operate in a motion control mode or a voice control mode. When operating in the motion control mode, the controller 200 photographs the user by activating the camera 380, tracks changes in the user's motion, and performs the corresponding operation. When operating in the voice control mode, the controller 200 can operate in a speech recognition mode to analyze the voice input through the microphone 370 and perform control operations according to the analyzed user voice.
In a user equipment 1000 supporting the motion control mode or the voice control mode, speech recognition technology or motion recognition technology is used in the various exemplary embodiments described above. For example, when the user performs a motion such as selecting an object marked on the home screen, or speaks a voice command corresponding to an object, it can be determined that the corresponding object is selected, and the control operation matched to that object can be performed.
The motion sensor 400 is a unit that senses movement of the main body of the user equipment 1000. The user equipment 1000 can rotate or tilt in various directions. The motion sensor 400 can sense movement characteristics such as the rotation direction, angle, and slope by using one or more of various sensors such as a geomagnetic sensor, a gyroscope sensor, and an acceleration sensor.
Moreover, although not shown in Fig. 9, according to exemplary embodiments the user equipment 1000 may further include a USB port connectable to a USB connector, various input ports for connecting various external components such as a headphone, a mouse, a LAN, and a DMB chip for receiving and processing digital multimedia broadcasting (DMB) signals, and various other sensors.
As described above, the storage device 310 can store various programs.
Based on the user equipment shown in Fig. 9, in an embodiment of the present invention, the touch screen is configured to receive the user's input operation and convert the physical input into an electrical signal to generate an input event;
the processor includes a driver module, an application framework module, and an application module;
wherein the driver module is configured to acquire an input event generated by the user through the input device and report it to the application framework module;
the application framework module is configured to determine whether the input event is an edge input event or a normal input event; if it is a normal input event, to process and recognize the normal input event and report the recognition result to the application module; and if it is an edge input event, to process and recognize the edge input event and report the recognition result to the application module;
the application module is configured to execute a corresponding input instruction according to the reported recognition result.
It should be understood that the principles and details of processing edge input events and normal input events described for the above embodiments apply equally to the user equipment of this embodiment of the present invention.
With the mobile terminal, input processing method, and user equipment of embodiments of the present invention, the A area and the C area are distinguished only at the application framework layer, and the virtual device is established at the application framework layer, so the hardware dependence of distinguishing the A area from the C area at the driver layer is avoided; by assigning touch point numbers, fingers can be distinguished and both protocol A and protocol B are supported; the solution can be integrated into the operating system of the mobile terminal, is applicable to mobile terminals with different hardware and of different types, and has good portability; and all elements of a touch point (its coordinates, number, and so on) are stored, which facilitates subsequent edge input determination (for example, FIT).
If implemented in the form of software function modules and sold or used as a stand-alone product, the functions performed by the above input processing method of embodiments of the present invention may also be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of embodiments of the present invention may, in essence or in the part contributing to the prior art, be embodied in the form of a software product. The computer software product is stored in a storage medium and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the various embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disk. Embodiments of the present invention are thus not limited to any specific combination of hardware and software.
Accordingly, an embodiment of the present invention further provides a computer storage medium in which a computer program is stored, the computer program being used to execute the input processing method of embodiments of the present invention.
Any process or method description in a flowchart, or otherwise described in embodiments of the present invention, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing the steps of a specific logical function or process; and the scope of embodiments of the present invention includes additional implementations in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order, depending on the functions involved, as should be understood by those skilled in the art to which embodiments of the present invention pertain.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above specific implementations, which are merely illustrative rather than restrictive. Under the inspiration of the present invention, those of ordinary skill in the art can devise many further forms without departing from the spirit of the present invention and the scope protected by the claims, all of which fall within the protection of the present invention.

Claims (20)

  1. A mobile terminal, comprising:
    an input device;
    a driver layer, configured to acquire an input event generated by a user through the input device and report it to an application framework layer;
    the application framework layer, configured to determine whether the input event is an edge input event or a normal input event; if it is a normal input event, to process and recognize the normal input event and report the recognition result to an application layer; and if it is an edge input event, to process and recognize the edge input event and report the recognition result to the application layer;
    the application layer, configured to execute a corresponding input instruction according to the reported recognition result.
  2. The mobile terminal according to claim 1, wherein the normal input event corresponds to a first input device object having a first device identifier;
    the application framework layer is further configured to set up a second input device object having a second device identifier, for corresponding to the edge input event.
  3. The mobile terminal according to claim 1, wherein the driver layer reports input events using protocol A or protocol B; if protocol A is used to report input events, the event acquisition module is further configured to assign each touch point a number used to distinguish fingers;
    if protocol B is used to report input events, the application framework layer is further configured to assign each touch point a number used to distinguish fingers.
  4. The mobile terminal according to claim 1, wherein the driver layer comprises an event acquisition module configured to acquire an input event generated by a user through the input device.
  5. The mobile terminal according to claim 1, wherein the application framework layer comprises an input reader;
    the mobile terminal further comprises a device node arranged between the driver layer and the input reader, configured to notify the input reader to acquire input events;
    the input reader is configured to traverse device nodes, acquire input events, and report them.
  6. The mobile terminal according to claim 1, wherein the application framework layer further comprises: a first event processing module, configured to perform coordinate calculation on the input events reported by the input reader and then report them;
    a first judgment module, configured to determine, according to the coordinate values reported by the first event processing module, whether an input event is an edge input event, and if not, to report the input event.
  7. The mobile terminal according to claim 6, wherein the application framework layer further comprises:
    a second event processing module, configured to perform coordinate calculation on the input events reported by the input reader and then report them;
    a second judgment module, configured to determine, according to the coordinate values reported by the second event processing module, whether an input event is an edge input event, and if so, to report the input event.
  8. The mobile terminal according to claim 7, wherein the application framework layer further comprises:
    an event dispatch module, configured to report the events reported by the second judgment module and the first judgment module.
  9. The mobile terminal according to claim 8, wherein the application framework layer further comprises:
    a first application module;
    a second application module;
    a third judgment module, configured to determine, according to the device identifier contained in an event reported by the event dispatch module, whether the event is an edge input event, and if so, to report it to the second application module; otherwise, to report it to the first application module;
    the first application module, configured to recognize normal input events according to the relevant parameters of the normal input events and to report the recognition results to the application layer;
    the second application module, configured to recognize edge input events according to the relevant parameters of the edge input events and to report the recognition results to the application layer.
  10. The mobile terminal according to any one of claims 1 to 9, wherein the input device is a touch screen of the mobile terminal;
    the touch screen comprises at least one edge input area and at least one normal input area.
  11. The mobile terminal according to any one of claims 1 to 9, wherein the input device is a touch screen of the mobile terminal;
    the touch screen comprises at least one edge input area, at least one normal input area, and at least one transition area.
  12. An input processing method, comprising:
    acquiring, by a driver layer, an input event generated by a user through an input device, and reporting it to an application framework layer;
    determining, by the application framework layer, whether the input event is an edge input event or a normal input event; if it is a normal input event, processing and recognizing the normal input event and reporting the recognition result to an application layer; if it is an edge input event, processing and recognizing the edge input event and reporting the recognition result to the application layer;
    executing, by the application layer, a corresponding input instruction according to the reported recognition result.
  13. The input processing method according to claim 12, wherein the method further comprises:
    creating, for each input event, an input device object having a device identifier.
  14. The input processing method according to claim 13, wherein creating, for each input event, an input device object having a device identifier comprises:
    making the normal input event correspond to a touch screen having a first device identifier; and setting up, by the application framework layer, a second input device object having a second device identifier to correspond to the edge input event.
  15. The input processing method according to claim 12, wherein acquiring, by the driver layer, an input event generated by the user through the input device and reporting it to the application framework layer comprises:
    assigning, by the driver layer, each touch point a number used to distinguish fingers, and reporting the input event using protocol A.
  16. The input processing method according to claim 12, wherein acquiring, by the driver layer, an input event generated by the user through the input device and reporting it to the application framework layer comprises:
    reporting, by the driver layer, the input event using protocol B;
    the method further comprising:
    assigning, by the application framework layer, each touch point in the input event a number used to distinguish fingers.
  17. The input processing method according to any one of claims 12 to 16, wherein the method further comprises:
    converting, by the application framework layer, the coordinates in the relevant parameters of the edge input event and then reporting them; and converting the coordinates in the relevant parameters of the normal input event, acquiring the current state of the mobile terminal, adjusting the converted coordinates according to the current state, and then reporting them;
    determining, by the application framework layer according to the device identifier, whether the input event is an edge input event; if so, recognizing the edge input event according to its relevant parameters and reporting the recognition result to the application layer; if not, recognizing the normal input event according to its relevant parameters and reporting the recognition result to the application layer.
  18. The input processing method according to claim 12, wherein determining, by the application framework layer, whether the input event is an edge input event or a normal input event comprises:
    acquiring the horizontal-axis coordinate of the touch point from the relevant parameters of the input event reported by the driver layer;
    comparing the horizontal-axis coordinate x of the touch point with the width Wc of the edge input area and the width W of the touch screen; if Wc < x < (W - Wc), the touch point is located in the normal input area and the input event is a normal input event; otherwise, the input event is an edge input event.
  19. A user equipment, comprising:
    an input device, configured to receive a user's input operation and convert the physical input into an electrical signal to generate an input event;
    a processor, comprising a driver module, an application framework module, and an application module;
    wherein the driver module is configured to acquire an input event generated by the user through the input device and report it to the application framework module;
    the application framework module is configured to determine whether the input event is an edge input event or a normal input event; if it is a normal input event, to process and recognize the normal input event and report the recognition result to the application module; and if it is an edge input event, to process and recognize the edge input event and report the recognition result to the application module;
    the application module is configured to execute a corresponding input instruction according to the reported recognition result.
  20. A computer storage medium, the computer storage medium storing computer-executable instructions configured to execute the input processing method according to any one of claims 12 to 18.
PCT/CN2016/102779 2015-11-20 2016-10-20 Mobile terminal, input processing method, user equipment, and computer storage medium WO2017084470A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510810571.6 2015-11-20
CN201510810571.6A CN105487705B (zh) 2015-11-20 2015-11-20 Mobile terminal, input processing method, and user equipment

Publications (1)

Publication Number Publication Date
WO2017084470A1 true WO2017084470A1 (zh) 2017-05-26

Family

ID=55674726

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/102779 WO2017084470A1 (zh) 2015-11-20 2016-10-20 Mobile terminal, input processing method, user equipment, and computer storage medium

Country Status (2)

Country Link
CN (1) CN105487705B (zh)
WO (1) WO2017084470A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113031824A (zh) * 2021-03-31 2021-06-25 Shenzhen Aixiesheng Technology Co., Ltd. Method, system, and mobile terminal for dynamically reporting touch screen data

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105487705B (zh) * 2015-11-20 2019-08-30 努比亚技术有限公司 移动终端、输入处理方法及用户设备
CN105573545A (zh) * 2015-11-27 2016-05-11 努比亚技术有限公司 一种手势校准方法、装置及手势输入处理方法
CN109107148B (zh) * 2018-08-08 2022-04-19 Oppo广东移动通信有限公司 控制方法、装置、存储介质及移动终端
CN109240502B (zh) * 2018-09-20 2021-06-29 江苏电力信息技术有限公司 一种自动适应多种触摸方式的手势识别方法
WO2021068112A1 (zh) * 2019-10-08 2021-04-15 深圳市欢太科技有限公司 触摸事件的处理方法、装置、移动终端及存储介质
CN111596856B (zh) * 2020-05-06 2023-08-29 深圳市世纪创新显示电子有限公司 一种基于副屏触摸的一种笔迹书写方法、系统及存储介质
CN111857415B (zh) * 2020-07-01 2024-02-27 清华大学深圳国际研究生院 一种多点式电阻触控屏及寻址方法
WO2023184301A1 (zh) * 2022-03-31 2023-10-05 京东方科技集团股份有限公司 触控事件的处理方法及装置、存储介质、电子设备

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102520845A (zh) * 2011-11-23 2012-06-27 UC Mobile Co., Ltd. Method and apparatus for a mobile terminal to call up a thumbnail interface
US20130237288A1 (en) * 2012-03-08 2013-09-12 Namsu Lee Mobile terminal
CN104346093A (zh) * 2013-08-02 2015-02-11 Tencent Technology (Shenzhen) Co., Ltd. Touch screen interface gesture recognition method and apparatus, and mobile terminal
CN104375685A (zh) * 2013-08-16 2015-02-25 ZTE Corporation Method and apparatus for optimizing edge touch control of a mobile terminal screen
CN105487705A (zh) * 2015-11-20 2016-04-13 Nubia Technology Co., Ltd. Mobile terminal, input processing method, and user equipment

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101840299A (zh) * 2010-03-18 2010-09-22 Huawei Device Co., Ltd. Touch operation method, apparatus, and mobile terminal
TWI445384B (zh) * 2010-04-26 2014-07-11 Htc Corp Communication control method, communication device, and computer program product
CN201910039U (zh) * 2010-12-13 2011-07-27 Guangzhou Hongcheng Electronic Technology Co., Ltd. Device for switching a touch screen between driven and non-driven modes
KR101235432B1 (ko) * 2011-07-11 2013-02-22 Kim Seok-jung Remote operation apparatus and method using virtual touch of a three-dimensionally modeled electronic device
CN104735256B (zh) * 2015-03-27 2016-05-18 Nubia Technology Co., Ltd. Method and apparatus for determining the holding manner of a mobile terminal


Also Published As

Publication number Publication date
CN105487705A (zh) 2016-04-13
CN105487705B (zh) 2019-08-30

Similar Documents

Publication Publication Date Title
WO2017097097A1 (zh) Touch control method, user equipment, input processing method, mobile terminal, and smart terminal
WO2017084470A1 (zh) Mobile terminal, input processing method, user equipment, and computer storage medium
KR102427833B1 (ko) User terminal device and display method
US20170322713A1 (en) Display apparatus and method for controlling the same and computer-readable recording medium
EP3091426B1 (en) User terminal device providing user interaction and method therefor
US20170185373A1 (en) User terminal device, and mode conversion method and sound system for controlling volume of speaker thereof
KR102519800B1 (ko) Electronic device
US20170364166A1 (en) User terminal device and method for controlling the user terminal device thereof
US10067666B2 (en) User terminal device and method for controlling the same
US11157127B2 (en) User terminal apparatus and controlling method thereof
US10928948B2 (en) User terminal apparatus and control method thereof
US10579248B2 (en) Method and device for displaying image by using scroll bar
KR20150019352A (ko) Method and apparatus for recognizing a grip state in an electronic device
KR20170124933A (ko) Display device, control method thereof, and computer-readable recording medium
US20160139797A1 (en) Display apparatus and contol method thereof
WO2017088694A1 (zh) Gesture calibration method and apparatus, gesture input processing method, and computer storage medium
US10095384B2 (en) Method of receiving user input by detecting movement of user and apparatus therefor
CN105824531A (zh) Numerical value adjustment method and apparatus
KR20150134674A (ko) User terminal, control method thereof, and multimedia system
WO2021203815A1 (zh) Page operation method and apparatus, terminal, and storage medium
US10474335B2 (en) Image selection for setting avatars in communication applications
KR102351634B1 (ko) User terminal device, sound system, and method for controlling the volume of an external speaker
WO2015014135A1 (zh) Mouse pointer control method and apparatus, and terminal device
WO2017084469A1 (zh) Touch control method, user equipment, input processing method, and mobile terminal
US9870085B2 (en) Pointer control method and electronic device thereof

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16865648

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16865648

Country of ref document: EP

Kind code of ref document: A1