WO2018209578A1 - Input method and electronic device - Google Patents

Input method and electronic device

Info

Publication number
WO2018209578A1
Authority
WO
WIPO (PCT)
Prior art keywords
fingerprint
electronic device
user
vocabulary
fingerprints
Application number
PCT/CN2017/084602
Other languages
English (en)
French (fr)
Inventor
黄洁静
吴黄伟
黄曦
Original Assignee
华为技术有限公司
Application filed by 华为技术有限公司
Priority to EP17909894.2A (EP3640787B1)
Priority to CN201780008034.7A (CN109074171B)
Priority to US16/613,511 (US11086975B2)
Priority to PCT/CN2017/084602 (WO2018209578A1)
Publication of WO2018209578A1
Priority to US17/363,344 (US11625468B2)

Classifications

    • G06F3/0488: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device (e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or tap gestures based on pressure sensed by a digitiser), using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/0233: Character input methods
    • G06F3/0237: Character input methods using prediction or retrieval techniques
    • G06F21/316: User authentication by observing the pattern of computer usage, e.g. typical user behaviour
    • G06F21/32: User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G06F3/04817: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object, using icons
    • G06F3/04886: Interaction techniques using a touch-screen or digitiser, by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
    • G06V10/772: Determining representative reference patterns, e.g. averaging or distorting patterns; generating dictionaries
    • G06V40/13: Fingerprints or palmprints; sensors therefor
    • G06V40/1365: Fingerprints or palmprints; matching; classification
    • G06V40/50: Maintenance of biometric data or enrolment thereof
    • G06F2203/04803: Split screen, i.e. subdividing the display area or the window area into separate subareas
    • G06F40/242: Dictionaries (lexical tools for natural language analysis)

Definitions

  • the embodiments of the present application relate to the field of communications technologies, and in particular, to an input method and an electronic device.
  • the input method application can create a personal vocabulary for the user according to the user's input habits. For example, if user A often enters the word "document", the input method application can store "document" as a high-frequency word in user A's personal vocabulary. Subsequently, when user A logs in to his account with a password in the input method application, the input method application calls up the personal vocabulary corresponding to user A. At this time, if the user starts entering the word "document" again, the input method application can present "document" to the user as the first candidate word.
  • the input method and device provided by the embodiments of the present application can simplify the process by which a user logs in to the input method application, and improve the input efficiency and the intelligence of human-computer interaction when multiple users share the same input method application.
  • an embodiment of the present application provides an input method implemented in an electronic device having a fingerprint collection device, the method including: when an input method application is running, the electronic device acquires the user's fingerprint on the touch screen; when the fingerprint is a pre-stored registered fingerprint, the electronic device determines a target vocabulary associated with the fingerprint; and the electronic device uses the target vocabulary to provide at least one candidate word corresponding to the current input event.
  • In this way, the electronic device can authenticate the user by fingerprint while the user is entering information in the input interface, and call up the personal vocabulary corresponding to that user according to the authentication result. During the subsequent input process, the electronic device can use that personal vocabulary to determine the candidate words for the user's input events. That is, without the user being aware of it, the electronic device can automatically call up, for different users, the personal vocabulary that matches their input habits according to their fingerprints and prompt the related candidate words, thereby improving the input efficiency of the input method, reducing the probability of privacy leakage caused by mixing up different users' vocabularies, and improving the intelligence of human-computer interaction.
  • the electronic device determining the target vocabulary associated with the fingerprint includes: the electronic device determines, according to the correspondence between the personal vocabularies of different users and the registered fingerprints of different users, the personal vocabulary corresponding to the fingerprint as the target vocabulary.
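  • As a rough illustration of this correspondence-based lookup (a sketch only; the identifiers, data structures, and Python form below are assumptions made for this example and are not taken from the patent), the mapping between registered fingerprints and personal vocabularies can be kept as a simple dictionary that is consulted once a collected fingerprint has been matched to a registered one:

        # Hypothetical correspondence between registered fingerprints and personal
        # vocabularies; all names are illustrative only.
        correspondence = {
            "fingerprint_1": "vocabulary_tom",    # Tom's registered fingerprints
            "fingerprint_2": "vocabulary_tom",
            "fingerprint_3": "vocabulary_alice",  # Alice's registered fingerprints
            "fingerprint_4": "vocabulary_alice",
        }

        personal_vocabularies = {
            "vocabulary_tom": ["13088666688", "document"],
            "vocabulary_alice": ["certificate"],
        }

        def determine_target_vocabulary(matched_fingerprint_id):
            """Return the personal vocabulary associated with a registered fingerprint,
            or None when the fingerprint is not a registered one."""
            vocabulary_id = correspondence.get(matched_fingerprint_id)
            if vocabulary_id is None:
                return None  # unregistered fingerprint, handled separately below
            return personal_vocabularies[vocabulary_id]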
  • when the fingerprint is an unregistered fingerprint, the electronic device may treat the user who is currently using the input method application as a new user not yet registered in the electronic device, and the electronic device may then establish a temporary personal vocabulary corresponding to the fingerprint.
  • the method further includes: when the similarity between the temporary personal vocabulary and the first user's personal vocabulary is greater than a threshold, the electronic device adds the temporary personal vocabulary to the personal vocabulary of the first user, and the electronic device establishes, in the correspondence, the relationship between the fingerprint and the personal vocabulary of the first user.
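  • Continuing the toy structures from the previous sketch, the branch for unregistered fingerprints could look roughly like the following; the Jaccard-overlap similarity measure and the threshold value are illustrative assumptions and are not specified by the patent:

        temporary_vocabularies = {}  # unregistered fingerprint id -> temporary word list

        def handle_unregistered_fingerprint(fingerprint_id, entered_words):
            """Create or extend a temporary personal vocabulary for a new user."""
            temporary_vocabularies.setdefault(fingerprint_id, []).extend(entered_words)

        def vocabulary_similarity(words_a, words_b):
            """Illustrative similarity: Jaccard overlap of the two word sets."""
            set_a, set_b = set(words_a), set(words_b)
            if not set_a or not set_b:
                return 0.0
            return len(set_a & set_b) / len(set_a | set_b)

        def try_merge_temporary(fingerprint_id, threshold=0.5):
            """Merge the temporary vocabulary into an existing user's vocabulary when
            the similarity exceeds the threshold, and record the new correspondence."""
            temp = temporary_vocabularies.get(fingerprint_id, [])
            for vocabulary_id, words in personal_vocabularies.items():
                if vocabulary_similarity(temp, words) > threshold:
                    words.extend(w for w in temp if w not in words)
                    correspondence[fingerprint_id] = vocabulary_id
                    del temporary_vocabularies[fingerprint_id]
                    return vocabulary_id
            return None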
  • the number of the fingerprints is N, and N is an integer greater than 1.
  • the electronic device determining the target vocabulary associated with the fingerprints includes: when at least one of the N fingerprints is a registered fingerprint, the electronic device determines, according to the correspondence between the personal vocabularies of different users and the registered fingerprints of different users, the personal vocabulary corresponding to the registered fingerprint as the target vocabulary.
  • when the N fingerprints include Y unregistered fingerprints, the method further includes: the electronic device establishes, in the correspondence, the relationship between the Y unregistered fingerprints and the target vocabulary, so that the electronic device can subsequently call up the corresponding target vocabulary accurately according to the updated correspondence.
  • the N fingerprints include Z registered fingerprints, where 1 < Z ≤ N.
  • the method further includes: the electronic device determines, according to the correspondence, whether the personal vocabularies corresponding to the Z registered fingerprints are the same; when different personal vocabularies exist among the personal vocabularies corresponding to the Z registered fingerprints, the electronic device merges the personal vocabularies corresponding to the Z registered fingerprints into one personal vocabulary.
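  • A hedged sketch of the multi-fingerprint case, reusing the structures above (identifiers and merge policy are invented for illustration): the registered fingerprints among the N collected ones decide the target vocabulary, differing personal vocabularies are merged into one, and any unregistered fingerprints are then bound to that target vocabulary in the correspondence:

        def determine_target_vocabulary_multi(collected_fingerprint_ids):
            """Handle N collected fingerprints as described above."""
            registered = [f for f in collected_fingerprint_ids if f in correspondence]
            unregistered = [f for f in collected_fingerprint_ids if f not in correspondence]

            if not registered:
                # All N fingerprints are unregistered: keep temporary vocabularies instead.
                for f in unregistered:
                    temporary_vocabularies.setdefault(f, [])
                return None

            vocabulary_ids = {correspondence[f] for f in registered}
            if len(vocabulary_ids) == 1:
                target_id = vocabulary_ids.pop()
            else:
                # Different personal vocabularies among the Z registered fingerprints:
                # merge them into a single personal vocabulary.
                target_id = "merged_" + "_".join(sorted(vocabulary_ids))
                merged = []
                for vid in sorted(vocabulary_ids):
                    merged.extend(w for w in personal_vocabularies[vid] if w not in merged)
                personal_vocabularies[target_id] = merged
                for f, vid in list(correspondence.items()):
                    if vid in vocabulary_ids:
                        correspondence[f] = target_id

            # Record the Y unregistered fingerprints against the target vocabulary.
            for f in unregistered:
                correspondence[f] = target_id
            return personal_vocabularies[target_id]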
  • the method further includes: the electronic device establishing a temporary personal vocabulary corresponding to the N fingerprints.
  • the method further includes: when the similarity between the temporary personal vocabulary and the second user's personal vocabulary is greater than a threshold, the electronic device adds the temporary personal vocabulary to the personal vocabulary of the second user, and the electronic device establishes, in the correspondence, the relationship between the fingerprint and the personal vocabulary of the second user.
  • the touch screen includes an area for displaying a virtual keyboard of the input method application; the electronic device acquiring the user's fingerprint on the touch screen includes: the electronic device acquires the fingerprint produced when the user performs an input event in the area of the virtual keyboard.
  • an embodiment of the present application provides an electronic device, including: an acquiring unit, configured to acquire the user's fingerprint on the touch screen when the input method application is running; a determining unit, configured to determine the target vocabulary associated with the fingerprint when the fingerprint is a pre-stored registered fingerprint; and an execution unit, configured to use the target vocabulary to provide at least one candidate word corresponding to the current input event.
  • the determining unit is specifically configured to: determine, according to the correspondence between the personal vocabularies of different users and the registered fingerprints of different users, the personal vocabulary corresponding to the fingerprint as the target vocabulary.
  • the electronic device further includes: an establishing unit, configured to: when the fingerprint is a non-registered fingerprint, establish a temporary personal vocabulary corresponding to the fingerprint.
  • the electronic device further includes a merging unit, and the merging unit is configured to: add the temporary personal vocabulary to the personal vocabulary of the first user when the similarity between the temporary personal vocabulary and the first user's personal vocabulary is greater than a threshold; the establishing unit is further configured to: establish, in the correspondence, the relationship between the fingerprint and the personal vocabulary of the first user.
  • the number of the fingerprints is N, and N is an integer greater than 1.
  • the determining unit is specifically configured to: when at least one of the N fingerprints is a registered fingerprint, determine, according to the correspondence between the personal vocabularies of different users and the registered fingerprints of different users, the personal vocabulary corresponding to the registered fingerprint as the target vocabulary.
  • the electronic device further includes a merging unit; the determining unit is further configured to: determine, according to the correspondence, whether the personal vocabularies corresponding to the Z registered fingerprints are the same; and the merging unit is configured to: merge the personal vocabularies corresponding to the Z registered fingerprints into one personal vocabulary when different personal vocabularies exist among them.
  • the establishing unit is further configured to: when the N fingerprints are unregistered fingerprints, establish a temporary personal vocabulary corresponding to the N fingerprints.
  • the merging unit is further configured to: add the temporary personal vocabulary to the personal vocabulary of the second user when the similarity between the temporary personal vocabulary and the second user's personal vocabulary is greater than a threshold; the establishing unit is further configured to: establish, in the correspondence, the relationship between the fingerprint and the personal vocabulary of the second user.
  • the touch screen includes an area for displaying a virtual keyboard of the input method application, and the acquiring unit is specifically configured to: acquire a fingerprint generated when a user performs an input event in an area of the keyboard.
  • an embodiment of the present application provides an electronic device, including: a processor, a memory, a bus, and a communication interface; the memory is configured to store computer-executable instructions, and the processor is connected to the memory through the bus; when the electronic device runs, the processor executes the computer-executable instructions stored in the memory, so that the electronic device performs any of the input methods described above.
  • the embodiment of the present application further provides a computer readable storage medium, where the computer readable storage medium stores instructions that, when run on a computer, cause the computer to perform the methods described in the above aspects.
  • the embodiment of the present application further provides a computer program product comprising instructions, which when executed on a computer, cause the computer to perform the method described in the above aspects.
  • FIG. 1 is a schematic diagram of an input method login interface in the prior art
  • FIG. 2 is a schematic structural diagram 1 of an electronic device according to an embodiment of the present disclosure
  • FIG. 3 is a schematic structural diagram of a touch screen in an electronic device according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram 1 of an input method architecture provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram 2 of an input method architecture according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic flowchart 1 of an input method according to an embodiment of the present disclosure.
  • FIG. 7 is a schematic diagram 1 of an application scenario of an input method according to an embodiment of the present disclosure.
  • FIG. 8 is a schematic diagram 2 of an application scenario of an input method according to an embodiment of the present disclosure.
  • FIG. 9 is a schematic diagram 3 of an application scenario of an input method according to an embodiment of the present disclosure.
  • FIG. 10 is a schematic diagram 4 of an application scenario of an input method according to an embodiment of the present disclosure.
  • FIG. 11 is a schematic diagram 5 of an application scenario of an input method according to an embodiment of the present disclosure.
  • FIG. 12 is a schematic diagram 6 of an application scenario of an input method according to an embodiment of the present disclosure.
  • FIG. 13 is a schematic diagram 7 of an application scenario of an input method according to an embodiment of the present disclosure.
  • FIG. 14 is a schematic diagram 8 of an application scenario of an input method according to an embodiment of the present disclosure.
  • FIG. 15 is a second schematic flowchart of an input method according to an embodiment of the present disclosure.
  • FIG. 16 is a schematic diagram 9 of an application scenario of an input method according to an embodiment of the present disclosure.
  • FIG. 17 is a schematic diagram of an application scenario of an input method according to an embodiment of the present disclosure.
  • FIG. 18 is a schematic diagram of an application scenario of an input method according to an embodiment of the present application.
  • FIG. 19 is a schematic diagram of an application scenario of an input method according to an embodiment of the present application.
  • FIG. 20 is a schematic diagram of an application scenario of an input method according to an embodiment of the present application.
  • FIG. 21 is a schematic diagram of an application scenario of an input method according to an embodiment of the present application.
  • FIG. 22 is a schematic diagram of an application scenario of an input method according to an embodiment of the present application.
  • FIG. 23 is a schematic diagram of an application scenario of an input method according to an embodiment of the present application.
  • FIG. 24 is a schematic diagram of an application scenario of an input method according to an embodiment of the present application.
  • FIG. 25 is a schematic structural diagram 2 of an electronic device according to an embodiment of the present disclosure.
  • FIG. 26 is a schematic structural diagram 3 of an electronic device according to an embodiment of the present application.
  • The terms "first" and "second" are used for descriptive purposes only, and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, features defined with "first" and "second" may explicitly or implicitly include one or more of those features. In the description of the embodiments of this application, "multiple" means two or more unless otherwise stated.
  • When a user inputs information in various APPs, it is generally necessary to install a corresponding input method APP, or to use an input method application provided by the electronic device, in order to input information such as text (Chinese, English, etc.), numbers, or symbols.
  • the input method application can establish a personal vocabulary according to the user's input habits, and the personal vocabulary may include user-defined words, high-frequency words, favorite expressions, and the like, so that the user's input is more efficient when using the input method.
  • the candidate vocabulary may specifically include at least one of a Chinese character, word, sentence, phrase, number, letter, symbol, or expression, and may also include at least one of a word, sentence, phrase, number, letter, symbol, or expression in English or another language.
  • Generally, the input method application calls a preset default lexicon to complete the input event triggered by the user. However, the default lexicon may already have been modified according to the input habits of a previous user. As a result, the current user cannot obtain input options that match his or her own input habits in time, which lowers input efficiency and may even leak the privacy of the previous user. For example, user A enters user A's phone number "123456" multiple times without logging in to an account, and the input method application adds user A's phone number to the default lexicon based on this input behavior. Subsequently, when user B uses the same input method application, it may prompt user A's phone number "123456" to user B as a candidate word; at this time, user A's phone number is leaked.
  • embodiments of the present application provide an input method applicable to a mobile phone, a wearable device, an AR (Augmented Reality) or VR (Virtual Reality) device, a tablet computer, a notebook computer, a UMPC (Ultra-Mobile Personal Computer), a netbook, a PDA (Personal Digital Assistant), or any other electronic device having a fingerprint verification function.
  • the specific form of the electronic device is not limited.
  • the fingerprint collection device can be integrated on a touch screen corresponding to an input interface (eg, a keyboard area) of the input method application.
  • the electronic device can acquire the fingerprint in the input event through the integrated fingerprint collection device. Then, the electronic device performs fingerprint verification on the acquired fingerprint.
  • If the similarity between the acquired fingerprint and the registered fingerprint of user A is greater than a preset threshold, it indicates that the user who triggered the input event is user A; the electronic device can then log in to user A's account in the input method application, call up user A's personal vocabulary, and determine the candidate words corresponding to the input event.
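  • The patent does not spell out how the similarity itself is computed; as one toy possibility (assuming, only for this sketch, that each fingerprint has already been reduced to a fixed-length feature vector), the collected print can be compared against every registered template and accepted only when the best score exceeds the preset threshold, after which the matched identifier is used to call up the personal vocabulary as in the earlier lookup sketch:

        import math

        def cosine_similarity(vec_a, vec_b):
            """Illustrative similarity between two fingerprint feature vectors."""
            dot = sum(a * b for a, b in zip(vec_a, vec_b))
            norm = math.sqrt(sum(a * a for a in vec_a)) * math.sqrt(sum(b * b for b in vec_b))
            return dot / norm if norm else 0.0

        def verify_fingerprint(collected_features, registered_templates, threshold=0.9):
            """Return the id of the best-matching registered fingerprint, or None when
            no template exceeds the preset similarity threshold."""
            best_id, best_score = None, 0.0
            for fingerprint_id, template in registered_templates.items():
                score = cosine_similarity(collected_features, template)
                if score > best_score:
                    best_id, best_score = fingerprint_id, score
            return best_id if best_score > threshold else None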
  • In this way, the electronic device can authenticate the user by fingerprint while the user is entering information in the input interface, and call up the personal vocabulary corresponding to that user according to the authentication result; during the subsequent input process, the electronic device can use that personal vocabulary to determine the candidate words for each input event. That is, without the user being aware of it, the electronic device automatically calls up, for different users, the personal vocabulary that matches their input habits according to their fingerprints and prompts the related candidate words, thereby improving the input efficiency of the input method, reducing the probability of privacy leakage caused by mixing up different users' vocabularies, and improving the intelligence of human-computer interaction.
  • the above input event may refer to any input action performed by the user in the input interface by using the input method, for example, short-pressing a letter on the keyboard, long-pressing a function button of the input method application, or sliding in the interface provided by the input method application.
  • the electronic device in the embodiment of the present application may be the mobile phone 100.
  • the embodiment will be specifically described below by taking the mobile phone 100 as an example. It should be understood that the illustrated mobile phone 100 is only one example of an electronic device, and the mobile phone 100 may have more or fewer components than those shown in the figures, may combine two or more components, or may have a different component configuration.
  • the mobile phone 100 may specifically include: a processor 101, a radio frequency (RF) circuit 102, a memory 103, a touch screen 104, a Bluetooth device 105, one or more sensors 106, a Wi-Fi device 107, a positioning device 108, Components such as audio circuit 109, peripheral interface 110, and power system 111. These components can communicate over one or more communication buses or signal lines (not shown in Figure 2). It will be understood by those skilled in the art that the hardware structure shown in FIG. 2 does not constitute a limitation to the mobile phone, and the mobile phone 100 may include more or less components than those illustrated, or some components may be combined, or different component arrangements.
  • the processor 101 is the control center of the mobile phone 100; it connects the various parts of the mobile phone 100 by using various interfaces and lines, and performs the various functions of the mobile phone 100 and processes data by running or executing applications stored in the memory 103 and calling data stored in the memory 103.
  • the processor 101 may include one or more processing units; for example, the processor 101 may be a Kirin 960 chip manufactured by Huawei Technologies Co., Ltd.
  • the processor 101 may further include a fingerprint verification chip for verifying the collected fingerprint.
  • the radio frequency circuit 102 can be used to receive and transmit wireless signals during transmission or reception of information or calls.
  • the radio frequency circuit 102 can receive downlink data from the base station and deliver it to the processor 101 for processing; in addition, it transmits uplink data to the base station.
  • radio frequency circuits include, but are not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
  • the radio frequency circuit 102 can also communicate with other devices through wireless communication.
  • the wireless communication can use any communication standard or protocol, including but not limited to global mobile communication systems, general packet radio services, code division multiple access, wideband code division multiple access, long term evolution, email, short message service, and the like.
  • the memory 103 is used to store applications and data, and the processor 101 executes various functions and data processing of the mobile phone 100 by running applications and data stored in the memory 103.
  • the memory 103 mainly includes a program storage area and a data storage area, where the program storage area can store an operating system and the applications required for at least one function (such as a sound playing function, an image playing function, etc.), and the data storage area can store data created according to the use of the mobile phone 100 (such as audio data, a phone book, etc.).
  • the memory 103 may include a high speed random access memory, and may also include a nonvolatile memory such as a magnetic disk storage device, a flash memory device, or other volatile solid state storage device.
  • the memory 103 can store various operating systems, for example, the iOS operating system developed by Apple and the Android operating system developed by Google Inc.
  • the above memory 103 may be independent and connected to the processor 101 via the above communication bus; the memory 103 may also be integrated with the processor 101.
  • the touch screen 104 can include a touch panel 104-1 and a display 104-2.
  • the touch panel 104-1 can collect touch events of the user of the mobile phone 100 on or near it (for example, operations performed by the user on or near the touch panel 104-1 using a finger, a stylus, or any other suitable object) and transmit the collected touch information to another device such as the processor 101.
  • a touch event of the user in the vicinity of the touch panel 104-1 may be referred to as a hovering touch; a hovering touch may mean that the user does not need to directly touch the touchpad in order to select, move, or drag a target (e.g., an icon), but only needs to be located near the electronic device in order to perform the desired function.
  • the touch panel 104-1 capable of floating touch can be realized by a capacitive type, an infrared light feeling, an ultrasonic wave, or the like.
  • the touch panel 104-1 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves.
  • a display (also referred to as display) 104-2 can be used to display information entered by the user or information provided to the user as well as various menus of the mobile phone 100.
  • the display 104-2 can be configured in the form of a liquid crystal display, an organic light emitting diode, or the like.
  • the touchpad 104-1 can be overlaid on the display 104-2, and when the touchpad 104-1 detects a touch event on or near it, it is transmitted to the processor 101 to determine the type of touch event, and then the processor 101 may provide a corresponding visual output on display 104-2 depending on the type of touch event.
  • Although the touchpad 104-1 and the display 104-2 are implemented as two separate components to implement the input and output functions of the handset 100, in some embodiments the touchpad 104-1 is integrated with the display screen 104-2 to implement the input and output functions of the mobile phone 100. It is to be understood that the touch screen 104 is formed by stacking a plurality of layers of materials.
  • In another example, the touch panel 104-1 may be overlaid on the display 104-2 with a size larger than that of the display 104-2, so that the display 104-2 is completely covered under the touch panel 104-1; alternatively, the touch panel 104-1 may be disposed on the front side of the mobile phone 100 in the form of a full panel, that is, any touch by the user on the front of the mobile phone 100 can be perceived, so that a full-touch experience can be achieved on the front of the phone. In other embodiments, the display screen 104-2 may also be disposed on the front side of the mobile phone 100 in the form of a full panel, so that the front side of the mobile phone can achieve a bezel-less structure.
  • the mobile phone 100 may further have a fingerprint recognition function.
  • the fingerprint collection device 112 can be configured in the touch screen 104 to implement the fingerprint recognition function, that is, the fingerprint collection device 112 can be integrated with the touch screen 104 to implement the fingerprint recognition function of the mobile phone 100.
  • the fingerprint capture device 112 is disposed in the touch screen 104 and may be part of the touch screen 104 or may be otherwise disposed in the touch screen 104.
  • the fingerprint collection device 112 can also be implemented as a full-panel fingerprint collection device.
  • the touch screen 104 can be viewed as a panel that can be fingerprinted at any location.
  • the fingerprint collection device 112 can transmit the collected fingerprint to the processor 101 for the processor 101 to process the fingerprint (eg, fingerprint verification, etc.).
  • the main component of the fingerprint collection device 112 in the embodiment of the present application is a fingerprint sensor, which can employ any type of sensing technology, including but not limited to optical, capacitive, piezoelectric or ultrasonic sensing technologies.
  • the above-mentioned fingerprint collection device 112 may be a capacitive collection device 112-1.
  • the touch screen 104 may specifically include a capacitive fingerprint collection device 112-1, a touch panel 104-1, and a display 104-2.
  • the display 104-2 is located at the lowest layer in the touch screen 104, the touch panel 104-1 is located at the uppermost layer in the touch screen 104, and the capacitive collection device 112-1 is located between the touch panel 104-1 and the display 104-2.
  • During fingerprint collection, because the distances between the ridges and valleys of the fingerprint and the capacitance-sensing particles of the capacitive collection device 112-1 differ, the positions of the ridges and valleys can be determined separately to obtain the fingerprint. Specifically, the capacitance-sensing particles on each pixel in the screen may be pre-charged so that they reach a preset threshold. When the user touches the touch screen 104, because there is a preset relationship between capacitance value and distance, different capacitance values are formed at the positions of the ridges and the valleys, and the pixels are then discharged by a discharge current. Since the capacitance values corresponding to the ridges and valleys differ, the discharge speeds of the corresponding pixels also differ: the pixels corresponding to ridges discharge slowly, and the pixels corresponding to valleys discharge quickly. Therefore, the user's fingerprint can be acquired by charging and discharging the pixels corresponding to the ridges and valleys.
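  • Reading a fingerprint out of such a capacitive array essentially amounts to classifying each pixel from its discharge behaviour; the sketch below uses invented numbers and an invented cut-off purely to make the ridge/valley logic concrete:

        def ridge_valley_map(discharge_times_ms, split_ms=1.0):
            """Classify each pixel as ridge ('R', slow discharge) or valley ('V', fast
            discharge) from its measured discharge time; split_ms is an arbitrary cut-off."""
            return [["R" if t > split_ms else "V" for t in row] for row in discharge_times_ms]

        # Toy 3x3 readout: larger values discharge more slowly (ridges).
        sample = [[1.4, 0.6, 1.3],
                  [1.5, 0.7, 1.2],
                  [1.6, 0.5, 1.1]]
        print(ridge_valley_map(sample))  # [['R', 'V', 'R'], ['R', 'V', 'R'], ['R', 'V', 'R']]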
  • the fingerprint collecting device 112 may also be a radio frequency fingerprint collecting device 112-2.
  • the touch screen 104 may include a radio frequency fingerprint collection device 112-2, a touch panel 104-1, and a display 104-2.
  • the radio frequency fingerprint collection device 112-2 is located at the lowest layer of the touch screen 104.
  • the touch panel 104-1 is located at the uppermost layer in the touch screen 104, and the display 104-2 is located between the touch panel 104-1 and the radio frequency fingerprint collection device 112-2.
  • the radio frequency fingerprint collection device 112-2 can acquire the fingerprint by absorbing the reflected light through a CCD (Charge Coupled Device). Further, owing to the difference in depth between the ridges and valleys of the fingerprint pressed on the touch panel 104-1, and to the grease and moisture between the skin and the touch panel 104-1, light passing through the touch panel 104-1 is totally reflected at the positions of the fingerprint's valleys, while at the positions of the ridges it cannot be totally reflected and part of the light is absorbed by the touch panel 104-1 or scattered elsewhere, thereby forming a fingerprint image on the CCD.
  • In order to reduce power consumption when fingerprint recognition is performed within the touch screen 104, the handset 100 can turn the power of the fingerprint collection device on or off under certain conditions.
  • For example, the mobile phone 100 can turn on the power of the fingerprint collection device for fingerprint recognition when the user touches a specific location on the touch screen 104; when the mobile phone 100 does not detect the user touching that specific location on the touch screen 104, the fingerprint collection device is not powered on, that is, the mobile phone 100 turns off the fingerprint recognition function.
  • The mobile phone 100 can also display, in the settings menu, a switch control associated with fingerprint recognition so that the user can manually enable or disable the fingerprint recognition function.
  • the mobile phone 100 can also activate or deactivate the fingerprint recognition function according to specific conditions. For example, the fingerprint recognition function or the like can be turned on or off depending on the geographical location.
  • the mobile phone 100 can also include a Bluetooth device 105 for enabling data exchange between the handset 100 and other short-range electronic devices (eg, mobile phones, smart watches, etc.).
  • the Bluetooth device in the embodiment of the present application may be an integrated circuit or a Bluetooth chip or the like.
  • the handset 100 can also include at least one type of sensor 106, such as a light sensor, motion sensor, and other sensors.
  • the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the display of the touch screen 104 according to the brightness of the ambient light, and the proximity sensor may turn off the power of the display when the mobile phone 100 moves to the ear.
  • the accelerometer sensor can detect the magnitude of acceleration in all directions (usually three axes). When it is stationary, it can detect the magnitude and direction of gravity. It can be used to identify the gesture of the mobile phone (such as horizontal and vertical screen switching, related Game, magnetometer attitude calibration), vibration recognition related functions (such as pedometer, tapping), etc.
  • the mobile phone 100 can also be configured with a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and other sensors, and details are not described herein.
  • the Wi-Fi device 107 is configured to provide the mobile phone 100 with network access complying with the Wi-Fi related standard protocol, and the mobile phone 100 can access the Wi-Fi access point through the Wi-Fi device 107, thereby helping the user to send and receive emails, Browsing web pages and accessing streaming media, etc., it provides users with wireless broadband Internet access.
  • the Wi-Fi device 107 can also function as a Wi-Fi wireless access point, and can provide Wi-Fi network access to other electronic devices.
  • the positioning device 108 is configured to provide a geographic location for the mobile phone 100. It can be understood that the positioning device 108 can be specifically a receiver of a positioning system such as a Global Positioning System (GPS) or a Beidou satellite navigation system, or a Russian GLONASS. After receiving the geographical location transmitted by the positioning system, the positioning device 108 sends the information to the processor 101 for processing, or sends it to the memory 103 for storage.
  • the positioning device 108 may also be a receiver of the Assisted Global Positioning System (AGPS), which assists the positioning device 108 in completing ranging and positioning services by means of an assistance server; in this case, the assistance positioning server communicates with the electronic device, for example with the positioning device 108 (that is, the GPS receiver) of the handset 100, over a wireless communication network to provide positioning assistance.
  • the positioning device 108 can also be based on Wi-Fi access point positioning technology. Since each Wi-Fi access point has a globally unique MAC address, the electronic device can scan and collect the broadcast signals of the surrounding Wi-Fi access points when Wi-Fi is turned on, so that the MAC addresses broadcast by those Wi-Fi access points can be obtained.
  • the audio circuit 109, the speaker 113, and the microphone 114 can provide an audio interface between the user and the handset 100.
  • On one hand, the audio circuit 109 can convert received audio data into an electrical signal and transmit it to the speaker 113, which converts it into a sound signal for output; on the other hand, the microphone 114 converts a collected sound signal into an electrical signal, which the audio circuit 109 receives and converts into audio data, and the audio data is then output to the RF circuit 102 for transmission to, for example, another mobile phone, or output to the memory 103 for further processing.
  • the peripheral interface 110 is used to provide various interfaces for external input/output devices (such as a keyboard, a mouse, an external display, an external memory, a subscriber identity module card, etc.).
  • For example, a mouse is connected through a universal serial bus (USB) interface, and a subscriber identity module (SIM) card provided by the telecommunications carrier is connected through the metal contacts in the SIM card slot.
  • Peripheral interface 110 can be used to couple the external input/output peripherals described above to processor 101 and memory 103.
  • the mobile phone 100 may further include a power supply device 111 (such as a battery and a power management chip) that supplies power to the various components; the battery may be logically connected to the processor 101 through the power management chip, so that functions such as charging, discharging, and power-consumption management are implemented through the power supply device 111.
  • the mobile phone 100 may further include a camera (front camera and/or rear camera), a flash, a micro projection device, a near field communication (NFC) device, and the like, and details are not described herein.
  • an input method application is stored in the memory 103 in the mobile phone 100.
  • When the user inputs information into an APP that needs to use the input method application, three levels are usually involved: the user, the input method, and the system. The input method framework of the Android platform generally consists of three parts: the user 11, the input method application 12, and the APP 13.
  • When the APP 13 detects the user's click operation in the edit window 13a while running, the APP 13 calls the input method interface 14; at this time, the input method application 12 can display an input-method-related graphical user interface, such as the virtual keyboard 12a, on the touch screen. Then, after the electronic device detects an input event on the virtual keyboard 12a (for example, tapping a key of the virtual keyboard 12a), the processor 101 can acquire the input event and send the input data corresponding to the input event, such as letters or symbols, to the input method control main program 12b.
  • Further, the input method control main program 12b may call a thesaurus (for example, the user's personal vocabulary) to determine the candidate words corresponding to the input event, and display the candidate words in the candidate window 12c. After detecting that the user selects a target word from the candidate words, the input method control main program 12b sends the target word to the input method manager 13b of the APP 13 through the input method interface 14 (for example, an input method manager class), so that the input method manager 13b can input the target word selected by the user into the current editing window 13a to complete the input event.
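  • The interplay of these components can be pictured roughly as follows; the class and method names are invented for this sketch and are not actual Android framework APIs, they only mirror the roles of the edit window 13a, the input method control main program 12b, the candidate window 12c, and the input method manager 13b described above:

        class InputMethodControlMainProgram:
            """Mirrors 12b: turns the typed prefix into candidate words using a thesaurus."""
            def __init__(self, thesaurus):
                self.thesaurus = thesaurus  # e.g. a user's personal vocabulary

            def candidates_for(self, typed_prefix):
                return [w for w in self.thesaurus if w.startswith(typed_prefix)]

        class EditWindow:
            """Mirrors 13a: receives the word finally chosen by the user."""
            def __init__(self):
                self.text = ""

            def commit(self, word):
                self.text += word

        # Simplified end-to-end pass: key events -> candidate window -> chosen word -> APP.
        main_program = InputMethodControlMainProgram(["hello", "help", "hero"])
        edit_window = EditWindow()
        candidate_window = main_program.candidates_for("he")  # candidates shown to the user
        edit_window.commit(candidate_window[0])               # user picks the first candidate
        print(edit_window.text)                               # hello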
  • In the embodiment of the present application, the input method application 12 can first call the fingerprint collection device 112 to collect the fingerprint in the input event, and then send the fingerprint to the fingerprint recognition module 15, and the fingerprint recognition module 15 identifies the identity of the current user based on the fingerprint.
  • the input method control main program 12b can also call up a personal vocabulary (ie, a target vocabulary) corresponding to the current user according to the correspondence between different users and different personal lexicons. Subsequently, similar to the input flow shown in FIG. 4, the input method control main program 12b can use the target vocabulary to determine the candidate vocabulary in the current input event.
  • the method includes:
  • 401. When the electronic device runs the input method application, the electronic device acquires the user's fingerprint on the touch screen.
  • For example, the APP that currently needs to input information is a short message application. As shown in FIG. 7, an edit window 13a is displayed in the short message interface and can be used to display the short message that the user is editing.
  • the short message APP invokes an API (application programming interface) of the input method application, triggering the input method to display the virtual keyboard 12a on the touch screen, so that the input method application is called up to run in the foreground.
  • the electronic device can detect the click operation of the user's finger on the related virtual button of the virtual keyboard 12a.
  • Because the fingerprint collection device (not shown in FIG. 7) is integrated on the touch screen, the electronic device can collect the fingerprint 20 formed by the user on the touch screen through the fingerprint collection device.
  • the fingerprint collection device may be disposed in a certain area on the touch screen corresponding to the virtual keyboard 12a. In this way, the electronic device can determine the identity of the user according to the collected fingerprint 20, thereby calling the personal vocabulary associated with the user without the user's perception, simplifying the login process of the input method application, and improving the input efficiency.
  • In FIG. 7, the collected fingerprint 20 is shown on the touch screen of the electronic device only schematically; it can be understood that, when the user's fingerprint is collected on the touch screen, the relevant pattern of the collected fingerprint may not actually be displayed.
  • the above-mentioned fingerprint collection device can also be disposed in a certain area outside the virtual keyboard 12a, so that when the user touches the display screen in an area other than the virtual keyboard 12a, the electronic device can also acquire the user's fingerprint.
  • the embodiment of the present application does not impose any limitation on the location of obtaining the fingerprint of the user.
  • In addition, the electronic device can also acquire the user's fingerprint on the touch screen through the fingerprint collection device before the input method application is called up to the foreground; at this time, the electronic device may also determine the personal vocabulary associated with the current user according to the collected fingerprint. Then, once the electronic device calls the input method application to the foreground, the determined personal vocabulary can be used in time, thereby increasing input efficiency.
  • In order to reduce power consumption when performing fingerprint recognition in the touch screen, the electronic device may set the power of the fingerprint collection device to be turned on only under certain conditions, or may turn off the power of the fingerprint collection device or reduce the scanning frequency of the fingerprint collection device under certain conditions.
  • When the input method application (for example, the virtual keyboard 12a described above) is displayed, the input method application can further capture, through the fingerprint collection device integrated in the touch screen, the fingerprint left when the user touches the virtual keyboard 12a.
  • FIG. 7 takes a single fingerprint as an example, but it can be understood that one or more fingerprints may be collected in the above step 401.
  • 402. The electronic device performs fingerprint verification based on the collected fingerprint.
  • the collected fingerprint 20 may be compared with a pre-stored registered fingerprint to determine whether the collected fingerprint 20 is a registered fingerprint.
  • the electronic device may send the collected fingerprint 20 to the server on the network side through the wireless network, perform fingerprint verification by the server, and return the verification result to the electronic device through the wireless network.
  • One or more registered fingerprints are pre-stored in the memory 103 of the electronic device; for example, a registered fingerprint can be the fingerprint used to unlock the screen of the electronic device, or the fingerprint used to log in to the input method application.
  • In step 402, the electronic device compares the collected fingerprint 20 with the registered fingerprints, that is, performs fingerprint identification on the collected fingerprint 20. When the similarity between the collected fingerprint 20 and a registered fingerprint is greater than a preset threshold, the electronic device may continue to perform the following steps 403-404:
  • 403. The electronic device determines the target vocabulary associated with the registered fingerprint.
  • 404. The electronic device determines the candidate words corresponding to the input event by using the target vocabulary, and displays the candidate words.
  • the personal vocabulary corresponding to the collected fingerprints may be used as the target vocabulary according to the corresponding relationship.
  • As shown in Table 1, user A (Tom) registers fingerprint 1 and fingerprint 2 in the input method application, and Tom's personal vocabulary is lexicon 1; similarly, user B (Alice) registers fingerprint 3 and fingerprint 4 in the input method application, and Alice's personal vocabulary is lexicon 2.
  • Table 1

        User            Registered fingerprints         Personal vocabulary
        Tom (user A)    fingerprint 1, fingerprint 2    lexicon 1
        Alice (user B)  fingerprint 3, fingerprint 4    lexicon 2
  • the fingerprints 1 - 4 in Table 1 and the different fingerprints involved in Table 2 are only used to distinguish different fingerprints of different users, and do not limit the specific fingers corresponding to each fingerprint.
  • the correspondence between a user's registered fingerprints and the user's personal vocabulary is only shown exemplarily in Table 1; it can be understood that the correspondence can also be presented in forms other than a table.
  • the electronic device can determine, by using Table 1, the target vocabulary associated with the registered fingerprint (ie, the fingerprint 1) as the lexicon 1, that is, the personal vocabulary of Tom.
  • the electronic device can log in to Tom's personal vocabulary and use Tom's personal vocabulary to determine candidate vocabulary corresponding to the current input event.
  • For example, the user has entered "My phone is 130" in the editing window 13a. Since Tom's personal vocabulary records Tom's phone number "13088666688", the input method application can prompt the last 8 digits of Tom's telephone number, "88666688", from Tom's personal vocabulary as the first candidate word 17. In this way, Tom can select the first candidate word 17 to input his own telephone number into the editing window 13a without manually typing it digit by digit, thereby improving the input efficiency of the input method.
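  • This completion behaviour can be pictured as a simple prefix lookup against the personal vocabulary; the helper below is a toy illustration whose name and matching rule are assumptions of this sketch:

        def complete_from_vocabulary(typed_digits, personal_vocabulary):
            """Return the remaining digits of the first vocabulary entry that starts
            with what the user has typed so far, e.g. '130' -> '88666688'."""
            for entry in personal_vocabulary:
                if entry.startswith(typed_digits) and len(entry) > len(typed_digits):
                    return entry[len(typed_digits):]
            return None

        print(complete_from_vocabulary("130", ["13088666688"]))  # 88666688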
  • After logging in to Tom's account, during subsequent input the electronic device can continuously enrich and improve his personal vocabulary according to Tom's input habits.
  • For example, the input method application's server can build a lexicon on the server side by collecting entry sources, filtering and deduplicating, machine sorting (for example, removing typos and junk words), word frequency statistics, new word discovery, lexicon verification (for example, adjusting the order in which words appear), and so on; this lexicon can be updated in real time.
  • Further, the electronic device may build the user's personal vocabulary from the above lexicon according to the user's input habits when using the input method application (for example, spelling habits, high-frequency words, and writing style), store it in the user's electronic device, and update it through periodic network connections.
  • In some other embodiments, to reduce the power consumption of fingerprint recognition in the touch screen, the electronic device may turn off the power of the fingerprint collection device, or lower its scanning frequency, when logging in to Tom's personal vocabulary, or when hiding the relevant graphical user interface of the input method application (for example, the virtual keyboard) already displayed on the touch screen.
  • It can be seen that, through the fingerprint formed on the touch screen during the user's input, the electronic device can automatically log in to the user's personal vocabulary without the user being aware of it; that is, the user does not need to manually log in to an account in the input method application to call up his or her personal vocabulary, so that the recognition of the user's identity and the switching of user accounts are seamless to the user, improving the intelligence, interactivity, and input efficiency of the input method.
  • In some other embodiments, as also shown in Table 1, each user can register two or more fingerprints; then, for one user (for example, Tom), Tom's personal vocabulary can be divided into different sub-lexicons according to the different registered fingerprints.
  • As shown in FIG. 9, the electronic device can establish sub-lexicon 1-1 according to the input habits associated with fingerprint 1, and establish sub-lexicon 1-2 according to the input habits associated with fingerprint 2.
  • In this way, when the user inputs on the virtual keyboard 12a of the input method application with finger 1, the electronic device collects the fingerprint of finger 1 and performs fingerprint verification on it; when the electronic device determines that the collected fingerprint is fingerprint 1, it can call sub-lexicon 1-1 associated with fingerprint 1, determine the candidate words of this input according to sub-lexicon 1-1, and display them on the touch screen. When the user inputs with finger 2, the electronic device collects the fingerprint of finger 2 and performs fingerprint verification on it; when the electronic device determines that the collected fingerprint is fingerprint 2, it may determine the candidate words of this input according to sub-lexicon 1-2 and display them on the touch screen.
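Dispatching to a per-finger sub-lexicon can be sketched as a lookup keyed by the verified fingerprint identifier. The class name and the List-of-words representation are illustrative assumptions:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

final class SubLexiconRouter {
    private final Map<String, List<String>> subLexicons = new HashMap<>(); // fingerprint id -> words

    void register(String fingerprintId, List<String> subLexicon) {
        subLexicons.put(fingerprintId, subLexicon);
    }

    /** Pick candidate words from the sub-lexicon tied to the finger that typed this event. */
    List<String> candidatesFor(String fingerprintId, String typedPrefix) {
        List<String> matches = new ArrayList<>();
        for (String word : subLexicons.getOrDefault(fingerprintId, Collections.emptyList())) {
            if (word.startsWith(typedPrefix)) {
                matches.add(word);
            }
        }
        return matches;
    }
}
```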
  • Of course, if the user inputs on the virtual keyboard 12a of the input method application with finger 1 and finger 2 at the same time, the electronic device may display the candidate words respectively determined from sub-lexicon 1-1 and sub-lexicon 1-2 in different forms in the current display interface. For example, the candidate words determined from sub-lexicon 1-1 are marked in red in the candidate window, and the candidate words determined from sub-lexicon 1-2 are marked in black in the candidate window. In this way, the user can visually distinguish the different candidate words corresponding to different fingers, thereby improving the user experience.
  • In some other embodiments, for different registered fingerprints of one user, the electronic device may further divide a sub-lexicon into scene libraries associated with different application scenarios.
  • For example, as shown in FIG. 10, sub-lexicon 1-1 may be divided into scene library 1 and scene library 2, where scene library 1 may be set according to the user's input habits when using fingerprint 1 in the WeChat application, and scene library 2 may be set according to the user's input habits when using fingerprint 1 in the short message application. Therefore, after the electronic device determines that the fingerprint in the current input event is fingerprint 1, it may further call the corresponding scene library according to the specific application running in the foreground to determine the candidate vocabulary of the input event, so as to satisfy the user's input requirements in different application scenarios and further improve the user experience.
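Selecting a scene library can be modeled as a lookup keyed by the verified fingerprint plus the package name of the foreground application. The package names and class below are illustrative, not part of the embodiment:

```java
import java.util.HashMap;
import java.util.Map;

final class SceneLibrarySelector {
    // Key: fingerprintId + "|" + foreground app package; value: scene library id.
    private final Map<String, String> sceneLibraries = new HashMap<>();

    SceneLibrarySelector() {
        sceneLibraries.put("fingerprint-1|com.tencent.mm", "scene-library-1");   // WeChat habits
        sceneLibraries.put("fingerprint-1|com.android.mms", "scene-library-2");  // SMS habits
    }

    /** Falls back to the whole sub-lexicon when no scene library matches. */
    String select(String fingerprintId, String foregroundPackage) {
        return sceneLibraries.getOrDefault(fingerprintId + "|" + foregroundPackage,
                "sub-lexicon-default");
    }
}
```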
  • For example, when the input method application is running in the foreground, the electronic device collects the fingerprint of the user's finger. When the collected fingerprint matches fingerprint 1 and the APP currently running in the foreground is the short message application, as shown in FIG. 11a, the electronic device can call scene library 2 in sub-lexicon 1-1 according to fingerprint 1 and the current short message application, and then use scene library 2 to determine candidate words for "my name is" in the editing window 13a.
  • Since the user generally enters his real name, Tom, when using the short message application, the electronic device can record "Tom" as a high-frequency word in scene library 2. Then, when it is detected that "my name is" has been entered in the editing window 13a, the electronic device can call scene library 2 and display one or more high-frequency words associated with "name" in the candidate window 12c; for example, as shown in FIG. 11a, the electronic device displays "Tom" as the first candidate vocabulary 17 in the candidate window 12c.
  • Correspondingly, if the APP currently running in the foreground is the WeChat application, as shown in FIG. 11b, the electronic device can call scene library 1 in sub-lexicon 1-1 and use scene library 1 to determine candidate words for "my name is" in the editing window 13a.
  • Since the user generally enters his nickname, "Xiaokedou" (little tadpole), when using the WeChat application, the electronic device can record this nickname as a high-frequency word in scene library 1. Then, when it is detected that "my name is" has been entered in the editing window 13a, the electronic device can call scene library 1 and display one or more high-frequency words associated with "name" in the candidate window 12c; for example, as shown in FIG. 11b, the electronic device displays the nickname as the first candidate vocabulary 17 in the candidate window 12c. In this way, the electronic device can call the corresponding scene library according to the specific application running in the foreground to determine the candidate vocabulary of the input event, so as to meet the user's input requirements in different application scenarios, thereby improving input efficiency.
  • In some other embodiments of this application, after step 402 is performed, if the fingerprint collected in step 401 is not a registered fingerprint (that is, it is a non-registered fingerprint), the input method may further include the following steps 405-407:
  • the electronic device creates a temporary personal vocabulary corresponding to the fingerprint collected above.
  • Specifically, when the collected fingerprint 20 is a non-registered fingerprint, it indicates that the fingerprint of the finger the current user is operating with has not been registered in the electronic device, so the identity of the current user and the personal vocabulary corresponding to the current user cannot be determined.
  • In step 405, the electronic device may treat the user currently operating the input method application as a new user not registered in the electronic device, for example, user C in Table 2. Further, the electronic device can learn the input habits of the current user (i.e., user C) based on the default vocabulary, for example, the high-frequency words entered by the user and the user's behavioral style, thereby creating a new personal vocabulary corresponding to the fingerprint of user C, namely a temporary personal vocabulary. At this time, as shown in Table 2, fingerprint 5 of user C has a correspondence with the temporary personal vocabulary.
  • In one embodiment of this application, as shown in FIG. 12, when the collected fingerprint 20 is a non-registered fingerprint, the electronic device may ask the user, through a prompt box 18, whether to create a personal vocabulary of his own. After the user confirms the creation of the personal vocabulary, each time an input event from the user having fingerprint 20 is subsequently received, the user's input habits are learned based on the default vocabulary, and after a period of time, a temporary personal vocabulary belonging to that user is obtained.
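The learning loop for a temporary personal vocabulary can be sketched as starting from a copy of the default vocabulary and raising the frequency of each word the unknown user commits. This is a deliberately simplified model; the embodiment also considers spelling habits and writing style:

```java
import java.util.HashMap;
import java.util.Map;

final class TemporaryLexicon {
    private final Map<String, Integer> wordFrequency = new HashMap<>();

    TemporaryLexicon(Map<String, Integer> defaultLexicon) {
        wordFrequency.putAll(defaultLexicon);   // start from the default vocabulary
    }

    /** Called each time the user with the unregistered fingerprint commits a word. */
    void learn(String committedWord) {
        wordFrequency.merge(committedWord, 1, Integer::sum);
    }

    /** Snapshot used later when comparing against stored personal vocabularies. */
    Map<String, Integer> snapshot() {
        return new HashMap<>(wordFrequency);
    }
}
```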
  • When the similarity between the temporary personal vocabulary and the stored personal vocabulary of a first user is greater than or equal to a preset threshold, the electronic device adds the temporary personal vocabulary to the first user's personal vocabulary.
  • Since the user may have registered only one or a small number of fingerprints in the electronic device, when the collected fingerprint 20 is a non-registered fingerprint it cannot be concluded that the user having fingerprint 20 has no corresponding personal vocabulary in the electronic device.
  • Therefore, the electronic device can periodically or from time to time compare the temporary personal vocabulary created in step 405 with the personal vocabulary of each user already stored in Table 1. When the similarity between the temporary personal vocabulary and the stored personal vocabulary of a first user (for example, Tom) is greater than the threshold, it indicates that the input habits of the user having fingerprint 20 in step 401 are very similar to Tom's, so that user can be considered to be Tom. At this time, the electronic device can log in to Tom's personal vocabulary and add the temporary personal vocabulary to it, completing the update of Tom's personal vocabulary. In one embodiment, as shown in FIG. 13, the electronic device may prompt the user that Tom's personal vocabulary has been logged in to and updated; subsequently, the electronic device will use Tom's personal vocabulary to determine the candidate vocabulary for input events triggered by the user.
  • In addition, the electronic device can determine the similarity between two vocabularies according to the spelling habits, high-frequency words, and writing style recorded in each vocabulary. For example, when 85% of the high-frequency words recorded in lexicon 1 are the same as those recorded in lexicon 2, the user of lexicon 1 and the user of lexicon 2 can be considered to be the same person.
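That 85% criterion suggests measuring similarity as the overlap ratio of the two vocabularies' high-frequency word lists. A sketch under that assumption; the 0.85 threshold comes from the example, while the class and method names are illustrative:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

final class LexiconSimilarity {
    /** Overlap ratio of the high-frequency word lists of two vocabularies. */
    static double highFrequencyOverlap(List<String> topWordsA, List<String> topWordsB) {
        if (topWordsA.isEmpty()) return 0;
        Set<String> b = new HashSet<>(topWordsB);
        long shared = topWordsA.stream().filter(b::contains).count();
        return (double) shared / topWordsA.size();
    }

    /** Example decision: treat the two vocabularies as belonging to the same user. */
    static boolean sameUser(List<String> topWordsA, List<String> topWordsB) {
        return highFrequencyOverlap(topWordsA, topWordsB) >= 0.85; // threshold from the example
    }
}
```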
  • Of course, a person skilled in the art can also set the method for determining the similarity between two vocabularies, and the above threshold, according to practical experience or the actual application scenario; the embodiments of this application place no restriction on how the similarity between two vocabularies is determined.
  • the electronic device establishes a correspondence between the collected fingerprint and the personal vocabulary of the first user.
  • After the electronic device has added the temporary personal vocabulary to the first user's personal vocabulary, the electronic device may update the correspondence between registered fingerprints and personal vocabularies shown in Table 2 above, so that when a new input event is subsequently received, the electronic device can accurately call the corresponding target vocabulary according to the updated correspondence.
  • For example, as shown in Table 3, the electronic device may update, in the correspondence between registered fingerprints and personal vocabularies shown in Table 1, the correspondence between fingerprint 5 (i.e., the collected fingerprint 20) and lexicon 1.
  • In one embodiment, as shown in FIG. 14, the electronic device may further prompt the user on the display interface that a vocabulary associated with the current fingerprint (i.e., the collected fingerprint 20), for example lexicon 1, has been established, so that the user knows he can subsequently log in to his personal vocabulary using the collected fingerprint 20.
  • In some other embodiments, when the similarity between the temporary personal vocabulary and the stored personal vocabularies of all users is less than the threshold, it indicates that user C corresponding to the temporary personal vocabulary is indeed a newly added user; the electronic device can then continue to enrich and complete the temporary personal vocabulary according to user C's input habits. In this way, when user C subsequently uses the same fingerprint again to perform an input event in the input method application, the electronic device can, according to the correspondence between user C's fingerprint 5 and the temporary personal vocabulary in Table 2, call the temporary personal vocabulary to determine the candidate vocabulary corresponding to the input event.
  • As shown in FIG. 15, an input method provided by some other embodiments of this application may include the following steps:
  • When the electronic device runs the input method application, the electronic device acquires the user's fingerprints on the touch screen.
  • In this embodiment, the fingerprints acquired on the touch screen contain two or more fingerprints, that is, N fingerprints, where N is an integer greater than 1.
  • the user performs an input event on the virtual keyboard 12a of the input method application using the thumb of the left hand and the thumb of the right hand.
  • the electronic device can acquire the fingerprint 21 and the fingerprint 22 through a fingerprint collection device (not shown in FIG. 16) integrated on the touch screen.
  • the electronic device performs fingerprint identification on the fingerprint obtained above.
  • When at least one of the collected fingerprints is a registered fingerprint, for example, still as shown in FIG. 16, only fingerprint 21 may be stored in the electronic device; then, by comparison with the pre-stored registered fingerprints, the electronic device may determine that fingerprint 21 is a registered fingerprint and fingerprint 22 is a non-registered fingerprint not registered on the electronic device. At this point, the electronic device can continue to perform the following steps 503a-505a.
  • the electronic device determines a target vocabulary associated with the registered fingerprint.
  • Since fingerprint 21 and fingerprint 22 both appear between the time the electronic device displays the virtual keyboard 12a of the input method application and the time it hides the virtual keyboard 12a, and this process is generally performed by a single user, the users corresponding to fingerprint 21 and fingerprint 22 may be considered to be the same. The electronic device can therefore use the personal vocabulary corresponding to fingerprint 21 (the registered fingerprint) as the target vocabulary through the correspondence shown in Table 1 or Table 2. Alternatively, if fingerprint 21 and fingerprint 22 appear within a preset period from the user's last touch of the virtual keyboard 12a, for example within 2 seconds of that touch, the users corresponding to the two fingerprints may also be considered to be the same.
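The single-user assumption for several fingerprints can be expressed as a session window: fingerprints seen while the virtual keyboard is displayed, or within a short interval of the last touch, are attributed to one user. The 2-second figure is from the example above; the class and its API are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

final class FingerprintSession {
    private static final long SAME_USER_WINDOW_MS = 2_000; // from the 2-second example

    private final List<String> fingerprintsThisSession = new ArrayList<>();
    private long lastTouchMs = -1;

    /** Record a fingerprint; returns the fingerprints currently attributed to one user. */
    List<String> onFingerprint(String fingerprintId, long nowMs, boolean keyboardVisible) {
        boolean sameSession = keyboardVisible
                || (lastTouchMs >= 0 && nowMs - lastTouchMs <= SAME_USER_WINDOW_MS);
        if (!sameSession) {
            fingerprintsThisSession.clear();    // a new user may have started typing
        }
        fingerprintsThisSession.add(fingerprintId);
        lastTouchMs = nowMs;
        return new ArrayList<>(fingerprintsThisSession);
    }
}
```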
  • the electronic device establishes a correspondence between the unregistered fingerprint and the target vocabulary.
  • Similar to step 407, when the electronic device determines that fingerprint 21 and fingerprint 22 correspond to the same user, for example Tom, the electronic device may update the correspondence between registered fingerprints and personal vocabularies; that is, on the basis of the existing correspondence between fingerprint 21 and Tom's personal vocabulary, a correspondence between fingerprint 22 and Tom's personal vocabulary is added, so that the electronic device can subsequently call the corresponding target vocabulary accurately according to the updated correspondence.
  • That is, when the N fingerprints include X registered fingerprints and Y unregistered fingerprints, where X+Y=N, X≥1, and Y≥1, the electronic device can establish, in the above correspondence, a correspondence between the Y unregistered fingerprints and the target vocabulary.
  • In one embodiment, as shown in FIG. 17, the electronic device may, through a prompt box 18 on the display interface, ask the user whether to associate the unregistered fingerprint 22 with Tom's personal vocabulary. After the user confirms the association, the electronic device adds the correspondence between fingerprint 22 and Tom's personal vocabulary, so that fingerprint 22 also becomes a registered fingerprint corresponding to Tom's personal vocabulary.
  • In one embodiment, if both fingerprint 21 and fingerprint 22 are stored in the electronic device, that is, the collected fingerprints are all registered fingerprints, but the personal vocabularies corresponding to fingerprint 21 and fingerprint 22 are different, this indicates that the electronic device has mistaken fingerprint 21 and fingerprint 22 for fingerprints of two different users. The electronic device can therefore merge the personal vocabulary corresponding to fingerprint 21 with the personal vocabulary corresponding to fingerprint 22, so that both fingerprint 21 and fingerprint 22 correspond to Tom's personal vocabulary.
  • That is, when the N fingerprints include Z registered fingerprints, where 1 < Z ≤ N, and the personal vocabularies corresponding to these Z registered fingerprints are not all the same, the electronic device can merge the multiple personal vocabularies corresponding to the Z registered fingerprints into one personal vocabulary.
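Merging the personal vocabularies behind the Z registered fingerprints can be sketched as summing word frequencies into a single vocabulary; the frequency-sum policy is an assumption, not specified by the embodiment:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

final class LexiconMerger {
    /** Merge several personal vocabularies (word -> frequency) into one. */
    static Map<String, Integer> merge(List<Map<String, Integer>> lexicons) {
        Map<String, Integer> merged = new HashMap<>();
        for (Map<String, Integer> lexicon : lexicons) {
            lexicon.forEach((word, freq) -> merged.merge(word, freq, Integer::sum));
        }
        return merged;  // every affected fingerprint is then pointed at this merged lexicon
    }
}
```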
  • the electronic device determines the candidate vocabulary corresponding to the input event by using the target vocabulary.
  • For example, the electronic device can use the target vocabulary determined in step 503a, that is, Tom's personal vocabulary, to determine the candidate vocabulary corresponding to the input event. As shown in FIG. 18, "No. 66 West Street" is a high-frequency phrase in Tom's personal vocabulary; therefore, when the user enters "West Street" in the editing window 13a, the electronic device can, based on Tom's personal vocabulary, display "No. 66" as the first candidate vocabulary 17 in the candidate window for easy selection, thereby improving the user's input efficiency.
  • In one embodiment, after step 502 is performed, the electronic device compares each of the acquired fingerprints with the pre-stored registered fingerprints; if it is determined that none of the fingerprints is a registered fingerprint, for example, fingerprint 21 and fingerprint 22 acquired by the electronic device in FIG. 16 are both unregistered fingerprints, the electronic device can continue to perform the following steps 503b-504b.
  • the electronic device creates a temporary personal vocabulary corresponding to the multiple fingerprints collected above.
  • Similar to step 405, when no registered fingerprint is included in the collected fingerprints, it indicates that the fingerprints of the current user's operating fingers are not registered in the electronic device. The electronic device can then treat the current user as a new user and, based on the default vocabulary, learn the current user's input habits, thereby creating a temporary personal vocabulary corresponding to the fingerprints collected in step 501.
  • At this time, as shown in FIG. 19, the electronic device can ask the user, through the prompt box 18 on the display interface, whether to create a lexicon A associated with fingerprint 21 and fingerprint 22.
  • After the user confirms the creation of lexicon A, each time an input event from the user having these fingerprints is subsequently received, the user's input habits are learned based on the default vocabulary, and after a period of time, the temporary personal vocabulary belonging to that user, namely lexicon A, is obtained.
  • the electronic device adds the temporary personal vocabulary to the first user's personal vocabulary.
  • Specifically, the electronic device may periodically or from time to time compare the temporary personal vocabulary created in step 503b with the stored personal vocabulary of each user. When the similarity between the temporary personal vocabulary and the stored personal vocabulary of a first user (for example, Tom) is greater than the threshold, it indicates that the input habits of the user corresponding to the collected fingerprints are very similar to Tom's, so the electronic device can consider the user having these fingerprints to be Tom. At this time, as shown in FIG. 20, the electronic device can log in to Tom's personal vocabulary and add lexicon A to it, completing the update of Tom's personal vocabulary.
  • the electronic device establishes a correspondence between the plurality of fingerprints collected and the personal vocabulary of the first user.
  • After adding the temporary personal vocabulary to the first user's personal vocabulary, the electronic device may update the correspondence between registered fingerprints and personal vocabularies, so that when a new input event is subsequently received, the electronic device can accurately call the corresponding target vocabulary according to the updated correspondence.
  • For example, as shown in FIG. 21, the electronic device may further ask the user, through the prompt box 18, whether to add fingerprint 21 and fingerprint 22 (i.e., the fingerprints collected in the current input operation) as registered fingerprints of the input method application. If the user confirms the addition, the user can subsequently log in to his personal vocabulary using fingerprint 21 and fingerprint 22.
  • Of course, the electronic device can further use the collected fingerprints for functions such as fingerprint unlocking or fingerprint payment. Alternatively, when the user enrolls a fingerprint for fingerprint unlocking or fingerprint recognition in the electronic device, the electronic device may prompt the user to also use that fingerprint as a registered fingerprint of the input method application, and establish a correspondence between the fingerprint and its corresponding personal vocabulary; the embodiments of this application place no restriction on this.
  • Further, as shown in FIG. 22, the user can also enable or disable the fingerprint login function shown in the above embodiments through the settings interface of the input method application, and can manage registered fingerprints, for example, adding or deleting them.
  • Alternatively, the user can also manage the relationship between established personal vocabularies and registered fingerprints. For example, a user can have one or more personal vocabularies, such as lexicon 1 and lexicon 2 shown in FIG. 22, and for each personal vocabulary the corresponding registered fingerprint can be set manually. In this way, when fingers with different fingerprints trigger input events, different vocabularies can be called to determine the corresponding candidate words, improving the input efficiency of the input method and the intelligence of human-computer interaction.
  • In some other embodiments, as shown in FIG. 23, after the user triggers the editing window 13a in an APP (for example, the short message application in FIG. 23), the short message application invokes the API of the input method application to display the virtual keyboard 12a of the input method application and other related graphical user interfaces.
  • a shortcut login button for quickly logging in to the personal vocabulary can also be set on the user graphical interface.
  • the shortcut login button may be the fingerprint pattern 31 in FIG. 23, then, when the user touches the fingerprint pattern 31, in response to the touch operation, the electronic device triggers the fingerprint collection device at the fingerprint pattern 31 to collect the fingerprint of the user.
  • Then, according to the input method shown in steps 402-407 above, the personal vocabulary corresponding to the collected fingerprint can be called to provide candidate words. For example, after the electronic device detects the user's touch operation at the fingerprint pattern 31, the electronic device collects the fingerprint through the fingerprint collection device at the fingerprint pattern 31 and performs fingerprint identification on it. When the collected fingerprint is recognized as Tom's fingerprint (for example, fingerprint 1 in Table 1), the electronic device can, according to the correspondence between fingerprint 1 and its personal vocabulary, log in to Tom's personal vocabulary (i.e., lexicon 1) and call lexicon 1 to determine the candidate words associated with subsequently received input events.
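The shortcut login flow can be sketched as a touch handler that asks an abstract fingerprint collector for a capture and then resolves the corresponding lexicon. FingerprintCollector and the callback shape are hypothetical interfaces, not Android framework APIs:

```java
import java.util.Map;
import java.util.function.Consumer;

final class ShortcutLogin {
    interface FingerprintCollector {
        /** Captures a fingerprint at the button location and reports the matched id, or null. */
        void capture(Consumer<String> onMatchedFingerprintId);
    }

    private final FingerprintCollector collector;
    private final Map<String, String> fingerprintToLexicon; // e.g. "fingerprint-1" -> "lexicon-1"

    ShortcutLogin(FingerprintCollector collector, Map<String, String> fingerprintToLexicon) {
        this.collector = collector;
        this.fingerprintToLexicon = fingerprintToLexicon;
    }

    /** Called when the user touches the fingerprint pattern 31. */
    void onShortcutTouched(Consumer<String> onLexiconReady) {
        collector.capture(fingerprintId -> {
            String lexiconId = fingerprintId == null ? null : fingerprintToLexicon.get(fingerprintId);
            onLexiconReady.accept(lexiconId); // null: no registered fingerprint matched
        });
    }
}
```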
  • In another embodiment, the fingerprint collection device may not be disposed at the location where the shortcut login button (e.g., the one-key login button 32 in FIG. 24) is displayed. In that case, when the user taps the one-key login button 32, the electronic device can prompt the user to enter a fingerprint at a location where a fingerprint collection device is provided, for example, in an area within the touch screen (e.g., area 33 in FIG. 24) or in an area outside the touch screen (e.g., the home button 34 in FIG. 24). Subsequently, the electronic device performs fingerprint identification on the collected fingerprint. When the collected fingerprint is identified as Alice's fingerprint (for example, fingerprint 3 in Table 1), the electronic device can, according to the correspondence between fingerprint 3 and its personal vocabulary, log in to Alice's personal vocabulary (i.e., lexicon 2) and call lexicon 2 to determine the candidate words associated with subsequently received input events.
  • In another embodiment, if the electronic device has a fingerprint unlock function, the electronic device may collect the user's fingerprint when the user unlocks the device with a fingerprint and, according to the input method shown in steps 402-403 above, use the personal vocabulary corresponding to the fingerprint collected at unlock time as the target vocabulary. In this way, when an APP invokes the API of the input method to display the relevant graphical user interface of the input method application, the input method application can use the already determined target vocabulary to provide candidate words, shortening the time it takes the user to log in to the personal vocabulary and improving input efficiency.
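Preloading at unlock time amounts to caching the lexicon resolved for the unlock fingerprint and consuming it when the input method UI first appears. A sketch with hypothetical names:

```java
final class UnlockPreload {
    private String preloadedLexiconId; // resolved when the user unlocked with a fingerprint

    /** Called by the lock screen path after a successful fingerprint unlock. */
    void onFingerprintUnlock(String lexiconIdForUnlockFingerprint) {
        this.preloadedLexiconId = lexiconIdForUnlockFingerprint;
    }

    /** Called when the input method UI is shown; falls back to the default lexicon. */
    String lexiconToUse() {
        return preloadedLexiconId != null ? preloadedLexiconId : "default-lexicon";
    }
}
```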
  • Of course, if, while the input method application is running, the fingerprint in the current input event is detected to differ from the fingerprint used at unlock time, the input method application can automatically switch to the personal vocabulary corresponding to the fingerprint in the current input event, so that the user performing the current input event can complete the input with a personal vocabulary that matches his or her own input habits.
  • In addition, the above embodiments use the correspondence between different users' fingerprints and their personal vocabularies only as an example of how the input method application calls the corresponding personal vocabulary to complete the input according to the user's fingerprint. It can be understood that, in the input method application scenario, a correspondence may also be established between the user's personalized setting parameters (for example, the input method skin, the display position of the virtual keyboard, shortcut key settings, and the like) and the user's fingerprint.
  • For example, the registered fingerprint of user A corresponds to a nine-grid virtual keyboard, and the registered fingerprint of user B corresponds to a full-keyboard virtual keyboard. Then, when the electronic device collects a fingerprint that matches user A's registered fingerprint, it can display the nine-grid virtual keyboard in the display interface; when it collects a fingerprint that matches user B's registered fingerprint, it can display the full-keyboard virtual keyboard.
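The personalized-settings variant reuses the same correspondence idea, this time mapping a fingerprint to keyboard layout parameters. The enum values and fingerprint identifiers below are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

final class KeyboardPersonalizer {
    enum Layout { NINE_GRID, FULL_KEYBOARD }

    private final Map<String, Layout> fingerprintToLayout = new HashMap<>();

    KeyboardPersonalizer() {
        fingerprintToLayout.put("fingerprint-A", Layout.NINE_GRID);     // user A's preference
        fingerprintToLayout.put("fingerprint-B", Layout.FULL_KEYBOARD); // user B's preference
    }

    /** Layout to render when this fingerprint triggers the input method UI. */
    Layout layoutFor(String fingerprintId) {
        return fingerprintToLayout.getOrDefault(fingerprintId, Layout.FULL_KEYBOARD);
    }
}
```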
  • In summary, when the user performs an input event with the input method application, the fingerprint formed on the touch screen can be used to set the relevant parameters of the input method application to the personalized setting parameters corresponding to that fingerprint, so that while running, the input method application can better provide an input environment, candidate words, and the like that match the current user's input habits, thereby improving the user's input efficiency and greatly improving the user experience.
  • It can be understood that, to implement the above functions, the above electronic device and the like include hardware structures and/or software modules corresponding to each function.
  • In combination with the units and algorithm steps of the examples described in the embodiments disclosed herein, the embodiments of this application can be implemented by hardware or by a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of the embodiments of this application.
  • the embodiment of the present application may perform the division of the function modules on the electronic device or the like according to the above method example.
  • each function module may be divided according to each function, or two or more functions may be integrated into one processing module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules. It should be noted that the division of the module in the embodiment of the present application is schematic, and is only a logical function division, and the actual implementation may have another division manner.
  • In the case of dividing function modules corresponding to each function, FIG. 25 is a schematic diagram of a possible structure of the electronic device involved in the foregoing embodiments; the electronic device includes an obtaining unit 1101, a determining unit 1102, an executing unit 1103, an establishing unit 1104, and a merging unit 1105.
  • FIG. 26 shows a possible structural diagram of the electronic device involved in the above embodiment.
  • the electronic device includes a processing module 1302 and a communication module 1303.
  • the processing module 1302 is configured to control and manage the actions of the electronic device.
  • the communication module 1303 is configured to support communication between the UE and other network entities.
  • the electronic device may further include a storage module 1301 for storing program codes and data of the electronic device.
  • The processing module 1302 may be a processor or a controller, for example, a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It can implement or execute the various illustrative logical blocks, modules, and circuits described in connection with this disclosure.
  • the processor may also be a combination of computing functions, for example, including one or more microprocessor combinations, a combination of a DSP and a microprocessor, and the like.
  • the communication module 1303 may be a transceiver, a transceiver circuit, a communication interface, or the like.
  • the storage module 1301 may be a memory.
  • When the processing module 1302 is a processor, the communication module 1303 is an RF transceiver circuit, and the storage module 1301 is a memory, the electronic device provided by the embodiments of this application may be the electronic device shown in FIG. 4.
  • Some other embodiments of this application provide a non-transitory computer-readable storage medium that stores one or more programs, the one or more programs including instructions. When an electronic device having a display detects that its touch surface has received a touch event, the instructions are executed, so that the electronic device performs the input method provided in the above embodiments.
  • For the steps of the above input method, refer to the related descriptions of steps 401-407 and/or steps 501-505; details are not described herein again.
  • the computer program product includes one or more computer instructions.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
  • The computer instructions can be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions can be transmitted from a website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, or digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, or microwave).
  • The computer-readable storage medium can be any available medium accessible to a computer, or a data storage device, such as a server or data center, that integrates one or more available media.
  • the usable medium may be a magnetic medium (eg, a floppy disk, a hard disk, a magnetic tape), an optical medium (eg, a DVD), or a semiconductor medium (such as a solid state disk (SSD)).

Abstract

An input method, implemented in an electronic device having a fingerprint collection device, the method comprising: when an input method application is running, the electronic device acquires the user's fingerprint on the touch screen; when the fingerprint is a pre-stored registered fingerprint, the electronic device determines a target vocabulary associated with the fingerprint; and the electronic device uses the target vocabulary to provide at least one candidate word corresponding to the current input event.

Description

输入方法及电子设备 技术领域
本申请实施例涉及通信技术领域,尤其涉及输入方法及电子设备。
背景技术
目前,用户在使用输入法应用输入信息时,输入法应用可以根据用户的输入习惯为用户创建个人词库。例如,用户A经常输入“证件”这一词语,那么,输入法应用可以将“证件”这一词语作为高频词汇存储在用户A的个人词库中。后续,当用户A在输入法应用中用密码登录自己的账号后,输入法应用会为用户A调出其对应的个人词库,这时,如果用户再次输入“证”字,输入法应用可将“件”字作为第一候选词汇提示给用户。
那么,当不同的用户使用相同的输入法应用时,例如,不同用户使用同一台手机或电脑上的同一输入法应用输入信息时,需要先手动登录自己的账号,使得输入法应用调出与当前操作用户对应的个人词库,进而实现上述词汇联想等符合该用户输入习惯的输入功能。可以看出,上述输入方法的实现过程较为繁琐,用户每次使用输入法应用时均需要手动登录自己的账号,才能使用符合自身输入习惯的个人词库完成输入,使得输入效率较低。
发明内容
本申请实施例提供的输入方法及装置,在多用户共用同一输入法应用时,可简化用户登录输入法应用的过程,提高输入效率以及人机交互的智能性。
为达到上述目的,本申请的实施例采用如下技术方案:
第一方面,本申请实施例提供一种输入方法,在具有指纹采集器件的电子设备中实现,该方法包括:当输入法应用运行时,电子设备获取用户触摸屏上的指纹;当该指纹为预先存储的注册指纹时,电子设备确定与该指纹关联的目标词库;电子设备使用该目标词库,提供与当前的输入事件对应的至少一个候选词汇。
也就是说,当用户使用输入法应用时,电子设备可以通过用户在输入界面内输入信息时的指纹对用户进行鉴权,从而根据鉴权结果调出与用户对应的个人词库,后续输入过程中电子设备可使用该个人词库为用户确定本次输入事件的候选词汇。这样一来,不同用户在同一电子设备上使用输入法应用时,电子设备可以在用户并不感知的情况下,根据用户的指纹自动的为不同的用户调用与其输入习惯相符的个人词库,以完成相关候选词汇的提示,从而提高输入法的输入效率,同时降低了由于不同用户的词库混淆而导致用户隐私泄露的几率,提升了人机交互的智能性。
在一种可能的设计方法中,当该指纹的个数为一个时,电子设备确定与该指纹关联的目标词库,包括:电子设备根据不同用户的个人词库与不同用 户的注册指纹之间的对应关系,将与该指纹对应的个人词库确定为目标词库。
在一种可能的设计方法中,当该指纹为非注册指纹时,电子设备可以将当前操作输入法应用的用户作为电子设备中没有注册过的新用户,那么电子设备可建立与该指纹对应的临时个人词库。
在一种可能的设计方法中,在电子设备建立与该指纹对应的临时个人词库之后,还包括:当该临时个人词库与第一用户的个人词库之间的相似度大于阈值时,电子设备将该临时个人词库添加至第一用户的个人词库;电子设备在该对应关系中建立该指纹与第一用户的个人词库之间的对应关系。
这样,通过判断与不同指纹对应词库之间的相似度,可以识别出与同一用户对应的不同词库,那么,将同一用户对应的不同词库合并后,可以丰富和优化该用户的个人词库,可提高后续用户使用输入法应用的准确率。
在一种可能的设计方法中,该指纹的个数为N个,N为大于1的整数;其中,当该指纹为预先存储的注册指纹时,电子设备确定与该指纹关联的目标词库,包括:当该N个指纹中的至少一个为注册指纹时,电子设备根据不同用户的个人词库与不同用户的注册指纹之间的对应关系,将与该注册指纹对应的个人词库确定为该目标词库。
在一种可能的设计方法中,当该N个指纹中包括X个注册指纹和Y个未注册指纹时,X+Y=N,X≥1,Y≥1,该方法还包括:电子设备在该对应关系中建立该Y个未注册指纹与该目标词库之间的对应关系,以便于后续电子设备能够根据更新后的对应关系,准确的调用相应的目标词库。
在一种可能的设计方法中,该N个指纹中包括Z个注册指纹,1<Z≤N,该方法还包括:电子设备根据该对应关系确定该Z个注册指纹中每个注册指纹对应的个人词库是否相同;当该Z个注册指纹中每个注册指纹对应的个人词库之间存在不相同的个人词库时,电子设备将该Z个注册指纹对应的个人词库合并为一个个人词库。
在一种可能的设计方法中,当该N个指纹均为未注册指纹时,该方法还包括:电子设备建立与该N个指纹对应的临时个人词库。
在一种可能的设计方法中,在电子设备建立与该N个指纹对应的临时个人词库之后,还包括:当该临时个人词库与第二用户的个人词库之间的相似度大于阈值时,电子设备将该临时个人词库添加至第二用户的个人词库;电子设备在该对应关系中建立该指纹与第二用户的个人词库之间的对应关系。
在一种可能的设计方法中,该触摸屏包括显示该输入法应用的虚拟键盘的区域;其中,电子设备获取用户在触摸屏上的指纹,包括:电子设备获取用户在该键盘的区域执行输入事件时产生的指纹。
第二方面,本申请实施例提供一种电子设备,包括:获取单元,用于:当输入法应用运行时,获取用户触摸屏上的指纹;确定单元,用于:当该指纹为预先存储的注册指纹时,确定与该指纹关联的目标词库;执行单元,用于:使用该目标词库,提供与当前的输入事件对应的至少一个候选词汇。
在一种可能的设计方法中,该确定单元,具体用于:根据不同用户的个 人词库与不同用户的注册指纹之间的对应关系,将与该指纹对应的个人词库确定为目标词库。
在一种可能的设计方法中,电子设备还包括:建立单元,用于:当该指纹为非注册指纹时,建立与该指纹对应的临时个人词库。
在一种可能的设计方法中,电子设备还包括合并单元,该合并单元,用于:当该临时个人词库与第一用户的个人词库之间的相似度大于阈值时,将该临时个人词库添加至第一用户的个人词库;该建立单元,还用于:在该对应关系中建立该指纹与第一用户的个人词库之间的对应关系。
在一种可能的设计方法中,该指纹的个数为N个,N为大于1的整数;该确定单元,具体用于:当该N个指纹中的至少一个为注册指纹时,根据不同用户的个人词库与不同用户的注册指纹之间的对应关系,将与该注册指纹对应的个人词库确定为该目标词库。
在一种可能的设计方法中,该N个指纹中包括X个注册指纹和Y个未注册指纹时,X+Y=N,X≥1,Y≥1,电子设备还包括:建立单元,用于:在该对应关系中建立该Y个未注册指纹与该目标词库之间的对应关系。
在一种可能的设计方法中,电子设备还包括合并单元,该确定单元,还用于:根据该对应关系确定该Z个注册指纹中每个注册指纹对应的个人词库是否相同;该合并单元,用于:当该Z个注册指纹中每个注册指纹对应的个人词库之间存在不相同的个人词库时,将该Z个注册指纹对应的个人词库合并为一个个人词库。
在一种可能的设计方法中,该建立单元,还用于:当该N个指纹均为未注册指纹时,建立与该N个指纹对应的临时个人词库。
在一种可能的设计方法中,该合并单元,还用于:当该临时个人词库与第二用户的个人词库之间的相似度大于阈值时,将该临时个人词库添加至第二用户的个人词库;该建立单元,还用于:在该对应关系中建立该指纹与第二用户的个人词库之间的对应关系。
在一种可能的设计方法中,该触摸屏包括显示该输入法应用的虚拟键盘的区域,该获取单元,具体用于:获取用户在该键盘的区域执行输入事件时产生的指纹。
第三方面,本申请的实施例提供一种电子设备,包括:处理器、存储器、总线和通信接口;该存储器用于存储计算机执行指令,该处理器与该存储器通过该总线连接,当电子设备运行时,该处理器执行该存储器存储的该计算机执行指令,以使电子设备执行上述任一项触摸控制方法。
第四方面,本申请实施例又提供了一种计算机可读存储介质,所述计算机可读存储介质中存储有指令,当其在计算机上运行时,使得计算机执行上述各方面所述的方法。
第五方面,本申请实施例又提供了一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机执行上述各方面所述的方法。
应当理解的是,本申请中对技术特征、技术方案、有益效果或类似语言 的描述并不是暗示在任意的单个实施例中可以实现所有的特点和优点。相反,可以理解的是对于特征或有益效果的描述意味着在至少一个实施例中包括特定的技术特征、技术方案或有益效果。因此,本说明书中对于技术特征、技术方案或有益效果的描述并不一定是指相同的实施例。进而,还可以任何适当的方式组合本实施例中所描述的技术特征、技术方案和有益效果。本领域技术人员将会理解,无需特定实施例的一个或多个特定的技术特征、技术方案或有益效果即可实现实施例。在其他实施例中,还可在没有体现所有实施例的特定实施例中识别出额外的技术特征和有益效果。
附图说明
图1为现有技术中输入法登录界面的示意图;
图2为本申请实施例提供的一种电子设备的结构示意图一;
图3为本申请实施例提供的电子设备中触摸屏的结果示意图;
图4为本申请实施例提供的输入法架构示意图一;
图5为本申请实施例提供的输入法架构示意图二;
图6为本申请实施例提供的一种输入方法的流程示意图一;
图7为本申请实施例提供的一种输入方法的应用场景示意图一;
图8为本申请实施例提供的一种输入方法的应用场景示意图二;
图9为本申请实施例提供的一种输入方法的应用场景示意图三;
图10为本申请实施例提供的一种输入方法的应用场景示意图四;
图11为本申请实施例提供的一种输入方法的应用场景示意图五;
图12为本申请实施例提供的一种输入方法的应用场景示意图六;
图13为本申请实施例提供的一种输入方法的应用场景示意图七;
图14为本申请实施例提供的一种输入方法的应用场景示意图八;
图15为本申请实施例提供的一种输入方法的流程示意图二;
图16为本申请实施例提供的一种输入方法的应用场景示意图九;
图17为本申请实施例提供的一种输入方法的应用场景示意图十;
图18为本申请实施例提供的一种输入方法的应用场景示意图十一;
图19为本申请实施例提供的一种输入方法的应用场景示意图十二;
图20为本申请实施例提供的一种输入方法的应用场景示意图十三;
图21为本申请实施例提供的一种输入方法的应用场景示意图十四;
图22为本申请实施例提供的一种输入方法的应用场景示意图十五;
图23为本申请实施例提供的一种输入方法的应用场景示意图十六;
图24为本申请实施例提供的一种输入方法的应用场景示意图十七;
图25为本申请实施例提供的一种电子设备的结构示意图二;
图26为本申请实施例提供的一种电子设备的结构示意图三。
具体实施方式
以下,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征。在本申请 实施例的描述中,除非另有说明,“多个”的含义是两个或两个以上。
目前,用户在电子设备内的各类应用程序(以下简称APP)中输入信息时,一般需要安装相应的输入法APP,或者,使用电子设备自带的输入法应用实现文字(中文或英文等)、数字或符号等信息的输入。并且,输入法应用可以根据用户在输入时的输入习惯建立个人词库,该个人词库中可能包括用户自定义的词汇、高频词汇、收藏的表情等,使得用户在使用该输入法应用时的输入效率更高。
然而,如图1所示,当用户想使用自己的个人词库进行输入时,通常需要输入密码登陆自己在某个输入法应用的账号,使得输入法应用可以将与其账号关联的词库(即该用户的个人词库)调出,进而,使用该用户的个人词库确定其需要输入的候选词汇。
在本申请一些是实施例中,上述候选词汇具体可以包括汉语中的字、词、句、短语、数字、字母、符号以及表情中的至少一项,也可以包括英文或其他语言中的字、词、句、短语、数字、字母、符号以及表情中的至少一项。
那么,当不同用户使用同一电子设备的同一输入法应用进行输入时,用户需要首先登录到自己的账号后,电子设备才能使用其个人词库提供候选词汇的选项,这无疑降低了输入法的输入效率。
否则,输入法应用将调用预先设置的默认词库完成用户触发的输入事件。而该默认词库可能已经根据上一位用户使用时的输入习惯被修改,那么,本次用户输入时无法及时获取到符合其自身输入习惯的输入选项,使得输入效率降低,甚至,还可能泄露上一位用户的隐私。例如,用户A在未登陆其账号的状态下多次输入了用户A的电话号码“123456”,输入法应用基于这一输入行为将用户A的电话号码添加至默认词库中,后续,当用户B也在未登陆其账号的状态下输入“12”时,输入法应用可能将用户A的电话号码“123456”作为候选词汇提示给用户B,此时,用户A的电话号码将被泄露。
对此,本申请的实施例提供一种输入方法,可应用于手机、可穿戴设备、AR(增强现实)\VR(虚拟现实)设备、平板电脑、笔记本电脑、UMPC(超级移动个人计算机)、上网本、PDA(个人数字助理)等任何具有指纹验证功能的电子设备,当然,在以下实施例中,对该电子设备的具体形式不作任何限制。
具体的,可以在与输入法应用的输入界面(例如,键盘区域)对应的触摸屏上集成指纹采集器件。这样,当电子设备启动输入法应用并接收到在输入界面上的一个输入事件时,电子设备可以通过集成的指纹采集器件获取到本次输入事件中的指纹。然后,电子设备对获取的指纹进行指纹验证。当上述获取到的指纹与用户A的注册指纹的相似度大于预设阈值时,说明本次输入事件所对应的用户是用户A,那么,电子设备可以登录用户A在输入法应用中的账号,从而调用用户A的个人词库,并确定本次输入事件对应的候选词汇。
也就是说,当用户使用输入法应用时,电子设备可以通过用户在输入界面内输入信息时的指纹对用户进行鉴权,从而根据鉴权结果调出与用户对应 的个人词库,后续输入过程中电子设备可使用该个人词库确定本次输入事件的候选词汇。
这样一来,不同用户在同一电子设备上使用输入法应用时,电子设备可以在用户并不感知的情况下,根据用户的指纹自动的为不同的用户调用与其输入习惯相符的个人词库,以完成相关候选词汇的提示,从而提高输入法的输入效率,同时降低了由于不同用户的词库混淆而导致用户隐私泄露的几率,提升了人机交互的智能性。
需要说明的是,上述输入事件可以是指用户通过输入法应用,在输入界面内的任意输入动作,例如,短按键盘内的字母,长按输入法应用的功能按键,滑动输入法应用提供的候选词汇等。
如图2所示,本申请实施例中的电子设备可以为手机100。下面以手机100为例对实施例进行具体说明。应该理解的是,图示手机100仅是电子设备的一个范例,并且手机100可以具有比图中所示出的更多的或者更少的部件,可以组合两个或更多的部件,或者可以具有不同的部件配置。
如图2所示,手机100具体可以包括:处理器101、射频(RF)电路102、存储器103、触摸屏104、蓝牙装置105、一个或多个传感器106、Wi-Fi装置107、定位装置108、音频电路109、外设接口110以及电源系统111等部件。这些部件可通过一根或多根通信总线或信号线(图2中未示出)进行通信。本领域技术人员可以理解,图2中示出的硬件结构并不构成对手机的限定,手机100可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。
下面结合图2对手机100的各个部件进行具体的介绍:
处理器101是手机100的控制中心,利用各种接口和线路连接手机100的各个部分,通过运行或执行存储在存储器103内的应用程序,以及调用存储在存储器103内的数据,执行手机100的各种功能和处理数据。在一些实施例中,处理器101可包括一个或多个处理单元;举例来说,处理器101可以是华为技术有限公司制造的麒麟960芯片。在本申请一些实施例中,上述处理器101还可以包括指纹验证芯片,用于对采集到的指纹进行验证。
射频电路102可用于在收发信息或通话过程中,无线信号的接收和发送。特别地,射频电路102可以将基站的下行数据接收后,给处理器101处理;另外,将涉及上行的数据发送给基站。通常,射频电路包括但不限于天线、至少一个放大器、收发信机、耦合器、低噪声放大器、双工器等。此外,射频电路102还可以通过无线通信和其他设备通信。所述无线通信可以使用任一通信标准或协议,包括但不限于全球移动通讯系统、通用分组无线服务、码分多址、宽带码分多址、长期演进、电子邮件、短消息服务等。
存储器103用于存储应用程序以及数据,处理器101通过运行存储在存储器103的应用程序以及数据,执行手机100的各种功能以及数据处理。存储器103主要包括存储程序区以及存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序(比如声音播放功能、图像播放功能 等);存储数据区可以存储根据使用手机100时所创建的数据(比如音频数据、电话本等)。此外,存储器103可以包括高速随机存取存储器,还可以包括非易失存储器,例如磁盘存储器件、闪存器件或其他易失性固态存储器件等。存储器103可以存储各种操作系统,例如,苹果公司所开发的
Figure PCTCN2017084602-appb-000001
操作系统,谷歌公司所开发的
Figure PCTCN2017084602-appb-000002
操作系统等。上述存储器103可以是独立的,通过上述通信总线与处理器101相连接;存储器103也可以和处理器101集成在一起。
触摸屏104可以包括触控板104-1和显示器104-2。其中,触控板104-1可采集手机100的用户在其上或附近的触摸事件(比如用户使用手指、触控笔等任何适合的物体在触控板104-1上或在触控板104-1附近的操作),并将采集到的触摸信息发送给其他器件例如处理器101。其中,用户在触控板104-1附近的触摸事件可以称之为悬浮触控;悬浮触控可以是指,用户无需为了选择、移动或拖动目标(例如图标等)而直接接触触控板,而只需用户位于电子设备附近以便执行所想要的功能。在悬浮触控的应用场景下,术语“触摸”、“接触”等不会暗示用于直接接触触摸屏,而是附近或接近的接触。能够进行悬浮触控的触控板104-1可以采用电容式、红外光感以及超声波等实现。此外,可以采用电阻式、电容式、红外线以及表面声波等多种类型来实现触控板104-1。显示器(也称为显示屏)104-2可用于显示由用户输入的信息或提供给用户的信息以及手机100的各种菜单。可以采用液晶显示器、有机发光二极管等形式来配置显示器104-2。触控板104-1可以覆盖在显示器104-2之上,当触控板104-1检测到在其上或附近的触摸事件后,传送给处理器101以确定触摸事件的类型,随后处理器101可以根据触摸事件的类型在显示器104-2上提供相应的视觉输出。虽然在图2中,触控板104-1与显示屏104-2是作为两个独立的部件来实现手机100的输入和输出功能,但是在某些实施例中,可以将触控板104-1与显示屏104-2集成而实现手机100的输入和输出功能。可以理解的是,触摸屏104是由多层的材料堆叠而成,本申请实施例中只展示出了触控板(层)和显示屏(层),其他层在本申请实施例中不予记载。另外,在本申请其他一些实施例中,触控板104-1可以覆盖在显示器104-2之上,并且触控板104-1的尺寸大于显示屏104-2的尺寸,使得显示屏104-2全部覆盖在触控板104-1下面,或者,上述触控板104-1可以以全面板的形式配置在手机100的正面,也即用户在手机100正面的触摸均能被手机感知,这样就可以实现手机正面的全触控体验。在其他一些实施例中,触控板104-1以全面板的形式配置在手机100的正面,显示屏104-2也可以以全面板的形式配置在手机100的正面,这样在手机的正面就能够实现无边框的结构。
在本申请实施例中,手机100还可以具有指纹识别功能。例如,可以在触摸屏104中配置指纹采集器件112来实现指纹识别功能,即指纹采集器件112可以与触摸屏104集成在一起来实现手机100的指纹识别功能。在这种情况下,该指纹采集器件112配置在触摸屏104中,可以是触摸屏104的一部分,也可以以其他方式配置在触摸屏104中。另外,该指纹采集器件112还 可以被实现为全面板指纹采集器件。因此,可以把触摸屏104看成是任何位置都可以进行指纹识别的一个面板。该指纹采集器件112可以将采集到的指纹发送给处理器101,以便处理器101对该指纹进行处理(例如指纹验证等)。本申请实施例中的指纹采集器件112的主要部件是指纹传感器,该指纹传感器可以采用任何类型的感测技术,包括但不限于光学式、电容式、压电式或超声波传感技术等。
在本申请的一个实施例中,如图3中的(a)所示,为触摸屏104的一种可能的设计方式,上述纹采集器件112可以为电容式采集器件112-1。此时,触摸屏104具体可以包括电容式指纹采集器件112-1、触控板104-1以及显示器104-2,该显示器104-2位于触摸屏104中的最下层,触控板104-1位于触摸屏104中的最上层,所述电容式采集器件112-1位于触控板104-1与显示器104-2之间。
具体实现中,可以根据指纹的脊和谷与电容式采集器件112-1的电容感应颗粒形成的电容值大小不同,分别判断指纹的脊和谷的位置,从而获取指纹。进一步的,可以预先对屏幕中每个像素点上的电容感应颗粒进行充电,使电容感应颗粒达到预设阈值,当用户手指接触到触摸屏104时,由于电容值与距离之间存在预设关系,因此,会在脊和谷的位置形成不同的电容值,然后通过放电电流进行放电,因为脊和谷所对应的电容值不同,脊和谷对应的像素点的放电速度也不同,脊对应的像素点放电慢,谷对应的像素点放电速度快。因此,通过脊和谷对应的像素点的充电与放电,可以获取用户的指纹。
在本申请其他一些实施例中,如图3中的(b)所示,为触摸屏104的另一种可能的设计方式,上述指纹采集器件112还可以为射频式指纹采集器件112-2。此时,触摸屏104可以包括射频式指纹采集器件112-2、触控板104-1以及显示器104-2,所述射频式指纹采集器件112-2位于所述触摸屏104中的最下层,所述触控板104-1位于触摸屏104中的最上层,所述显示器104-2位于触控板104-1与射频式指纹采集器件112-2之间。
具体实现中,当光线照射到压有指纹的触控板104-1的表面时,射频式指纹采集器件112-2可通过CCD(电荷耦合器件)吸收反射光从而获取指纹。进一步的,由于触控板104-1上的指纹的脊和谷的深度不同以及皮肤与触控板104-1之间的油脂和水分,光线经过触控板104-1照射到指纹的谷的位置发生全反射,而照射到指纹的脊的位置不能发生全反射,一部分光线被触控板104-1吸收或者漫反射到其他地方,从而在CCD上形成指纹。
在本申请一些实施例中,为了降低在触摸屏104内进行指纹识别时的功耗,手机100可以在特定条件下接通或关闭指纹采集器件的电源。例如,手机100可以在检测到用户在触摸屏104上的特定位置的触摸事件时,接通指纹采集器件的电源,以便手机100进行指纹识别,而在手机100没有检测到用户对触摸屏104的特定位置的触摸事件时,不给指纹采集器件上电,即手机100关闭了指纹识别功能。当然,手机100还可以通过在设置菜单中显示 与指纹识别相关的开关控件,以便用户手动启动或关闭指纹识别功能。在本申请另外一些实施例中,手机100还可以根据特定的条件来启动或关闭指纹识别功能。例如,可以根据地理位置的不同,来启动或关闭指纹识别功能等。
手机100还可以包括蓝牙装置105,用于实现手机100与其他短距离的电子设备(例如手机、智能手表等)之间的数据交换。本申请实施例中的蓝牙装置可以是集成电路或者蓝牙芯片等。
手机100还可以包括至少一种传感器106,比如光传感器、运动传感器以及其他传感器。具体地,光传感器可包括环境光传感器及接近传感器,其中,环境光传感器可根据环境光线的明暗来调节触摸屏104的显示器的亮度,接近传感器可在手机100移动到耳边时,关闭显示器的电源。作为运动传感器的一种,加速计传感器可检测各个方向上(一般为三轴)加速度的大小,静止时可检测出重力的大小及方向,可用于识别手机姿态的应用(比如横竖屏切换、相关游戏、磁力计姿态校准)、振动识别相关功能(比如计步器、敲击)等;至于手机100还可配置的陀螺仪、气压计、湿度计、温度计、红外线传感器等其他传感器,在此不再赘述。
Wi-Fi装置107,用于为手机100提供遵循Wi-Fi相关标准协议的网络接入,手机100可以通过Wi-Fi装置107接入到Wi-Fi接入点,进而帮助用户收发电子邮件、浏览网页和访问流媒体等,它为用户提供了无线的宽带互联网访问。在其他一些实施例中,该Wi-Fi装置107也可以作为Wi-Fi无线接入点,可以为其他电子设备提供Wi-Fi网络接入。
定位装置108,用于为手机100提供地理位置。可以理解的是,该定位装置108具体可以是全球定位系统(GPS)或北斗卫星导航系统、俄罗斯GLONASS等定位系统的接收器。定位装置108在接收到上述定位系统发送的地理位置后,将该信息发送给处理器101进行处理,或者发送给存储器103进行保存。在另外的一些实施例中,该定位装置108还可以是辅助全球卫星定位系统(AGPS)的接收器,AGPS系统通过作为辅助服务器来协助定位装置108完成测距和定位服务,在这种情况下,辅助定位服务器通过无线通信网络与电子设备例如手机100的定位装置108(即GPS接收器)通信而提供定位协助。在另外的一些实施例中,该定位装置108也可以是基于Wi-Fi接入点的定位技术。由于每一个Wi-Fi接入点都有一个全球唯一的MAC地址,电子设备在开启Wi-Fi的情况下即可扫描并收集周围的Wi-Fi接入点的广播信号,因此可以获取到Wi-Fi接入点广播出来的MAC地址;电子设备将这些能够标示Wi-Fi接入点的数据(例如MAC地址)通过无线通信网络发送给位置服务器,由位置服务器检索出每一个Wi-Fi接入点的地理位置,并结合Wi-Fi广播信号的强弱程度,计算出该电子设备的地理位置并发送到该电子设备的定位装置108中。
音频电路109、扬声器113、麦克风114可提供用户与手机100之间的音频接口。音频电路109可将接收到的音频数据转换后的电信号,传输到扬声器113,由扬声器113转换为声音信号输出;另一方面,麦克风114将收集的 声音信号转换为电信号,由音频电路109接收后转换为音频数据,再将音频数据输出至RF电路102以发送给比如另一手机,或者将音频数据输出至存储器103以便进一步处理。
外设接口110,用于为外部的输入/输出设备(例如键盘、鼠标、外接显示器、外部存储器、用户识别模块卡等)提供各种接口。例如通过通用串行总线(USB)接口与鼠标连接,通过用户识别模块卡卡槽上的金属触点与电信运营商提供的用户识别模块卡(SIM)卡进行连接。外设接口110可以被用来将上述外部的输入/输出外围设备耦接到处理器101和存储器103。
手机100还可以包括给各个部件供电的电源装置111(比如电池和电源管理芯片),电池可以通过电源管理芯片与处理器101逻辑相连,从而通过电源装置111实现管理充电、放电、以及功耗管理等功能。
尽管图2未示出,手机100还可以包括摄像头(前置摄像头和/或后置摄像头)、闪光灯、微型投影装置、近场通信(NFC)装置等,在此不再赘述。
在本申请实施例中,手机100内的存储器103中存储有输入法应用,用户在使用输入法向需要使用输入法应用的APP输入信息时,通常涉及用户、输入法和系统三个层次。以Android操作系统为例,如图4所示,Android平台输入法框架一般由用户11,输入法应用12,APP 13三部分组成。
在一种可能的设计方法中,如图4所示,如果APP 13在运行的过程中检测到其编辑窗13a口中接收到用户的点击操作,则APP 13调用输入法接口14,此时,输入法应用12可在触摸屏上显示虚拟键盘12a等输入法相关的图形用户界面。那么,电子设备在该虚拟键盘12a上检测到输入事件(例如,点击虚拟键盘12a中的按键)后,处理器101可以获取到该输入事件,并将该输入事件所对应的字母或符号等数据发送给输入法控制主程序12b。输入法控制主程序12b可以调用词库(例如,用户的个人词库)确定与该输入事件对应的候选词汇,并在候选窗口12c内显示该候选词汇。当检测到用户从候选词汇中选定目标词汇后,输入法控制主程序12b将该目标词汇通过输入法接口14送给APP 13的输入法管理器13b(例如,inputmethod manager类),这样,输入法管理器13b便可以将用户选中的目标词汇输入至当前的编辑窗口13a中,完成本次输入事件。
在另一种可能的设计方法中,在输入法控制主程序12b调用词库之前,如图5所示,当用户在虚拟键盘12a上触发一个输入事件后,输入法应用12可以先调用指纹采集器件112采集上述输入事件中的指纹,进而,将该指纹发送至指纹识别模块15,由指纹识别模块15根据该指纹识别当前用户的身份。如图5所示,上述输入法控制主程序12b还可以根据不同用户与不同个人词库之间的对应关系,调出与当前用户对应的个人词库(即目标词库)。后续,与如图4所示的输入流程类似,输入法控制主程序12b可以使用该目标词库确定本次输入事件中的候选词汇。
以下,将结合具体实施例详细阐述本申请实施例提供的一种输入方法,如图6所示,该方法包括:
401、当电子设备运行输入法应用时,电子设备获取用户在触摸屏上的指纹。
示例性的,如图7所示,当前需要输入信息的APP为短信应用,在图7所示的短信界面中显示有一个编辑窗口13a,可用于显示用户正在编辑的短信。当电子设备检测到用户的手指点击编辑窗口13a时,短信APP调用输入法应用的API(应用程序编程接口),触发该输入法将虚拟键盘12a显示在触摸屏上,使得输入法应用被调出至前台运行。
输入法应用在前台运行时,电子设备可以检测到用户手指在虚拟键盘12a的相关虚拟按键上的点击操作,此时,由于触摸屏上集成有指纹采集器件(图7中未示出),那么,电子设备可以通过该指纹采集器件采集用户在触摸屏上形成的指纹20。在本申请实施例中,例如,上述指纹采集器件可以被设置在上述虚拟键盘12a所对应的触摸屏上的某一区域。这样,电子设备可以根据采集到的指纹20确定用户的身份,从而在用户无感知的情况下调用与用户关联的个人词库,简化了输入法应用的登录过程,提高了输入效率。
需要说明的是,本申请实施例的附图中仅将采集到的指纹20示意性的显示在电子设备的触摸屏上,可以理解的是,在触摸屏上采集用户指纹时也可以不显示采集到的指纹的相关图案。
当然,上述指纹采集器件也可以被设置在上述虚拟键盘12a之外的某一区域,这样,当用户在虚拟键盘12a之外的区域触摸显示屏时,电子设备也可以获取到用户的指纹12,本申请实施例对获取用户指纹的位置不作任何限制。
又或者,当输入法应用在后台运行时,电子设备也可以通过指纹采集器件获取用户在触摸屏上的指纹。此时,电子设备也可以根据采集到的指纹确定与当前用户关联的个人词库,那么,一旦电子设备将输入法应用调出至前台运行时,可以及时的使用确定出的个人词库,从而提高了输入效率。
在本申请其他一些实施例中,为了降低在触摸屏内进行指纹识别时的功耗,电子设备可以设置在特定条件下接通指纹采集器件的电源;或者,在特定条件下关闭指纹采集器件的电源或降低指纹采集器件的扫描频率。例如,当输入法应用的相关图形用户界面(例如,上述虚拟键盘12a)显示在触摸屏上之后,输入法应用可进一步通过触摸屏上集成的指纹采集器件采集用户触摸在虚拟键盘12a上的指纹。
另外,图7中示例性的以一个指纹为例进行说明,但可以理解的是,上述步骤401中采集到的指纹中可以包括一个或多个。
402、电子设备基于上述采集到的指纹来进行指纹验证。
例如,可以将上述采集到的指纹20与预先存储的注册指纹对比,确定上述采集到的指纹20是否为注册指纹。还例如,电子设备可以将采集到的指纹20通过无线网络发送到网络侧的服务器,由服务器进行指纹验证,并将验证结果通过该无线网络返回给该电子设备。
电子设备的存储器103内预先存储有一个或多个注册指纹,例如,该注 册指纹可以用于解锁电子设备屏幕的指纹,也可以为用于登录该输入法应用的指纹。
在步骤402中,电子设备将上述采集到的指纹20与注册指纹对比,即对上述采集到的指纹20进行指纹识别,当上述采集到的指纹20与某一个注册指纹的相似度大于预设阈值时,可确认该指纹20为一个注册指纹,此时,电子设备可继续执行下述步骤403-404:
403、当上述采集到的指纹为注册指纹时,电子设备确定与该注册指纹关联的目标词库。
404、电子设备使用上述目标词库确定与输入事件对应的候选词汇,并显示该候选词汇。
具体的,电子设备确定出不同用户的注册指纹与其个人词库之间的对应关系后,可以根据该对应关系,将与上述采集到的指纹对应的个人词库作为目标词库。示例性的,如表1所示,用户A(Tom)在输入法应用中注册有指纹1和指纹2,并且,Tom的个人词库为词库1;类似的,用户B(Alice)在输入法应用中注册有指纹3和指纹4,并且,Alice的个人词库为词库2。可以理解的是,表1中的指纹1-指纹4以及后续表2中涉及的不同指纹的编号仅用于区分不同用户的不同指纹,并不限定每个指纹所对应的具体手指。另外,表1中仅示例性的表示出用户的注册指纹与个人词库之间的对应关系,可以理解的是,该对应关系还可以以表格以外的其他形式展现。
表1
Figure PCTCN2017084602-appb-000003
当上述采集到的指纹20为表1中的指纹1时,电子设备可通过表1确定与该注册指纹(即指纹1)关联的目标词库为词库1,即Tom的个人词库。
此时,如图8所示,电子设备可以登录Tom的个人词库,并使用Tom的个人词库确定与当前的输入事件对应的候选词汇。如图8所示,用户已经在编辑窗口13a中输入“我的电话是130”,由于Tom的个人词库中记录有Tom的电话号码“13088666688”,因此,输入法应用可以从Tom的个人词库中将Tom的电话号码的后8位“88666688”作为第一候选词汇17提示出来。类似的,如果用户已经在编辑窗口13a中用英文输入“My phone number is 130”时,输入法应用可以从Tom的个人词库中将Tom的电话号码的后8位“88666688”作为第一候选词汇17提示出来。这样,Tom可以选择该第一候选词汇17,将自己的电话号码输入至编辑窗口13a,无需手动的逐一输入自己的电话号码,从而提高了输入法的输入效率。
在本申请其他一些实施例中,电子设备在登录Tom的账号后,在后续的输入过程中,电子设备可以根据Tom的输入习惯不断丰富和完善其个人词库。 例如,输入法应用的服务器可以通过搜集词条来源、过滤去重、机器整理(例如,去除错别字、垃圾词汇等)、词频统计、新词发现、词库验证(例如,调整词汇出现的前后顺序)等方式在服务器中建立一个词库,该词库可以是实时更新的。进而,电子设备可以根据用户使用该输入法应用时的输入习惯(例如,拼写习惯,高频词汇,行文风格),从上述词库中建立其个人词库存储至用户的电子设备中,该个人词库可以通过定期联网进行更新。
在本申请另外一些实施例中,为了降低在触摸屏内进行指纹识别时的功耗,电子设备可以在登录Tom的个人词库时,或者,在隐藏触摸屏上已显示的输入法应用的相关用户图形界面(例如,虚拟键盘)时关闭上述指纹采集器件的电源,或降低上述指纹采集器件的扫描频率。
可以看出,电子设备可以通过用户输入过程中在触摸屏上形成的指纹,在用户没有感知的情况下自动登录自己的个人词库,即用户并不需要手动在输入法应用中登录自己的账号,便可调用自己的个人词库,使得输入法对于用户身份的识别或用户账户的切换,对于用户来说都是无感知无缝地切换,从而提高了输入法的智能性、交互性以及输入效率。
在本申请其他一些实施例中,仍如表1所示,每个用户可以注册有2个或2个以上的指纹,那么,对于一个用户(例如Tom),可以按照不同的注册指纹将Tom的个人词库划分为不同的子词库。如图9所示,电子设备可以根据指纹1的输入习惯建立子词库1-1,根据指纹2的输入习惯建立子词库1-2。
这样,当用户使用手指1在输入法应用的虚拟键盘12a上输入时,电子设备采集手指1的指纹,并对采集到的指纹进行指纹验证,当电子设备确定采集到的指纹是指纹1时,可以调用与指纹1关联的子词库1-1,并根据子词库1-1确定本次输入的候选词汇并显示在触摸屏上;当用户使用手指2在输入法应用的虚拟键盘12a上输入时,电子设备采集手指2的指纹,并对采集到的指纹进行指纹验证,当电子设备确定采集到的指纹是指纹2时,电子设备可以根据子词库1-2确定本次输入的候选词汇并显示在触摸屏上。
当然,如果用户同时使用手指1和手指2在输入法应用的虚拟键盘12a上输入时,电子设备可以将分别从子词库1-1与子词库1-2中确定的候选词汇,以不同的形式显示至当前的显示界面中。例如,将从子词库1-1中确定的候选词汇标记为红色显示在候选窗口中,将从子词库1-2中确定的候选词汇标记为黑色显示在候选窗口中。这样,用户就能够直观地看到不同手指所对应的不同候选词,从而提高了用户体验。
在本申请其他一些实施例中,对于一个用户的不同注册指纹,电子设备还可以将上述子词库划分为与不同应用场景关联的场景库。例如,如图10所示,可以将上述子词库1-1划分为场景库1和场景库2;其中,场景库1可以是根据用户使用指纹1在微信应用中的输入习惯设置的,场景库2可以是根据用户使用指纹1在短信应用中的输入习惯设置的。因此,当电子设备确定当前输入事件中的指纹为上述指纹1后,还可以进一步根据前台运行的具体应用,来调用相应的场景库确定本次输入事件的候选词汇,以满足用户在不 同应用场景下的输入需求,进一步提高用户体验。
例如,当输入法应用在前台运行时,电子设备采集用户手指的指纹,当采集的指纹与上述指纹1一致时,如果当前在前台运行的APP为短信应用时,如图11a所示,则电子设备可根据指纹1和当前的短信应用,调用子词库1-1中的场景库2,进而使用场景库2确定编辑窗口13a中“我的名字叫”的候选词汇。
由于用户之前在使用短信应用时一般输入的是自己的真实姓名:Tom,因此,电子设备可将“Tom”作为高频词汇收录在场景库2中。那么,当检测到编辑窗口13a中输入有“我的名字叫”时,电子设备可调用场景库2,将与“名字”关联的一个或多个高频词汇显示在候选窗口12c中,例如,如图11a所示,电子设备将“Tom”作为第一候选词汇17提供显示在候选窗口12c中。
相应的,如果当前在前台运行的APP为微信应用时,如图11b所示,则电子设备可调用子词库1-1中的场景库1,使用场景库1确定编辑窗口13a中“我的名字叫”的候选词汇。
由于用户之前在使用微信应用时一般输入的是自己的昵称:小蝌蚪,因此,电子设备可将“小蝌蚪”作为高频词汇收录在场景库1中。那么,当检测到编辑窗口13a中输入有“我的名字叫”时,电子设备可调用场景库1,将与“名字”关联的一个或多个高频词汇显示在候选窗口12c中,例如,如图11b所示,电子设备将“小蝌蚪”作为第一候选词汇17显示在候选窗口12c中。这样,电子设备可以根据前台运行的具体应用,来调用相应的场景库确定本次输入事件的候选词汇,以满足用户在不同应用场景下的输入需求,从而提高输入效率。
在本申请另外一些实施例中,在执行上述步骤402后,如果上述步骤401中获取的指纹不是注册指纹(即获取的指纹是非注册指纹),则上述输入方法还可以包括下述步骤405-407:
405、电子设备创建与上述采集到的指纹对应的临时个人词库。
具体的,当上述采集到的指纹20为非注册指纹时,则说明电子设备内并未注册当前用户进行操作所用手指的指纹,从而无法确定当前用户的身份,以及与当前用户对应的个人词库。
那么,在步骤405中,电子设备可以将当前操作输入法应用的用户作为电子设备中没有注册过的新用户,例如,表2中的用户C。进而,电子设备可以在默认的词库的基础上,学习当前用户(即用户C)的输入习惯,例如,用户输入的高频词汇,用户的行为风格等,从而创建一个与用户C的指纹5对应的新的个人词库,即临时个人词库。此时,如表2所示,用户C的指纹5与该临时个人词库之间具有对应关系。
表2
Figure PCTCN2017084602-appb-000004
在本申请的一个实施例中,如图12所示,当上述采集到的指纹20为非注册指纹时,电子设备可以通过一个提示框18提示用户是否创建属于自己的个人词库。当用户确认创建个人词库后,在后续每次接收到具有上述指纹20的用户的输入事件时,基于默认的词库学习该用户的输入习惯,经过一段时间后,可得到属于该用户的临时个人词库。
406、当上述临时个人词库与已存储的第一用户的个人词库之间的相似度大于或等于一预设阈值时,电子设备将该临时个人词库添加至第一用户的个人词库。
由于用户可能仅在电子设备中注册了一个或少量的几个指纹,因此,当上述采集到的指纹20为非注册指纹时,并不能确定具有该指纹20的用户在电子设备中没有对应的个人词库。
因此,电子设备可以定期或不定期的将步骤405中创建的临时个人词库与表1中已经存储的各个用户的个人词库进行比较。当上述临时个人词库与已存储的第一用户(例如,Tom)的个人词库之间的相似度大于阈值时,则说明具有步骤401中指纹20的用户与Tom的输入习惯极为相似,因此,可以认为具有上述指纹20的用户即为Tom。此时,电子设备可登陆Tom的个人词库,并将该临时个人词库添加至Tom的个人词库,完成Tom个人词库的更新。
在本申请的一个实施例中,如图13所示,电子设备可以提示用户已经登陆Tom的个人词库,并且更新了Tom的个人词库,后续,电子设备将使用Tom的个人词库确定其触发的输入事件的候选词汇。
另外,电子设备可以根据词库中记录的用户的拼写习惯,高频词汇,以及行文风格等数据,来确定两个词库之间的相似度。例如,词库1中记录的高频词汇与词库2中记录的高频词汇有85%都相同时,可认为使用词库1的用户与使用词库2的用户为同一人。
当然,本领域技术人还可以根据实际经验或者实际应用场景对确定两个词库的相似度的方法以及上述阈值进行具体设置,本申请实施例对如何确定两个词库的相似度的方法不作任何限制。
407、电子设备建立上述采集到的指纹与第一用户的个人词库之间的对应关系。
当电子设备已经将上述临时个人词库添加至第一用户的个人词库后,电子设备可以更新上述表2所示的注册指纹与个人词库之间的对应关系,以便 于后续接收到新的输入事件时,电子设备能够根据更新后的对应关系,准确的调用相应的目标词库。
示例性的,如表3所示,电子设备可以在表1所示的注册指纹与个人词库之间的对应关系中,更新指纹5(即上述采集到的指纹20)与词库1之间的对应关系。
表3
Figure PCTCN2017084602-appb-000005
在本申请的一个实施例中,如图14所示,电子设备还可以在显示界面上向用户提示:已经建立了与本次指纹(即上述采集到的指纹20)关联的词库,例如,上述词库1,使得用户明确后续可以使用上述采集到的指纹20登录自己的个人词库。
这样,通过判断与不同指纹对应词库之间的相似度,可以识别出与同一用户对应的不同词库,那么,将同一用户对应的不同词库合并后,可以丰富和优化该用户的个人词库,可提高后续用户使用输入法应用的准确率。
在本申请另外一些实施例中,当上述临时个人词库与已存储的所有用户的个人词库之间的相似度均小于上述阈值时,则说明上述临时个人词库所对应的用户C确实是一个新加入的用户,那么,电子设备可继续根据用户C的输入习惯丰富和完成该临时个人词库。这样,后续用户C再次使用相同的指纹在该输入法应用中执行输入事件时,电子设备可以根据表2中用户C的指纹5与上述临时个人词库之间的对应关系,调用该临时个人词库确定与输入事件对应的候选词汇。
如图15所示,为本申请其他一些实施例提供的一种输入方法,该方法可以包括:
501、当电子设备运行输入法应用时,电子设备获取用户在触摸屏上的指纹。
在本申请实施中,在触摸屏上获取到的指纹包含两个甚至更多个指纹,即N(为大于1的整数;)个指纹。示例性的,如图16所示,用户使用左手的拇指和右手的拇指,在输入法应用的虚拟键盘12a上执行输入事件。此时,电子设备可以通过集成在触摸屏上的指纹采集器件(图16中未示出)获取到指纹21和指纹22。
502、电子设备对上述获取到的指纹进行指纹识别。
其中,电子设备将上述获取到的指纹与注册指纹进行对比和判断的方法可参见步骤402的相关描述,故此处不再赘述。
当采集到的多个指纹中的至少有一个指纹为注册指纹时,例如,仍如图 16所示,电子设备中可能仅存储有指纹21,那么,通过与预先存储的注册指纹对比,电子设备可以确定指纹21为注册指纹,而指纹22为未在电子设备上注册的非注册指纹。此时,电子设备可继续执行下述步骤503a-505a。
503a、电子设备确定与该注册指纹关联的目标词库。
由于上述指纹21和指纹22是从电子设备显示输入法应用的虚拟键盘12a到隐藏该虚拟键盘12a的过程中出现的,而这个过程一般是由单个用户执行的。因此,可以认为上述指纹21和指纹22所对应的用户是相同,进而,电子设备可通过表1或表2所示的对应关系,将与指纹21(注册指纹)对应的个人词库作为目标词库。
又或者,如果述指纹21和指纹22是从用户最近一次触摸上述虚拟键盘12a起预设时段内出现的,例如,在用户触摸该虚拟键盘12a后2秒内,电子设备采集到上述指纹21和指纹22,则也可以认为上述指纹21和指纹22所对应的用户是相同。
504a、电子设备建立未注册指纹与上述目标词库之间的对应关系。
与步骤407类似的,当电子设备确定上述上述指纹21和指纹22所对应的用户是相同的,例如,上述指纹21和指纹22所对应的用户均为Tom,此时,电子设备可以更新上述注册指纹与个人词库之间的对应关系,即在指纹21与Tom的个人词库之间的对应关系的基础上,增加指纹22与Tom的个人词库之间的对应关系,以便于后续电子设备能够根据更新后的对应关系,准确的调用相应的目标词库。
也就是说,当上述N个指纹中包括X(X≥1)个注册指纹和Y(Y≥1,X+Y=N)个未注册指纹时,电子设备在可以在上述对应关系中建立这Y个未注册指纹与上述目标词库之间的对应关系。
在本申请的一个实施例中,如图17所示,电子设备可以在显示界面上通过一个提示框18提示用户是否将上述未注册指纹,即是否将未注册的指纹22与Tom的个人词库关联。当用户确认关联后,电子设备增加指纹22与Tom的个人词库之间的对应关系,这样,指纹22也成为与Tom的个人词库对应的注册指纹。
在本申请的一个实施例中,如果电子设备中存储有指纹21和指纹22,即上述采集到的多个指纹中均为注册指纹时,如果指纹21和指纹22对应的个人词库不同,则说明电子设备将指纹21和指纹22误认为是两个不同用户的指纹,因此,电子设备可以将指纹21对应的个人词库,和指纹22对应的个人词库合并,使得指纹21和指纹22对应的个人词库均为Tom的个人词库。
也就是说,当上述N个指纹中包括Z(1<Z≤N)个注册指纹时,如果这Z个注册指纹中每个注册指纹所对应的个人词库之间存在不相同的个人词库,则电子设备可将这Z个注册指纹对应的多个个人词库合并为一个个人词库。
505a、电子设备使用上述目标词库确定与输入事件对应的候选词汇。
示例性的,电子设备可使用步骤503中确定的目标词库,即Tom的个人 词库,确定与输入事件对应的候选词汇。如图18所示,“西大街66号”是Tom的个人词库中的高频词汇,因此,当用户在编辑窗口13a输入“西大街”后,电子设备可以基于Tom的个人词库,将“66号”作为第一候选词汇17显示在候选窗口中方便用户选择,从而提高用户的输入效率。
在本申请的一个实施例中,在执行上述步骤502之后,电子设备将获取到的多个指纹分别与预先存储的注册指纹对比,如果确定这多个指纹中不存在注册指纹,例如,图16中电子设备获取到的指纹21和指纹22均为未注册指纹,则电子设备可继续执行下述步骤503b-504b。
503b、电子设备创建与上述采集到的多个指纹均对应的临时个人词库。
与步骤405类似的,当上述采集到的多个指纹中不包含注册指纹时,则说明电子设备内并未注册当前用户的操作手指的指纹。
那么,在步骤405中,电子设备可以将当前用户作为一个新的用户,在默认的词库的基础上,学习当前用户的输入习惯,从而创建与上述步骤501中采集到的指纹对应的临时个人词库。
此时,如图19所示,电子设备可以在显示界面内通过提示框18提示用户是否创建与上述指纹21和指纹22关联的词库A。当用户确认创建词库A后,在后续每次接收到具有上述指纹的用户的输入事件时,基于默认的词库学习该用户的输入习惯,经过一段时间后,可得到属于该用户的临时个人词库,即词库A。
504b、当上述临时个人词库与已存储的第一用户的个人词库之间的相似度大于阈值时,电子设备将该临时个人词库添加至第一用户的个人词库。
具体的,电子设备可以定期或不定期的将步骤403b中创建的临时个人词库与已经存储的各个用户的个人词库进行比较,当上述临时个人词库与已存储的第一用户(例如,Tom)的个人词库之间的相似度大于阈值时,则说明与上述采集到的多个指纹对应的用户的输入习惯与Tom的输入习惯极为相似,因此,电子设备可以认为具有上述多个指纹的用户即为Tom。此时,如图20所示,电子设备可登陆Tom的个人词库,并将上述词库A添加至Tom的个人词库,完成Tom个人词库的更新。
505b、电子设备建立上述采集到的多个指纹与第一用户的个人词库之间的对应关系。
当电子设备已经将上述临时个人词库添加至第一用户的个人词库后,电子设备可以更新注册指纹与个人词库之间的对应关系,以便于后续接收到新的输入事件时,电子设备能够根据更新后的对应关系,准确的调用相应的目标词库。
示例性的,如图21所示,电子设备还可以通过提示框18向用户提示:是否将上述指纹21和指纹22(即本次输入操作中采集到的指纹)添加为输入法应用的注册指纹,如果用户确认将指纹21和指纹22添加为输入法应用的注册指纹,则后续用户可以使用指纹21和指纹22登录自己的个人词库。
当然,电子设备还可以进一步将上述采集到的指纹用于指纹解锁或指纹 支付等功能。又或者,当用户在电子设备内录入一个用于指纹解锁或指纹识别的指纹时,电子设备可提示用于将该指纹作为输入法应用的注册指纹,并建立该指纹与其对应的个人词库之间的对应关系,本申请实施例对此不作任何限制。
进一步地,如图22所示,用户还可以通过输入法应用的设置界面开启或关闭上述实施例中所示的指纹登录功能。并且,还可以对注册指纹进行管理,例如,添加或删除注册指纹。又或者,用户还可以对已经建立的个人词库与注册指纹之间的关系进行管理,例如,一个用户可以有一个或多个个人词库,例如图22中所示的词库1和词库2,对于每一个个人词库,都可以手动设置与其对应的注册指纹,这样,当使用不同指纹的手指触发输入事件时,可以调用不同的词库为其确定相应的候选词汇,以提高输入法的输入效率以及人机交互的智能性。
在本申请另外一些实施例中,如图23所示,当用户触发APP(例如图23中的短信应用)中的编辑窗口13a后,短信应用调用输入法应用的API显示输入法应用的虚拟键盘12a等相关用户图形界面。此时,还可以在该用户图形界面上设置一个用于快速登录个人词库的快捷登录按钮。例如,该快捷登录按钮可以为图23中的指纹图案31,那么,当用户触摸该指纹图案31时,响应于该触摸操作,电子设备触发该指纹图案31处的指纹采集器件采集用户的指纹,进而,可按照上述步骤402-407中所示的输入方法,调用与该采集到的指纹对应的个人词库提供候选词汇。例如,电子设备检测到用户在指纹图案31处的触摸操作后,通过指纹图案31处的指纹采集器件采集指纹,并对采集到的指纹进行指纹识别。当识别出该采集到的指纹为Tom的指纹(例如,表1中的指纹1)时,电子设备可根据指纹1与其个人词库之间的对应关系,登录Tom的个人词库(即词库1),并调用词库1确定与后续接收到的输入事件关联的候选词汇。
在本申请的另一个实施例中,显示快捷登录按钮(例如,图24中的一键登录按钮32)的位置处可能没有设置指纹采集器件,那么,当用户点击该一键登录32后,电子设备可以提示用户在设置有指纹采集器件的位置录入指纹。例如,在触摸屏内的某个区域(例如,图24中的区域33)录入指纹,或者,在触摸屏外的某个区域(例如,图24中home键34)录入指纹。后续,电子设备对采集到的指纹进行指纹识别,当识别出该采集到的指纹为Alice的指纹(例如,表1中的指纹3)时,电子设备可根据指纹3与其个人词库之间的对应关系,登录Alice的个人词库(即词库2),并调用词库2确定与后续接收到的输入事件关联的候选词汇。
在本申请的另一个实施例中,如果电子设备具有指纹解锁的功能,那么,电子设备可以在用户使用指纹解锁时采集到用户的指纹,进而,按照上述步骤402-403中所示的输入方法,将与解锁时采集到的指纹对应的个人词库作为目标词库,这样一来,当APP调用输入法的API显示输入法应用的相关用户图形界面时,输入法应用便可使用已经确定的目标词库提供候选词汇,缩 短了用户登录个人词库这一过程的时间,从而提高了输入效率。
当然,如果在输入法应用运行的过程中,检测到当前输入事件中的指纹与上述解锁时的指纹不同时,输入法应用可自动切换到与前输入事件中的指纹对应的个人词库,使得执行当前输入事件的用户可以使用符合自身输入习惯的个人词库完成输入。
另外,上述实施例中仅以不同用户的指纹与其个人词库之间的对应关系为例,举例说明输入法应用如何根据用户的指纹调用对应的个人词库完成输入。可以理解的是,在输入法应用场景下,还可以建立用户的个性化设置参数(例如,输入法皮肤、虚拟键盘的显示位置以及快捷键设置等)与用户的指纹之间的对应关系。
示例性的,用户A的注册指纹与九宫格形式的虚拟键盘对应,用户B的注册指纹与全键盘形式的虚拟键盘对应。那么,当电子设备采集到与用户A的注册指纹相同的指纹时,可在显示界面内显示九宫格形式的虚拟键盘;当电子设备采集到与用户B的注册指纹相同的指纹时,可在显示界面内显示全键盘形式的虚拟键盘。
综上所述,用户在使用输入法应用执行输入事件时,可以通过在触摸屏上形成的指纹,将输入法应用的相关参数设置为与该指纹对应的个性化设置参数,使得输入法应用在运行过程中能够更好的提供与当前用户的输入习惯相符的输入环境、候选词汇等,从而提高用户使用输入法应用时的输入效率,极大的提高了用户体验。
可以理解的是,上述电子设备等为了实现上述功能,其包含了执行各个功能相应的硬件结构和/或软件模块。本领域技术人员应该很容易意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,本申请实施例能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请实施例的范围。
本申请实施例可以根据上述方法示例对上述电子设备等进行功能模块的划分,例如,可以对应各个功能划分各个功能模块,也可以将两个或两个以上的功能集成在一个处理模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。需要说明的是,本申请实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
在采用对应各个功能划分各个功能模块的情况下,图25示出了上述实施例中所涉及的电子设备的一种可能的结构示意图,该电子设备包括:获取单元1101、确定单元1102、执行单元1103、建立单元1104以及合并单元1105。
获取单元1101用于支持电子设备执行图6中的过程401,图15中的过程501;确定单元1102用于支持电子设备执行图6中的过程402-403,图15中的过程502-503a;执行单元1103用于支持电子设备执行图6中的过程404, 图15中的过程505a;建立单元1104用于支持电子设备执行图6中的过程405和407,图15中的过程503b、505b以及504a;合并单元1105用于支持电子设备执行图6中的过程406,图15中的过程504b。其中,上述方法实施例涉及的各步骤的所有相关内容均可以援引到对应功能模块的功能描述,在此不再赘述。
在采用集成的单元的情况下,图26示出了上述实施例中所涉及的电子设备的一种可能的结构示意图。该电子设备包括:处理模块1302和通信模块1303。处理模块1302用于对电子设备的动作进行控制管理。通信模块1303用于支持UE与其他网络实体的通信。该电子设备还可以包括存储模块1301,用于存电子设备的程序代码和数据。
其中,处理模块1302可以是处理器或控制器,例如可以是中央处理器(Central Processing Unit,CPU),通用处理器,数字信号处理器(Digital Signal Processor,DSP),专用集成电路(Application-Specific Integrated Circuit,ASIC),现场可编程门阵列(Field Programmable Gate Array,FPGA)或者其他可编程逻辑器件、晶体管逻辑器件、硬件部件或者其任意组合。其可以实现或执行结合本申请公开内容所描述的各种示例性的逻辑方框,模块和电路。所述处理器也可以是实现计算功能的组合,例如包含一个或多个微处理器组合,DSP和微处理器的组合等等。通信模块1303可以是收发器、收发电路或通信接口等。存储模块1301可以是存储器。
当处理模块1302为处理器,通信模块1303为RF收发电路,存储模块1301为存储器时,本申请实施例所提供的电子设备可以为图4所示的电子设备。
在本申请的另外一些实施例中,提供了一种非易失性计算机可读存储介质,该可读存储介质中存储有一个或多个程序,这一个或多个程序中包括指令。其中,当具有显示器的电子设备执检测到其触摸表面接收到触摸事件时,执行上述指令,使得电子设备执行上述各实施例中提供的输入方法。其中,上述输入方法的各步骤可参见上述步骤401-407和/或步骤501-505的相关描述,故在此不再赘述。
在上述实施例中,可以全部或部分的通过软件,硬件,固件或者其任意组合来实现。当使用软件程序实现时,可以全部或部分地以计算机程序产品的形式出现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多 个可用介质集成的服务器、数据中心等数据存储设备。该可用介质可以是磁性介质,(例如,软盘,硬盘、磁带)、光介质(例如,DVD)或者半导体介质(例如固态硬盘Solid State Disk(SSD))等。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何在本申请揭露的技术范围内的变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (23)

  1. An input method, implemented in an electronic device having a fingerprint collection component, wherein the method comprises:
    when an input method application is running, acquiring, by the electronic device, a fingerprint of a user on a touchscreen;
    when the fingerprint is a pre-stored registered fingerprint, determining, by the electronic device, a target word library associated with the fingerprint; and
    providing, by the electronic device using the target word library, at least one candidate word corresponding to a current input event.
  2. The method according to claim 1, wherein when a quantity of the fingerprint is one, the determining, by the electronic device, a target word library associated with the fingerprint comprises:
    determining, by the electronic device according to a correspondence between personal word libraries of different users and registered fingerprints of the different users, the personal word library corresponding to the fingerprint as the target word library.
  3. The method according to claim 2, wherein when the fingerprint is an unregistered fingerprint, the method further comprises:
    establishing, by the electronic device, a temporary personal word library corresponding to the fingerprint.
  4. The method according to claim 3, wherein after the establishing, by the electronic device, a temporary personal word library corresponding to the fingerprint, the method further comprises:
    when a similarity between the temporary personal word library and a personal word library of a first user is greater than a threshold, adding, by the electronic device, the temporary personal word library to the personal word library of the first user; and
    establishing, by the electronic device in the correspondence, a correspondence between the fingerprint and the personal word library of the first user.
  5. The method according to claim 1, wherein a quantity of the fingerprints is N, and N is an integer greater than 1;
    wherein when the fingerprint is a pre-stored registered fingerprint, the determining, by the electronic device, a target word library associated with the fingerprint comprises:
    when at least one of the N fingerprints is a registered fingerprint, determining, by the electronic device according to a correspondence between personal word libraries of different users and registered fingerprints of the different users, the personal word library corresponding to the registered fingerprint as the target word library.
  6. The method according to claim 5, wherein when the N fingerprints comprise X registered fingerprints and Y unregistered fingerprints, where X+Y=N, X≥1, and Y≥1, the method further comprises:
    establishing, by the electronic device in the correspondence, a correspondence between the Y unregistered fingerprints and the target word library.
  7. The method according to claim 5 or 6, wherein the N fingerprints comprise Z registered fingerprints, where 1<Z≤N, and the method further comprises:
    determining, by the electronic device according to the correspondence, whether the personal word libraries corresponding to the Z registered fingerprints are the same; and
    when the personal word libraries corresponding to the Z registered fingerprints are not all the same, merging, by the electronic device, the personal word libraries corresponding to the Z registered fingerprints into one personal word library.
  8. The method according to any one of claims 5-7, wherein when all of the N fingerprints are unregistered fingerprints, the method further comprises:
    establishing, by the electronic device, a temporary personal word library corresponding to the N fingerprints.
  9. The method according to claim 8, wherein after the establishing, by the electronic device, a temporary personal word library corresponding to the N fingerprints, the method further comprises:
    when a similarity between the temporary personal word library and a personal word library of a second user is greater than a threshold, adding, by the electronic device, the temporary personal word library to the personal word library of the second user; and
    establishing, by the electronic device in the correspondence, a correspondence between the fingerprints and the personal word library of the second user.
  10. The method according to any one of claims 1-9, wherein the touchscreen comprises a region displaying a virtual keyboard of the input method application;
    wherein the acquiring, by the electronic device, a fingerprint of the user on the touchscreen comprises:
    acquiring, by the electronic device, a fingerprint generated when the user performs an input event in the region of the keyboard.
  11. An electronic device, comprising:
    an acquiring unit, configured to: when an input method application is running, acquire a fingerprint of a user on a touchscreen;
    a determining unit, configured to: when the fingerprint is a pre-stored registered fingerprint, determine a target word library associated with the fingerprint; and
    an executing unit, configured to: provide, using the target word library, at least one candidate word corresponding to a current input event.
  12. The electronic device according to claim 11, wherein
    the determining unit is specifically configured to: determine, according to a correspondence between personal word libraries of different users and registered fingerprints of the different users, the personal word library corresponding to the fingerprint as the target word library.
  13. The electronic device according to claim 12, wherein the electronic device further comprises:
    an establishing unit, configured to: when the fingerprint is an unregistered fingerprint, establish a temporary personal word library corresponding to the fingerprint.
  14. The electronic device according to claim 13, wherein the electronic device further comprises a merging unit, wherein
    the merging unit is configured to: when a similarity between the temporary personal word library and a personal word library of a first user is greater than a threshold, add the temporary personal word library to the personal word library of the first user; and
    the establishing unit is further configured to: establish, in the correspondence, a correspondence between the fingerprint and the personal word library of the first user.
  15. The electronic device according to claim 11, wherein a quantity of the fingerprints is N, and N is an integer greater than 1; and
    the determining unit is specifically configured to: when at least one of the N fingerprints is a registered fingerprint, determine, according to a correspondence between personal word libraries of different users and registered fingerprints of the different users, the personal word library corresponding to the registered fingerprint as the target word library.
  16. The electronic device according to claim 15, wherein the N fingerprints comprise X registered fingerprints and Y unregistered fingerprints, where X+Y=N, X≥1, and Y≥1, and the electronic device further comprises:
    an establishing unit, configured to: establish, in the correspondence, a correspondence between the Y unregistered fingerprints and the target word library.
  17. The electronic device according to claim 15 or 16, wherein the N fingerprints comprise Z registered fingerprints, where 1<Z≤N, and the electronic device further comprises a merging unit, wherein
    the determining unit is further configured to: determine, according to the correspondence, whether the personal word libraries corresponding to the Z registered fingerprints are the same; and
    the merging unit is configured to: when the personal word libraries corresponding to the Z registered fingerprints are not all the same, merge the personal word libraries corresponding to the Z registered fingerprints into one personal word library.
  18. The electronic device according to any one of claims 15-17, wherein
    the establishing unit is further configured to: when all of the N fingerprints are unregistered fingerprints, establish a temporary personal word library corresponding to the N fingerprints.
  19. The electronic device according to claim 18, wherein
    the merging unit is further configured to: when a similarity between the temporary personal word library and a personal word library of a second user is greater than a threshold, add the temporary personal word library to the personal word library of the second user; and
    the establishing unit is further configured to: establish, in the correspondence, a correspondence between the fingerprints and the personal word library of the second user.
  20. The electronic device according to any one of claims 11-19, wherein the touchscreen comprises a region displaying a virtual keyboard of the input method application, and
    the acquiring unit is specifically configured to: acquire a fingerprint generated when the user performs an input event in the region of the keyboard.
  21. An electronic device, comprising a processor, a memory, a bus, and a communication interface, wherein the memory is configured to store computer-executable instructions, the processor is connected to the memory through the bus, and when the electronic device runs, the processor executes the computer-executable instructions stored in the memory, so that the electronic device performs the input method according to any one of claims 1-10.
  22. A computer-readable storage medium storing instructions, wherein when the instructions are run on an electronic device, the electronic device is caused to perform the input method according to any one of claims 1-10.
  23. A computer program product comprising instructions, wherein when the computer program product runs on an electronic device, the electronic device is caused to perform the input method according to any one of claims 1-10.
PCT/CN2017/084602 2017-05-16 2017-05-16 输入方法及电子设备 WO2018209578A1 (zh)

Priority Applications (5)

Application Number Priority Date Filing Date Title
EP17909894.2A EP3640787B1 (en) 2017-05-16 2017-05-16 Input method and electronic device
CN201780008034.7A CN109074171B (zh) 2017-05-16 2017-05-16 输入方法及电子设备
US16/613,511 US11086975B2 (en) 2017-05-16 2017-05-16 Input method and electronic device
PCT/CN2017/084602 WO2018209578A1 (zh) 2017-05-16 2017-05-16 输入方法及电子设备
US17/363,344 US11625468B2 (en) 2017-05-16 2021-06-30 Input method and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/084602 WO2018209578A1 (zh) 2017-05-16 2017-05-16 输入方法及电子设备

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US16/613,511 A-371-Of-International US11086975B2 (en) 2017-05-16 2017-05-16 Input method and electronic device
US17/363,344 Continuation US11625468B2 (en) 2017-05-16 2021-06-30 Input method and electronic device

Publications (1)

Publication Number Publication Date
WO2018209578A1 true WO2018209578A1 (zh) 2018-11-22

Family

ID=64273072

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/084602 WO2018209578A1 (zh) 2017-05-16 2017-05-16 输入方法及电子设备

Country Status (4)

Country Link
US (2) US11086975B2 (zh)
EP (1) EP3640787B1 (zh)
CN (1) CN109074171B (zh)
WO (1) WO2018209578A1 (zh)


Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
USD864226S1 (en) * 2017-02-22 2019-10-22 Samsung Electronics Co., Ltd. Display screen or portion thereof with graphical user interface
US11418503B2 (en) * 2019-07-03 2022-08-16 Bank Of America Corporation Sensor-based authentication, notification, and assistance systems
USD933454S1 (en) * 2019-09-30 2021-10-19 Dezhao Xiang Handle with fingerprint identifying device
US11620367B2 (en) * 2020-11-06 2023-04-04 International Business Machines Corporation Key specific fingerprint based access control
CN114968144A (zh) * 2021-02-27 2022-08-30 华为技术有限公司 一种拼接显示的方法、电子设备和系统

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101470732A (zh) * 2007-12-26 2009-07-01 北京搜狗科技发展有限公司 一种辅助词库的生成方法和装置
CN103389979A (zh) * 2012-05-08 2013-11-13 腾讯科技(深圳)有限公司 在输入法中推荐分类词库的系统、装置及方法
US8611422B1 (en) * 2007-06-19 2013-12-17 Google Inc. Endpoint based video fingerprinting
CN105138266A (zh) * 2015-08-26 2015-12-09 成都秋雷科技有限责任公司 一种自动切换输入法的方法
CN105159475A (zh) * 2015-08-27 2015-12-16 广东欧珀移动通信有限公司 一种字符输入方法及装置

Family Cites Families (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7694231B2 (en) 2006-01-05 2010-04-06 Apple Inc. Keyboards for portable electronic devices
US8842074B2 (en) 2006-09-06 2014-09-23 Apple Inc. Portable electronic device performing similar operations for different gestures
CN101169812A (zh) * 2006-10-25 2008-04-30 知网生物识别科技股份有限公司 视窗操作系统的多因子认证系统与登录方法
US7486810B1 (en) * 2008-04-24 2009-02-03 International Business Machines Corporation On-type biometrics fingerprint soft keyboard
JP4459282B2 (ja) * 2008-06-30 2010-04-28 株式会社東芝 情報処理装置およびセキュリティ保護方法
CN101667060A (zh) 2008-09-04 2010-03-10 黄轶 输入设备和输入方法
CN102063452A (zh) 2010-05-31 2011-05-18 百度在线网络技术(北京)有限公司 用于供用户进行文字输入的方法、设备、服务器和系统
CN101867650A (zh) 2010-05-21 2010-10-20 宇龙计算机通信科技(深圳)有限公司 一种保护用户操作终端行为的方法及装置
US9202059B2 (en) * 2011-03-01 2015-12-01 Apurva M. Bhansali Methods, systems, and apparatuses for managing a hard drive security system
US9600709B2 (en) * 2012-03-28 2017-03-21 Synaptics Incorporated Methods and systems for enrolling biometric data
US20150025876A1 (en) * 2013-07-21 2015-01-22 Benjamin Firooz Ghassabian Integrated keypad system
US9928355B2 (en) * 2013-09-09 2018-03-27 Apple Inc. Background enrollment and authentication of a user
WO2015079450A2 (en) * 2013-11-28 2015-06-04 Hewlett-Packard Development Company, L.P. Electronic device
CN103888342B (zh) * 2014-03-14 2018-09-04 北京智谷睿拓技术服务有限公司 交互方法及装置
US20150333910A1 (en) * 2014-05-17 2015-11-19 Dylan Kirdahy Systems, methods, and apparatuses for securely accessing user accounts
CN105302329A (zh) * 2014-05-27 2016-02-03 姚文杰 指纹编码输入方法和装置
US11163969B2 (en) * 2014-09-09 2021-11-02 Huawei Technologies Co., Ltd. Fingerprint recognition method and apparatus, and mobile terminal
US20160261589A1 (en) * 2015-03-03 2016-09-08 Kobo Incorporated Method and system for shelving digital content items for restricted access within an e-library collection
KR20160120103A (ko) * 2015-04-07 2016-10-17 엘지전자 주식회사 이동 단말기 및 그것의 제어 방법
JP2017097295A (ja) * 2015-11-27 2017-06-01 株式会社東芝 表示装置
CN105700699A (zh) * 2015-12-30 2016-06-22 魅族科技(中国)有限公司 一种信息输入方法、装置及终端
CN105718147A (zh) 2016-01-22 2016-06-29 百度在线网络技术(北京)有限公司 输入法面板启用方法和装置以及输入方法和输入法系统
CN107534700B (zh) * 2016-03-14 2021-02-09 华为技术有限公司 一种用于终端的信息输入方法及终端
US20190227707A1 (en) * 2016-11-30 2019-07-25 Shenzhen Royole Technologies Co. Ltd. Electronic device and soft keyboard display method thereof

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8611422B1 (en) * 2007-06-19 2013-12-17 Google Inc. Endpoint based video fingerprinting
CN101470732A (zh) * 2007-12-26 2009-07-01 北京搜狗科技发展有限公司 一种辅助词库的生成方法和装置
CN103389979A (zh) * 2012-05-08 2013-11-13 腾讯科技(深圳)有限公司 在输入法中推荐分类词库的系统、装置及方法
CN105138266A (zh) * 2015-08-26 2015-12-09 成都秋雷科技有限责任公司 一种自动切换输入法的方法
CN105159475A (zh) * 2015-08-27 2015-12-16 广东欧珀移动通信有限公司 一种字符输入方法及装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3640787A4 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109933216A (zh) * 2019-03-01 2019-06-25 郑敏杰 一种用于智能输入的词语联想提示方法、装置、设备以及计算机存储介质
EP4016975A4 (en) * 2019-08-16 2022-10-19 Vivo Mobile Communication Co., Ltd. OBJECT POSITION ADJUSTMENT METHOD AND ELECTRONIC DEVICE

Also Published As

Publication number Publication date
EP3640787A1 (en) 2020-04-22
US11625468B2 (en) 2023-04-11
EP3640787A4 (en) 2020-08-19
EP3640787B1 (en) 2024-04-24
US20200372142A1 (en) 2020-11-26
CN109074171A (zh) 2018-12-21
US11086975B2 (en) 2021-08-10
US20210397689A1 (en) 2021-12-23
CN109074171B (zh) 2021-03-30

Similar Documents

Publication Publication Date Title
WO2018209578A1 (zh) 输入方法及电子设备
US11269981B2 (en) Information displaying method for terminal device and terminal device
WO2019233212A1 (zh) 文本识别方法、装置、移动终端以及存储介质
WO2019205065A1 (zh) 快速打开应用或应用功能的方法及终端
WO2019024056A1 (zh) 一种防误触方法及终端
US9075828B2 (en) Electronic device and method of controlling the same
WO2016037318A1 (zh) 一种指纹识别方法、装置及移动终端
WO2018223270A1 (zh) 一种显示的处理方法及装置
KR102521333B1 (ko) 사용자 인증과 관련된 사용자 인터페이스 표시 방법 및 이를 구현한 전자 장치
EP2869528A1 (en) Method for performing authentication using biometrics information and portable electronic device supporting the same
WO2016127426A1 (zh) 一种显示应用、图片的方法、装置及电子设备
US20150213127A1 (en) Method for providing search result and electronic device using the same
WO2019000287A1 (zh) 一种图标显示方法及装置
WO2014206101A1 (zh) 一种基于手势的会话处理方法、装置及终端设备
WO2019090486A1 (zh) 一种触摸控制方法及装置
US20150177957A1 (en) Method and apparatus for processing object provided through display
WO2017193496A1 (zh) 应用数据的处理方法、装置和终端设备
WO2018214748A1 (zh) 应用界面的显示方法、装置、终端及存储介质
CN111459358B (zh) 一种应用程序控制方法及电子设备
CN109753202B (zh) 一种截屏方法和移动终端
CN103279272B (zh) 一种在电子装置中启动应用程序的方法及装置
WO2019184631A1 (zh) 信息处理方法和装置、计算机可读存储介质、终端
US20190387094A1 (en) Mobile terminal and method for operating same
CN107577933B (zh) 应用登录方法和装置、计算机设备、计算机可读存储介质
WO2021012955A1 (zh) 界面切换方法及终端设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17909894

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2017909894

Country of ref document: EP

Effective date: 20191128