CN103729132A - Character input method and device, virtual keyboard and electronic equipment - Google Patents

Publication number: CN103729132A
Application number: CN201210390927.1A
Authority: CN (China)
Legal status: Granted; Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Inventors: 李凡智, 刘旭国
Current and original assignee: Lenovo Beijing Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Application filed by Lenovo Beijing Ltd
Other languages: Chinese (zh)
Other versions: CN103729132B
Priority applications: CN201710828402.4A (published as CN107479725B), CN201210390927.1A (published as CN103729132B)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02: Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023: Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233: Character input methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04886: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, using a touch-screen or digitiser, by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Input From Keyboards Or The Like (AREA)

Abstract

The invention provides a character input method and device, a virtual keyboard, and electronic equipment. The character input method is applied to electronic equipment that comprises a touch screen. The method comprises the steps of: obtaining a first position of a first touch point on the touch screen; determining a first input character corresponding to the first position; obtaining a second position of a second touch point on the touch screen; obtaining the position relation between the second touch point and one point of the touch screen according to the second position; and determining a second input character corresponding to the second position according to the position relation and the first input character. Because the electronic equipment can determine characters from the relative positions of the touch points, no virtual keyboard needs to be displayed on the touch screen during character input, so the display area of the touch screen is not occupied and the actual content can be displayed in the whole display area of the touch screen.

Description

Character input method and device, virtual keyboard and electronic equipment
Technical Field
The present disclosure relates to the field of character input technologies, and in particular, to a character input method and apparatus, a virtual keyboard, and an electronic device.
Background
With advances in science and technology, electronic devices are becoming more and more intelligent, and electronic devices having touch screens are gradually emerging.
When a user inputs text on such a device, a virtual keyboard is displayed on the touch screen so that the user can type the desired words. Specifically, the absolute position of the tapped key on the virtual keyboard is obtained, and the required word is matched and input according to that absolute position.
However, displaying the virtual keyboard occupies part of the display area of the touch screen, which reduces the effective display area and affects the display of actual content. For example, current electronic devices store the information exchanged between a user and the same contact in a single short-message list; once the virtual keyboard is opened, it can block the short-message display area and reduce the area available for effective content.
Disclosure of Invention
The technical problem to be solved by the present application is to provide a character input method, a character input device, a virtual keyboard, and an electronic device, so as to solve the prior-art problem that an opened virtual keyboard blocks the short-message display area and reduces the area available for effective content.
According to one aspect of the present application, a character input method is provided, applied to an electronic device that includes a touch screen. The method includes:
obtaining a first position of a first touch point on the touch screen, and determining a first input character corresponding to the first position;
obtaining a second position of a second touch point on the touch screen;
obtaining the position relation between the second touch point and one point in the touch screen according to the second position;
and determining a second input character corresponding to the second position according to the position relation and the first input character.
Preferably, obtaining the position relationship between the second touch point and one point in the touch screen according to the second position includes: and acquiring the position relation between the second touch point and the positioning point in the touch screen according to the second position of the second touch point relative to the positioning point.
Preferably, obtaining the position relationship between the second touch point and one point in the touch screen according to the second position includes: and acquiring the position relation between the second touch point and the first touch point according to the second position of the second touch point relative to the first touch point.
Preferably, determining a second input character corresponding to the second position according to the position relationship and the first input character includes: and selecting a character associated with the first input character from a preset vocabulary table according to the position relation and the first input character, and determining the character as a second input character, wherein the preset vocabulary table comprises words existing in a standard dictionary, or the preset vocabulary table comprises words existing in the standard dictionary and recorded words input by the user before.
Preferably, the method further comprises the following steps: combining the matched characters according to the touch order of the touch points to obtain a plurality of candidate words;
and selecting the correct word from the plurality of candidate words as the input word.
Preferably, selecting the correct word from the plurality of candidate words as the input word comprises:
comparing the candidate words with a preset vocabulary table and selecting the one included in the preset vocabulary table as the input word; the preset vocabulary table comprises words already in a standard dictionary, or words already in a standard dictionary together with recorded words previously input by the user.
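The combine-and-select procedure above can be sketched in Python as follows. The word list and the per-touch candidate sets are illustrative assumptions, not data from the patent.

```python
from itertools import product

# Illustrative stand-in for the patent's "preset vocabulary table"
# (standard-dictionary words plus recorded words the user typed before).
PRESET_VOCABULARY = {"auto", "author", "aye"}

def select_input_word(candidates_per_touch):
    """Combine the matched candidate characters in touch order into
    candidate words, and return the one found in the vocabulary table."""
    for combo in product(*candidates_per_touch):
        word = "".join(combo)
        if word in PRESET_VOCABULARY:
            return word
    return None  # no candidate word is in the vocabulary table

# Each touch may match several characters; the vocabulary filter
# resolves the ambiguity ("ayto" is rejected, "auto" is kept).
print(select_input_word([["a"], ["y", "u"], ["t"], ["o"]]))  # auto
```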
Preferably, obtaining the first position of the first touch point on the touch screen and determining the first input character corresponding to the first position are performed when a first time difference of a single touch point is smaller than a first preset time and a second time difference of two touch points is smaller than a second preset time, wherein the first time difference is the difference between the start time and the end time of the single touch point, the second time difference is the difference between the end time of one touch point and the start time of the other touch point, and the two touch points are two consecutively touched points.
Preferably, a preset command operation is performed when the time difference of a single touch point is not less than a preset time, or when the first time difference of a single touch point is less than the first preset time but the second time difference of two touch points is not less than the second preset time, wherein the first time difference is the difference between the start time and the end time of the single touch point, the second time difference is the difference between the end time of one touch point and the start time of the other touch point, and the two touch points are two consecutively touched points.
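A minimal sketch of the two timing rules above. The threshold values and function name are illustrative assumptions; the patent leaves the preset times unspecified.

```python
# Illustrative thresholds, not values from the patent.
FIRST_PRESET = 0.5   # max duration of one touch for character input (s)
SECOND_PRESET = 1.0  # max gap between two consecutive touches (s)

def classify_touch(start, end, prev_end=None):
    """Return 'character' when both time differences are under their
    thresholds, otherwise 'command' (the preset command operation)."""
    first_diff = end - start              # start-to-end time of this touch
    if first_diff >= FIRST_PRESET:
        return "command"                  # long press triggers a command
    if prev_end is not None:
        second_diff = start - prev_end    # end of previous touch to start of this one
        if second_diff >= SECOND_PRESET:
            return "command"              # long pause triggers a command
    return "character"

print(classify_touch(0.0, 0.2, prev_end=-0.3))  # character
print(classify_touch(0.0, 0.8))                 # command
```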
Preferably, the method further comprises the following steps:
displaying input characters on the touch screen.
According to another aspect of the present application, there is also provided a character input device applied to an electronic device, the electronic device including a touch screen, the device including:
the first character determining unit is used for obtaining a first position of a first touch point on the touch screen and determining a first input character corresponding to the first position;
the position acquisition unit is used for acquiring a second position of a second touch point on the touch screen;
the position relation obtaining unit is used for obtaining the position relation between the second touch point and one point in the touch screen according to the second position;
and the second character determining unit is used for determining a second input character corresponding to the second position according to the position relation and the first input character.
Preferably, the position relationship obtaining unit is specifically configured to obtain a position relationship between the second touch point and a positioning point in the touch screen according to a second position of the second touch point relative to the positioning point.
Preferably, the position relation obtaining unit is specifically configured to obtain a position relation between the second touch point and the first touch point according to a second position of the second touch point relative to the first touch point.
Preferably, the second character determination unit is specifically configured to select a character associated with the first input character from a preset vocabulary according to the position relationship and the first input character, and determine the character as the second input character, where the preset vocabulary includes words already in the standard dictionary, or the preset vocabulary includes words already in the standard dictionary and recorded words that are input by the user before.
Preferably, the device further comprises: a matching unit, configured to combine the matched characters according to the touch order of the touch points to obtain a plurality of candidate words;
and a selecting unit, configured to select the correct word from the candidate words as the input word.
Preferably, the selecting unit is specifically configured to compare the candidate words with a preset vocabulary table and select the one included in the preset vocabulary table as the input word; the preset vocabulary table comprises words already in a standard dictionary, or words already in a standard dictionary together with recorded words previously input by the user.
Preferably, the first character determining unit is specifically configured to obtain the first position of the first touch point on the touch screen and determine the first input character corresponding to the first position when a first time difference of a single touch point is smaller than a first preset time and a second time difference of two touch points is smaller than a second preset time, where the first time difference is the difference between the start time and the end time of the single touch point, the second time difference is the difference between the end time of one touch point and the start time of the other touch point, and the two touch points are two consecutively touched points.
Preferably, the device further comprises a preset command executing unit, configured to execute a preset command operation if the time difference of a single touch point is not less than a preset time, or if the first time difference of a single touch point is less than the first preset time but the second time difference of two touch points is not less than the second preset time, where the first time difference is the difference between the start time and the end time of the single touch point, the second time difference is the difference between the end time of one touch point and the start time of the other touch point, and the two touch points are two consecutively touched points.
Preferably, the method further comprises the following steps: and the display unit is used for displaying input characters on the touch screen.
According to still another aspect of the present application, there is provided a virtual keyboard including the above character input device.
According to yet another aspect of the present application, an electronic device is provided, which comprises a touch screen and the above virtual keyboard, wherein the virtual keyboard is connected with the touch screen.
Compared with the prior art, the method has the following advantages:
in this application, the electronic device may first obtain a first position of a first touch point on the touch screen, and determine a first input character corresponding to the first position. After the second position of the second touch point is obtained and the position relationship between the second touch point and one point in the touch screen is obtained, the second input character corresponding to the second position can be determined according to the position relationship and the first input character. That is to say, the electronic device can determine the characters according to the relative positions between the points, so that when the electronic device performs the character input operation, the virtual keyboard is not displayed on the touch screen, the display area of the touch screen is prevented from being occupied, and the actual content can be displayed in the whole display area of the touch screen.
Of course, it is not necessary for any product embodying the present application to achieve all of the above-described advantages at the same time.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and those skilled in the art can obtain other drawings based on them without inventive labor.
FIG. 1 is a flow chart of a character input method provided by the present application;
FIG. 2 is a flow chart of another character input method provided by the present application;
FIG. 3 is a flow chart of yet another character input method provided by the present application;
FIG. 4 is a flow chart of yet another character input method provided by the present application;
FIG. 5 is a flow chart of yet another character input method provided by the present application;
FIG. 6 is a schematic structural diagram of a character input device provided by the present application;
FIG. 7 is a schematic structural diagram of another character input device provided by the present application;
FIG. 8 is a schematic structural diagram of yet another character input device provided by the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The application is operational with numerous general purpose or special purpose computing device environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, multi-processor apparatus, distributed computing environments that include any of the above devices or equipment, and the like.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
When an existing electronic device uses a virtual keyboard to input words, the absolute position of the tapped key on the virtual keyboard is obtained first, and character matching is then performed according to that absolute position. During character input, the displayed virtual keyboard therefore occupies part of the display area of the touch screen, reducing the effective display area and affecting the display of actual content.
The character input method provided by the present application determines characters from relative positions instead, so the electronic device no longer needs to display a virtual keyboard during character input, which avoids occupying the display area of the touch screen. The character input method provided by the present application is described in detail below through specific embodiments.
One embodiment
Referring to fig. 1, a flow chart of a character input method provided by the present application is shown, where the character input method is applied to an electronic device including a touch screen, and may include the following steps:
step 101: the method comprises the steps of obtaining a first position of a first touch point on the touch screen, and determining a first input character corresponding to the first position.
Step 102: a second location of a second touch point on the touch screen is obtained.
When an object, for example a user's finger, touches the touch screen of the electronic device, a touch point is formed on the touch screen. A camera in the electronic device captures the image formed on the touch screen, and an image recognition chip in the electronic device then analyzes the captured image to recognize the touch point and obtain its position on the touch screen. The second touch point is the touch point formed on the touch screen at the current moment, and the first touch point is the touch point formed at the previous moment.
In this embodiment, the position of each touch point may be the position of the touch point relative to the center point of the touch screen or relative to a certain corner of the touch screen. Of course, it may also be the position of the touch point relative to the positioning point. The positioning point is the point formed on the touch screen by a certain key of a virtual keyboard; the position of a touch point relative to the positioning point may be directly above it, directly below it, 30 degrees to its left, and so on, and the specific positions can be set according to the application scenario.
The positioning point is displayed on the touch screen when the electronic device starts a character input operation. It may be the point formed on the touch screen by the H key of an existing virtual keyboard, or by the Enter key. Its position in the touch screen is preset in the electronic device and can be computed with the existing method for computing the absolute position of a key after a virtual keyboard is started.
Step 103: and obtaining the position relation between the second touch point and the positioning point in the touch screen according to the second position of the second touch point relative to the positioning point.
In this embodiment, when the position of the touch point is the position of the touch point relative to the positioning point, the position relationship is the position relationship between the second touch point and the positioning point. The second position is the position of the second touch point relative to the positioning point, and the position indicates the position relationship between the second touch point and the positioning point, so that the electronic device can obtain the position relationship between the second touch point and the positioning point after knowing the second position of the second touch point. If the second position is directly above the positioning point, the position relationship between the second touch point and the positioning point is as follows: the second touch point is located right above the positioning point.
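As a sketch, the position relation can be derived from raw touch coordinates as follows. The coordinate convention (y grows downward) and the 10-degree tolerance are illustrative assumptions, not specified by the patent.

```python
import math

def relation_to_anchor(touch, anchor):
    """Classify a touch point's position relative to the positioning
    point, e.g. 'directly above', 'directly below', or an angle."""
    dx = touch[0] - anchor[0]
    dy = anchor[1] - touch[1]  # screen y grows downward; flip so up is positive
    angle = math.degrees(math.atan2(dy, dx))
    if abs(angle - 90.0) <= 10.0:
        return "directly above"
    if abs(angle + 90.0) <= 10.0:
        return "directly below"
    return f"{angle:.0f} degrees"

anchor = (100, 100)                            # e.g. the point of the H key
print(relation_to_anchor((100, 60), anchor))   # directly above
print(relation_to_anchor((100, 140), anchor))  # directly below
```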
Step 104: and determining a second input character corresponding to the second position according to the position relation and the first input character.
In this embodiment, the second input character may be determined as follows: a character associated with the first input character is selected from a preset vocabulary table according to the position relation and the first input character and determined as the second input character, where the preset vocabulary table comprises words already in a standard dictionary, or words already in a standard dictionary together with recorded words previously input by the user. The specific operation is illustrated by the following example.
For example, suppose the positioning point is the point formed by the H key on the touch screen, the first input character is a, and the position relation is "directly above the positioning point"; according to this relation, the candidate characters directly above the positioning point are y and u. Each candidate character is combined with the first input character, yielding the candidate first strings au and ay. Each candidate string is checked against the preset vocabulary table; if a word containing that string exists, the corresponding character is determined as the second input character.
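The H-key example can be sketched as follows. The two-word vocabulary is an illustrative assumption, and "a word containing the string" is read here as a prefix match.

```python
# Illustrative vocabulary; both words start with "au", none with "ay".
PRESET_VOCABULARY = {"autumn", "aunt"}

def second_char_candidates(first_char, chars_at_relation):
    """Keep only the characters whose combination with the first input
    character (e.g. 'au', 'ay') prefixes some word in the vocabulary."""
    matches = []
    for ch in chars_at_relation:
        first_string = first_char + ch
        if any(word.startswith(first_string) for word in PRESET_VOCABULARY):
            matches.append(ch)
    return matches

# Characters directly above the H-key positioning point are y and u;
# only u survives the vocabulary check, so u is the second input character.
print(second_char_candidates("a", ["y", "u"]))  # ['u']
```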
By applying the technical scheme, the electronic equipment can firstly obtain the first position of the first touch point on the touch screen and determine the first input character corresponding to the first position. After the second position of the second touch point is obtained and the position relationship between the second touch point and one point in the touch screen is obtained, the second input character corresponding to the second position can be determined according to the position relationship and the first input character. That is to say, the electronic device can determine the characters according to the relative positions between the points, so that when the electronic device performs the character input operation, the virtual keyboard is not displayed on the touch screen, the display area of the touch screen is prevented from being occupied, and the actual content can be displayed in the whole display area of the touch screen.
Further, an existing virtual keyboard matches characters according to the absolute positions of the keys; that is, when a key is touched, the virtual keyboard can only match one character according to that absolute position. When an adjacent key is touched by mistake, i.e., the touch point is wrong, the virtual keyboard can only match the wrong character according to the absolute position.
In this embodiment, the electronic device determines the second input character according to the position relation and the first input character. Therefore, when an adjacent key is touched by mistake, the characters corresponding to the position of the touch point may include both the wrong character and the correct character, so the electronic device can still obtain the correct character with the character input method provided by the present application and implement automatic error correction. The effect is even better when the characters corresponding to the positions of a plurality of touch points are matched continuously.
Another embodiment
The embodiment describes a specific process of inputting characters when the position of the touch point is relative to the position of the first touch point and the position relationship is between the second touch point and the first touch point. Referring to fig. 2, another flow chart of a character input method provided in the present application is shown, which may include the following steps:
step 201: the method comprises the steps of obtaining a first position of a first touch point on the touch screen, and determining a first input character corresponding to the first position.
Step 202: a second location of a second touch point on the touch screen is obtained.
In this embodiment, the position of each touch point may be the position of the touch point relative to the center point of the touch screen or relative to a certain corner of the touch screen. Of course, the position of each touch point may also be relative to the first touch point.
The first touch point is the first touch point formed on the touch screen during the character input process. The position of each touch point relative to the first touch point may be directly above it, directly below it, 30 degrees to its left, and so on, and the specific positions can be set according to the application scenario.
The position of the first touch point itself may be relative to the center point of the touch screen, a certain corner of the touch screen, or the positioning point. The information about the positioning point is described in the previous embodiment and will not be repeated here.
Step 203: and obtaining the position relation between the second touch point and the first touch point according to the second position of the second touch point relative to the first touch point.
In this embodiment, the position of the touch point is the position of the touch point relative to the first touch point, and the positional relationship is the positional relationship between the second touch point and the first touch point. Since the second position is the position of the second touch point relative to the first touch point, it indicates the positional relationship between the two points, so the electronic device can obtain this relationship once it knows the second position. If the second position is directly above the first touch point, the positional relationship between the second touch point and the first touch point is: the second touch point is located directly above the first touch point.
Step 204: and determining a second input character corresponding to the second position according to the position relation and the first input character.
In this embodiment, the second input character determination process refers to the detailed description of the previous embodiment, which will not be described again.
Yet another embodiment
The present embodiment differs from the above embodiments in that the position of each touch point is its position relative to the first touch point, and the positional relationship is the positional relationship between the touch point and the first touch point, as shown in fig. 3. Fig. 3 is another flowchart of a character input method provided in the present application, which may include the following steps:
step 301: the method comprises the steps of obtaining a first position of a first touch point on the touch screen, and determining a first input character corresponding to the first position.
Step 302: a second location of a second touch point on the touch screen is obtained.
In this embodiment, the second touch point is the touch point formed on the touch screen at the current moment, and the first touch point is the touch point formed on the touch screen at the previous moment. The second position of the second touch point may be a position relative to the center point of the touch screen or a position relative to a certain corner of the touch screen. Of course, the second position may also be a position relative to the first touch point.
In this embodiment, the relative position of the second touch point with respect to the first touch point may be directly above the first touch point, directly below the first touch point, 30 degrees to the left of the first touch point, and the like; the specific position may be set according to the application scenario.
It should be noted that: the position of the first touch point may be a position with respect to a center point of the touch screen or a position of the touch point with respect to a certain corner of the touch screen or a position with respect to an anchor point. The information about the anchor point is described in the embodiment of fig. 1, and will not be described again.
Step 303: obtaining the position relation between the second touch point and the first touch point according to the second position of the second touch point relative to the first touch point.
In this embodiment, the position of each touch point is its position relative to the first touch point, and the positional relationship to be obtained is the positional relationship between the second touch point and the first touch point. Since the second position is the position of the second touch point relative to the first touch point, it directly indicates that positional relationship, and the electronic device can therefore obtain the positional relationship between the second touch point and the first touch point once it knows the second position of the second touch point. For example, if the second position is directly above the first touch point, the positional relationship is: the second touch point is located directly above the first touch point.
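The classification of step 303 can be sketched as mapping the vector from the first touch point to the second onto the named directions mentioned above. This is a minimal illustration, not the patent's implementation: the direction names, the 15-degree tolerance, and the screen coordinate convention (x grows right, y grows down) are all assumptions.

```python
import math

def classify_direction(first, second, tolerance_deg=15.0):
    """Map the vector from `first` to `second` to a named direction.

    Screen convention: x grows right, y grows down, so "directly above"
    means a smaller y at the same x. Returns "other" when the vector does
    not fall within `tolerance_deg` of any named direction.
    """
    dx = second[0] - first[0]
    dy = second[1] - first[1]
    # Angle counter-clockwise from the positive x-axis, with the y-axis
    # flipped so that "up" on screen corresponds to +90 degrees.
    angle = math.degrees(math.atan2(-dy, dx))
    named = {
        90.0: "directly above",
        -90.0: "directly below",
        150.0: "30 degrees to the left",  # 30 degrees above the leftward axis
    }
    for ref, name in named.items():
        if abs(angle - ref) <= tolerance_deg:
            return name
    return "other"
```

For example, a second touch point at (100, 40) relative to a first touch point at (100, 100) classifies as "directly above", since its y coordinate is smaller at the same x.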
Step 304: determining a second input character corresponding to the second position according to the position relation and the first input character.
In this embodiment, the process of determining the second input character is described in detail in the previous embodiment and will not be repeated here.
Yet another embodiment
Referring to fig. 4, a flowchart of a character input method provided in the present application is shown, which may include the following steps:
step 401: the method comprises the steps of obtaining a first position of a first touch point on the touch screen, and determining a first input character corresponding to the first position.
Step 402: a second location of a second touch point on the touch screen is obtained.
Step 403: obtaining the position relation between the second touch point and one point in the touch screen according to the second position.
Step 404: determining a second input character corresponding to the second position according to the position relation and the first input character.
In this embodiment, the implementation process of steps 401 to 404 may refer to the implementation process described in any one of the three embodiments, and this embodiment will not be described again.
Step 405: combining the matched characters according to the touch sequence of the touch points to obtain a plurality of vocabularies.
It should be noted that, in this embodiment, the electronic device does not combine the characters arbitrarily, but combines them according to the touch sequence of the touch points, i.e., the chronological order in which the touch points were touched.
Step 406: selecting a correct vocabulary from the plurality of vocabularies as the input vocabulary.
In this embodiment, when the electronic device selects a correct vocabulary from the plurality of vocabularies, the plurality of vocabularies may be compared with the preset vocabulary table, and one vocabulary included in the preset vocabulary table is selected as an input vocabulary.
The preset vocabulary table comprises words already present in a standard dictionary, or comprises both words already present in the standard dictionary and recorded words previously input by the user. The standard dictionary may be, for example, a nationally published dictionary such as the Xinhua Dictionary.
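Steps 405 and 406 can be sketched as follows, under stated assumptions: each touch point has already been matched to one or more candidate characters, the candidate lists are kept in touch order, and the preset vocabulary table is modelled as a plain set of words. The sample vocabulary and all names are illustrative, not from the patent.

```python
from itertools import product

# Stand-in for the preset vocabulary table described above.
PRESET_VOCABULARY = {"appearance", "apple", "apply"}

def candidate_words(candidates_per_touch):
    """Combine candidate characters in touch order into every possible word."""
    return {"".join(chars) for chars in product(*candidates_per_touch)}

def select_input_word(candidates_per_touch, vocabulary=PRESET_VOCABULARY):
    """Return a combined word found in the vocabulary, or None.

    min() only makes the choice deterministic when several words match.
    """
    matches = candidate_words(candidates_per_touch) & vocabulary
    return min(matches) if matches else None
```

For instance, if the fourth touch is ambiguous between e and r, the candidate lists `[["a"], ["p"], ["p"], ["e", "r"], ["a"], ["r"], ["a"], ["n"], ["c"], ["e"]]` combine into two words, and only "appearance" survives the vocabulary comparison.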
Taking the word "appearance" as an example: with an existing virtual keyboard, if the adjacent key R is touched by mistake while inputting the character e, the vocabulary finally matched and output by the existing virtual keyboard is a misspelled word. With the character input method provided by this embodiment, the matched vocabularies include both the misspelled candidate and "appearance"; comparing each vocabulary against the preset vocabulary table shows that the correct vocabulary is "appearance", which is the vocabulary the user intended to input, thereby realizing automatic vocabulary correction.
In all the above method embodiments, the electronic device may perform character input according to the technical scheme provided above only after determining that the current operation is a character input operation. Whether the current operation is a character input operation may be judged as follows: check whether the first time difference value of a single touch point is smaller than a first preset time, and whether the second time difference value of two touch points is smaller than a second preset time. Here, the first time difference value is the difference between the start time and the end time of a single touch point, the second time difference value is the difference between the end time of one touch point and the start time of the next touch point, and the two touch points are the touch points of two adjacent touches.
If the first time difference value of a single touch point is smaller than the first preset time and the second time difference value of two touch points is smaller than the second preset time, the current operation is judged to be a character input operation; the step of obtaining the first position of the first touch point on the touch screen and determining the first input character corresponding to the first position is then executed, and the character input process is completed, as shown in fig. 5. Fig. 5 is a further flowchart of a character input method provided in the present application; the implementation process of fig. 5 may refer to the implementation process in the embodiment corresponding to any one of the flowcharts of fig. 1 to 3, and is not described again.
If the first time difference value of a single touch point is smaller than the first preset time but the second time difference value of two touch points is not smaller than the second preset time, a preset command operation is executed, which may be a single-click operation. If the first time difference value of a single touch point is not smaller than the first preset time, a preset command operation is executed, which may be a screen sliding operation.
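The timing rules above can be sketched as a small classifier. The threshold values and the `Touch` record are assumptions for illustration; the patent does not fix concrete preset times.

```python
from dataclasses import dataclass

@dataclass
class Touch:
    start: float  # touch start time, seconds
    end: float    # touch end time, seconds

FIRST_PRESET = 0.2   # max duration of a single touch (assumed value)
SECOND_PRESET = 0.5  # max gap between two adjacent touches (assumed value)

def classify_operation(prev, curr):
    """Classify the current touch given the previous adjacent touch."""
    duration = curr.end - curr.start
    if duration >= FIRST_PRESET:
        return "screen sliding"     # touch held at least the first preset time
    gap = curr.start - prev.end
    if gap >= SECOND_PRESET:
        return "single click"       # short tap, but not quickly after the last
    return "character input"        # quick succession of short taps
```

Two short taps 0.1 s apart would thus classify as character input, while a 0.4 s press would classify as screen sliding.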
In addition, in all the above method embodiments, the input characters may be displayed on the touch screen during the character input process. The display mode may be a semi-transparent display mode, in which the character display brightness is half of the actual display brightness, or an entity display mode, in which the character display brightness is the actual display brightness.
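The two display modes can be sketched as a simple brightness scale, assuming an RGB colour model; the 0.5 factor mirrors "half of the actual display brightness", and the function name and mode strings are illustrative.

```python
def display_color(rgb, mode):
    """Scale a character's RGB colour according to the display mode."""
    if mode == "semi-transparent":
        return tuple(int(c * 0.5) for c in rgb)  # half the actual brightness
    if mode == "entity":
        return tuple(rgb)                        # actual brightness, unchanged
    raise ValueError(f"unknown display mode: {mode!r}")
```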
While, for purposes of simplicity of explanation, the foregoing method embodiments have been described as a series of acts or combination of acts, it will be appreciated by those skilled in the art that the present application is not limited by the order of acts or acts described, as some steps may occur in other orders or concurrently with other steps in accordance with the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
Corresponding to the above method embodiment, the present application further provides a character input device, which is applied to an electronic device including a touch screen, and the structure diagram of the character input device is shown in fig. 6, including: a first character determining unit 11, a position acquiring unit 12, a positional relationship acquiring unit 13, and a second character determining unit 14. Wherein,
the first character determining unit 11 is configured to obtain a first position of a first touch point on the touch screen, and determine a first input character corresponding to the first position.
A position obtaining unit 12, configured to obtain a second position of a second touch point on the touch screen.
In this embodiment, the position of each touch point may be a position relative to an anchor point, or a position relative to a first touch point. The second touch point is a touch point formed on the touch screen at the current moment, and the first touch point is a touch point formed on the touch screen at the previous moment.
It should be noted that: the position of the first touch point may be a position relative to a center point of the touch screen or a position of the touch point relative to a certain corner of the touch screen or relative to an anchor point. The information about the anchor point is described in the embodiments of the method, which will not be described further.
A position relation obtaining unit 13, configured to obtain the position relation between the second touch point and one point in the touch screen according to the second position.
In this embodiment, when the second position of the second touch point is a position relative to the positioning point, the position relationship obtaining unit 13 is specifically configured to obtain the position relationship between the second touch point and the positioning point in the touch screen according to the second position of the second touch point relative to the positioning point.
When the second position of the second touch point is a position relative to the first touch point, the position relationship obtaining unit 13 is specifically configured to obtain the position relationship between the second touch point and the first touch point in the touch screen according to the second position of the second touch point relative to the first touch point.
A second character determining unit 14, configured to determine, according to the position relation and the first input character, a second input character corresponding to the second position.
In this embodiment, the second character determining unit 14 is specifically configured to select a character associated with the first input character from a preset vocabulary table according to the position relation and the first input character, and determine that character as the second input character, where the preset vocabulary table comprises words already present in a standard dictionary, or comprises both words already present in the standard dictionary and recorded words previously input by the user. The specific operation process is illustrated below.
For example, suppose the positioning point is the point formed by the H key on the touch screen, the first input character is a, and the position relation is "directly above the positioning point"; according to this position relation, the characters directly above the positioning point may be y and u. Each resulting character is combined with the first input character to yield the first character strings "au" and "ay". Each first character string is then checked against the preset vocabulary table to see whether a vocabulary containing it exists; if so, the corresponding character is determined as the second input character.
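The anchor-point example above can be sketched as follows, under stated assumptions: the positioning point is the H key, the keys directly above it are y and u, and the preset vocabulary table is a small stand-in set of words. A candidate character is kept only when the combined first character string begins some word in the table.

```python
# Keys directly above the H key on a QWERTY layout (assumed for this sketch).
ABOVE_ANCHOR = ["y", "u"]

# Stand-in for the preset vocabulary table.
PRESET_VOCABULARY = {"august", "author", "aunt"}

def second_char_candidates(first_char, direction_chars,
                           vocabulary=PRESET_VOCABULARY):
    """Keep only candidates whose combined string starts a vocabulary word."""
    kept = []
    for ch in direction_chars:
        prefix = first_char + ch  # the "first character string", e.g. "au"
        if any(word.startswith(prefix) for word in vocabulary):
            kept.append(ch)
    return kept
```

With this stand-in vocabulary, "au" begins "august" and "aunt" while "ay" begins nothing, so only u survives as the second input character candidate.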
Referring to fig. 7, which shows another schematic structural diagram of a character input device provided in the present application, on the basis of fig. 6, the character input device may further include: a matching unit 15 and a selecting unit 16. Wherein,
A matching unit 15, configured to combine the matched characters according to the touch sequence of the touch points to obtain a plurality of vocabularies.
It should be noted that, in this embodiment, the matching unit 15 does not combine the characters arbitrarily, but combines them according to the touch sequence of the touch points, i.e., the chronological order in which the touch points were touched.
A selecting unit 16, configured to select a correct vocabulary from the plurality of vocabularies as an input vocabulary. The selecting unit 16 may be specifically configured to compare a plurality of vocabularies with a preset vocabulary table, and select one vocabulary included in the preset vocabulary table as an input vocabulary; the preset vocabulary table comprises words which are already in the standard dictionary, or the preset vocabulary table comprises words which are already in the standard dictionary and recorded words which are input by the user before.
Taking the word "appearance" as an example again: with an existing virtual keyboard, if the adjacent key R is touched by mistake while inputting the character e, the vocabulary finally matched and output by the existing virtual keyboard is a misspelled word. With the character input device provided by this embodiment, the matched vocabularies include both the misspelled candidate and "appearance"; comparing each vocabulary against the preset vocabulary table shows that the correct vocabulary is "appearance", which is the vocabulary the user intended to input, thereby realizing automatic vocabulary correction.
In all the above device embodiments, the first character determining unit 11 is specifically configured to, when the first time difference value of a single touch point is smaller than a first preset time and the second time difference value of two touch points is smaller than a second preset time, obtain the first position of the first touch point on the touch screen and determine the first input character corresponding to the first position, where the first time difference value is the difference between the start time and the end time of a single touch point, the second time difference value is the difference between the end time of one touch point and the start time of the next touch point, and the two touch points are the touch points of two adjacent touches.
If the first time difference value of a single touch point is smaller than the first preset time but the second time difference value of two touch points is not smaller than the second preset time, the preset command execution unit 17 executes a preset command operation, which may be a single-click operation. If the first time difference value of a single touch point is not smaller than the first preset time, the preset command execution unit 17 executes a preset command operation, which may be a screen sliding operation.
In addition, the determined input characters may be displayed on the touch screen by the display unit 18, either in a semi-transparent display mode, in which the character display brightness is half of the actual display brightness, or in an entity display mode, in which the character display brightness is the actual display brightness.
For a character input device including the preset command execution unit 17 and the display unit 18, please refer to fig. 8, which is a schematic structural diagram of a character input device provided in the present application based on fig. 6. Of course, fig. 8 may also be based on fig. 7, which this embodiment will not describe again.
The device described in this embodiment may be integrated into a virtual keyboard, and the virtual keyboard may be included in an electronic device, and the virtual keyboard may be connected to a touch screen in the electronic device.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. For the device-like embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by software plus necessary general hardware platform. Based on such understanding, the technical solutions of the present application may be essentially or partially implemented in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments of the present application.
The character input method, the character input device, the virtual keyboard and the electronic device provided by the application are introduced in detail, specific examples are applied in the text to explain the principle and the implementation of the application, and the description of the above embodiments is only used for helping to understand the method and the core idea of the application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (22)

1. A character input method is applied to an electronic device, the electronic device comprises a touch screen, and the character input method is characterized by comprising the following steps:
obtaining a first position of a first touch point on the touch screen, and determining a first input character corresponding to the first position;
obtaining a second position of a second touch point on the touch screen;
obtaining the position relation between the second touch point and one point in the touch screen according to the second position;
and determining a second input character corresponding to the second position according to the position relation and the first input character.
2. The character input method according to claim 1, wherein obtaining the positional relationship between the second touch point and one point in the touch screen according to the second position comprises: and acquiring the position relation between the second touch point and the positioning point in the touch screen according to the second position of the second touch point relative to the positioning point.
3. The character input method according to claim 1, wherein obtaining the positional relationship between the second touch point and one point in the touch screen according to the second position comprises: and acquiring the position relation between the second touch point and the first touch point according to the second position of the second touch point relative to the first touch point.
4. The character input method according to claim 1, wherein obtaining the positional relationship between the second touch point and one point in the touch screen according to the second position comprises: and acquiring the position relation between the second touch point and the first touch point according to the second position of the second touch point relative to the first touch point.
5. The character input method according to any one of claims 1 to 4, wherein determining a second input character corresponding to the second position from the positional relationship and the first input character comprises: and selecting a character associated with the first input character from a preset vocabulary table according to the position relation and the first input character, and determining the character as a second input character, wherein the preset vocabulary table comprises words existing in a standard dictionary, or the preset vocabulary table comprises words existing in the standard dictionary and recorded words input by the user before.
6. The character input method according to any one of claims 1 to 4, characterized by further comprising: combining the matched characters according to the touch sequence of the touch points to obtain a plurality of words;
and selecting a correct vocabulary from the plurality of vocabularies as an input vocabulary.
7. The character input method of claim 6, wherein selecting a correct word from the plurality of words as an input word comprises:
comparing the plurality of vocabularies with a preset vocabulary table respectively, and selecting one vocabulary included in the preset vocabulary table as an input vocabulary; the preset vocabulary table comprises words which are already in the standard dictionary, or the preset vocabulary table comprises words which are already in the standard dictionary and recorded words which are input by the user before.
8. The character input method according to any one of claims 1 to 4, wherein obtaining a first position of a first touch point on the touch screen and determining a first input character corresponding to the first position are performed in a case that a first time difference value of a single touch point is smaller than a first preset time and a second time difference value of two touch points is smaller than a second preset time, wherein the first time difference value is a difference value between a start time and an end time of the single touch point, the second time difference value is a difference value between the end time of one touch point and the start time of the other touch point, and the two touch points are the touch points of two adjacent touches.
9. The character input method according to any one of claims 1 to 4, wherein a preset command operation is performed in a case where a time difference value of a single touch point is not less than a preset time, or in a case where a first time difference value of a single touch point is less than a first preset time and a second time difference value of two touch points is not less than a second preset time, wherein the first time difference value is a difference value between a start time and an end time of the single touch point, the second time difference value is a difference value between the end time of one touch point and the start time of the other touch point, and the two touch points are the touch points of two adjacent touches.
10. The character input method according to any one of claims 1 to 4, characterized by further comprising:
displaying input characters on the touch screen.
11. A character input device for use in an electronic device, the electronic device including a touch screen, the device comprising:
the first character determining unit is used for obtaining a first position of a first touch point on the touch screen and determining a first input character corresponding to the first position;
the position acquisition unit is used for acquiring a second position of a second touch point on the touch screen;
the position relation obtaining unit is used for obtaining the position relation between the second touch point and one point in the touch screen according to the second position;
and the second character determining unit is used for determining a second input character corresponding to the second position according to the position relation and the first input character.
12. The character input device according to claim 11, wherein the positional relationship obtaining unit is specifically configured to obtain the positional relationship between the second touch point and a positioning point in the touch screen according to a second position of the second touch point relative to the positioning point.
13. The character input device according to claim 11, wherein the positional relationship acquisition unit is specifically configured to acquire the positional relationship between the second touch point and the first touch point according to a second position of the second touch point relative to the first touch point.
14. The character input device according to claim 11, wherein the positional relationship acquisition unit is specifically configured to acquire the positional relationship between the second touch point and the first touch point according to a second position of the second touch point with respect to the first touch point.
15. The character input apparatus according to any one of claims 11 to 14, wherein the second character determination unit is specifically configured to select, according to the positional relationship and the first input character, a character associated with the first input character from a preset vocabulary table and determine the character as the second input character, wherein the preset vocabulary table comprises words already present in a standard dictionary, or comprises both words already present in the standard dictionary and recorded words previously input by the user.
16. The character input apparatus according to any one of claims 11 to 14, further comprising: the matching unit is used for combining the matched characters according to the touch sequence of the touch points to obtain a plurality of vocabularies;
and the selecting unit is used for selecting a correct vocabulary from the vocabularies as an input vocabulary.
17. The character input apparatus of claim 16, wherein the selecting unit is specifically configured to compare a plurality of words with a predetermined vocabulary, respectively, and select one of the words included in the predetermined vocabulary as the input word; the preset vocabulary table comprises words which are already in the standard dictionary, or the preset vocabulary table comprises words which are already in the standard dictionary and recorded words which are input by the user before.
18. The character input device according to any one of claims 11 to 14, wherein the first character determination unit is specifically configured to, when a first time difference value of a single touch point is smaller than a first preset time and a second time difference value of two touch points is smaller than a second preset time, obtain a first position of the first touch point on the touch screen, and determine the first input character corresponding to the first position, wherein the first time difference value is a difference value between a start time and an end time of the single touch point, the second time difference value is a difference value between the end time of one touch point and the start time of another touch point, and the two touch points are the touch points of two adjacent touches.
19. The character input device according to any one of claims 11 to 14, further comprising a preset command executing unit configured to execute a preset command operation in a case where a time difference value of a single touch point is not less than a preset time, or in a case where a first time difference value of a single touch point is less than a first preset time and a second time difference value of two touch points is not less than a second preset time, wherein the first time difference value is a difference value between a start time and an end time of the single touch point, the second time difference value is a difference value between the end time of one touch point and the start time of the other touch point, and the two touch points are the touch points of two adjacent touches.
20. The character input apparatus according to any one of claims 11 to 14, further comprising: and the display unit is used for displaying input characters on the touch screen.
21. A virtual keyboard comprising the character input device of any one of claims 11 to 20.
22. An electronic device comprising a touch screen, further comprising the virtual keyboard of claim 21, the virtual keyboard coupled to the touch screen.
CN201210390927.1A 2012-10-15 2012-10-15 A kind of characters input method, device, dummy keyboard and electronic equipment Active CN103729132B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201710828402.4A CN107479725B (en) 2012-10-15 2012-10-15 Character input method and device, virtual keyboard, electronic equipment and storage medium
CN201210390927.1A CN103729132B (en) 2012-10-15 2012-10-15 A kind of characters input method, device, dummy keyboard and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210390927.1A CN103729132B (en) 2012-10-15 2012-10-15 A kind of characters input method, device, dummy keyboard and electronic equipment

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN201710828402.4A Division CN107479725B (en) 2012-10-15 2012-10-15 Character input method and device, virtual keyboard, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN103729132A true CN103729132A (en) 2014-04-16
CN103729132B CN103729132B (en) 2017-09-29

Family

ID=50453227

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201710828402.4A Active CN107479725B (en) 2012-10-15 2012-10-15 Character input method and device, virtual keyboard, electronic equipment and storage medium
CN201210390927.1A Active CN103729132B (en) 2012-10-15 2012-10-15 Character input method and device, virtual keyboard and electronic equipment

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201710828402.4A Active CN107479725B (en) 2012-10-15 2012-10-15 Character input method and device, virtual keyboard, electronic equipment and storage medium

Country Status (1)

Country Link
CN (2) CN107479725B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106610780A (en) * 2015-10-27 2017-05-03 中兴通讯股份有限公司 Text selection method and intelligent terminal
CN107015727A (en) * 2017-04-07 2017-08-04 深圳市金立通信设备有限公司 Method and terminal for controlling a character separator

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114610164A (en) * 2022-03-17 2022-06-10 联想(北京)有限公司 Information processing method and electronic device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5620267A (en) * 1993-10-15 1997-04-15 Keyboard Advancements, Inc. Keyboard with thumb activated control key
JPH103335A (en) * 1996-06-16 1998-01-06 Shinichiro Sakamoto Typing support article for word processors, personal computers, or typewriters
CN1439151A (en) * 2000-02-11 2003-08-27 卡尼斯塔公司 Method and apparatus for entering data using a virtual input device
US20050162402A1 (en) * 2004-01-27 2005-07-28 Watanachote Susornpol J. Methods of interacting with a computer using a finger(s) touch sensing input device with visual feedback
CN1746825A (en) * 2003-06-04 2006-03-15 黄健 Information input method and input device based on a pure orientation approach
US20070216658A1 (en) * 2006-03-17 2007-09-20 Nokia Corporation Mobile communication terminal
CN101685342A (en) * 2008-09-26 2010-03-31 联想(北京)有限公司 Method and device for realizing dynamic virtual keyboard
CN102023715A (en) * 2009-09-10 2011-04-20 张苏渝 Induction signal input method and apparatus
CN102637089A (en) * 2011-02-11 2012-08-15 索尼移动通信日本株式会社 Information input apparatus

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5953541A (en) * 1997-01-24 1999-09-14 Tegic Communications, Inc. Disambiguating system for disambiguating ambiguous input sequences by displaying objects associated with the generated input sequences in the order of decreasing frequency of use
US7957955B2 (en) * 2007-01-05 2011-06-07 Apple Inc. Method and system for providing word recommendations for text input
US8358277B2 (en) * 2008-03-18 2013-01-22 Microsoft Corporation Virtual keyboard based activation and dismissal

Also Published As

Publication number Publication date
CN103729132B (en) 2017-09-29
CN107479725A (en) 2017-12-15
CN107479725B (en) 2021-07-16

Similar Documents

Publication Publication Date Title
TWI544366B (en) Voice input command
CN106484266B (en) Text processing method and device
US8918739B2 (en) Display-independent recognition of graphical user interface control
US20160328205A1 (en) Method and Apparatus for Voice Operation of Mobile Applications Having Unnamed View Elements
CN106201177B (en) A kind of operation execution method and mobile terminal
CN104462437B (en) The method and system of search are identified based on the multiple touch control operation of terminal interface
CN106201166A (en) A kind of multi-screen display method and terminal
US9405558B2 (en) Display-independent computerized guidance
CN103713845B (en) Method for screening candidate items and device thereof, text input method and input method system
CN112597065B (en) Page testing method and device
WO2015043352A1 (en) Method and apparatus for selecting test nodes on webpages
JP2016531352A (en) Method, device, program and device for updating input system
CN111880668A (en) Input display method and device and electronic equipment
CN113253883A (en) Application interface display method and device and electronic equipment
CN107479725B (en) Character input method and device, virtual keyboard, electronic equipment and storage medium
US20150199171A1 (en) Handwritten document processing apparatus and method
CN106845190B (en) Display control system and method
US20140181672A1 (en) Information processing method and electronic apparatus
US10114518B2 (en) Information processing system, information processing device, and screen display method
US20160292140A1 (en) Associative input method and terminal
CN112764551A (en) Vocabulary display method and device and electronic equipment
US20210165568A1 (en) Method and electronic device for configuring touch screen keyboard
CN106919558B (en) Translation method and translation device based on natural conversation mode for mobile equipment
CN111221504A (en) Synchronized operation display system and non-transitory computer readable medium
CN112685126B (en) Document content display method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant